<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Machine Learning (Theory)</title>
	<atom:link href="https://hunch.net/?feed=rss2" rel="self" type="application/rss+xml" />
	<link>https://hunch.net</link>
	<description>Machine learning and learning theory research</description>
	<lastBuildDate>Wed, 05 Mar 2025 20:16:49 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.8.2</generator>
	<item>
		<title>Headroom for AI development</title>
		<link>https://hunch.net/?p=13763046</link>
		
		<dc:creator><![CDATA[John Langford]]></dc:creator>
		<pubDate>Wed, 05 Mar 2025 20:16:49 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<guid isPermaLink="false">https://hunch.net/?p=13763046</guid>

					<description><![CDATA[(Dylan Foster and Alex Lamb both helped in creating this.) In thinking about what are good research problems, it’s sometimes helpful to switch from what is understood to what is clearly possible. This encourages us to think beyond simply improving the existing system. For example, we have seen instances throughout the history of machine learning &#8230; <p class="link-more"><a href="https://hunch.net/?p=13763046" class="more-link">Continue reading<span class="screen-reader-text"> "Headroom for AI development"</span></a></p>]]></description>
										<content:encoded><![CDATA[<p>(<a href="https://dylanfoster.net/">Dylan Foster</a> and <a href="https://www.microsoft.com/en-us/research/people/lambalex/">Alex Lamb</a> both helped in creating this.)</p>
<p>In thinking about what are good research problems, it’s sometimes helpful to switch from what is understood to what is clearly possible.  This encourages us to think beyond simply improving the existing system.  For example, we have seen instances throughout the history of machine learning where researchers have argued for fixing an architecture and using it for short-term success, ignoring potential for long-term disruption. As an example, the speech recognition community spent decades focusing on Hidden Markov Models at the expense of other architectures, before eventually being disrupted by advancements in deep learning. Support Vector Machines were disrupted by deep learning, and convolutional neural networks were displaced by transformers. This pattern may repeat for the current transformer/large language model (LLM) paradigm.  Here are some quick calculations suggesting it may be possible to do significantly better along multiple axes.  Examples include the following:</p>
<ul>
<li>Language learning efficiency: A human baby can learn a good model for human language after observing 0.01% of the language tokens typically used to train a large language model.</li>
<li>Representational efficiency: A tiny <a href="https://en.wikipedia.org/wiki/Portia_(spider)">Portia spider</a> with a brain a million times smaller than a human can plan a course of action and execute it over the course of an hour to catch prey.</li>
<li>Long-term planning and memory: A squirrel <a href="https://en.wikipedia.org/wiki/Squirrel#Behavior">caches nuts</a> and returns to them after months of experience, which would correspond to keeping billions of visual tokens in context using current techniques.</li>
</ul>
<p>The core of this argument is that it is manifestly viable to do better along multiple axes, including  sample efficiency and the ability to perform complex tasks requiring memory.  All these examples highlight advanced capabilities that can be achieved at scales well below what is required by existing transformer architectures and training methodologies (in terms of either data or compute). This is in no way meant as an attack on transformer architectures; they are a highly disruptive technology, accomplishing what other types of architectures have not, and they will likely serve as a foundation for further advances. However, there is much more to do.</p>
<p>Next, we delve into each of the examples above in greater detail.</p>
<h2>Sample complexity: The language learning efficiency gap</h2>
<p>The sample efficiency gap is perhaps best illustrated by considering the core problem of language modeling, where a <a href="https://en.wikipedia.org/wiki/Generative_pre-trained_transformer">transformer</a> is trained to learn language.  A human baby starts with no appreciable language but learns it well before adulthood.  Reading at 300 words per minute with 1.3 tokens/word on average implies 6.5 tokens/second. Speaking is typically about half of reading speed, implying roughly three tokens per second. Sleeping and many other daily activities of course involve no tokens per second. Overall, one language token per second is a reasonable rough estimate of what a child observes.  At this rate, 31 years must pass before they observe a billion tokens. Yet speculations <a href="https://patmcguinness.substack.com/p/gpt-4-details-revealed">about GPT-4</a> suggest it was trained on four orders of magnitude more tokens than a human observes in the process of learning. Closing this language learning efficiency gap (or more generally, sample efficiency gap) can have significant impact at multiple scales:</p>
<ul>
<li>Large models: Organizations have already scraped most of the internet and exhausted natural sources for high-quality tokens (e.g., arXiv, Wikipedia). To continue improving the largest models, better sample efficiency may be required.</li>
<li>Small models: In spite of significant advances, further improvements to sample efficiency may be required if we want small language models (e.g., at the 3B scale) to reach the same level of performance as frontier models like GPT-4.</li>
</ul>
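The arithmetic above can be checked with a short script. All the rates and the GPT-4 training-set size are the rough estimates quoted in the text, not measured values:

```python
import math

# Back-of-envelope check of the language learning efficiency gap.
SECONDS_PER_YEAR = 365 * 24 * 3600            # ~3.15e7

reading_tokens_per_sec = 300 / 60 * 1.3       # 300 wpm at 1.3 tokens/word = 6.5
speaking_tokens_per_sec = reading_tokens_per_sec / 2
average_tokens_per_sec = 1.0                  # averaged over sleep, play, etc.

# At one token per second, ~31 years pass before a billion tokens are seen.
years_to_billion = 1e9 / (average_tokens_per_sec * SECONDS_PER_YEAR)

# Speculated GPT-4 training-set size (order of magnitude only, unconfirmed).
gpt4_training_tokens = 1e13
gap_orders = math.log10(gpt4_training_tokens / 1e9)

print(f"reading rate: {reading_tokens_per_sec:.1f} tokens/s")
print(f"years to a billion tokens: {years_to_billion:.1f}")
print(f"gap: ~{gap_orders:.0f} orders of magnitude")
```

The estimate is deliberately coarse; changing the average rate by 2-3x does not change the orders-of-magnitude conclusion.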
<p>There are common arguments against the existence of a language learning efficiency gap, but they appear unconvincing.</p>
<h3>Maybe a better choice of tokens is all you need?</h3>
<p>This can’t be entirely ruled out, but the <a href="https://arxiv.org/pdf/2404.14219">Phi</a> <a href="https://arxiv.org/pdf/2412.08905">series</a> was an effort in this direction with the latest model trained on 10T tokens, implying there’s still a four-orders-of-magnitude efficiency gap between a human and a model which is still generally weaker than GPT-4 along most axes. It is possible that more sophisticated interactive data collection approaches could help close this gap, but this is largely unexplored.</p>
<h3>Maybe language learning is evolutionary?</h3>
<p>The chimpanzee-human split is estimated to have occurred between <a href="https://en.wikipedia.org/wiki/Chimpanzee%E2%80%93human_last_common_ancestor">5M and 13M years ago</a>, resulting in a <a href="https://www.amnh.org/exhibitions/permanent/human-origins/understanding-our-past/dna-comparing-humans-and-chimps">35 million base-pair difference</a>. The timeline for the appearance of language is estimated to be between <a href="https://en.wikipedia.org/wiki/Origin_of_language">2.5M and 150K years ago</a>. If we estimate divergence at 10M years ago and the appearance of language at 1M years ago, with a stable rate of evolution on both lineages, we get a crude upper bound of 35M/10/2 = 1.75M base pairs (or around 3.5M bits) on the amount of DNA encoding language inheritance. That’s around five orders of magnitude less than the number of parameters in a modern LLM, so this is not a viable explanation for the language efficiency gap.</p>
<p>On the other hand, it could be the case that the evolutionary lineage of humans evolved most language precursors long before actual language. The <a href="https://en.wikipedia.org/wiki/Human_genome#Size_of_the_human_genome">human genome has about 3.1B base pairs</a>, with about <a href="https://www.ninds.nih.gov/health-information/patient-caregiver-education/brain-basics-genes-work-brain#:~:text=At%20least%20a%20third%20of%20the%20approximately%2020%2C000,control%20how%20we%20move%2C%20think%2C%20feel%2C%20and%20behave.">one-third of proteins primarily expressed in the brain</a>. Using an estimate of 1B base pairs (around 2B bits) that are brain related, this is still around two orders of magnitude smaller than the LLMs in use today, so it’s not a viable explanation for the language learning efficiency gap either.  It <em>is</em> plausible that the structure of neurons in a human brain, which strongly favors sparse connections over the dense connections favored by a GPU, is advantageous for learning purposes.</p>
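The two DNA bounds above can be reproduced directly. Every input is the rough estimate quoted in the text (the 1e12 LLM parameter count is an assumed order of magnitude for a modern frontier model, not a published figure):

```python
import math

# Bound 1: base pairs plausibly encoding language inheritance.
basepair_diff = 35e6            # chimp-human base-pair difference
language_fraction = 1e6 / 10e6  # language era / time since divergence
language_basepairs = basepair_diff * language_fraction / 2  # human lineage only
language_bits = language_basepairs * 2  # 2 bits per base pair -> ~3.5M bits

# Bound 2: brain-related fraction of the whole genome.
brain_basepairs = 1e9           # ~1/3 of the 3.1B base-pair genome
brain_bits = brain_basepairs * 2  # ~2B bits

llm_params = 1e12  # assumed order of magnitude for a modern frontier LLM
print(f"language bound: {language_bits:.1e} bits, "
      f"~{math.log10(llm_params / language_bits):.1f} orders below LLM size")
print(f"brain bound: {brain_bits:.1e} bits, "
      f"~{math.log10(llm_params / brain_bits):.1f} orders below LLM size")
```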
<h3>Maybe human language learning is accelerated by multimodality?</h3>
<p>Humans benefit from a rich sensory world, notably through visual perception, which extends far beyond language. Estimating a “token count” for this additional information is difficult, however. For example, if someone is reading a book at 6.5 tokens per second, are they benefiting from all the extra sensory information? A <a href="https://www.cell.com/neuron/abstract/S0896-6273(24)00808-0?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS0896627324008080%3Fshowall%3Dtrue">recent paper</a> puts the rate at which information is consciously processed in a human brain at effectively 10 bits/second, which is only modestly more than the cross entropy of a language model.  More generously, we could work from the common saying that “a picture is worth a thousand words”, which is not radically different from techniques for encoding images into transformers.  Using this, we could estimate that extra modalities increase the number of tokens by three orders of magnitude, resulting in 1T tokens observed by age 31.  Given this, there is still an order-of-magnitude learning efficiency gap between humans and language models of the same class as GPT-4.</p>
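Under the generous 1000x multimodal multiplier, the remaining gap works out as follows (the GPT-4 token count is again the speculated order of magnitude, not a confirmed number):

```python
# Multimodal token budget by age 31, under the assumptions in the text.
seconds_by_31 = 31 * 365 * 24 * 3600        # ~1e9 seconds
language_tokens = seconds_by_31 * 1.0       # ~1B tokens at 1 token/s
multimodal_tokens = language_tokens * 1000  # "picture worth a thousand words"

gpt4_tokens = 1e13                          # speculated training-set size
remaining_gap = gpt4_tokens / multimodal_tokens

print(f"multimodal tokens by 31: {multimodal_tokens:.1e}")
print(f"remaining gap: ~{remaining_gap:.0f}x")
```

Even with the most generous accounting of sensory input, roughly an order of magnitude remains.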
<h3>Maybe the learning efficiency gap does not matter?</h3>
<p>In some domains, it may be possible to overcome the inefficiencies of a learning architecture by simply gathering more and more data as needed. At a scientific level, this is not a compelling argument, since understanding the fundamental limits of what is possible is the core purpose of science. Hence, this is a business argument, which may indeed be valid in some cases. A business response is that learning efficiency matters in domains where it is difficult or impossible to collect sufficient data: think of robot demonstrations, personalizing models, problems with long range structure, a universal translator encountering a new language, and so on. In addition, improving learning efficiency may lead to improvement in other forms of efficiency (e.g., memory and compute) via architectural improvements.</p>
<h2>Model size: The representational efficiency gap</h2>
<p>A second direction in which transformer-based models can be improved lies in model size, or representational efficiency. This is perhaps best illustrated by considering the problem of designing models or agents capable of physical or animal-like intelligence. This includes capabilities like 1) understanding one’s environment (perception); 2) moving oneself around (locomotion); and 3) planning to reach goals or accomplish basic tasks (e.g., navigation and manipulation). Naturally, this is very relevant if our goal is to build foundation models for embodied decision making.</p>
<p>The Portia spider has a brain one million times smaller than that of a human, yet it is <a href="https://en.wikipedia.org/wiki/Portia_(spider)#Intelligence">observed to plan a course of action and execute it successfully over durations as long as an hour</a>. Stated another way, it is possible to engage in significant physical intelligence behavior with 100M floats representing the neural connections and a modest gigaflop CPU capable of executing them in real time. This provides a strong case that much animal intelligence can be radically more representationally efficient than what has been observed in lingual domains, or yet implemented in software. A concrete question along these lines is:</p>
<p>Can we design a model with 100M floats that can effectively navigate and accomplish physical-intelligence tasks in the real world?</p>
<p>It is not clear whether there is an existing model of any size that can effectively do this. The most famous examples in this direction are game agents, which only function in relatively simple environments.</p>
<h3>Are transformer models for language representationally inefficient?</h3>
<p>While the discussion above concerns representational efficiency for physical intelligence, it is also interesting to consider representational efficiency for language.  That is, <em>are existing language models representationally efficient, or can similar capabilities be achieved with substantially smaller models?</em> On the one hand, it is possible that language is an inherently complex process to both produce and understand. On the other hand, it might be possible to represent human level language in a radically more size-efficient manner, as in the case of physical intelligence.</p>
<p>To this end, one interesting example is given by <a href="https://en.wikipedia.org/wiki/Alex_(parrot)">Alex</a>, a grey parrot that managed to learn and meaningfully use a modest vocabulary with a brain one-hundredth the size of a human brain by weight. If we accept the computational model of a neuron as a nonlinearity on a linear integration, Alex might have 1B neurons operating at 1T flops. Given Alex’s limited language ability, this isn’t constraining enough to decisively argue that language models that are substantially smaller than current models can be achieved. At the same time, it is plausible that most of Alex’s brain was not devoted to human language, offering some hope.</p>
<h2>The long-term memory and planning gap</h2>
<p>A third direction concerns developing models and agents suitable for domains that involve complex long-term interactions, which may necessitate the following capabilities:</p>
<p><strong>Memory</strong>: Effectively summarizing the history of interaction into a succinct representation and using it in context.</p>
<p><strong>Planning</strong>: Choosing the next actions or tokens deliberately to achieve a long range goal.</p>
<p>Recent advances like <a href="https://en.wikipedia.org/wiki/OpenAI_o1">O1</a> and <a href="https://en.wikipedia.org/wiki/DeepSeek#R1">R1</a> handle relatively short-range planning, but are significant steps in this vein.  Existing applications of transformer language models largely avoid long-term interactions, since they can deviate from instructions. To highlight why we might expect to improve this situation, note that humans manage to engage in coherent plans over years-long timescales. Human-level intelligence isn’t required for this, though, as many animals exhibit behaviors that require long-timescale memory and planning. For example, a squirrel with a brain less than one-hundredth the size of a human brain <a href="https://en.wikipedia.org/wiki/Squirrel#Behavior">stores food and reliably comes back</a> to it after months of experience. Restated in a transformer-relevant way, a squirrel can experience billions of intervening (and potentially distracting) visual tokens before recalling the location of a cache of food and returning to it. How can we develop competitive models and agents with this capability?</p>
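To make the "billions of visual tokens" claim concrete, here is a sketch under stated assumptions: one image per second and ~1000 tokens per image are illustrative guesses (the per-image figure borrows the "picture is worth a thousand words" rate used earlier), and six months is taken as the caching interval.

```python
# Intervening visual tokens between caching a nut and retrieving it.
months = 6
seconds = months * 30 * 24 * 3600   # ~1.6e7 seconds over six months
frames_per_second = 1               # assumed visual sampling rate
tokens_per_image = 1000             # assumed tokens per encoded image

intervening_tokens = seconds * frames_per_second * tokens_per_image
print(f"intervening visual tokens: {intervening_tokens:.1e}")
```

Any plausible choice of rates lands in the billions to tens of billions, far beyond current context lengths.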
<h3>Does it matter?</h3>
<p>A common approach to circumvent memory and planning limitations of existing models is to create an outer-level executor that uses the LLM as a subroutine, combined with other tools for memory or planning systems. These approaches tacitly acknowledge the limits of current architectures by offering an alternative solution. Historically, as with machine vision and speech recognition, it has always been more difficult to create a learning system that accomplishes the task of interest with end-to-end training, but doing so was worthwhile, as the results were superior. This pattern may repeat for long-term memory and planning, yielding better solutions.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>An AI Miracle Malcontent</title>
		<link>https://hunch.net/?p=13763005</link>
					<comments>https://hunch.net/?p=13763005#comments</comments>
		
		<dc:creator><![CDATA[John Langford]]></dc:creator>
		<pubDate>Wed, 05 Apr 2023 21:44:38 +0000</pubDate>
				<category><![CDATA[AI]]></category>
		<category><![CDATA[Deep]]></category>
		<category><![CDATA[generative]]></category>
		<category><![CDATA[Language]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<guid isPermaLink="false">https://hunch.net/?p=13763005</guid>

					<description><![CDATA[The stark success of OpenAI&#8217;s GPT4 model surprised me shifting my view from &#8220;really good autocomplete&#8221; (roughly inline with intuitions here) to a dialog agent exhibiting a significant scope of reasoning and intelligence. Some of the MSR folks did a fairly thorough study of capabilities which seems like a good reference. I think of GPT4 &#8230; <p class="link-more"><a href="https://hunch.net/?p=13763005" class="more-link">Continue reading<span class="screen-reader-text"> "An AI Miracle Malcontent"</span></a></p>]]></description>
					<content:encoded><![CDATA[<p>The stark success of OpenAI&#8217;s <a href="https://en.wikipedia.org/wiki/GPT-4">GPT4 model</a> surprised me, shifting my view from &#8220;really good autocomplete&#8221; (roughly in line with intuitions <a href="https://arxiv.org/abs/2301.06627">here</a>) to a dialog agent exhibiting a significant scope of reasoning and intelligence.  Some of the MSR folks did a <a href="https://arxiv.org/abs/2303.12712">fairly thorough study of capabilities</a> which seems like a good reference.  I think of GPT4 as an artificial savant: super-John capable in some language-centric tasks like style and summarization with impressive yet more limited abilities in other domains like spatial and reasoning intelligence.</p>
<p>And yet, I&#8217;m unhappy with mere acceptance because there is a feeling that a miracle happened.  How is this not a miracle, at least with hindsight?  And given this, it&#8217;s not surprising to see folks thinking about more miracles.  The difficulty with miracle thinking is that it has no structure upon which to reason for anticipation of the future, prepare for it, and act rationally.  Given that, I wanted to lay out my view in some detail and attempt to understand enough to de-miracle what&#8217;s happening and what may come next.</p>
<p><b>Deconstructing The Autocomplete to Dialog Miracle</b><br />
One of the ironies of the current situation is that an organization called &#8220;OpenAI&#8221; created AI and isn&#8217;t really open about how they did it.  That&#8217;s an interesting statement about economic incentives and focus.  Nevertheless, back when they were publishing, the <a href="https://arxiv.org/abs/2203.02155">Instruct GPT</a> paper suggested something interesting: that reinforcement learning on a generative model substrate was remarkably effective&#8212;good for 2 to 3 orders of magnitude improvement in the quality of response with a tiny (in comparison to language sources for next word prediction) amount of reinforcement learning. My best guess is that this was the first combination of 3 vital ingredients.</p>
<ol>
<li>Learning to predict the next word based on vast amounts of language data from the internet.  I have no idea how much, but wouldn&#8217;t be surprised if it&#8217;s a million lifetimes of reading generated by a billion people.  That&#8217;s a vast amount of information there with deeply intermixed details about the world and language.
<ol>
<li>Why not other objectives?  Well, they wanted something simple so they could maximize scaling.  There may indeed be room for improvement in choice of objective.</li>
<li>Why language? Language is fairly unique amongst information in that it&#8217;s the best expression of conscious thought.  There is thought without language (yes, I believe animals think in various ways), but you can&#8217;t really do language without thought.</li>
</ol>
</li>
<li>The use of a large deep transformer model (<a href="https://arxiv.org/pdf/2207.09238.pdf">pseudocode here</a>) to absorb all of this information.  Large here presumably implies training on many GPUs with both data and model parallelism.  I&#8217;m sure there are many fine engineering tricks here.  I&#8217;m unclear on the scale, but expect the answer is more than thousands and less than millions.
<ol>
<li>Why transformer models?  At a functional level, they embed &#8216;soft attention&#8217; (=ability to look up a value with a key in a gradient friendly way).  At an optimization level, they are GPU-friendly.</li>
<li>Why deep? The drive to minimize word prediction error in the context of differentiable depth creates a pressure to develop useful internal abstractions.</li>
</ol>
</li>
<li>Reinforcement learning on a small amount of data which &#8216;awakens&#8217; a dialog agent.  With the right prompt (=prefix language) engineering a vanilla large language model can address many tasks as the information is there, but it&#8217;s awkward and clearly not a general purpose dialog agent.  At the same time, the learned substrate is an excellent representation upon which to apply RL creating a more active agent while curbing an inherited tendency to mimic internet flamebait.
<ol>
<li>Why reinforcement learning?  One of the oddities of language is that there is more than one way of saying things.  Hence, the supervised learning view that there is a right answer and everything else is wrong sets up inherent conflicts in the optimization. Instead, &#8220;reinforcement learning from human feedback&#8221; pairs inverse reinforcement learning (to discover a reward function) with basic reinforcement learning (to achieve better performance).  What&#8217;s remarkable about this is that the two-step approach is counter to the <a href="https://en.wikipedia.org/wiki/Data_processing_inequality">data processing inequality</a>.</li>
</ol>
</li>
</ol>
<p>The overall impression that I&#8217;m left with is something like the &#8220;ghost of the internet&#8221;.  If you ask the internet for the answer to a question on the best forum available and get an answer, it might be in the ballpark of as useful and as correct as that which GPT4 provides (notably, in seconds).  <a href="https://www.amazon.com/AI-Revolution-Medicine-GPT-4-Beyond/dp/0138200130/ref=sr_1_1?crid=2JMRKC2V7HQW">Peter Lee&#8217;s book</a> on the application to medicine is pretty convincing.  There are pluses and minuses here&#8212;GPT4&#8217;s abstraction of language tasks like summarization and style appear super-human, or at least better than I can manage.  For commonly discussed content (e.g. medicine) it&#8217;s fairly solid, but for less commonly discussed content (say, <a href="https://bg.battletech.com/forums/fan-designs-rules/">Battletech fan designs</a>) it becomes sketchy as the internet gives out. There are obviously times when it errs (often egregiously in a fully confident way), but that&#8217;s also true in internet forums.  I specifically don&#8217;t trust GPT4 with math and often find its reasoning and abstraction abilities shaky, although it&#8217;s deeply impressive that they exist at all.  And driving a car is out because it&#8217;s a task that you can&#8217;t really describe.</p>
<p><b>What about the future?</b><br />
There&#8217;s been a great deal about the danger of AI discussed recently, and quite a mess of misexpectations about where we are.</p>
<ol>
<li>Is GPT4 and future variants the answer to [insert intelligence-requiring problem here]?  GPT4 seems most interesting as a language intelligence.  It&#8217;s clearly useful as an advisor or a brainstormer.  The meaning of &#8220;GPT5&#8221; isn&#8217;t clear, but I would expect substantial shifts in core algorithms/representations are necessary for mastering other forms of intelligence like memory, skill formation, information gathering, and optimized decision making.</li>
<li>Are generative models the end of consensual reality?  Human societies seem to have a systematic weakness in that people often prefer a consistent viewpoint even at the expense of fairly extreme rationalization.  That behavior in large language models is just looking at our collective behavior through a mirror.  Generative model development (both language and video) does have a real potential to worsen this. I believe we should be making real efforts as a society to harden and defend objective reality in multiple ways.  This is not specifically about AI, but it would address a class of AI-related concerns and improve society generally.</li>
<li>Is AI about to kill everyone? <a href="https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/">Yudkowsky&#8217;s editorial</a> gives the impression that a <a href="https://en.wikipedia.org/wiki/Terminator_(franchise)">Terminator style apocalypse</a> is just around the corner.  I&#8217;m skeptical about the short term (the next several years), but the longer term requires thought.
<ol>
<li>In the short term there are so many limitations of even GPT4 (even though it&#8217;s a giant advance) that I both lack the imagination to see a path to &#8220;everyone dies&#8221; and expect it would be suicidal for an AI as well.  GPT4, as an AI, is using the borrowed intelligence of the internet.  Without that source it&#8217;s just an amalgamation of parameters with no interesting capabilities.</li>
<li>For the medium term, I think there&#8217;s a credible possibility that drone warfare becomes ultralethal, in line with <a href="https://www.youtube.com/watch?v=M7mIX_0VK4g">this imagined future</a>.  You can already see drone warfare in the Ukraine-Russia war significantly increasing the lethality of a battlefield.  This requires some significant advances, but nothing seems outlandish.  Counterdrone technology development and limits on usage in line with other war machines seem prudent.</li>
<li>For the longer term, Vinge&#8217;s classic <a href="https://edoras.sdsu.edu/~vinge/misc/singularity.html">singularity essay</a> is telling here, as he lays out the inevitability of developing intelligence for competitive reasons.  Economists are often fond of pointing out how job creation has accompanied previous mechanization-induced job losses, and yet my daughter points out how we keep increasing the amount of schooling children must absorb to be capable members of society.  It&#8217;s not hard to imagine a desolation of jobs in a decade or two where AIs can simply handle almost all present-day jobs and most humans can&#8217;t skill up to be economically meaningful.  Our society is not prepared for this situation&#8212;it seems like a quite serious and possibly unavoidable outcome.  Positive models for a nearly-fully-automated society are provided by <a href="https://en.wikipedia.org/wiki/Star_Trek">Star Trek</a> and <a href="https://en.wikipedia.org/wiki/Iain_Banks">Iain Banks</a>, although science fiction is very far from a working proposal for a real society.</li>
<li>I&#8217;m skeptical about a <a href="https://en.wikipedia.org/wiki/The_Lawnmower_Man_(film)">Lawnmower Man</a> like scenario where a superintelligence suddenly takes over the world.  In essence, cryptographic barriers are plausibly real, even to a superintelligence.  As long as that&#8217;s so, the thing to watch out for is excessive concentrations of power without oversight.  We already have a functioning notion of super-human intelligence in <a href="https://en.wikipedia.org/wiki/Organizational_intelligence">organizational intelligence</a> and are familiar with techniques for restraining organizational intelligence into useful-for-society channels.  Starting with this and improving seems reasonable.</li>
</ol>
</li>
</ol>
]]></content:encoded>
					
					<wfw:commentRss>https://hunch.net/?feed=rss2&#038;p=13763005</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>ICML 2021 Invited Speakers &#8212; ML for Science</title>
		<link>https://hunch.net/?p=13762980</link>
					<comments>https://hunch.net/?p=13762980#comments</comments>
		
		<dc:creator><![CDATA[Ameet]]></dc:creator>
		<pubDate>Mon, 19 Jul 2021 15:54:42 +0000</pubDate>
				<category><![CDATA[Announcements]]></category>
		<category><![CDATA[Conferences]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[ICML2021]]></category>
		<category><![CDATA[Invited Speakers]]></category>
		<guid isPermaLink="false">https://hunch.net/?p=13762980</guid>

					<description><![CDATA[By: Stefanie Jegelka and Ameet Talwalkar (ICML21 Communication Chairs) With ICML 2021 underway, we wanted to briefly highlight the upcoming invited talks. A general theme of the invited talks this year is “machine learning for science.” The Program Chairs (Marina Meila and Tong Zhang) have invited world-renowned scientists from various disciplines to discuss their problems &#8230; <p class="link-more"><a href="https://hunch.net/?p=13762980" class="more-link">Continue reading<span class="screen-reader-text"> "ICML 2021 Invited Speakers &#8212; ML for Science"</span></a></p>]]></description>
										<content:encoded><![CDATA[
<p>By: Stefanie Jegelka and Ameet Talwalkar (ICML21 Communication Chairs)</p>



<p class="has-text-align-left">With ICML 2021 underway, we wanted to briefly highlight the upcoming invited talks. A general theme of the invited talks this year is “<em>machine learning for science</em>.” The Program Chairs (Marina Meila and Tong Zhang) have invited world-renowned scientists from various disciplines to discuss their problems and the corresponding machine learning challenges.  By exposing the machine learning community to these fascinating problems, we hope that we can help to further expand the applicability of machine learning to a wide range of scientific domains.&nbsp;</p>



<ul class="wp-block-list"><li><strong>Daphne Koller</strong> (Tuesday, July 20th at 8am PDT): Dr. Koller is a pioneer in the field of machine learning, and is currently the Founder and CEO of Insitro, which leverages machine learning for drug discovery. She was the Rajeev Motwani Professor of Computer Science at Stanford University, where she served on the faculty for 18 years. She was the co-founder, co-CEO and President of Coursera, and the Chief Computing Officer of Calico, an Alphabet company in the healthcare space. She received the MacArthur Foundation Fellowship in 2004, was awarded the ACM Prize in Computing in 2008, and was recognized as one of TIME Magazine’s 100 most influential people in 2012.<br /></li><li><strong>Xiao Cunde and Dahe Qin</strong> (Tuesday, July 20th at 8pm PDT): Dr. Cunde is a glaciologist and Deputy Director of the Institute of the Climate System, Chinese Academy of Meteorological Sciences. He has worked in the fields of polar glaciology and meteorology since 1997. His major research focus has been ice core studies relating to paleo-climate and paleo-environment, and present day cold region meteorological and glaciological processes that impact environmental and climatic changes. Dr. Qin is the Former Director of the China Meteorological Administration. He is a glaciologist and the first Chinese ever to cross the South Pole. He was a member of the 1989 International Cross South Pole Expedition and has published numerous ground-breaking articles, using evidence gathered from his Antarctic expeditions.<br /></li><li><strong>Esther Duflo</strong> (Wednesday, July 21st at 8am PDT): Dr. Duflo is the Abdul Latif Jameel Professor of Poverty Alleviation and Development Economics in the Department of Economics at MIT and a co-founder and co-director of the Abdul Latif Jameel Poverty Action Lab (J-PAL). In her research, she seeks to understand the economic lives of the poor, with the aim to help design and evaluate social policies. 
She has worked on health, education, financial inclusion, environment and governance. In 2019, she received a Nobel Prize in Economic Sciences “for their experimental approach to alleviating global poverty”. In particular, she and co-authors have introduced a new approach to obtaining reliable answers about the best ways to fight global poverty.<br /></li><li><strong>Edward Chang</strong> (Wednesday, July 21st at 8pm PDT): Dr. Chang is a Professor in the Department of Neurological Surgery at the UCSF Weill Institute for Neurosciences. He is a neurosurgeon and uses machine learning to understand brain functions. His research focuses on the brain mechanisms for speech, movement and human emotion. He co-directs the Center for Neural Engineering and Prostheses, a collaborative enterprise of UCSF and UC Berkeley. The center brings together experts in engineering, neurology and neurosurgery to develop state-of-the-art biomedical technology to restore function for patients with neurological disabilities such as paralysis and speech disorders.<br /></li><li><strong>Cecilia Clementi</strong> (Thursday, July 22nd at 8am PDT):&nbsp; Dr. Clementi is a Professor of Chemistry, and Chemical and Biomolecular Engineering, and Senior Scientist in the Center for Theoretical Biological Physics at Rice University, and an Einstein Fellow at FU Berlin. She researches strategies to study complex biophysical processes on long timescales, and she is an expert in the simulation of biomolecules using large-scale ML. Her group designs multiscale models, adaptive sampling approaches, and data analysis tools, and uses both data-driven methods and theoretical formulations.</li></ul>



<p>To register for the conference and check out these talks, please visit: <a href="https://icml.cc/">https://icml.cc/</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://hunch.net/?feed=rss2&#038;p=13762980</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>ALT Highlights &#8211; An Interview with Joelle Pineau</title>
		<link>https://hunch.net/?p=13762948</link>
					<comments>https://hunch.net/?p=13762948#comments</comments>
		
		<dc:creator><![CDATA[GautamKamath]]></dc:creator>
		<pubDate>Fri, 23 Apr 2021 14:06:29 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<guid isPermaLink="false">https://hunch.net/?p=13762948</guid>

					<description><![CDATA[Welcome to ALT Highlights, a series of blog posts spotlighting various happenings at the recent conference ALT 2021, including plenary talks, tutorials, trends in learning theory, and more! To reach a broad audience, the series will be disseminated as guest posts on different blogs in machine learning and theoretical computer science. John has been kind &#8230; <p class="link-more"><a href="https://hunch.net/?p=13762948" class="more-link">Continue reading<span class="screen-reader-text"> "ALT Highlights &#8211; An Interview with Joelle Pineau"</span></a></p>]]></description>
										<content:encoded><![CDATA[
<p>Welcome to ALT Highlights, a series of blog posts spotlighting various happenings at the recent conference <a href="http://algorithmiclearningtheory.org/alt2021/">ALT 2021</a>, including plenary talks, tutorials, trends in learning theory, and more! To reach a broad audience, the series will be disseminated as guest posts on different blogs in machine learning and theoretical computer science. John has been kind enough to host the first post in the series. This initiative is organized by the <a href="https://www.let-all.com/">Learning Theory Alliance</a>, and overseen by <a href="http://www.gautamkamath.com/">Gautam Kamath</a>. All posts in ALT Highlights are indexed on the official <a href="https://www.let-all.com/blog/2021/04/20/alt-highlights-2021/" data-type="URL" data-id="https://www.let-all.com/blog/2021/04/20/alt-highlights-2021/">Learning Theory Alliance blog</a>.</p>



<p>The first post is an interview with <a href="https://www.cs.mcgill.ca/~jpineau/">Joelle Pineau</a>, by <a href="https://sites.google.com/view/michal-moshkovitz">Michal Moshkovitz</a> and <a href="https://www.ttic.edu/students/">Keziah Naggita</a>.</p>



<hr class="wp-block-separator"/>



<p>We would like you to meet Dr. Joelle Pineau, an outstanding leader in AI based in Montreal, Canada.<br /></p>



<div class="wp-block-columns is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex">
<div class="wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow">
<p><strong>Name:</strong> Joelle Pineau</p>



<p><strong>Institutions:</strong> Joelle Pineau is a faculty member at <a href="https://mila.quebec/en/"><strong>Mila</strong></a> and an Associate Professor and William Dawson Scholar at the School of Computer Science at <strong>McGill University</strong>, where she co-directs the Reasoning and Learning Lab. She is a senior fellow of the Canadian Institute for Advanced Research (<strong>CIFAR</strong>), a co-managing director of <strong>Facebook AI Research</strong>, and the Montreal, Canada lab director. Learn more about Joelle <a href="https://www.cs.mcgill.ca/~jpineau/">here</a> and watch her talk <a href="https://www.youtube.com/watch?v=e6n-jM1f5_4">here</a>.</p>
</div>



<div class="wp-block-column is-vertically-aligned-center is-layout-flow wp-block-column-is-layout-flow">
<div class="wp-block-image"><figure class="aligncenter is-resized"><img fetchpriority="high" decoding="async" src="https://lh6.googleusercontent.com/eoz2hCbpOExOeup_zbB4QwWzvVYrOBM40SIpEvK0VTMd4dFD-eQQj-WoXrP3_b6-QHmc_5wJnobCJFxfAml_8hNrgS2y9luEGJK68WK-pf1XGm-QCNJS2-Mn1481s9uf3yaUaYNl" alt="" width="255" height="338"/></figure></div>



<p></p>
</div>
</div>



<p><strong>Reinforcement Learning (RL)</strong></p>



<p><strong>How and why did you choose to work in reinforcement learning? &nbsp; What are the things that inspired you to choose health as a domain of application for your RL work?</strong></p>



<p>I started working in reinforcement learning at the beginning of my PhD in robotics at CMU.&nbsp; Quite honestly, I was delighted by the elegance of the mathematical formulation.&nbsp; It also had links to topics I had studied previously (in supervised learning &amp; operations research).&nbsp; It was also useful for decision-making, which was complementary to the state tracking &amp; prediction problems studied by many other members of my lab at the time.</p>



<p>I started working on applications to health-care early in my career as a faculty at McGill.&nbsp; I was curious to explore practical applications, and found some colleagues in health-care who had some interesting decision-making problems with the right characteristics.</p>



<p><strong>How would you recommend a newcomer enter the RL field?&nbsp; For RL researchers interested in safety, is there some literature you can recommend as a starting point?</strong></p>



<p>Get familiar with the basic mathematical formalism &amp; algorithms, and try your hand at easy simulation cases.&nbsp; For RL and safety, the literature is small and quite recent, so it&#8217;s easy enough to get started.&nbsp; Work on Constrained MDPs (Altman, 1999) is a good starting point.&nbsp; See also the work on Seldonian RL by Philip Thomas and colleagues.</p>



<p><strong>In your talk you mentioned applications of RL to different domains. What do you think is the main achievement of RL?&nbsp;</strong></p>



<p>The AlphaGo result was very impressive!&nbsp; Recently, the work on using RL to control the flight of the Loon balloons is also quite impressive.</p>



<p><strong>What are the big open problems in RL?&nbsp;</strong></p>



<p>Efficient exploration continues to be a major challenge.&nbsp; Stability of learning, even when the data is non-stationary (e.g. due to policy change), is also very important to address.&nbsp; In my talk I also highlighted the importance of developing methods for RL with responsible properties (safety, security, transparency, etc.) as a major open problem.</p>



<p><strong>Collaborations</strong></p>



<p><strong>Based on your work in neurostimulation, it appears that people from different fields of expertise were involved.&nbsp;</strong></p>



<p>Yes, this was a close collaboration between researchers in CS (my own lab) and researchers in neuroscience, with expertise in electrophysiology.</p>



<p><strong>What advice would you give researchers in finding interdisciplinary collaborators?</strong></p>



<p>This collaboration was literally started by me picking up the phone and calling a colleague in neuroscience to propose the project.&nbsp; I then wrote a grant proposal and obtained funding to start the project.&nbsp; More generally, these days it&#8217;s actually very easy for researchers in machine learning to find interdisciplinary collaborators.&nbsp; Giving talks, offering office hours, speaking to colleagues you meet in random events &#8211; I&#8217;ve had literally dozens of projects proposed to me in the last few years, from all sorts of disciplines.</p>



<p><strong>What are some of the best ways to foster successful collaborations tackling work cutting across multiple disciplines?</strong></p>



<p>Spend time understanding the problems from the point of view of your collaborator, and commit to solving *that* problem.&nbsp; Don&#8217;t walk in with your own hammer (or pre-selected set of techniques) expecting to find a problem to show off your techniques. Genuine curiosity about the other field is very valuable!&nbsp; Don&#8217;t hesitate to read the literature &#8211; don&#8217;t expect your collaborator to share all the needed knowledge.&nbsp; Co-supervising a student together is also often an effective way of working closely together.</p>



<p><strong>Academia, industry and everything in between&nbsp;</strong></p>



<p><strong>During the talk, you mentioned variance in freedom of research for theoreticians in industry versus academia. Could you elaborate more about this? Are there certain personality traits or characteristics more likely to make someone more successful in academia versus industry?</strong></p>



<p>For certain more theoretical work, it can be a long time until the impact and value of the work is realized.&nbsp; This is perhaps harder to support in industry, which is better suited to appreciating shorter-term impact.&nbsp; Another big difference is that in academia, professors work closely with students and junior researchers, and should expect to dedicate a good amount of time and energy to training &amp; developing them (even if it means the work might move along a bit slower).&nbsp; In industry, a researcher will most often work with more senior researchers, and the project is likely to move along faster (also because no one is taking or teaching courses).</p>



<p><strong>How do you balance leadership, for example at FAIR, with student advising at McGill, research [CIFAR, FAIR, McGill, Mila], and personal life?&nbsp;</strong></p>



<p>It&#8217;s useful to have clarity about your priorities.&nbsp; Don&#8217;t let other people dictate what these are &#8211; you should decide for yourself.&nbsp; And then spend your time according to this.&nbsp; I enjoy my work at FAIR a lot, I also really enjoy spending time with my grad students at McGill/Mila, and of course I really enjoy time with my family &amp; friends.&nbsp; So I try to keep a good balance between all of this. I also try to be clear &amp; transparent with other people about my availability &amp; priorities, so they can plan accordingly.</p>



<p><strong>What do you think distinguishes the mindset of an extraordinary researcher?</strong></p>



<p>To be a strong researcher, it helps to be very curious and to genuinely want to understand and find out new knowledge. The ability to find new connections between ideas and concepts is also useful.&nbsp; For scientific research, you also need discipline and good methodology, and a commitment to deep understanding (rather than &#8220;proving&#8221; whatever hypothesis you hold).&nbsp; Frankly, I also don&#8217;t think we need to further cultivate the myth of the &#8220;extraordinary researcher&#8221;.&nbsp; Research is primarily a collective institution, where many people contribute, in ways small and big, and it is through this collective work that we achieve big discoveries and breakthroughs!</p>
]]></content:encoded>
					
					<wfw:commentRss>https://hunch.net/?feed=rss2&#038;p=13762948</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
			</item>
		<item>
		<title>What is the Right Response to Employer Misbehavior in Research?</title>
		<link>https://hunch.net/?p=13762892</link>
					<comments>https://hunch.net/?p=13762892#comments</comments>
		
		<dc:creator><![CDATA[John Langford]]></dc:creator>
		<pubDate>Mon, 14 Dec 2020 20:28:29 +0000</pubDate>
				<category><![CDATA[Conferences]]></category>
		<category><![CDATA[Research]]></category>
		<guid isPermaLink="false">http://hunch.net/?p=13762892</guid>

					<description><![CDATA[I enjoyed my conversations with Timnit when she was in the MSR-NYC lab, so her situation has been on my mind throughout NeurIPS. Piecing together what happened second-hand is always tricky, but Jeff Dean&#8217;s account and Timnit&#8217;s agree on a basic outline. Timnit and others wrote a paper for FAccT which was approved for submission &#8230; <p class="link-more"><a href="https://hunch.net/?p=13762892" class="more-link">Continue reading<span class="screen-reader-text"> "What is the Right Response to Employer Misbehavior in Research?"</span></a></p>]]></description>
										<content:encoded><![CDATA[<p>I enjoyed my conversations with Timnit when she was in the <a href="https://www.microsoft.com/en-us/research/lab/microsoft-research-new-york/">MSR-NYC</a> lab, so <a href="https://twitter.com/timnitGebru/status/1334341991795142667">her situation</a> has been on my mind throughout <a href="https://neurips.cc/">NeurIPS</a>.</p>
<p>Piecing together what happened second-hand is always tricky, but <a href="https://docs.google.com/document/d/1f2kYWDXwhzYnq8ebVtuk9CqQqz7ScqxhSIxeYGrWjK0/edit">Jeff Dean&#8217;s account</a> and Timnit&#8217;s agree on a basic outline.  Timnit and others wrote a paper for <a href="https://facctconference.org/">FAccT</a> which was approved for submission by the normal internal review process, then later unapproved.  Timnit threatened to leave unless various details about this unapproval were clarified.  Google then declared her resigned.</p>
<p><a href="https://www.google.com/search?source=hp&amp;q=resign+definition">The definition of resign</a> makes it clear an employee does it, not an employer. Since that apparently never happened, this is a mischaracterized firing.  It also seems quite credible that the unapproval process was highly unusual based on various reactions I&#8217;ve seen and my personal expectations of what researchers would typically tolerate.</p>
<p>This frankly looks bad to me and quite a number of other people.  Aside from the plain facts, this is also consistent with racism and/or sexism given the roles of those involved.  Google itself now faces <a href="https://googlewalkout.medium.com/standing-with-dr-timnit-gebru-isupporttimnit-believeblackwomen-6dadc300d382">a substantial rebellion amongst employees</a>.</p>
<p>However, I worry about consequences to some of these reactions.</p>
<ol>
<li>Some people suggest not reviewing papers from Google-based researchers.  As a personal decision, this is making a program chair&#8217;s difficult job harder. As a communal decision, this would devastate the community since a substantial fraction are employed at Google.  These people did not make this decision and many actively support Timnit there (at some risk to their job) so a mass-punishment approach seems deeply counterproductive.</li>
<li>Others have suggested that Google should not be a sponsor at major machine learning conferences.  Since all of these are run as nonprofits, the lost grants will either be made up by increasing costs for everyone or reducing grants to students and diversity sponsorship.  Reduced grants in particular seem deeply counterproductive.</li>
<li>Some have suggested that all industry research in general is bad.  Industrial research varies <i>substantially</i> from place to place, perhaps much more so than in academia.  As an example, Microsoft Research has no similar internal review process for publications.  Overall, the stereotyping inherent in this view makes me uncomfortable and there are some real advantages to working in industry in terms of ability to concentrate on research or effecting real change.</li>
</ol>
<p>It&#8217;s critical to understand that the strength of the research community is incredibly valuable to the community.  It&#8217;s not hard to imagine a different arrangement where all industrial research is proprietary, with only a few major companies operating competitive  internal research teams.  This sort of structure exists in some other fields, often to the detriment of anyone other than a major company.  Researchers at those companies can&#8217;t as easily switch jobs and researchers outside of those companies may lack the context to even contribute to the state of the art.  The field itself progresses slower and in a more secretive way due to lack of sharing.  Anticommunal acts based on  mass ostracization or abandonment could shift our structure from the current relatively happy equilibrium where people from all over can participate, learn, and contribute towards a much worse situation.</p>
<p>This is not to say that there are no consequences.  The substantial natural consequences of a significant moral-impacting event will play out regardless of anything else.  The marketplace for top researchers is quite competitive so for many of them uncertainty about the feasibility of publication, the disposition and competence of senior leadership, or constraints on topics tips the balance towards other offers.  That may be severe this year, since this all blew up as the recruiting season was launching and I expect it to last over many years unless some significant action is taken.  In this sense, I expect all the competitors may be looking forward to recruiting more than they were previously and the cost of not resolving the conflict here in a better way may be much, much higher than just about any other course of action.  This is not particularly hypothetical&#8212;I saw it play out over the years after <a href="https://hunch.net/?p=2798">the silicon valley lab was cut</a> as the brain drain of other great researchers in competitive areas was severe for several years afterwards.</p>
<p>I don&#8217;t think a general answer to the starting question is possible, since it will always depend on circumstances.  Even this instance is complex with actions that could cause unintuitive adverse impacts on unanticipated parts of our community or damage the community as a whole.  I personally hope that the considerable natural consequences here form a substantial deterrent to misbehavior in the long term. Please think this through when considering your actions here.</p>
<p>Edits: tweaked conclusion wording a bit with advice from <a href="https://twitter.com/reshamas">reshamas</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://hunch.net/?feed=rss2&#038;p=13762892</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Experiments with the ICML 2020 Peer-Review Process</title>
		<link>https://hunch.net/?p=13762807</link>
					<comments>https://hunch.net/?p=13762807#comments</comments>
		
		<dc:creator><![CDATA[stiv]]></dc:creator>
		<pubDate>Tue, 01 Dec 2020 16:04:01 +0000</pubDate>
				<category><![CDATA[Conferences]]></category>
		<category><![CDATA[Empirical]]></category>
		<category><![CDATA[General]]></category>
		<category><![CDATA[Research]]></category>
		<category><![CDATA[Reviewing]]></category>
		<guid isPermaLink="false">http://hunch.net/?p=13762807</guid>

					<description><![CDATA[This post is cross-listed&#160;on the CMU ML blog. The International Conference on Machine Learning (ICML) is a flagship machine learning conference that in 2020 received 4,990 submissions and managed a pool of 3,931 reviewers and area chairs. Given that the stakes in the review process are high &#8212; the careers of researchers are often significantly &#8230; <p class="link-more"><a href="https://hunch.net/?p=13762807" class="more-link">Continue reading<span class="screen-reader-text"> "Experiments with the ICML 2020 Peer-Review Process"</span></a></p>]]></description>
										<content:encoded><![CDATA[
<p><em>This post is cross-listed&nbsp;<a href="https://blog.ml.cmu.edu/2020/12/01/icml2020exp/">on the CMU ML blog</a></em>.</p>



<p>The International Conference on Machine Learning (ICML) is a flagship machine learning conference that in 2020 received 4,990 submissions and managed a pool of 3,931 reviewers and area chairs. Given that the stakes in the review process are high &#8212; the careers of researchers are often significantly affected by the publications in top venues &#8212; we decided to scrutinize several components of the peer-review process in a series of experiments. Specifically, in conjunction with the ICML 2020 conference, we performed three experiments that target: resubmission policies, management of reviewer discussions, and reviewer recruiting. In this post, we summarize the results of these studies.</p>



<h2 class="wp-block-heading">Resubmission Bias</h2>



<p><strong>Motivation. </strong>Several leading ML and AI conferences have recently started requiring authors to declare previous submission history of their papers. In part, such measures are taken to reduce the load on reviewers by discouraging resubmissions without substantial changes. However, this requirement poses a risk of bias in reviewers’ evaluations.</p>



<p><strong>Research question.</strong> Do reviewers get biased when they know that the paper they are reviewing was previously rejected from a similar venue?</p>



<p><strong>Procedure. </strong>We organized an auxiliary conference review process with 134 junior reviewers from 5 top US schools and 19 papers from various areas of ML. We assigned each participant 1 paper and asked them to review the paper as if it were submitted to ICML. Unbeknownst to participants, we allocated them to a test or control condition uniformly at random:</p>



<p><em>Control. </em>Participants review the papers as usual.</p>



<p><em>Test.</em> Before reading the paper, participants are told that the paper they review is a resubmission.</p>



<p><strong>Hypothesis.</strong> We expect that if the bias is present, reviewers in the test condition should be harsher than in the control.&nbsp;</p>



<p><strong>Key findings. </strong>Reviewers give almost one point lower score (95% Confidence Interval: [0.24, 1.30]) on a 10-point <a href="https://en.wikipedia.org/wiki/Likert_scale">Likert item</a> for the overall evaluation of a paper when they are told that a paper is a resubmission. In terms of narrower review criteria, reviewers tend to underrate “Paper Quality” the most.</p>
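For readers who want to sanity-check an interval like the one reported above, here is a minimal sketch of a normal-approximation confidence interval for the difference between two group means. The scores below are synthetic placeholders, not the experiment's data, and the normal approximation is a simplification of whatever analysis the paper actually uses:

```python
import math
import random

def diff_ci(control, test, z=1.96):
    """Normal-approximation 95% CI for mean(control) - mean(test)."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):  # sample variance
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    d = mean(control) - mean(test)
    se = math.sqrt(var(control) / len(control) + var(test) / len(test))
    return d - z * se, d + z * se

# Illustrative synthetic scores on a 10-point Likert item (not the real data):
# the test condition is shifted down to mimic a resubmission penalty.
random.seed(0)
control = [random.gauss(6.0, 1.5) for _ in range(67)]
test = [random.gauss(5.2, 1.5) for _ in range(67)]
lo, hi = diff_ci(control, test)
print(f"difference CI: [{lo:.2f}, {hi:.2f}]")
```

If the interval excludes zero, as the reported [0.24, 1.30] does, the score gap is unlikely to be sampling noise alone.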



<p><strong>Implications. </strong>Conference organizers need to evaluate a trade-off between envisaged benefits such as the hypothetical reduction in the number of submissions and the potential unfairness introduced to the process by the resubmission bias. One option to reduce the bias is to postpone the moment in which the resubmission signal is revealed until after the initial reviews are submitted. This finding must also be accounted for when deciding whether the reviews of rejected papers should be publicly available on systems like <a href="https://openreview.net">openreview.net</a> and others.&nbsp;</p>



<p><strong>Details. </strong><a href="http://arxiv.org/abs/2011.14646">http://arxiv.org/abs/2011.14646</a></p>



<h2 class="wp-block-heading">Herding Effects in Discussions</h2>



<p><strong>Motivation. </strong>Past research on human decision making shows that group discussion is susceptible to various biases related to social influence. For instance, it is documented that the decision of a group may be biased towards the opinion of the group member who proposes the solution first. We call this effect <a href="https://en.wikipedia.org/wiki/Herd_mentality">herding</a> and note that, in peer review, herding (if present) may result in undesirable artifacts in decisions as different area chairs use different strategies to select the discussion initiator.</p>



<p><strong>Research question. </strong>Conditioned on a set of reviewers who actively participate in a discussion of a paper, does the final decision of the paper depend on the order in which reviewers join the discussion?</p>



<p><strong>Procedure. </strong>We performed a randomized controlled trial on herding in ICML 2020 discussions that involved about 1,500 papers and 2,000 reviewers. In peer review, the discussion takes place after the reviewers submit their initial reviews, so we know prior opinions of reviewers about the papers. With this information, we split a subset of ICML papers into two groups uniformly at random and applied different discussion-management strategies to them:&nbsp;</p>



<p><em>Positive Group</em>. First ask the most positive reviewer to start the discussion, then later ask the most negative reviewer to contribute to the discussion.</p>



<p><em>Negative Group</em>. First ask the most negative reviewer to start the discussion, then later ask the most positive reviewer to contribute to the discussion.</p>



<p><strong>Hypothesis.</strong> The only difference between the strategies is the order in which reviewers are supposed to join the discussion. Hence, if the herding is absent, the strategies will not impact submissions from the two groups disproportionately. However, if the herding is present, we expect that the difference in the order will introduce a difference in the acceptance rates across the two groups of papers.</p>



<p><strong>Key findings. </strong>The analysis of outcomes of approximately 1,500 papers does not reveal a statistically significant difference in acceptance rates between the two groups of papers. Hence, we find no evidence of herding in the discussion phase of peer review.</p>
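As a rough illustration of how a difference in acceptance rates between two randomized groups can be checked, here is a minimal two-proportion z-test sketch. The counts are hypothetical (the paper linked below reports the actual analysis):

```python
import math

def two_prop_ztest(acc_a, n_a, acc_b, n_b):
    """Two-sided z-test for a difference between two acceptance rates."""
    p_a, p_b = acc_a / n_a, acc_b / n_b
    p = (acc_a + acc_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Hypothetical counts for ~1,500 papers split into two groups (not the real data).
z, p = two_prop_ztest(acc_a=170, n_a=750, acc_b=160, n_b=750)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A large p-value here would mean the data is consistent with no herding effect, which is the paper's conclusion.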



<p><strong>Implications. </strong>Although herding has been found to occur in other group decision-making settings, discussion in peer review does not seem to be susceptible to this effect, and hence no specific measures to counteract herding in peer-review discussions appear to be needed.</p>



<p><strong>Details. </strong><a href="https://arxiv.org/abs/2011.15083">https://arxiv.org/abs/2011.15083</a></p>



<h2 class="wp-block-heading">Novice Reviewer Recruiting</h2>



<p><strong>Motivation. </strong>&nbsp;A surge in the number of submissions received by leading ML and&nbsp; AI conferences has challenged the sustainability of the review process by increasing the burden on the pool of qualified reviewers. Leading conferences have been addressing the issue by relaxing the seniority bar for reviewers and inviting very junior researchers with limited or no publication history, but there is mixed evidence regarding the impact of such interventions on the quality of reviews.&nbsp;</p>



<p><strong>Research question. </strong>Can very junior reviewers be recruited and guided such that they enlarge the reviewer pool of leading ML and AI conferences without compromising the quality of the process?</p>



<p><strong>Procedure. </strong>We implemented a twofold approach towards managing novice reviewers:</p>



<p><em>Selection.</em> We evaluated reviews written in the aforementioned auxiliary conference review process involving 134 junior reviewers, and invited 52 of these reviewers who produced the strongest reviews to join the reviewer pool of ICML 2020. Most of these 52 “experimental” reviewers come from a population not considered by the conventional reviewer-recruiting process used in ICML 2020.<br /><br /><em>Mentoring</em>. In the actual conference, we provided these experimental reviewers with a senior researcher as a point of contact who offered additional mentoring.</p>



<p><strong>Hypothesis.</strong> If our approach succeeds in bringing strong reviewers into the pool, we expect experimental reviewers to perform at least as well as reviewers from the main pool on various metrics, including the quality of reviews as rated by area chairs.</p>



<p><strong>Key findings. </strong>A combination of the selection and mentoring mechanisms results in reviews of quality at least comparable to, and on some metrics even higher-rated than, those from the conventional pool: 30% of reviews written by the experimental reviewers exceeded the expectations of area chairs (compared to only 14% for the main pool).</p>
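The selection step can be pictured as a simple top-k ranking of auxiliary-process reviewers by review quality. The sketch below is purely illustrative: the reviewer ids, scores, and the single scalar "quality score" are assumptions, not the actual selection criterion used:

```python
def select_reviewers(scores, k):
    """scores: dict mapping reviewer id -> quality rating.
    Returns the k reviewer ids with the highest ratings."""
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Hypothetical quality ratings for six auxiliary-process reviewers.
aux_scores = {f"r{i}": q for i, q in enumerate([3.1, 4.8, 2.2, 4.5, 3.9, 4.9])}
invited = select_reviewers(aux_scores, k=3)
print(invited)
```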



<p><strong>Implications. </strong>The experiment received positive feedback from participants who appreciated the opportunity to become a reviewer in ICML 2020 and from authors of papers used in the auxiliary review process who received a set of useful reviews without submitting to a real conference. Hence, we believe that a promising direction is to replicate the experiment at a larger scale and evaluate the benefits of each component of our approach.</p>



<p><strong>Details.</strong> <a href="http://arxiv.org/abs/2011.15050">http://arxiv.org/abs/2011.15050</a></p>



<h2 class="wp-block-heading">Conclusion</h2>



<p>All in all, the experiments we conducted in ICML 2020 reveal some useful and actionable insights about the peer-review process. We hope that some of these ideas will help to design a better peer-review pipeline in future conferences.</p>



<p>We thank ICML area chairs, reviewers, and authors for their tremendous efforts. We would also like to thank the Microsoft Conference Management Toolkit (CMT) team for their continuous support and implementation of features necessary to run these experiments, the authors of papers contributed to the auxiliary review process for their responsiveness, and participants of the resubmission bias experiment for their enthusiasm. Finally, we thank Ed Kennedy and Devendra Chaplot for their help with designing and executing the experiments.</p>



<p>The post is based on the works by  Ivan Stelmakh, Nihar B. Shah, Aarti Singh, Hal Daumé III, and Charvi Rastogi.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://hunch.net/?feed=rss2&#038;p=13762807</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
			</item>
		<item>
		<title>HOMER: Provable Exploration in Reinforcement Learning</title>
		<link>https://hunch.net/?p=13762683</link>
					<comments>https://hunch.net/?p=13762683#comments</comments>
		
		<dc:creator><![CDATA[DipendraMisra]]></dc:creator>
		<pubDate>Tue, 21 Jul 2020 16:59:11 +0000</pubDate>
				<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Reinforcement]]></category>
		<guid isPermaLink="false">http://hunch.net/?p=13762683</guid>

					<description><![CDATA[Last week at ICML 2020,&#160;Mikael Henaff,&#160;Akshay Krishnamurthy,&#160;John Langford&#160;and I had a paper on a new reinforcement learning (RL) algorithm that solves three key problems in RL: (i) global exploration, (ii) decoding latent dynamics, and (iii) optimizing a given reward function. Our ICML poster is&#160;here. The paper is a bit mathematically heavy in nature so this &#8230; <p class="link-more"><a href="https://hunch.net/?p=13762683" class="more-link">Continue reading<span class="screen-reader-text"> "HOMER: Provable Exploration in Reinforcement Learning"</span></a></p>]]></description>
										<content:encoded><![CDATA[
<p>Last week at ICML 2020,&nbsp;<a href="http://www.mikaelhenaff.com/" target="_blank" rel="noreferrer noopener">Mikael Henaff</a>,&nbsp;<a href="https://people.cs.umass.edu/~akshay/" target="_blank" rel="noreferrer noopener">Akshay Krishnamurthy</a>,&nbsp;<a href="https://hunch.net/~jl/" target="_blank" rel="noreferrer noopener">John Langford</a>&nbsp;and I had a paper on a new reinforcement learning (RL) algorithm that solves three key problems in RL: (i) global exploration, (ii) decoding latent dynamics, and (iii) optimizing a given reward function. Our ICML poster is&nbsp;<a href="https://icml.cc/virtual/2020/poster/6535" target="_blank" rel="noreferrer noopener">here</a>.</p>



<p>The paper is a bit mathematically heavy in nature so this post is an attempt to distill the key findings. We will also be following up soon with a new codebase release (more on it later).</p>



<h1 class="wp-block-heading" id="27f4"><strong>Rich-observation RL landscape</strong></h1>



<p>Consider the combination lock problem shown below. The agent starts in the state s<sub>1a</sub> or s<sub>1b</sub> with equal probability. After taking h-1 actions, the agent will be in either state s<sub>ha</sub>, s<sub>hb</sub>, or s<sub>hc</sub>. The agent can take 10 different actions. The agent observes a high-dimensional observation (focus circle) instead of the underlying state, which is latent. There is a big treasure chest that one can get after taking 100 actions. We view the states with subscript “a” or “b” as “<em>good states</em>” and those with subscript “c” as “<em>bad states</em>”. You can reach the treasure chest at the end only if you remain in good states. If you reach any bad state, then you can never make it to the treasure chest.</p>



<figure class="wp-block-image size-large"><img decoding="async" src="https://miro.medium.com/max/1400/1*PdxtcVJPhvq6fhit9Rxfig.png" alt=""/></figure>



<p>The environment makes it difficult to reach the big treasure chest in three ways. First, the environmental dynamics are such that if you are in a good state, then only 1 out of 10 possible actions will let you reach the two good states at the next time step with equal probability (the good action changes from state to state). Every other action in good states, and all actions in bad states, put you into bad states at the next time step, from which it is impossible to recover. Second, it misleads myopic agents by giving a small bonus for transitioning from a good state to a bad state (the small treasure chest). This means that a locally optimal policy is to transition to one of the bad states as quickly as possible. Third, the agent never directly observes which state it is in. Instead, it receives a high-dimensional, noisy observation from which it must decode the true underlying state.</p>
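<p>To make these dynamics concrete, here is a minimal Python sketch of the combination-lock environment over latent states only (no rich observations). The class name, interface, and reward values are illustrative assumptions, not the paper&#8217;s exact specification.</p>

```python
import random

class CombinationLock:
    """Sketch of the combination-lock MDP: at step h the agent is in a
    good state ("a" or "b") or the absorbing bad state ("c")."""

    def __init__(self, horizon=100, num_actions=10, seed=0):
        self.horizon = horizon
        self.num_actions = num_actions
        rng = random.Random(seed)
        # One unknown good action per (time step, good state) pair.
        self.good_action = {
            (h, s): rng.randrange(num_actions)
            for h in range(horizon) for s in ("a", "b")
        }

    def reset(self):
        self.h = 0
        self.state = random.choice(("a", "b"))  # start in s_1a or s_1b
        return self.state

    def step(self, action):
        reward = 0.0
        if self.state in ("a", "b") and action == self.good_action[(self.h, self.state)]:
            self.state = random.choice(("a", "b"))  # stay on the good path
        else:
            if self.state in ("a", "b"):
                reward = 0.1  # small misleading bonus for falling into a bad state
            self.state = "c"  # absorbing bad state
        self.h += 1
        if self.h == self.horizon and self.state in ("a", "b"):
            reward = 10.0  # big treasure chest at the end
        return self.state, reward, self.h >= self.horizon

# Following the (unknown in practice) good actions reaches the treasure:
env = CombinationLock(horizon=5)
s = env.reset()
total, done = 0.0, False
while not done:
    s, r, done = env.step(env.good_action[(env.h, s)])
    total += r
print(total)  # 10.0
```

Any single wrong action pays the small 0.1 bonus and then traps the agent in the bad state forever, which is what defeats myopic exploration.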



<p>It is easy to see that if we take actions uniformly at random, then the probability of reaching the big treasure chest at the end is 1/10<sup>100</sup>. The number 10<sup>100</sup> is called&nbsp;<a href="https://en.wikipedia.org/wiki/Googol" target="_blank" rel="noreferrer noopener">Googol</a>&nbsp;and is larger than the current estimate of the number of elementary particles in the universe. Furthermore, since transitions are stochastic, one can show that no fixed sequence of actions performs well either.</p>
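<p>A quick sanity check of that number, computed exactly with rationals (a back-of-the-envelope sketch, not from the paper):</p>

```python
from fractions import Fraction

# At each of the 100 steps, only 1 of the 10 actions keeps the agent on
# the good path, so a uniformly random policy succeeds with probability:
p_success = Fraction(1, 10) ** 100

assert p_success == Fraction(1, 10 ** 100)  # exactly one in a googol
print(float(p_success))  # on the order of 1e-100
```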



<p>A key aspect of the rich-observation setting is that the agent receives observations instead of the latent state. The observations are stochastically sampled from an&nbsp;<em>infinitely</em>&nbsp;large space conditioned on the state. However, observations are rich enough to&nbsp;<em>enable decoding</em>&nbsp;of the latent state that generates them.</p>



<h1 class="wp-block-heading" id="c761"><strong>What does provable RL mean?</strong></h1>



<p>A provable RL algorithm means that for any given numbers&nbsp;<strong>e</strong>,&nbsp;<strong>d</strong>&nbsp;in (0, 1), we can learn an&nbsp;<strong><em>e</em></strong><em>-optimal policy</em>&nbsp;with probability at least&nbsp;<strong>1-d</strong>&nbsp;using a number of episodes that is polynomial in relevant quantities (state size, horizon, action space, 1/e, 1/d, etc.). By an&nbsp;<strong>e</strong>-optimal policy we mean a policy whose value (expected total return) is at most&nbsp;<strong>e</strong>&nbsp;less than the optimal return.</p>



<p>Thus, a provable RL algorithm is capable of learning a close-to-optimal policy with high probability (where “high” and “close” can be made arbitrarily stringent), provided the assumptions it makes are satisfied.</p>



<h1 class="wp-block-heading" id="7fa7"><strong>Why should I care if my algorithm is provable?</strong></h1>



<p>There are two main advantages of being able to show your algorithm is provable:</p>



<ol class="wp-block-list"><li>We can only test an algorithm on a finite number of environments (in practice somewhere between 1 and 20). Without guarantees, we don’t know how it will behave in a new environment. This matters especially if failure in a new environment can result in high real-world costs (e.g., in health or financial domains).</li><li>If a provable algorithm fails to consistently give the desired result, this can be attributed to the failure of at least one of its assumptions. A developer can then look at the assumptions and try to determine which ones are violated, and either intervene to fix them or determine that the algorithm is not appropriate for the problem.</li></ol>



<h1 class="wp-block-heading" id="33fa"><strong>HOMER</strong></h1>



<p>Our algorithm addresses what is known as the&nbsp;<a href="https://arxiv.org/pdf/1901.09018.pdf" target="_blank" rel="noreferrer noopener">Block MDP</a>&nbsp;setting. In this setting, a small number of discrete states generates a potentially infinite number of high dimensional observations.</p>



<p>For each time step, HOMER learns a state decoder function, and a set of exploration policies. The state decoder maps high-dimensional observations to a small set of possible latent states, while the exploration policies map observations to actions which will lead the agent to each of the latent states. We describe HOMER below.</p>



<ul class="wp-block-list"><li>For a given time step, we first learn a decoder for mapping observations to a small set of values using contrastive learning. This procedure works as follows: collect a transition by following a randomly sampled exploration policy from the previous time step until that time step, and then taking a single random action. We use this procedure to sample two transitions shown below.</li></ul>



<figure class="wp-block-image size-large"><img decoding="async" src="https://miro.medium.com/max/1400/1*mWMGmHX9R-gBH4RuPkuAZQ.gif" alt=""/></figure>



<ul class="wp-block-list"><li>We then flip a coin; if we get heads then we store the transition&nbsp;<strong>(x1, a1, x’1)</strong>, and otherwise we store the&nbsp;<em>imposter</em>&nbsp;transition&nbsp;<strong>(x1, a1, x’2)</strong>. We train a supervised classifier to predict if a given transition&nbsp;<strong>(x, a, x’)</strong>&nbsp;is real or not.<br />This classifier has a special structure which allows us to recover a decoder for time step h.</li></ul>



<figure class="wp-block-image size-large"><img decoding="async" src="https://miro.medium.com/max/1100/1*_0irtizbezKiGq_XkOGmMw.png" alt=""/></figure>



<ul class="wp-block-list"><li>Once we have learned the state decoder, we will learn an exploration policy for every possible value of the decoder (which we call&nbsp;<em>abstract states</em>&nbsp;as they are related to the latent state space). This step is standard and can be done using many different approaches such as model-based planning, model-free methods, etc. In the paper we use an existing model-free algorithm called&nbsp;<a rel="noreferrer noopener" href="https://www.cs.cmu.edu/~schneide/bagnellPSDP.pdf" target="_blank">policy search by dynamic programming (PSDP)</a>&nbsp;by Bagnell et al. 2004.</li></ul>



<figure class="wp-block-image size-large"><img decoding="async" src="https://miro.medium.com/max/1400/1*TKdAjp42Uprz0ca4koTdyg.png" alt=""/></figure>



<ul class="wp-block-list"><li>We have now recovered a decoder and a set of exploration policies for this time step. We repeat this procedure for every time step, learning a decoder and exploration policies for the whole latent state space. Finally, we can easily optimize any given reward function using any provable planner like PSDP or a model-based algorithm. (The algorithm actually recovers the latent state space up to an inherent ambiguity by combining two different decoders, but I’ll leave that out to avoid overloading this post.)</li></ul>
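<p>The contrastive data-collection step described above can be sketched in a few lines of Python. The function name and interface are hypothetical; <code>sample_transition</code> stands in for the roll-in procedure (follow a randomly sampled exploration policy, then take one random action).</p>

```python
import random

def contrastive_batch(sample_transition, n_pairs, rng=random):
    """Build labeled data for the contrastive step (a sketch).

    `sample_transition` is assumed to return one real transition
    (x, a, x_next).  For each pair we flip a coin: heads keeps the real
    transition (label 1); tails swaps in the next observation from an
    independent transition, producing an imposter (label 0)."""
    data = []
    for _ in range(n_pairs):
        x1, a1, xp1 = sample_transition()
        _, _, xp2 = sample_transition()  # independent draw for the imposter
        if rng.random() < 0.5:
            data.append(((x1, a1, xp1), 1))  # real transition (x1, a1, x'1)
        else:
            data.append(((x1, a1, xp2), 0))  # imposter transition (x1, a1, x'2)
    return data

# Toy sampler standing in for the real roll-in procedure:
toy = lambda: (random.gauss(0, 1), random.randrange(10), random.gauss(0, 1))
batch = contrastive_batch(toy, 1000, random.Random(0))
print(len(batch))  # 1000
```

A supervised classifier trained on this data to predict real vs. imposter is what yields the decoder, per the special structure mentioned above.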



<h1 class="wp-block-heading" id="0126"><strong>Key findings</strong></h1>



<p>HOMER achieves the following three properties:</p>



<ol class="wp-block-list"><li>The contrastive learning procedure gives us the right state decoding (we recover it up to some inherent ambiguity, but I won’t cover that here).<br /></li><li>HOMER can learn a set of exploration policies to reach every latent state.<br /></li><li>HOMER can learn a nearly-optimal policy for&nbsp;<em>any</em>&nbsp;given reward function with high probability. Further, this can be done after the exploration phase has been completed.</li></ol>



<h1 class="wp-block-heading" id="ba71"><strong>Failure cases of prior RL algorithms</strong></h1>



<p>There are many RL algorithms in the literature and many new ones are proposed every month. It is difficult to do justice to this vast literature in a blog post, and equally difficult to situate HOMER within it. However, we show that several very commonly used RL algorithms fail to solve the above problem while HOMER succeeds. One of these is the&nbsp;<a href="https://arxiv.org/abs/1707.06347" target="_blank" rel="noreferrer noopener">PPO</a>&nbsp;algorithm, a widely used policy gradient algorithm. In spite of its popularity, PPO is not designed for challenging exploration problems and easily fails. Researchers have made efforts to alleviate this with ad-hoc proposals such as using prediction errors, counts based on auto-encoders, etc. The best alternative approach we found is called&nbsp;<a href="https://arxiv.org/abs/1810.12894" target="_blank" rel="noreferrer noopener">Random Network Distillation</a>&nbsp;(RND), which measures the novelty of a state based on prediction errors for a fixed randomly initialized network.</p>



<p>Below we show how PPO+RND fails to solve the above problem while HOMER succeeds. We simplify the problem by using a grid pattern where rows represent the states (the top two rows represent “good” states and the bottom row represents “bad” states), and columns represent timesteps.</p>



<figure class="wp-block-embed-youtube wp-block-embed is-type-video is-provider-youtube wp-embed-aspect-16-9 wp-has-aspect-ratio"><div class="wp-block-embed__wrapper">
https://youtu.be/tjxl4kpd7Uw
</div></figure>



<p>We present counter-examples for other algorithms in the paper (see Section 6&nbsp;<a href="https://arxiv.org/pdf/1911.05815.pdf" target="_blank" rel="noreferrer noopener">here</a>). These counterexamples allow us to find the limits of prior work without expensive empirical computation on many domains.</p>



<h1 class="wp-block-heading" id="4c49"><strong>How can I use HOMER?</strong></h1>



<p>We will be providing the code soon as part of a new package release called cereb-rl. You can find it here:&nbsp;<a href="https://github.com/cereb-rl" target="_blank" rel="noreferrer noopener">https://github.com/cereb-rl</a>&nbsp;and join the discussion here:&nbsp;<a href="https://gitter.im/cereb-rl" target="_blank" rel="noreferrer noopener">https://gitter.im/cereb-rl</a></p>
]]></content:encoded>
					
					<wfw:commentRss>https://hunch.net/?feed=rss2&#038;p=13762683</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
			</item>
		<item>
		<title>Critical issues in digital contact tracing</title>
		<link>https://hunch.net/?p=13762603</link>
					<comments>https://hunch.net/?p=13762603#comments</comments>
		
		<dc:creator><![CDATA[John Langford]]></dc:creator>
		<pubDate>Sun, 19 Apr 2020 03:00:33 +0000</pubDate>
				<category><![CDATA[Coronavirus]]></category>
		<guid isPermaLink="false">http://hunch.net/?p=13762603</guid>

					<description><![CDATA[I spent the last month becoming a connoisseur of digital contact tracing approaches since this seems like something where I might be able to help. Many other people have been thinking along similar lines (great), but I also see several misconceptions that even smart and deeply involved people are making. For the following a key &#8230; <p class="link-more"><a href="https://hunch.net/?p=13762603" class="more-link">Continue reading<span class="screen-reader-text"> "Critical issues in digital contact tracing"</span></a></p>]]></description>
										<content:encoded><![CDATA[<p>I spent the last month becoming a connoisseur of digital contact tracing approaches since this seems like something where I might be able to help.  Many other people have been thinking along similar lines (great), but I also see several misconceptions that even smart and deeply involved people are making.</p>
<p>For the following a key distinction to understand is between proximity and location approaches.   In proximity approaches (such as <a href="https://github.com/DP-3T/documents">DP3T</a>, <a href="https://www.tcn-coalition.org/">TCN</a>, <a href="https://pact.mit.edu/">MIT PACT(*)</a>, <a href="https://www.apple.com/covid19/contacttracing/">Apple</a> or one of the <a href="https://arxiv.org/abs/2004.03544">UW PACT(*)</a> protocols which I am involved in) smartphones use <a href="https://en.wikipedia.org/wiki/Bluetooth_Low_Energy">Bluetooth low energy</a> and possibly ultrasonics to discover other smartphones nearby.  Location approaches (such as <a href="https://www.media.mit.edu/projects/safepaths/overview/">MIT Safe Paths</a> or <a href="https://techcrunch.com/2020/03/18/israel-passes-emergency-law-to-use-mobile-data-for-covid-19-contact-tracing/">Israel</a>) instead record the absolute location of the device based on gps, cell tower triangulation, or wifi signals.</p>
<p><b>Location traces are both poor quality and intrinsically identifying</b><br />
Many people conflate a phone&#8217;s ability to determine where it is with the ability to do so with high precision.  This is typically incorrect.  Common healthcare guidance for possible contact is &#8220;within 2 meters for 10 minutes&#8221; while location data is often off by 10-100 meters, with accuracy varying by which location methodology is in use.  As an example, approximately everyone in Manhattan may be within 100 meters of someone who later tested positive for COVID-19. Given this inaccuracy, I expect users of a system based on location crossing to simply turn it off due to the large number of false positives.</p>
<p>These location traces, even though they are crude, are also highly identifying.  When going about your normal pre-pandemic life, you move from location X to Y to Z.   Typically no one else goes from X to Y to Z in the same timeframe (clocks are typically very accurate).  If you test positive and make your trace available to help suppress the virus, a store owner with a video camera and a credit card record might de-anonymize you and accuse you of killing someone they care about.  Given the stakes here, preserving as much anonymity as possible is critical for convincing people to release the information which is needed to control the virus.</p>
<p>Given this, approaches which upload the location data of users seem likely to have reduced adoption and many false positives.  While some governments are choosing to use all location data on an involuntary basis like <a href="https://techcrunch.com/2020/03/18/israel-passes-emergency-law-to-use-mobile-data-for-covid-19-contact-tracing/">Israel</a>, the lack of effectiveness compared to proximity based approaches and the draconian compromise of civil liberties are worrisome.</p>
<p><b>Location traces can be useful in a privacy-preserving way</b><br />
Understanding the above, people often conclude that location traces are subsumed by alternatives.  That&#8217;s not true.  Location approaches can be made very private by simply never allowing a location trace leave the personal device.  While this might feel contradictory to epidemiological success, it&#8217;s actually extremely helpful in at least two ways.</p>
<ol>
<li>People have a pretty poor memory, so when they test positive and someone calls them up to do a contact tracing interview, having a location trace on their phone can be incredibly useful in jogging their memory.  Using the location trace this way allows the manual contact tracing process to be much more complete.  It can also be made much faster by allowing infected people to prefill much of a contact interview form before they get a call.</li>
<li>The virus is inherently very localized, so public health authorities often want to quickly talk to people at location X or warn people to stay away from location Y until it is cleaned. This can be strongly enabled by on-device location traces.  The phone can download all the public health messages in a region and check automatically which are relevant to the phone&#8217;s location trace, surfacing those as needed to the user.  This provides <i>more</i> power than crossing location traces.  A message of &#8220;If you were at store X on April 16th, please email w@y.z&#8221; allows people to not respond if they went to store V next door.</li>
</ol>
<p>Both of these capabilities are a part of the <a href="https://arxiv.org/abs/2004.03544">UW PACT protocols</a> I worked on for this reason.</p>
<p><b>Proximity-only approaches have an x<sup>2</sup> problem</b></p>
<p>When people abandon location-based approaches, it&#8217;s in favor of proximity-based approaches.  For any proximity protocol to work, both the infected person and the contact must be running it, implying there are two ways for it to fail to be useful.<br />
<img decoding="async" src="images/square_problem.png" alt="illustration of x*x" width="600"/><br />
To get a sense of what is necessary, consider the reproduction number of the coronavirus.  Estimates vary but a reproduction number of 2.5 is reasonable.   That is, the virus might infect  2.5 new people per infected person on average in the absence of countermeasures.  To keep an infection with a base reproduction number of 2.5 from exponentiating, it is necessary to reduce the reproduction number to 1 which can be done when 60% of contacts are discovered, assuming (optimistically) no testing error and perfect isolation of discovered contacts before they infect anyone else.</p>
<p>To reach 60% you need 77.5% of people to download and run proximity protocols.  This is impossible in many places where smartphones are owned by fewer than 77.5% of the population.  Even in places where it&#8217;s possible it&#8217;s difficult to imagine reaching that level of usage without it being a mandatory part of the operating system that you are forced to use.  Even then, subpopulations without smartphones are not covered.  The square problem gets worse at lower levels of adoption.  At 10% adoption (which corresponds to a hugely popular app), only 1% of contacts can be discovered via this mechanism.  Despite the smallness, informing 1% of contacts does have real value in about the same sense that if someone loaned you money with a 1%/week interest rate you would call them a loan shark.  At the same time, this is only 1/60th of a solution to getting the reproduction number below 1.</p>
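<p>The arithmetic above can be sketched in a few lines of Python (an illustrative calculation under the optimistic assumptions just stated; function names are mine):</p>

```python
import math

def discovered_fraction(adoption):
    # Both the infected person and the contact must run the protocol,
    # so the chance a given contact is covered is adoption squared.
    return adoption ** 2

def required_adoption(r0):
    # To stop exponential growth, a 1 - 1/r0 fraction of contacts must
    # be found (assuming no testing error and perfect isolation).
    return math.sqrt(1 - 1 / r0)

print(round(required_adoption(2.5), 3))     # 0.775 -> ~77.5% adoption needed
print(round(discovered_fraction(0.10), 2))  # 0.01  -> 1% of contacts at 10% adoption
```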
<p>Hence, people advocating for proximity approaches must either hope for pervasive mandatory use (which will still miss subcommunities without smartphones) or accept that proximity approaches are only a part of the picture.</p>
<p>This quadratic structure also implies that the number of successful proximity tracing protocols will be either 0 or 1 in any geographic region.  Given that Apple/Google are building a protocol into their OSes, that&#8217;s the candidate for the possible 1 in most of the world once it becomes available(**).</p>
<p>This quadratic structure is difficult to avoid.  For example, if location traces are crossed with location traces, the same issue comes up.  Similarly for proximity tracing, you could imagine recording &#8220;wild&#8221; bluetooth beacons and then reporting them to avoid the quadratic structure.  This however unavoidably reveals contacts publicly which can then cause the positive person to be revealed publicly.</p>
<p>Interestingly, traditional manual contact tracing does <i>not</i> suffer from the quadratic problem.  Hence approaches (discussed above) which augment and benefit from manual contact tracing have a linear value structure, which matters enormously with lower levels of adoption.</p>
<p><b>What works?</b><br />
The primary thrust of contact tracing needs to be manual, as that is what has worked in countries (like South Korea) that suppressed large outbreaks.  Purely digital approaches don&#8217;t seem like a credible solution due to issues discussed above.  Hybrid approaches with smartphone-based apps can help by complementing manual contact tracing and perhaps via proximity approaches.  Getting there requires high levels of adoption, which implies trust is a critical commodity.  In addition to navigating the issues above, projects need to be open source, voluntary, useful, and strongly respect privacy (the <a href="https://www.aclu.org/report/aclu-white-paper-principles-technology-assisted-contact-tracing">ACLU recommendations</a> are good here).  This is what the <a href="https://covidsafe.cs.washington.edu/">CovidSafe</a> project is aimed at in implementing the UW PACT protocols.  Projects not navigating the above issues as well are less credible in my understanding.</p>
<p>An acknowledgement: many people have affected my thinking through this process, particularly those on the UW PACT paper and CovidSafe projects.</p>
<p>(*) I have no idea how the name collision occurred. We started using PACT <a href="https://github.com/PACT-protocols/PACT">here</a>, 3 weeks ago, and circulated drafts to many people including a subset of the MIT PACT group before putting it on arxiv.</p>
<p>(**) The Apple protocol is a bit worrisome as development there is not particularly open and I have a concern about the crypto protocol.  The <a href="https://covid19-static.cdn-apple.com/applications/covid19/current/static/contact-tracing/pdf/ContactTracing-CryptographySpecification.pdf">Tracing Key on page 5</a>, if acquired via hack or subpeona, allows you to prove the location of a device years after the fact.  This is not epidemiologically required and there are other protocols without this weakness.  Edit: The new version of their protocol addresses this issue.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://hunch.net/?feed=rss2&#038;p=13762603</wfw:commentRss>
			<slash:comments>68</slash:comments>
		
		
			</item>
		<item>
		<title>What is the most effective policy response to the new coronavirus pandemic?</title>
		<link>https://hunch.net/?p=13762539</link>
					<comments>https://hunch.net/?p=13762539#comments</comments>
		
		<dc:creator><![CDATA[John Langford]]></dc:creator>
		<pubDate>Tue, 17 Mar 2020 18:45:02 +0000</pubDate>
				<category><![CDATA[Coronavirus]]></category>
		<guid isPermaLink="false">http://hunch.net/?p=13762539</guid>

					<description><![CDATA[Disclaimer: I am not an epidemiologist, but there is an interesting potentially important pattern in the data that seems worth understanding. World healthcare authorities appear to be primarily shifting towards Social Distancing. However, there is potential to pursue a different strategy in the medium term that exploits a vulnerability of this disease: the 5 day &#8230; <p class="link-more"><a href="https://hunch.net/?p=13762539" class="more-link">Continue reading<span class="screen-reader-text"> "What is the most effective policy response to the new coronavirus pandemic?"</span></a></p>]]></description>
										<content:encoded><![CDATA[<p>Disclaimer: I am not an epidemiologist, but there is an interesting  potentially important pattern in the data that seems worth understanding.</p>
<p>World healthcare authorities appear to be primarily shifting towards <a href="https://en.wikipedia.org/wiki/Social_distancing">Social Distancing</a>.  However, there is potential to pursue a different strategy in the medium term that exploits a vulnerability of this disease: the <a href="https://annals.org/aim/fullarticle/2762808/incubation-period-coronavirus-disease-2019-covid-19-from-publicly-reported">5 day incubation time</a> is much  longer than a <a href="https://www.nytimes.com/reuters/2020/03/13/world/europe/13reuters-health-coronavirus-roche.html">4 hour detection time</a>.  This vulnerability is real&#8212;it has proved exploitable at scale in South Korea and in China outside of Hubei.</p>
<p>Exploiting this vulnerability requires:</p>
<ol>
<li>A sufficient capacity of rapid tests be available.  Sufficient here is perhaps 30 times the number of true new cases per day based on <a href="https://en.wikipedia.org/wiki/COVID-19_testing">South Korea&#8217;s testing rate</a>.</li>
<li>The capacity to rapidly trace the contacts of confirmed positive cases.  This is both highly labor intensive and absurdly cheap compared to <a href="https://www.nytimes.com/2020/03/16/business/economy/coronavirus-us-economy-shutdown.html?action=click&amp;module=Spotlight&amp;pgtype=Homepage">shutting down the economy</a>.</li>
<li>Effective quarantining of positive and suspect cases. This could be in home, with the quarantine extended to the entire family.   It could also be done in a hotel (&#8230; which are pretty empty these days), or in a hospital.</li>
</ol>
<p>Where Test/Trace/Quarantine are working, the number of cases/day have declined empirically.  Furthermore, this appears to be a radically superior strategy where it can be deployed.  I&#8217;ll review the evidence, discuss the other strategies and their consequences, and then discuss what can be done.</p>
<p><b>Evidence for Test/Trace/Quarantine</b><br />
The TTQ strategy works when it effectively catches a 1 &#8211; 1 / <a href="https://en.wikipedia.org/wiki/Basic_reproduction_number">reproduction number</a> fraction of cases.  The reproduction number is not precisely known although discovering 90% of cases seems likely effective and 50% of cases seems likely ineffective based on public data.</p>
<p>How do you know what fraction of cases are detected?  A crude measure can be formed by comparing detected cases / mortality across different countries.  Anyone who dies from pneumonia these days should be tested for COVID-19 so the number of deaths is a relatively trustworthy statistic.  If we suppose the ratio of true cases to mortality is fixed, then the ratio of observed cases to mortality allows us to estimate the fraction of detected cases.  For example, if the true ratio between infections and fatalities is 100 while we observe 30, then the detection rate is 30%.</p>
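<p>The crude measure just described amounts to a one-line calculation. Here is a sketch, where the function name is mine and the 1% infection-fatality default is purely illustrative:</p>

```python
def detection_rate(observed_cases, deaths, infection_fatality_rate=0.01):
    # If the true ratio of infections to deaths is fixed at
    # 1 / infection_fatality_rate, the observed cases-to-deaths ratio
    # reveals what fraction of cases were detected.
    observed_ratio = observed_cases / deaths
    true_ratio = 1 / infection_fatality_rate
    return observed_ratio / true_ratio

# The example from the text: a true ratio of 100, an observed ratio of 30.
print(detection_rate(observed_cases=3000, deaths=100))  # 0.3, i.e. 30% detected
```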
<p>There are many caveats to this analysis (see below).  Nevertheless, this ratio seems to provide real information which is useful in thinking about the future.   Drawing data from the <a href="https://github.com/CSSEGISandData/COVID-19">Johns Hopkins COVID-19 time series</a>, and plotting we see:<br />
<img decoding="async" src="images/cumulative_cases_over_cumulative_deaths.png"/></p>
<p>The arrows here represent the progression of time by days with time starting at the first recorded death.  The X axis here is the ratio between cumulative observed cases and cumulative observed deaths. Countries that are able and willing to test widely have progressions on the right while those that are unable or unwilling to test widely are on the left.  Note here that the X axis is on a log scale allowing us to see small variations in the ratio when the ratio is small and large variations in the ratio when the ratio is large.</p>
<p>The Y axis here is the number of cases/day.  For a country to engage in effective Test/Trace/Quarantine, it must effectively test, which the X axis is measuring.  Intuitively, we expect countries that test effectively to follow up with Trace and Quarantine, and we expect this to result in a reduced number of cases per day.  This is exactly what is observed.  Note that we again use a log scale for the Y axis due to the enormous differences in numbers.</p>
<p>There are several things you can read from this graph that make sense when you consider the dynamics.</p>
<ol>
<li>China excluding Hubei and South Korea had outbreaks which did not exceed the hospital capacity since the arrows start moving up and then loop back down around a 1% fatality rate.</li>
<li>The United States has a growing outbreak and a growing testing capacity.  Comparing with China-excluding-Hubei and South Korea&#8217;s outbreak, only a 1/4-1/10th fraction of the cases are likely detected.  Can the United States expand capacity fast enough to keep up with the growth of the epidemic?</li>
<li>Looking at Italy, you can see evidence of an overwhelmed healthcare system as the fatality rate escalates.  There is also some hope here, since the effects of the Italian lockdown are possibly starting to show in the new daily cases.</li>
<li>Germany is a strange case with an extremely large ratio.  It looks like there is evidence that Germany is starting to control their outbreak, which is hopeful and aligned with our expectations.</li>
</ol>
<p>The creation of this graph is fully automated and it&#8217;s easy to graph things for any country in the Johns Hopkins dataset.  I created a <a href="https://github.com/JohnLangford/coronavirus">github repository</a> with the code.  Feel free to make fun of me for using C++ as a scripting language <img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
<p>You can also understand some of the limitations of this graph by thinking through the statistics and generation process.</p>
<ol>
<li>Mortality is a delayed statistic.   Apparently, it&#8217;s about a week delayed in the case of COVID-19.  Given this, you expect to see the ratio generate loops when an outbreak occurs and then is controlled.  South Korea and China-excluding-Hubei show this looping structure, returning to a ratio of near 100.</li>
<li>Mortality is a small statistic, and a small statistic in the denominator can make the ratio unstable.  When mortality is relatively low, we expect to see quite a variation.  Checking each progression, you see wide ratio variations initially, particularly in the case of the United States.</li>
<li>Mortality may vary from population to population.  It&#8217;s almost surely dependent on the age distribution and health characteristics of the population and possibly other factors as well.  Germany&#8217;s ratio is notably large here.</li>
<li>Mortality is not a fixed variable, but rather dependent on the quality of care.  A reasonable approximation of this is that every &#8220;critical&#8221; case dies without intensive care support.  Hence, we definitely do not expect this statistic to hold up when/where the healthcare system is overwhelmed, as it is in Italy. This is also the reason why I excluded Hubei from the China data.</li>
</ol>
<p><b>Lockdown</b><br />
The only other strategy known to work is a &#8220;lockdown&#8221; where nearly everyone stays home nearly all the time, as first used in <a href="https://en.wikipedia.org/wiki/2019%E2%80%9320_coronavirus_pandemic#China">Hubei</a>. This characterization is simplistic&#8212;in practice such a quarantine comes with many other measures as well.  This can work very effectively&#8212;today the number of new cases in Hubei is in the 10s.</p>
<p>The lockdown approach shuts down the economy fast and hard.  Most people can&#8217;t work, so they can&#8217;t make money, so they can&#8217;t buy things, so the people who make things can&#8217;t make money, so they go broke, etc&#8230;   This is strongly reflected in the stock market&#8217;s reaction to the escalating pandemic.  If the lockdown approach is used for long, most people and companies are destined for bankruptcy.  If a lockdown approach costs 50% of GDP, then a Test/Trace/Quarantine approach costing only a few percent of GDP seems incredibly cheap in comparison.</p>
<p>The lockdown approach is also extremely intrusive.  It&#8217;s akin to collective punishment in that it harms the welfare of everyone, regardless of their disease status.  Many people&#8217;s daily lives fundamentally depend on moving around&#8212;for example, people using dialysis.</p>
<p>Despite this, the lockdown approach is being taken up everywhere that cases are overwhelming or threaten to overwhelm hospitals, because the alternative (discussed next) is even worse.  One advantage of the lockdown approach is that it can be used <i>now</i>, while the Test/Trace/Quarantine approach requires more organizing.  It&#8217;s the best bad option when the Test/Trace/Quarantine capacity is exceeded, or to bridge the time until that capacity becomes available.</p>
<p>If/when/where Test/Trace/Quarantine becomes available, I expect it to be rapidly adopted.  <a href="https://www.imperial.ac.uk/media/imperial-college/medicine/sph/ide/gida-fellowships/Imperial-College-COVID19-NPI-modelling-16-03-2020.pdf">This new study</a> (page 11) points out that repeated lockdowns are close to permanent lockdowns in effect.</p>
<p><b>Herd Immunity</b><br />
Some countries have <a href="https://www.marketwatch.com/story/its-going-to-be-daunting-uk-considers-opposite-approach-to-the-uss-by-allowing-more-people-to-contract-coronavirus-2020-03-14">considered skipping measures to control the virus</a> on the theory that enough people eventually acquire individual immunity after recovery that the disease dies out.  This approach invites severe consequences.</p>
<p>A key issue here is: How bad is the virus?  The mortality rate in China excluding Hubei and in South Korea is only about 1%.  From this, some people appear to erroneously reason that the impact of the virus is &#8220;only&#8221; having 1% of the roughly 50% of the population that gets infected die, heavily weighted towards older people.  This reasoning is fundamentally flawed.</p>
<p>The mortality rate is <i>not</i> a fixed number, but rather dependent on the quality of care.  In particular, because most countries have very few intensive care units, an uncontrolled epidemic effectively implies all but a vanishing fraction of sick people only benefit from home stay quality of care.  How many people could die with home stay quality of care?  Essentially everyone who would otherwise require intensive care at a hospital.  In China, that meant <a href="https://www.who.int/docs/default-source/coronaviruse/who-china-joint-mission-on-covid-19-final-report.pdf">6.1%</a> (see page 12).  Given this, the sound understanding is that COVID-19 generates mortality a factor of 2-3 worse than the <a href="https://en.wikipedia.org/wiki/Spanish_flu">1918 influenza pandemic</a>, though modern healthcare, when not overwhelmed, might instead make it half as bad.  Note here that the fatality rate in Hubei (4.6% of known cases, which might be 3% of total cases) does not fully express how bad this would be, due to the fraction of infected people remaining low and a surge of healthcare support from the rest of China.</p>
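<p>The arithmetic above can be made explicit in a few lines (the critical-care fraction and observed fatality rate are the figures cited above; the 1918 flu fatality rate of roughly 2.5% is my assumed input, consistent with the 2-3x comparison):</p>

```python
# Back-of-envelope sketch of the mortality arithmetic in the paragraph above.
critical_fraction = 0.061   # fraction needing intensive care (WHO China report, p. 12)
cfr_with_care = 0.01        # fatality rate observed with functioning healthcare
flu_1918_cfr = 0.025        # assumed rough 1918 influenza case fatality rate

# If essentially every critical case dies without intensive care,
# the fatality rate in an overwhelmed system rises to the critical fraction:
cfr_overwhelmed = critical_fraction

print(f"mortality with care:   {cfr_with_care:.1%}")
print(f"mortality overwhelmed: {cfr_overwhelmed:.1%}")
print(f"vs 1918 flu:           {cfr_overwhelmed / flu_1918_cfr:.1f}x worse")
```

<p>The point of the sketch is just that the jump from ~1% to ~6% comes entirely from losing intensive care capacity, not from any change in the virus.</p>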
<p>The herd immunity approach also does not cause the disease to die out&#8212;instead it continues to linger in the population for a long time.  This means that people traveling from such a country will be effectively ostracized by every country (like China or South Korea) which has effectively implemented a Test/Trace/Quarantine approach.</p>
<p>I&#8217;ve avoided discussing the ethics here since people making this kind of argument may not care about ethics.  For everyone else it&#8217;s fair to say that letting part of the population die to keep the economy going is anathema.  My overall expectation is that governments pursuing this approach are at serious risk of revolt.</p>
<p><b>Vaccine</b></p>
<p>Vaccines are extremely attractive because they are a very low cost way to end the pandemic.  They are however uncertain and take time to develop and test, so they are not a viable strategy for the next few months.</p>
<p><b>What can be done?</b></p>
<p>Public health authorities are generally talking about <a href="https://en.wikipedia.org/wiki/Social_distancing">Social Distancing</a>.  This is plausibly the best general-public message because everyone can do something to help here.</p>
<p>It&#8217;s also clear that healthcare workers, vaccines makers, and everyone supporting them have a critical role to play.</p>
<p>But, perhaps there&#8217;s a third group that can really help?  Perhaps there are people who can help scale up the Test/Trace/Quarantine approach so it can be rapidly adopted?  Natural questions here are:</p>
<ol>
<li>How can testing be scaled up rapidly&#8212;more rapidly than the disease?  This question is already getting quite a bit of attention, and deservedly so.</li>
<li>How can tracing be scaled up rapidly and efficiently?  Hiring many people who are freshly out of work is the most obvious solution.  That could make good sense given the situation.  However, automated or partially automated approaches have the potential to greatly assist as well. I hesitate to mention <a href="https://www.nytimes.com/interactive/2019/12/19/opinion/location-tracking-cell-phone.html">cell phone tracking</a> because of the potential for abuse, but can that be avoided while still gaining the potential public health benefits?</li>
<li>How can quarantining be made highly precise and effective? Can you estimate the risk of infection with high precision?  What support can safely be put in place to help those who are quarantined? Can we avoid the situation where the government says &#8220;you should quarantine&#8221; and &#8220;people in quarantine can&#8217;t vote&#8221;?</li>
</ol>
<p>Some countries started this pandemic set up for relatively quick scale-up of the Test/Trace/Quarantine approach.  Others, including the United States, seem to have been unprepared.  Nevertheless, I am still holding out hope that the worst case scenarios (high mortality or months-long lockdowns) can be largely avoided, as the available evidence suggests that this is possible.  Can we manage to get the number of true cases down (via a short lockdown if necessary) to the point where an escalating Test/Trace/Quarantine approach can take over?</p>
<p>Edit: I found myself remaking the graph repeatedly, so I made it update hourly and added New York (where I live).</p>
]]></content:encoded>
					
					<wfw:commentRss>https://hunch.net/?feed=rss2&#038;p=13762539</wfw:commentRss>
			<slash:comments>17</slash:comments>
		
		
			</item>
		<item>
		<title>Coronavirus and Machine Learning Conferences</title>
		<link>https://hunch.net/?p=13762505</link>
					<comments>https://hunch.net/?p=13762505#comments</comments>
		
		<dc:creator><![CDATA[John Langford]]></dc:creator>
		<pubDate>Sun, 23 Feb 2020 23:15:47 +0000</pubDate>
				<category><![CDATA[Conferences]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[Organization]]></category>
		<guid isPermaLink="false">http://hunch.net/?p=13762505</guid>

					<description><![CDATA[I&#8217;ve been following the renamed COVID-19 epidemic closely since potential exponentials deserve that kind of attention. The last few days have convinced me it&#8217;s a good idea to start making contingency plans for machine learning conferences like ICML. The plausible options happen to be structurally aligned with calls to enable reduced travel to machine learning &#8230; <p class="link-more"><a href="https://hunch.net/?p=13762505" class="more-link">Continue reading<span class="screen-reader-text"> "Coronavirus and Machine Learning Conferences"</span></a></p>]]></description>
										<content:encoded><![CDATA[<p>I&#8217;ve <a href="https://twitter.com/JohnCLangford/status/1222525110457901057">been</a> <a href="https://twitter.com/JohnCLangford/status/1227601272989192193">following</a> the renamed <a href="https://en.wikipedia.org/wiki/Coronavirus_disease_2019">COVID-19</a> epidemic closely since potential exponentials deserve that kind of attention.</p>
<p>The last few days have convinced me it&#8217;s a good idea to start making contingency plans for machine learning conferences like ICML.  The plausible options happen to be structurally aligned with <a href="https://yoshuabengio.org/2020/02/10/fusce-risus/">calls</a> to <a href="https://www.change.org/p/organizers-of-data-science-and-machine-learning-conferences-neurips-icml-aistats-iclr-uai-allow-remote-paper-poster-presentations-at-conferences?recruited_by_id=3ed1e3f2-9626-4419-84ae-3c7655dfd5f8">enable reduced travel to machine learning conferences</a>, but of course the need is much more immediate.</p>
<p>I&#8217;ll discuss relevant observations about COVID-19 and then the impact on machine learning conferences.</p>
<p><strong>COVID-19 observations</strong></p>
<ol>
<li>COVID-19 is capable of exponentiating with a <a href="https://en.wikipedia.org/wiki/2019%E2%80%9320_coronavirus_outbreak#Spread">base estimated at 2.13-3.11</a> and a doubling time around a week when unchecked.</li>
<li>COVID-19 is far more deadly than the seasonal flu with estimates of a <a href="https://en.wikipedia.org/wiki/2019%E2%80%9320_coronavirus_outbreak#Deaths">2-3% fatality rate</a> but also much milder than <a href="https://en.wikipedia.org/wiki/Severe_acute_respiratory_syndrome">SARS</a> or <a href="https://en.wikipedia.org/wiki/Middle_East_respiratory_syndrome">MERS</a>.  Indeed, part of what makes COVID-19 so significant is the fact that it is mild for many people, leading to a lack of diagnosis, more spread, and ultimately more illness and death.</li>
<li>COVID-19 can be controlled at a large scale via draconian travel restrictions.  The number of new observed cases per day peaked about 2 weeks after China&#8217;s lockdown and has been declining for the last week.</li>
<li>COVID-19 can be controlled at a small scale by careful contact tracing and isolation.  There have been hundreds of cases spread across the world over the last month which have not created new uncontrolled outbreaks.</li>
<li>New significant uncontrolled outbreaks in Italy, Iran, and South Korea have been revealed over the last few days.  Some details:
<ol>
<li>The 8 COVID-19 deaths in Iran suggest that the few reported cases (as of 2/23) are only the tip of the iceberg.</li>
<li>The fact that South Korea and Italy can suddenly discover a large outbreak despite heavy news coverage suggests that it can really happen anywhere.</li>
<li>These new outbreaks suggest that in a few days COVID-19 is likely to become a world-problem with a declining China aspect rather than a China-problem with ramifications for the rest of the world.</li>
</ol>
</li>
</ol>
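<p>The first observation above is worth making concrete: with the roughly one-week doubling time cited there, unchecked growth compounds brutally fast (illustrative starting count only):</p>

```python
# Sketch of unchecked exponential growth with a one-week doubling time,
# per the estimate cited above.  The initial case count is illustrative.
initial_cases = 100
doubling_time_days = 7

def cases_after(days, initial=initial_cases, doubling=doubling_time_days):
    """Projected cases after `days` of unchecked doubling."""
    return initial * 2 ** (days / doubling)

for weeks in (0, 4, 8, 12):
    print(f"week {weeks:2d}: ~{cases_after(7 * weeks):,.0f} cases")
```

<p>Three months of unchecked doubling turns 100 cases into roughly 400,000, which is why even a couple of weeks of delay in applying control measures matters so much.</p>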
<p>There remains quite a bit of uncertainty about COVID-19, of course. The plausible bet is that the known control measures remain effective when and where they can be exercised with new ones (like a vaccine) eventually reducing it to a non-problem.</p>
<p><strong>Conferences</strong><br />
The plausible scenario leaves conferences still in a delicate position because they require many things to go right to function.  We can easily envision 3 quite different futures here consistent with the plausible case.</p>
<ol>
<li><strong>Good case</strong> New COVID-19 outbreaks are systematically controlled via proven measures with the overall number of daily cases declining steadily as they are right now.  The impact on conferences is marginal, with lingering travel restrictions affecting some (&lt;10%) potential attendees.</li>
<li><strong>Poor case</strong> Multiple COVID-19 outbreaks turn into a pandemic (=multi-continent epidemic) in regions unable to effectively exercise either control measure.  Outbreaks in other regions occur, but they are effectively controlled.  The impact on conferences is significant with many (50%?) avoiding travel due to either restrictions or uncertainty about restrictions.</li>
<li><strong>Bad case</strong> The same as (2), except that an outbreak occurs in the area of the conference.  This makes the conference nonviable due to travel restrictions alone.  It&#8217;s notable here that Italy&#8217;s new outbreak involves travel lockdowns a few hundred miles/kilometers from Vienna where ICML 2020 is planned.</li>
</ol>
<p>Even the first outcome could benefit from some planning, while gracefully handling the last outcome requires it.</p>
<p>The obvious response to these plausible scenarios is to reduce the dependence of a successful conference on travel.  To do this we need to think about what a conference is in terms of the roles that it fulfills.  The quick breakdown I see is:</p>
<ol>
<li>Distilling knowledge.  Luckily, our review process is already distributed.</li>
<li>Passing on knowledge.</li>
<li>Meeting people, both old friends and discovering new ones.</li>
<li>Finding a job / employee.</li>
</ol>
<p>How (and which) of these can be effectively supported remotely?</p>
<p>I&#8217;m planning to have discussions over the next few weeks about this to distill out some plans.  If you have good ideas, let&#8217;s discuss.  Unlike most contingency planning, it seems likely that efforts are not wasted no matter what the outcome <img src="https://s.w.org/images/core/emoji/16.0.1/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /></p>
]]></content:encoded>
					
					<wfw:commentRss>https://hunch.net/?feed=rss2&#038;p=13762505</wfw:commentRss>
			<slash:comments>7</slash:comments>
		
		
			</item>
	</channel>
</rss>
