<?xml version="1.0" encoding="UTF-8"?>
<!--Generated by Site-Server v@build.version@ (http://www.squarespace.com) on Wed, 08 Apr 2026 19:31:23 GMT
--><rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:media="http://www.rssboard.org/media-rss" version="2.0"><channel><title>Blog - Jess Whittlestone</title><link>https://jesswhittlestone.com/blog/</link><lastBuildDate>Sat, 13 Aug 2022 18:06:13 +0000</lastBuildDate><language>en-US</language><generator>Site-Server v@build.version@ (http://www.squarespace.com)</generator><description><![CDATA[]]></description><item><title>Being a socially anxious extrovert</title><dc:creator>Jess Whittlestone</dc:creator><pubDate>Sun, 01 Nov 2020 14:08:41 +0000</pubDate><link>https://jesswhittlestone.com/blog/2020/11/1/being-a-socially-anxious-extrovert</link><guid isPermaLink="false">523a0270e4b01ab6f41e9814:523a039de4b07a227bf7411f:5f9ec0ec0e604768db19a336</guid><description><![CDATA[I think I’m a socially anxious extrovert. This sounds like an oxymoron, but 
I don’t think it is. I love being around other people and get a lot of my 
energy from social interaction, but I also easily get anxious in social 
situations where I’m not entirely comfortable.]]></description><content:encoded><![CDATA[<p class="">A few years ago, I wrote about <a href="https://jesswhittlestone.com/blog/2014/8/20/beyond-the-introversionextroversion-distinction"><span>why I don’t personally find the distinction between introversion and extroversion that useful</span></a> - sometimes I get a lot of energy from social situations, other times I find them very draining, and this seems to just depend a lot on the features of the situation.</p><p class="">I’ve been thinking about this again recently, as we’ve all had to face more restrictions on socialising than we ever imagined. It’s made me realise just how much I need social interaction to feel happy and energised. If I spend a weekend at home not seeing anyone, I end up feeling lethargic and low. It’s become clear to me that socialising is actually <em>really </em>important to my energy and happiness, in a way that I hadn’t quite appreciated before, and in a way that doesn’t seem to be the case for my more introverted friends.</p><p class="">And yet I still feel like I don’t fit the classic “extrovert” mould, because I’m also often easily overwhelmed and fatigued by too much or the wrong kind of social interaction. Here’s what I’ve realised: I think I’m a socially anxious extrovert. This sounds like an oxymoron, but I don’t think it is. I love being around other people and get a lot of my energy from social interaction, but I also easily get anxious in social situations where I’m not entirely comfortable. 
A quick google suggests this <a href="https://medium.com/lifewithbemo/are-you-an-introvert-or-an-extrovert-with-social-anxiety-b5f4b376bf48#:~:text=Socially%20anxious%20extroverts%20care%20so,others%20in%20a%20social%20situation.&amp;text=Other%20characteristics%20of%20an%20extrovert,don't%20work%20for%20you"><span>isn’t</span></a> <a href="https://medium.com/@emma.austin.writer/im-an-extrovert-trapped-by-social-anxiety-23ca653e3cf7"><span>just</span></a> <a href="https://www.independent.co.uk/life-style/health-and-families/social-anxiety-extrovert-hidden-torment-symptoms-mental-illness-a7680811.html"><span>me</span></a>.</p><p class="">This has been a really helpful realisation for me. It’s helped me realise how important it is for me to have social interaction where I feel comfortable and at ease - which mostly means seeing small groups of close friends - while also allowing me to acknowledge that a lot of unfamiliar social situations do make me anxious, and that’s okay. This year has actually been surprisingly good for me socially, because there’s been way less pressure (and fewer opportunities) to go to big social events and I’ve had more space to invest in developing closer friendships with people who I feel good around.</p><p class=""><br><br><br><br><br></p>]]></content:encoded></item><item><title>Actually solving problems</title><dc:creator>Jess Whittlestone</dc:creator><pubDate>Sat, 03 Oct 2020 13:59:35 +0000</pubDate><link>https://jesswhittlestone.com/blog/2020/10/3/actually-solving-problems</link><guid isPermaLink="false">523a0270e4b01ab6f41e9814:523a039de4b07a227bf7411f:5f7883a0bb9a9f741d42e1c5</guid><description><![CDATA[I’ve been feeling some dissatisfaction lately around not always being clear 
what I’m trying to achieve in the work I’m doing, so I want to explore that 
a bit.]]></description><content:encoded><![CDATA[<p class="">I’ve been feeling some dissatisfaction lately around not always being clear what I’m trying to achieve in the work I’m doing, so I want to explore that a bit.</p><p class="">When I graduated from my undergrad back in 2012, I suddenly felt quite lost: it was the first time a path wasn’t laid out for me, and I had this sense I wanted to do something valuable with my career, but didn’t know how. This was the time I came across the nascent effective altruism community, and was very drawn to the way many people involved were thinking strategically and ambitiously about how to have an impact in the world. My involvement in this community, and many people in it, had a pretty big influence on me: it made me more ambitious about trying to actually solve important problems in the world, prompted me to take seriously things like risks from AI and the importance of shaping humanity’s long-term future, and encouraged me to think more strategically about how I’m trying to impact the world.</p><p class="">However, as EA evolved, there were an increasing number of things about the culture and norms of the community that didn’t sit quite right with me. I won’t go into loads of detail here, but a few things in particular: (1) I worried the community was getting overconfident about a set of relatively narrow ideas and becoming increasingly insular, lacking respect for any expertise that wasn’t “EA”; (2) relatedly, it felt like the community was evolving in a way that implicitly encouraged people to defer to the views of “high-status” individuals, rather than enabling independent thinking; and (3) I found that the culture of really questioning what was “<em>the most </em>important thing to do” was causing me to put pressure on myself in unhealthy ways. For these reasons among others, I’ve distanced myself from the EA community over the years. 
I’m still connected with specific individuals and groups that I find supportive, interesting, and helpful, but I identify much less with this broader thing called “EA”.</p><p class="">Over the last couple of years, I’ve also reduced the pressure on myself to be doing “the most important thing” all the time. I drove myself a bit mad in the first couple of years of my PhD trying to figure out “the most important research topic”, resulting in me going round in circles, feeling unmotivated, and not doing much at all. I ended up really accepting that it’s better to do something that’s ‘merely’ somewhat useful than nothing at all. After my PhD, I decided to go and work at the Centre for the Future of Intelligence and the Centre for the Study of Existential Risk in Cambridge, thinking about the long-term impacts and risks of AI and what we can do about them today. I was drawn to this particular environment because there seemed to be a culture of thinking hard about important questions but with less pressure on always doing the absolute most important thing compared to other places that more explicitly identify as “EA”.</p><p class="">I’ve allowed myself a lot more freedom these past couple of years to not always be optimising, to allow myself to explore and learn and do what I’m motivated by. This felt particularly important to me going into research in the ‘AI policy’ space, which I thought was important but where I didn’t have fully-formed views on what really needed to be done or how I could best contribute. I’d been around a lot of other people with opinions about AI risks and policy, and I didn’t want to just adopt these assumptions and perspectives.&nbsp;</p><p class="">I think this has been really good for me, and I’ve developed my thinking a lot over the past couple of years on what we should be concerned about with AI, what needs to be done, and where I’m best placed to contribute. 
But though I’m getting closer, I still feel like I lack a very clear sense of what I am trying to achieve in the world, and this means I sometimes lack confidence that I can actually achieve anything important. I feel some tension here, because increasingly I feel like I <em>need </em>to get to a place where I’m clearer and more confident about what I’m trying to achieve in the world, but I’m also resisting it a bit because I don’t want to end up back in the place where I’m over-optimising and putting too much pressure on myself.</p><p class="">I think part of the answer here might be not to focus on identifying “the most important problem”, but on learning how to be <em>actually solving problems, </em>full stop. I think actually solving problems in the world is really, really, hard, and requires a kind of mindset that isn’t really taught or encouraged. In giving myself space to ‘explore’ different research topics, for example, I’ve noticed how easy it is to get caught up chasing short-term incentives: to convince myself I just need to get publications and job security so that later I can do something useful with it... and quickly lose sight of what I’m doing it all for.</p><p class="">Actually solving real problems in the world is much harder than getting academic publications or a promotion, because there’s no clear path and often no good feedback, so it can be very hard to tell if you’re making progress. At the same time, there are plenty of other incentives and feedback mechanisms which push us in other directions - towards making money, impressing other people, ‘succeeding’ or climbing the ladder in a given domain or industry. I don’t think any of these incentives are aligned with solving real problems.</p><p class="">I’d probably go so far as to say that very little of the work people do is really aimed at solving real problems. 
I think many people <em>care </em>about solving problems, but doing so is hard and it’s much easier to follow other things where incentives are stronger.&nbsp;</p><p data-rte-preserve-empty="true" class=""></p><p class="">Actually solving problems requires a kind of persistent mindset that’s not natural for many of us, and so requires a lot of effort. Every now and then I notice this mindset in someone I meet, who seems to keep pulling conversations back to what the issue <em>really </em>is, rather than just talking about what’s interesting or feasible within clearly-defined constraints. When you ask them to make a decision, or ask why they’re working on a specific project, they’ll have a principled answer that comes back to something they want to achieve in the world, not a vague comment about what seemed interesting or what they think other people want to see. I’m finding myself increasingly drawn to these people, and increasingly keen to try and interrogate this mindset and figure out how to cultivate it more in myself.</p><p class=""><br><br><br><br><br></p>]]></content:encoded></item><item><title>Building collaborative research teams</title><dc:creator>Jess Whittlestone</dc:creator><pubDate>Sat, 26 Sep 2020 12:51:58 +0000</pubDate><link>https://jesswhittlestone.com/blog/2020/9/26/building-collaborative-research-teams</link><guid isPermaLink="false">523a0270e4b01ab6f41e9814:523a039de4b07a227bf7411f:5f6f38fc8bc67827185fcca0</guid><description><![CDATA[In the last few years I’ve been struck by how individualistic much of 
academic research culture is. In my experience, it is pretty rare to find 
groups working together towards clearly articulated long-term research 
goals, even in parts of academia that are much more interdisciplinary and 
collaborative than most.]]></description><content:encoded><![CDATA[<p class="">In the last few years I’ve been struck by how <em>individualistic </em>much of<em> </em>academic research culture is (at least in fields I’m more familiar with, mostly social sciences.) As a researcher, your success and reputation is very much determined by your personal ideas and outputs - much more so than contributions to important team efforts. The fact jobs are so competitive means people can end up very protective of ‘their’ research ideas, making collaboration difficult. In my experience, it is pretty rare to find groups working together towards clearly articulated long-term research goals, even in parts of academia that are much more interdisciplinary and collaborative than most.&nbsp;</p><p class="">While great insights do sometimes come from lone researchers, I suspect much more valuable research comes from groups of people with complementary strengths working together in a fairly directed way towards shared goals. This points to something else that I see lacking in most academic communities: a sense of <em>strategy</em>. While many research groups have loosely shared aims, what individual researchers choose to work on often seems pretty ad-hoc, often involving jumping from one small paper/project to another, without any larger sense of how these build on each other and add to the work of others to create something of value. I think this is partly a consequence of publication pressures being stronger than any other incentives in academia, and the fact that journals often reward incremental contributions to well-established areas over building up and synthesising work to produce actionable conclusions. 
I’ve certainly seen myself bow to these pressures at times in the last few years: it’s much easier to just work on the next vaguely interesting and exciting-sounding paper than to step back and think about what I’m trying to achieve more broadly.</p><p data-rte-preserve-empty="true" class=""></p><p class="">Beyond publication pressures, taking a more collaborative and strategic approach to research is difficult because many parts of academia have extremely strong norms around <em>autonomy</em>: respecting individuals’ freedom to research what they choose within certain constraints. Working together towards shared goals inevitably requires some sacrifice of individual interests for the sake of the group. And part of the reason many people are attracted to academia is that they value research autonomy very highly. If those same people are then asked to work towards a more externally-determined group strategy they might end up unmotivated and unsatisfied.</p><p class="">That said, I think it’s also true that many people end up unmotivated and unsatisfied in academia because they have too <em>much </em>autonomy. The freedom to choose what you work on sounds great but in practice can also feel like enormous pressure and leave many feeling lost (making it much easier, therefore, to just write the papers someone else wants you to.) Other people who could be really great at research in a slightly more structured environment don’t go into academia because they know they wouldn’t be suited to it when it requires so much self-directedness. While there’s still a balance to be struck between autonomy and structure, I do think there is space for more collaborative and strategic research teams in academia, which might actually suit many people better and improve motivation and productivity over the status quo.</p><p class="">Because there’s a strong focus on autonomy and an individualistic culture in academia, good research management isn’t prioritised that highly. 
<a href="https://80000hours.org/2013/02/bringing-it-all-together-high-impact-research-management/"><span>I used to think</span></a> of good research management mostly in terms of helping individual researchers to be more effective. But there’s also a different kind of management: the kind that provides high-level strategy for a group and a structure within which people can collaborate towards shared goals - which seems even more neglected. If you want to build collaborative research teams, good management is essential.</p><p data-rte-preserve-empty="true" class=""></p><p class="">I don’t know yet what it takes to do this kind of research management well, or especially how to do it in academia without compromising the important aspects of research autonomy. But I want to think a lot more about this. I suspect I might really enjoy and be well-suited to this kind of research management, and that I might be able to do a lot more good by helping build a team who can work effectively together in this way, than I could through my own research. And while there seem to be lots of good books out there on general management or specific areas like engineering management, I’ve struggled to find good advice or discussion on research management that gets at the kind of thing I’m thinking about. I’d love recommendations of things to read, examples of successful collaborative research teams, or people to talk to who are also thinking about or trying to build anything similar.</p>]]></content:encoded></item><item><title>Writing about not writing</title><dc:creator>Jess Whittlestone</dc:creator><pubDate>Sat, 29 Aug 2020 14:13:08 +0000</pubDate><link>https://jesswhittlestone.com/blog/notwriting</link><guid isPermaLink="false">523a0270e4b01ab6f41e9814:523a039de4b07a227bf7411f:5f4a61ec1eaeb065d4ac3d52</guid><description><![CDATA[I love writing, but I don’t really write anymore. In some sense I write 
most days, but I don’t really write in my own voice, don’t really write 
with any feeling.]]></description><content:encoded><![CDATA[<p class="">I love writing, but I don’t really write anymore. In some sense I write most days, but I don’t really write in my own voice, don’t really write with any feeling. I write google docs and research papers in some semi-formal, semi-authoritative voice that isn’t quite my own, about topics I think are important but... sometimes it feels like I’m writing what I think people want to hear, in the way I think I’m supposed to write it, rather than what I really want to say.&nbsp;</p><p class="">I miss writing like <em>this</em>: writing more in stream-of-thought style, writing to help me think. Every few months or so I have a conversation with a friend who asks, “do you ever blog anymore?” and I say “no, not really... I’d like to start again, I get a lot out of it, but I just haven’t quite figured out how to fit it into my schedule.” I come away from these conversations with a deep urge to be writing more again, but also a frustration that this probably isn’t enough to make it happen. I’ve stopped saying “yeah, maybe I’ll really try and start writing properly again...” because I don’t quite believe myself when I say it.</p><p class="">So now I’m writing about the fact I don’t write, because somehow that feels doable. This isn’t a promise to write more, to myself or anyone else, but perhaps at least a reminder of what I value in writing, and an attempt to navigate the frustration I feel with myself for not writing.</p><p class="">What is it about writing? It’s a way to express myself that feels important: I’m one of those people who often finds it easier to express themselves in writing than when speaking. It allows me to clarify my thoughts around things that seem interesting or important, and more than that, I think writing actually helps me to think more clearly about <em>what</em> I think is important and <em>why</em>. 
When I’m in a habit of writing, I notice<em> </em>interesting thoughts and ideas in a way I don’t otherwise. Writing also feels like a kind of creative outlet to me in a way that’s quite satisfying, even though the things I write might not be considered all that “creative”. In some fairly fundamental sense, I feel like writing helps me to clarify who I am and who I want to be.</p><p class="">Writing like <em>this </em>is important because it’s not writing for anyone else’s purpose or expectations. I want to be able to write like this - just exploring and clarifying my thoughts, not trying to produce a certain output - in a whole range of different ways: to clarify my thinking on specific research questions; to explore my bigger-picture goals and aims; to identify what I’m really confused about or struggling with.&nbsp;</p><p class="">I struggle to write consistently because as soon as I try to set myself goals or a schedule it starts to turn writing into a <em>should</em>, a chore, which largely defeats the point. Writing works best for me when it comes from <em>wanting </em>to write, from having an idea I want to explore or just feeling the desire to get my thoughts down on paper. 
So now rather than saying “I want to write more, I really need to just find a way to fit it into my schedule”, I’m going to try instead focusing on cultivating and acting on the <em>desire </em>to write: noticing those random thoughts or ideas in conversation that I’d like to explore more, really noticing what I enjoy about writing when I do sit down and do it - and not beating myself up if my motivation doesn’t always fit itself to a consistent schedule.</p>]]></content:encoded></item><item><title>How useful is technical understanding for working in AI policy?</title><dc:creator>Jess Whittlestone</dc:creator><pubDate>Mon, 15 Jul 2019 17:20:15 +0000</pubDate><link>https://jesswhittlestone.com/blog/2019/7/15/how-useful-is-technical-understanding-for-working-in-ai-policy</link><guid isPermaLink="false">523a0270e4b01ab6f41e9814:523a039de4b07a227bf7411f:5d2cb4d9573dc30001a50081</guid><description><![CDATA[It’s not totally clear what the ideal background or relevant ‘expertise’ 
for AI policy is. One thing I’ve been thinking about is how useful it is 
for people working in AI policy to have technical experience/understanding 
in machine learning, or computer science more generally.]]></description><content:encoded><![CDATA[<p class="">It’s not totally clear what the ideal background or relevant ‘expertise’ for AI policy is. Working in other areas of technology policy seems like the most directly relevant experience, but because this is such a new and fast-growing area, people are coming from all kinds of different backgrounds. One thing I’ve been thinking about is how useful it is for people working in AI policy to have technical experience/understanding in machine learning, or computer science more generally. Should more people with technical expertise be working on AI policy issues? Should people working on AI policy already be focusing more on developing their technical understanding? I think I lean more strongly towards “yes” on these questions than many people, and so I want to try and spell out why I think this.</p><p class="">(Note: here I’m using ‘AI policy’ quite broadly, to encompass all kinds of thinking about how AI will impact society and how those impacts should be managed - not just referring people working directly in policy jobs.)</p><p data-rte-preserve-empty="true" class=""></p><h2><strong>Why technical understanding matters</strong></h2><h3>1. Thinking clearly about possibilities and risks</h3><p class=""><em>First, thinking clearly about how current AI systems will impact society </em>requires a decent understanding of the capabilities and limitations of those systems. I do think that it’s possible to think usefully about societal decision-making around AI with a pretty high-level sense of what “AI” is. But I also worry that the notion of “AI” that underpins many policy discussions is far too vague, in a way that fuels misunderstanding about what the possibilities and risks posed by AI are.&nbsp;</p><p class="">We’re currently seeing a lot of people repeating the same buzzwords and concerns in vague terms - e.g. 
privacy, bias, explainability - but there are relatively few people thinking from ‘first principles’ about what specific current capabilities and limitations mean for society. For example, almost all applications of “AI” in society that raise concerns today seem to be specifically those using supervised learning training methods, but this is rarely explicitly acknowledged. There’s an interesting question of whether current concerns around AI being applied in society are overly specific to SL, and whether AI systems based on different training methods (e.g. reinforcement learning) raise different concerns or should be treated differently. Thinking clearly about this question doesn’t require deep expertise in ML, but it does require solid intuitions about the differences between these methods and their applications - intuitions that are non-trivial to develop and that most AI policy researchers probably lack.</p><h3>2. Thinking ahead</h3><p class=""><em>Second, it’s important that AI policy can be </em><a href="https://www.nesta.org.uk/report/renewing-regulation-anticipatory-regulation-in-an-age-of-disruption/"><span><em>anticipatory</em></span></a>, not just reacting to the problems that have already arisen, but thinking ahead about how society can prepare for advances in AI capabilities and their possible impacts. Of course, we can’t predict any of this with any certainty, but we can think carefully about different ways that AI capabilities might evolve and the implications of different development trajectories. This requires a high-level understanding of what general capabilities AI systems have, what tasks and problems this makes them well-suited to, where the limitations of current systems are and where research appears to be making progress. I think there’s a real tendency to talk about future AI systems as if they’re magic - e.g. 
suggesting that in future AI will be used to do scientific research or be able to self-improve, without actually thinking through any details of what being able to do these things might involve - which is at least partly due to lack of technical understanding.</p><h3><br>3. Working collaboratively</h3><p class=""><em>Third, solutions to problems arising from AI need to be a collaborative effort </em>between policy experts, social scientists, and technical researchers/developers (at the very least.) This means these groups of people need to be able to talk to each other! Of course, the responsibility to bridge important divides falls on all of these groups, but policy practitioners/researchers working to understand what ML research &amp; development looks like will be an important component of these collaborations. One part of this is just being able to speak roughly the same language as technical researchers - e.g. understanding what it means to train a model, the difference between different training methods. Also important is being able to identify when drawing on deeper technical expertise would be useful, and where to look for it. More generally, most problems arising from AI will have solutions that are part ‘technical’, i.e. partly about the kinds of systems and capabilities we develop, and part ‘social’, i.e. partly about how we design aspects of society to respond to and govern these systems. For example, ensuring that medical AI systems are used safely requires thinking about both (a) how to make sure those systems are robust and verifiable/interpretable on the technical side, and (b) what kinds of checks, processes, and governance are needed more broadly to prevent, catch, and respond to important errors. 
If people working on (a) and (b) are operating entirely independently of one another, this work is going to be hugely inefficient at best - and in particular those on the governance side need to understand the limitations of technical safety approaches so they know where safety checks and regulations are most needed.</p><h2><br><strong>What kind of technical understanding?</strong></h2><p class="">One thing that is fairly clear to me is that the most useful kind of technical understanding is in most cases <em>not </em>going to be deep expertise in some specific subfield of ML. Much more likely to be useful is having a decent understanding of what ML research involves in practice, enough terminological understanding to be able to talk to ML researchers and skim papers, and a high-level understanding of current capabilities and limitations, and where they might go in future. I’m not actually sure what the best way to acquire this is - especially the “high-level understanding of current capabilities and limitations” part. I think this high-level understanding is probably quite difficult to acquire, and something that many ML experts don’t actually have, if their focus is very narrow. It’s also not necessarily something you get automatically from learning more about how ML works and knowing the difference between different current methods.</p><p class="">Obviously, the level and type of technical understanding that’s useful depends a lot on the type of research or work you’re doing, and I don’t necessarily think all policy researchers should be going away and taking ML courses. Maybe it’s fine for there to be many people thinking about AI policy in very broad strokes, just understanding the very general features and implications of AI - e.g. the fact that AI involves automating tasks previously done by humans, generally requires large amounts of data, and that methods aren’t always fully interpretable to us. 
But we do need some people in the policy space who are thinking more deeply about what is and what might be technically possible, to ensure current concerns are well-grounded, to ensure solutions will still be relevant as capabilities advance, and to ensure productive collaboration with ML researchers. In particular I worry that there’s a severe lack of technical expertise in government - where decisions about how AI is governed will actually get made.</p><p class=""><br></p><p class="">One thing I’d like to think more about is how this works and has been thought about in other areas of science and technology policy: e.g. how well do people thinking about biosecurity or climate policy understand the relevant science and technical capabilities? Climate policy is a bit disanalogous because we’re not talking about an evolving technology, but there’s still a certain level of scientific understanding that seems like an important prerequisite for working in this space. Governance of biotechnology might be more closely analogous. One suspicion I have is that the average level of technical understanding in AI policy is lower than in many other areas, because almost everyone has a high-level impression of what ‘AI’ is (whereas most non-experts are more aware that they have no idea really what current biotechnology looks like.) It could be that the level of technical understanding required to contribute usefully to AI policy <em>is </em>just lower than in other fields, for this reason - but I also worry that it’s not, and it’s just easier to delude ourselves that we really understand what AI is than it is to delude ourselves about technologies where there’s less of a public narrative.</p><p class=""><br><br><br><br></p>]]></content:encoded></item><item><title>Thoughts on short- vs. 
long-term AI policy</title><dc:creator>Jess Whittlestone</dc:creator><pubDate>Sun, 23 Jun 2019 09:04:00 +0000</pubDate><link>https://jesswhittlestone.com/blog/2019/6/22/thoughts-on-short-vs-long-term-ai-policy</link><guid isPermaLink="false">523a0270e4b01ab6f41e9814:523a039de4b07a227bf7411f:5d0dfebd3bd0a2000166c7e4</guid><description><![CDATA[It’s generally acknowledged that there’s a distinction between “short-term” 
(or “near-term”) and “long-term” AI policy issues. But these distinctions 
actually tend to conflate (at least) three things.]]></description><content:encoded><![CDATA[<p class="">It’s generally acknowledged that there’s a distinction between “short-term” (or “near-term”) and “long-term” AI policy issues. But these distinctions actually tend to conflate (at least) three things:</p><ol data-rte-list="default"><li><p class="">When issues <strong>arise</strong>. </p><p class="">e.g. Cave and ÓhÉigeartaigh (2019) define ‘near-term’ issues as “immediate or imminent challenges”; the 80k guide to working in AI policy defines ‘short-term’ issues as “issues society is grappling with today”</p></li><li><p class="">How advanced the relevant AI <strong>capabilities</strong> are. </p><p class="">e.g. Baum (2018) distinguishes between a ‘futurist’ AI claim which says that “attention should go to the potential for radically transformative long-term AI” and a ‘presentist’ AI claim which says that “attention should go to existing and near-term AI.” Similarly 80k talk about ‘long-term’ issues as those “that either only arise at all or arise to a much greater extent when AI is much more advanced than it is today.”</p></li><li><p class="">How likely an issue is to have long-term <strong>consequences</strong>. </p><p class="">e.g. 80k say that ‘long-term’ issues are those that “will have very long-lasting consequences.”</p></li></ol><p class="">These three things are generally assumed to go hand-in-hand, or at least not clearly distinguished. This might seem like quibbling with definitions, but I actually think it fuels confusion about which issues are most important to work on. </p><p class="">I think that what most people in the ‘long-term camp’ really care about is (3) - how likely an issue is to have (large and) long-lasting consequences for society. (1) and (2) only matter insofar as they influence this. 
</p><p class="">If we define ‘long-term’ issues in this way - as the issues most likely to have long-lasting consequences for society - I’m not sure how many people in the ‘short-term’ camp would actually put themselves in opposition to that. I certainly don’t think many people would say that they are explicitly prioritising issues that will only have a short-term impact on society over those with longer-lasting consequences. This distinction between ‘short-term’ and ‘long-term’ starts to feel a lot messier and less clear-cut.</p><p class="">I think there are actually several different ways in which people disagree about which AI policy issues to work on, that don’t come down to a simple short-/long-term distinction. It’s worth trying to pick these apart, because in doing so we might realise that e.g. people disagree less than they seem to, there’s empirical research that could resolve important disagreements, or perhaps even that important issues or areas are being neglected because of the assumptions being made on both ‘sides’. Here are some key disagreements I think are getting mixed up:</p><ul data-rte-list="default"><li><p class="">(a) <strong>Disagreement about whether we should work on issues affecting current vs. future people.</strong> There are some genuine disagreements about whether it’s more important to work on issues affecting current populations, and to what extent we should also be concerned about future generations. These stem from pretty deep philosophical beliefs: i.e. some people believe we have a greater moral obligation to those alive today whereas others don’t. I think these views contribute somewhat to what AI policy issues people think are most important to work on, but I suspect it’s only a relatively small part. </p></li><li><p class="">(b) <strong>Disagreement about how long-lasting the consequences of ‘nearer-term’ issues are likely to be</strong>. 
I think many people would broadly agree that all else equal, it’s better to prioritise working on issues with longer-lasting consequences for humanity. I imagine many people working on making algorithms fair and accountable today are doing so because they believe that failing to solve these problems could have extremely bad, long-lasting consequences for society (entrenching extreme power structures, leading to extreme inequality, and so on.)</p></li><li><p class="">(c)<strong> Disagreement about the best ways to influence the long-term future.</strong> When prioritising which issues to work on, what matters is not just their potential impact but also whether we have any ‘leverage’ to shape the way things go (a point made nicely by Ben Garfinkel <a href="https://forum.effectivealtruism.org/posts/9sBAW3qKppnoG3QPq/ben-garfinkel-how-sure-are-we-about-this-ai-stuff">here</a>). One criticism of those who focus on issues relating to very advanced AI is that it’s very difficult for us to have any idea what AI will look like in the future - and the implication there, I think, is that this means we have little leverage to influence it. On the other side, some in the ‘long-term’ camp might criticise work addressing issues arising today on the basis that it’s unlikely to have any long-lasting impact.</p></li></ul><p class="">I think that, the way the “short vs long-term” divide in AI policy is currently drawn, there’s way too much focus on the deep ideological disagreement of (a), and not enough on really understanding the tricky and mostly empirically-based disagreements of (b) and (c). I think it would be really valuable to try and unpick further some of the assumptions underpinning these disagreements, and think about what kinds of research might actually help us think more clearly about the best ways to influence the long-term societal impacts of AI. 
</p><p class="">In case it’s not clear by this point, I’m pretty firmly in the “we should care about future populations” camp on (a), and I do think that we should be trying to work in those areas where we might have some influence over how AI impacts society in the very long-run. But I’m much less clear on (b) or (c). I think it’s possible that issues arising from current AI systems or more advanced capabilities that fall far short of AGI could have extreme and long-lasting impacts on society - either by leading to extreme scenarios themselves (e.g. automated surveillance leading to global authoritarianism), or by undermining our collective ability to manage other threats (e.g. AI-enabled disinformation undermining collective decision-making/coordination capacity.)</p><p class="">I also think that some of the best ways to influence the long-term future might be by working on what are mostly ‘current’ issues but with the long-term in mind (e.g. ensuring that today’s AI systems are developed in safe and interpretable ways that extend to more advanced systems; creating good research norms and a culture of responsibility within ML research; developing policy processes that are robust to uncertainty about how AI will develop, and so on.)</p><p class="">Mostly, I think we need more thorough thinking on both how ‘near-term’ and emerging issues might have very long-term consequences, and on what kinds of ‘near-term’ work give us the best leverage over the future trajectory and impact of AI. </p><p class="">One thing I haven’t really talked about, but which is important for prioritising areas to work on, is neglectedness: finding important areas to work on that aren’t getting much attention. Neglectedness is a large part of the reason that so far the ‘long-term’ community has mostly focused on more speculative risks from very advanced AI systems - no-one else was thinking about them. 
But I now think we may be at a point where something like “near-term work from a long-term perspective” is also looking pretty neglected.</p>]]></content:encoded></item><item><title>AI and improving human decision-making</title><dc:creator>Jess Whittlestone</dc:creator><pubDate>Tue, 21 May 2019 08:10:08 +0000</pubDate><link>https://jesswhittlestone.com/blog/2019/5/21/ai-and-improving-human-decision-making</link><guid isPermaLink="false">523a0270e4b01ab6f41e9814:523a039de4b07a227bf7411f:5ce3b1993846d600010fc64f</guid><description><![CDATA[This is a slightly edited transcript from a talk I gave last year at 
Prowler.io’s “Decision Summit”]]></description><content:encoded><![CDATA[<p class=""><em>This is a slightly edited transcript from </em><a href="https://www.youtube.com/watch?v=c4l6YNWiLic" target="_blank"><em>a talk I gave last year</em></a><em> at Prowler.io’s “Decision Summit”</em></p><p data-rte-preserve-empty="true" class=""></p><p class="">Over six years ago now, I read a book called “Thinking, Fast and Slow” by Daniel Kahneman, which is now very well known. This book really grabbed me: it got me thinking about the limitations of human reasoning and how these limitations underpin a lot of problems in the world - from my own personal indecision to big societal problems such as climate change and poverty. I went on to do a PhD in behavioural science, to research strategies for overcoming our ‘biases’. And it’s from this perspective that I first got interested in AI: as something that might help us to do better, as humans. </p><p class="">I now work for a research group at Cambridge called the Centre for the Future of Intelligence, thinking about the ethical and policy issues surrounding the use of AI systems in society. AI has been getting increasing amounts of attention over the last year or two: with multiple articles being published on AI across different media outlets every day, governments across the world beginning to develop AI strategies, and new academic research groups like the one I work for cropping up all the time. But what, exactly, makes AI so exciting? I’d like to suggest that the biggest reason is that we hope AI might be able to help us solve really important problems in the world, problems that we as humans alone struggle to solve.</p><p class="">AI is already being used to solve important problems that we couldn’t solve alone as humans. For example, AI is beginning to improve the quality of diagnosis and treatment in healthcare. 
It could help reduce poverty - last May I was at the UN’s “AI for Good Global Summit” where Stuart Russell discussed the potential of using machine learning and satellite imagery to rapidly and accurately map poverty and wealth across different parts of the world. Even more ambitiously, AI could help us do better scientific research, speeding up progress on all kinds of problems: helping neuroscientists better understand the brain, or physicists better analyse physics data - perhaps ultimately AI systems will be able to come up with better, more creative scientific theories than we can.</p><p class="">We can all agree that the potential for AI to help us ‘make the world a better place’ is exciting. But at the same time, there are two very different models we might have of how AI systems will help us to solve important problems:</p><ol data-rte-list="default"><li><p class="">The first is to think of AI systems as replacing human capabilities: we solve problems better by increasingly outsourcing tasks and decisions to automated systems which can solve them more quickly, efficiently, and effectively. To give a simple example, Google Maps is much better than my brain could ever be at knowing all the different routes in a city and calculating the quickest way to get from A to B: so most of the time I just input where I am and where I want to go, and then follow what Google tells me to do pretty blindly.</p></li><li><p class="">A second, different way to think of AI systems is as complementing human capabilities: AI systems can help us to understand the world in new and important ways, which are complementary to - rather than simply ‘better than’ - the ways that humans understand the world. Returning to the example of Google Maps, there may be things I know about my city - which routes are safest at night, or which are most scenic, for example - that aren’t captured by the software. 
Using Google Maps can help me identify the quickest route almost instantly, which saves me time and energy - but it’s best used in conjunction with things I already know.</p></li></ol><p class="">I think that a lot of current discourse around AI research and its application in society implicitly assumes this first model - that the aim is to replace human capabilities with better AI ones. I also think that a lot of the ethical concerns and fears around deploying AI systems in society naturally stem from this assumption. There are serious concerns about the increasing automation of jobs in society, and what this will do to the economy, inequality, and people’s sense of worth and meaning. People are beginning to worry about how certain human skills might atrophy as we have to use them less: perhaps our memories are already worse today than when we didn’t have the ability to look up everything on the internet. And there are concerns about the safety and reliability of AI systems as they replace humans in safety-critical domains such as self-driving cars.</p><p class="">Especially given all of these concerns, a really important question to ask right now is: do we actually want, or need, to build AI systems that can replace human capabilities? In many domains and applications the answer may be ‘yes’, but I think this question needs to be asked and pushed on a bit more. I want to suggest that there is a quite different way of thinking about how AI systems can help us solve problems - by complementing human capabilities - and that thinking more explicitly about the relative strengths of human and machine capabilities, and how they can work together to solve different types of problems, might be really beneficial.</p><p class="">I suspect this idea that we want AI systems to replace human capabilities is influenced in part by the attitude that people are pretty ‘irrational’. 
This attitude stems from psychology research in recent decades, which has focused a lot on identifying the various biases and irrationalities that people are prone to, leading to quite a pessimistic picture of human capabilities. Yet our brains have to process a huge amount of information, filter out what’s relevant and ignore what isn’t, make sense of ambiguity, and act quickly - often to solve problems that aren’t even clearly defined.</p><p class="">To illustrate the kinds of challenges we face day-to-day, take the ‘simple’ task of buying a bike, which I had to do when I moved to Cambridge recently. There are thousands of places you could look online and offline, and thousands of different makes. There are also many different things you might care about when buying a bike - should I just buy the cheapest decent one I can find? Do I really want the prettiest one? Or should I just go for the best-reviewed one - but according to which website? It quickly becomes totally overwhelming trying to weigh up all your options on all these different variables at the same time. </p><p class="">Because we’re faced with a huge amount of complexity and uncertainty, we can’t possibly optimise every decision. So we use heuristics, shortcuts: in my case, I bought the bike that my sister has, because I’ve ridden it and it seemed pretty good, and it didn’t seem worth spending much more time to find a slightly better one. In this case, I think this was a pretty good heuristic. But “do what my sister does” might not be a great heuristic for other kinds of decisions - for choosing which political candidate to vote for, for example.</p><p class="">To understand both the strengths and limitations of human reasoning, we need to understand these heuristics that we use to make sense of an incredibly complex and uncertain world. These heuristics actually work extraordinarily well a lot of the time - but they go wrong in some systematic ways. 
I’ll give a few examples which I think are pretty central to the limitations of human reasoning. </p><p class="">Because we’re faced with an overwhelming amount of information in making even the simplest decisions, we have to decide what to pay attention to, and what to filter out. One problem that occurs here is that we tend to overweight things that are particularly emotionally compelling or easy to visualise, relative to important pieces of information that might be more abstract and uncertain. We’re much more motivated by immediate rewards - the desire for just one more scoop of ice cream - than longer-term, more probabilistic ones - such as the long-term benefits of eating healthily. </p><p class="">Because we use ‘rules of thumb’ rather than strict and systematic procedures, our judgements are easily influenced by what are called ‘framing effects’ - how a question is asked, or what other things we’ve been thinking about recently, for example. This means that consistency is not a strength of human reasoning - ask me the same question twice on different days, and I might well give different answers. One study, for example, found that experienced radiologists rating x-rays as “normal” or “abnormal” contradicted themselves 20% of the time! </p><p class="">We also aren’t particularly good at reasoning clearly about large numbers: above a certain size, our brains tend to see all large numbers as pretty similar. This is a problem because sometimes these differences really matter - one famous study found that when asked how much they thought it was worth spending to save 10,000 or 100,000 birds, people gave roughly similar answers - which seems mad when we think about how big the difference between these two numbers actually is.</p><p class="">Finally, these information processing shortcuts mean we’re prone to learning “illusory correlations” when faced with complex, messy information: that is, convincing ourselves of relationships that don’t really exist. 
Some have suggested that this tendency to identify illusory correlations underpins how untrue stereotypes form and persist: if you believe that women are less confident than men, for example, then you may start noticing all the cases where this is true and ignore all the cases where it isn’t.</p><p class="">Machines can help us overcome a lot of these biases and problems, because they have very different strengths and limitations:</p><ul data-rte-list="default"><li><p class="">Because machines can store and process much larger quantities of information in parallel, it’s much easier for them to weigh up lots of different factors in making a decision. In fact, research has shown that even a very simple linear formula (i.e. no complex functions or machine learning involved) can outperform human judgement on a range of tasks which require weighing lots of different factors: including predicting the future grades of students, the longevity of cancer patients and the chances of success for a new business. 
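</p><p class="">To make the ‘simple linear formula’ concrete, here’s a toy sketch (my own illustration with made-up numbers, not taken from any of these studies), in the spirit of Robyn Dawes’s ‘improper linear models’: standardise each factor, then add them up with equal weights - no fitting, no machine learning:</p>

```python
# Toy illustration (hypothetical numbers): a unit-weight linear formula
# in the spirit of Dawes's "improper linear models". Every factor is
# standardised and summed with equal weights - no fitting involved.

def standardize(values):
    """Rescale a column of numbers to mean 0 and standard deviation 1."""
    mean = sum(values) / len(values)
    sd = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    return [(v - mean) / sd for v in values]

def unit_weight_scores(rows):
    """Each row is one candidate (e.g. a student) described by several
    factors; returns one composite score per candidate."""
    columns = [standardize(list(col)) for col in zip(*rows)]
    return [sum(vals) for vals in zip(*columns)]

# Three hypothetical students: exam score, attendance rate, coursework mark.
students = [[90, 0.9, 80], [60, 0.5, 55], [75, 0.8, 70]]
scores = unit_weight_scores(students)
# The composite ranks student 0 first and student 1 last.
```

<p class="">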
So it’s not surprising that machine learning models, trained on a huge amount of relevant data, can do even better still.</p></li><li><p class="">Part of the reason given for this is that machines are much more consistent in many ways, and not swayed by irrelevant factors in the way that humans are.</p></li><li><p class="">Machines are also much better at working with precise numbers and probabilities than we are, and at identifying reliable patterns in large and complex datasets - this is why they have the potential to be so valuable in healthcare.</p></li></ul><p class="">But while there certainly are ways that machines could help us overcome human limitations, I think it’s a little too easy to take the view that “humans are irrational, and AI systems, once they’re more advanced, will just be so much better than us at anything.” What AI research has shown us over recent years, if anything, is that many aspects of human cognition which we take completely for granted are actually incredibly complex and difficult to replicate in machines.</p><p class="">One interesting comparison here: we see chess as a complex game, requiring quite a lot of human intelligence to be good at. This turned out to be surprisingly easy to brute-force with a computer program. By contrast, certain aspects of vision like the ability to consistently recognise a wide range of different ‘chairs’ as belonging to the same category is something we completely take for granted and don’t associate with intelligence at all in humans - but has turned out to be surprisingly difficult to build into AI systems.</p><p class="">This is just to point out that despite some of the flaws and limitations of human reasoning, and despite the huge amount of progress we’ve been seeing in machine learning, human cognition still has a lot of strengths relative to machines. Human reasoning may often be imprecise and inconsistent - but it’s also amazingly robust and flexible. 
Infants can learn stable and flexible concepts incredibly quickly, learning to tell the difference between cats and dogs pretty reliably after only seeing a few examples - whereas current machine learning systems generally take thousands of examples and still can’t learn in such a flexible way.</p><p class="">I think we sometimes take for granted the strengths of human cognition because they are precisely those things that we do automatically and without effort, like recognising chairs, navigating our environment, picking up nuance in sentences and recognising emotions on people’s faces. By contrast, the things we associate with ‘intelligence’ in humans are those things we find difficult, like chess - but this gives us a distorted view of what’s difficult and impressive in cognition more generally. As it stands at the moment, humans and AI systems appear to have very different and complementary strengths, and my suggestion is that perhaps we should be trying to understand and leverage those differences more. </p><p class="">Through doing so, we might be able to:</p><ul data-rte-list="default"><li><p class="">identify better ways for humans and machines to work together to solve important problems,</p></li><li><p class="">better prioritise what kinds of AI capabilities we most want to develop, </p></li><li><p class="">and identify ways that humans can best learn from AI systems and vice versa.</p></li></ul><p class="">I think there are a lot of different motivations driving AI progress. In part, AI research is driven by curiosity - I think there’s a deep drive to understand what intelligence is, and a hope that advances in AI might help us to get there. In part, research is driven by commercial or near-term incentives: companies trying to get a competitive advantage or make money - that’s how the world works! 
But as I said at the beginning, I think this desire to improve our ability to solve important problems in the world is really, fundamentally what makes AI so exciting for most people. And if this is what’s really driving us, I think this question of how we can build AI systems that are complementary to humans is a pretty important one.</p><p class="">At the moment, I don’t see very much of this in how people talk about AI. There are some researchers doing great work exploring the relative strengths of human and machine learning, but their focus is on understanding what we can learn from human strengths about how to build more generally capable machine learning systems. I think this is important, but perhaps equally important is to ask: how can understanding the strengths and limitations of human reasoning help us build AI systems that best complement those abilities: that do the things we do poorly, well?</p><p class="">I want to end by pointing out that there’s a difficult tension here, in how we think about developing AI. On the one hand, we want AI systems that can do things we can’t, that are better than us - otherwise what’s the point?  - but we’re also scared about what this will mean for us as humans. In a way, what I’m really suggesting here is that we ask a bit more explicitly: what do we really want from AI, and how do we want it to affect us as humans?</p>]]></content:encoded></item><item><title>Reflections on AIES/AAAI 19</title><dc:creator>Jess Whittlestone</dc:creator><pubDate>Tue, 19 Feb 2019 14:23:27 +0000</pubDate><link>https://jesswhittlestone.com/blog/2019/2/19/reflections-on-aiesaaai-19</link><guid isPermaLink="false">523a0270e4b01ab6f41e9814:523a039de4b07a227bf7411f:5c6bfa6ea4222f3378351b41</guid><description><![CDATA[Last month, I made the arduous trip to Hawaii (half joking - it is a 20hr 
journey!) to the AI Ethics and Society conference (AIES), co-located with 
the AAAI/ACM annual conference. I wanted to share some slightly delayed 
reflections on the trip.]]></description><content:encoded><![CDATA[<p class="">Last month, I made the arduous trip to Hawaii (half joking - it is a 20hr journey!) to the AI Ethics and Society conference (<a href="http://aies-conference.com" target="_blank">AIES</a>), co-located with the AAAI/ACM annual conference. I wanted to share some slightly delayed reflections on the trip.</p><p data-rte-preserve-empty="true" class=""></p><p class=""><span><strong>Some general thoughts on AIES</strong></span></p><p class="">This is only the second year AIES has run. I was particularly interested in going because it is the first mainstream academic conference focused explicitly on the ethical and societal aspects of advances in AI, covering a wide range of disciplines. Because this is a new and emerging area of research that’s incredibly broad, bringing that all together effectively in a single conference isn’t exactly easy - but I think it’s important to enable people across many different disciplines - philosophy, political science, economics, law, literature, history, and so on... - to collaborate and learn from one another.</p><p class="">Quite a large proportion of the accepted papers either (a) focused on solutions to the technical problem of AI alignment/safety, or (b) presented technical work on making ML systems transparent and fair. I found a lot of these presentations really interesting, and think this kind of work is important at a conference like AIES. But they’re also two areas which are now fairly well-established subfields of AI research, and I was a bit disappointed not to see more research that focused on policy or governance approaches to AI in particular. 
It’s also definitely challenging to bring together people from different disciplines and have them communicate effectively with one another - in some cases the talks were steeped in the language of a discipline I wasn’t familiar with and so were difficult to follow, and someone else commented to me that they felt some presentations coming from one discipline weren’t aware enough of relevant work in other disciplines.</p><p class="">All that said, AIES still did a much better job of bringing together a wide range of people and perspectives than most conferences I’ve been to! And it <em>was</em> only the second year of the conference, so there’s lots of room for it to develop and provide an even better environment for cross-disciplinary collaboration and conversations. I think it’d be worth thinking about how to get the call for papers out to a wider range of disciplines and groups next year, and how to build opportunities for sharing insights across relevant disciplines into the conference program (maybe a session or workshop on the side explicitly focused on this would be helpful.)</p><p class=""><br><span><strong>Some notes and highlights</strong></span></p><p class="">These are some pretty quick thoughts/summaries of some of the talks I found interesting and actually managed to take notes on...</p><p class="">There were a couple of talks on AI safety/alignment research that I thought did a really good job of explaining technical work in an engaging and accessible way. 
This is something I particularly care about, as I think it’s really important for those working on ethics/policy issues to have a solid grounding both in what’s technically possible in general, and in technical approaches to ensuring safe and beneficial AI specifically.</p><p class=""><strong>Anca Dragan - Specifying AI Objectives as a Human-AI Collaboration Problem</strong></p><p class="">First, Anca Dragan from UC Berkeley/CHAI gave a great talk on her group’s work on inverse reward design, an approach to building reinforcement learning systems that can <em>learn </em>what we want them to do from what we tell them. The idea here is to avoid us having to specify a reward function precisely, which is really difficult - in large part because we’re only able to consider a relatively small subset of possible situations, and will inevitably always miss some unintended consequences. So rather than us providing the agent with a reward function and the agent taking that <em>literally </em>as what it should do, the idea of inverse reward design is that the agent should take the specified reward function merely as <em>evidence </em>of what we actually want, as evidence of the “true reward function.” Instead of optimising a single reward function, then, the agent learns a <em>probability distribution</em> over a range of possible reward functions (i.e. a range of things we might have meant), allowing it to then take actions which account for uncertainty, and are robust across a wide range of possibilities. This approach also makes the task of defining a reward function a much less difficult one for humans - in fact, we could define multiple different reward functions for different environments which are then <em>all </em>taken as evidence of what we actually want. Anca showed some nice results demonstrating how this could improve the robustness and reliability of a robotic arm. 
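</p><p class="">The core move is easy to caricature in code. Here’s a toy sketch (my own simplification, with made-up numbers - not the authors’ setup): the agent keeps a posterior over a handful of candidate reward functions, weighted by how well each one explains the proxy reward on the features the designer actually tested, and then plans against the worst case:</p>

```python
# Toy sketch of the inverse-reward-design idea (my own simplification
# with hypothetical numbers, not the authors' code): the agent treats
# the designer's proxy reward as *evidence* about the true reward,
# keeps a posterior over candidate reward functions, and then plans
# risk-aversely against that posterior.
import math

# Hypothetical candidate "true" reward functions over terrain features.
candidates = [
    {"grass": 1.0, "dirt": -1.0, "lava": -1.0},
    {"grass": 1.0, "dirt": -1.0, "lava": -10.0},  # lava is catastrophic
    {"grass": 0.5, "dirt": 1.0, "lava": 1.0},     # disagrees on tested features
]

# The proxy the designer wrote down, tested only on grass and dirt -
# so it carries no reliable information about lava.
proxy = {"grass": 1.0, "dirt": -1.0, "lava": 0.0}
seen_features = ["grass", "dirt"]

def likelihood(candidate):
    """A proxy is more plausible under candidates that agree with it
    on the features the designer actually evaluated."""
    err = sum((candidate[f] - proxy[f]) ** 2 for f in seen_features)
    return math.exp(-err)

weights = [likelihood(c) for c in candidates]
posterior = [w / sum(weights) for w in weights]

def risk_averse_value(path):
    """Score a path (a list of terrain features) by its worst-case value
    across candidates that retain non-negligible posterior mass."""
    return min(sum(c[f] for f in path)
               for c, p in zip(candidates, posterior) if p > 0.01)

# One plausible candidate says lava is catastrophic, so worst-case
# planning steers the agent away from the unseen lava tile.
safe_path = ["grass", "dirt", "grass"]
lava_path = ["grass", "lava", "grass"]
```

<p class="">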
</p><p class="">Of course, this only works well if we have a good way to define the appropriate space of possible reward functions. We could give this to the agent explicitly, but we might not always know what the best space to consider is, and this somewhat defeats the point of wanting the agent to learn to be sensitive to situations we <em>haven’t </em>thought about. Something which can help with this is to enable the agent to come back and ‘query’ its designers about unknown situations, rather than being overly risk-averse (called ‘active inverse reward design’). This reframes the problem of trying to get an agent to do what we want somewhat: rather than being an optimisation problem, we might instead think of it as a human-AI collaboration problem. But we still face a tradeoff here in how we want this collaboration to work in practice - we’d rather human input didn’t constrain possibilities too much initially, but the more open we leave things the more the agent will likely need to query the human later, which is also costly and may be impractical.</p><p class="">A next step for research here is to try and build models which capture human biases - which would make it easier to model the kinds of mistakes we might make in specifying reward functions. This would make it easier for the agent to anticipate the <em>kinds </em>of mistakes we are likely to make across different scenarios, and so narrow the space of possible reward functions that are likely to be relevant initially.</p><p class=""><strong>Alexander Peysakhovich - </strong><a href="http://www.aies-conference.com/wp-content/papers/main/AIES-19_paper_25.pdf"><span><strong>Reinforcement learning and inverse reinforcement learning with system 1 and system 2 </strong></span></a></p><p class="">Second, and relatedly, Alex Peysakhovich presented some interesting work on incorporating models of human biases from behavioural economics into reinforcement learning. 
Alex begins by pointing out that inferring a person’s goals from their behaviour is important to many problems in AI - including for cooperation between humans and AI in general, for products such as recommender systems, and for inverse reinforcement learning (where we aim to train an agent to learn human goals/preferences from behaviour.) Most approaches to learning goals from behaviour in AI assume a rational actor model, which has been challenged by a great deal of research in psychology/behavioural economics in recent years. Peysakhovich’s paper uses the now-popular idea that some of the irrationalities in human cognition can be modelled as “two systems” (a slow, reflective system 2, and a fast, intuitive, associative system 1), formalised as two separate utility functions. He shows that both reinforcement learning and inverse reinforcement learning still work with this distinction between s1 and s2: it’s still possible to compute an optimal policy using the two utility functions, and you can infer what both s1 and s2 want separately using inverse RL. </p><p class="">Peysakhovich ends with the broader suggestion that we need better models of human irrationalities for both RL and inverse RL. It was interesting to see this conclusion coming out pretty strongly both in this talk and in Anca’s talk described above. I also wonder whether trying to implement different models of human (ir)rationality in AI systems might yield some interesting findings for cognitive/behavioural science in return, as it could potentially provide a means of testing different models of cognition, and might encourage thinking in novel ways about how we model irrationality. 
For example, in my PhD I ended up suggesting that some of the things we call ‘biases’ in psychology might be better modelled as solutions to trade-offs that are better or worse suited to different environments (rather than as strict deviations from some well-defined normative standard) - I wonder if modelling the different tradeoffs people commonly face and the way they tend to resolve them could be useful here. More generally, if AI systems are using models of rationality to predict human behaviour or cooperate with humans, and we can measure how effectively they are able to do so, this might tell us something interesting about the utility (or even accuracy?) of those different models.</p><p class=""><strong>Daniel Susser - </strong><a href="http://www.aies-conference.com/wp-content/papers/main/AIES-19_paper_54.pdf" target="_blank"><span><strong>Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures</strong></span></a></p><p class="">A third talk I really enjoyed focused on how data and machine learning techniques can be used for online manipulation - something I’ve been thinking about myself recently. Susser opened his talk by pointing out that while a lot is being said now about the impact of AI on structural issues (bias, power, etc.), there’s somewhat less discussion about how AI is affecting and might affect individual experience. He focuses on <em>online manipulation</em>, defined as the use of information technologies to impose hidden influences on another person’s decision-making, which has the potential to both undermine autonomy and to diminish welfare. In particular, new forms of online manipulation are being made possible via the use of <em>adaptive choice architectures: </em>highly personalised choice environments, constructed based on data about individuals, that can be used to steer behaviour. 
Especially as we get used to new technologies, Susser points out, they recede from our conscious attention and awareness, and so we stop noticing their impact on us. (He uses the term “technological transparency” to refer to this fact that we stop noticing technologies and how they impact us, which is somewhat confusing as the term transparency is often now used to mean the goal of making people <em>more </em>aware of applications of AI technologies, almost the opposite!) </p><p class="">I enjoyed this talk and paper, especially as I’ve been thinking about some very related issues, and I think that the kinds of manipulation made possible by even today’s available data and ML techniques are worrying, and something we need to find ways to prevent. Some of the ways that Susser presented these ideas helped me to clarify some of my own half-formed thoughts on these issues, and I think this is a paper I’ll be returning to. One of the high-level points I took away from the talk was that the ‘invisibility’ of many of the technologies that are potentially influencing our choice environments is a big part of the threat to our autonomy - and that there’s therefore a real tradeoff between the benefits of a ‘seamless user experience’, and the costs of not making conscious decisions about how we use our phones, the internet, social media etc.</p><p class=""><br>A few other mentions:</p><ul data-rte-list="default"><li><p class=""><strong>Gillian Hadfield presented joint work with Dylan Hadfield-Menell on </strong><a href="http://www.aies-conference.com/wp-content/papers/main/AIES-19_paper_231.pdf"><span><strong>Incomplete Contracting and AI Alignment</strong></span></a><strong>,</strong> which attempts to draw insights from economics for the AI alignment problem. The idea of misalignment - between individual and societal welfare - is central to ‘principal agent analysis’ in economics, they point out, and this misalignment is governed by contracts. 
These contracts are generally <em>incomplete</em>, i.e. they do not completely specify all behaviour in all situations, due to our limited rationality and the fact that some things are not easily describable or verifiable. These incomplete contracts are supported by the ability to take disputes into external formal or informal enforcement mechanisms, including legal processes. The idea that we might learn something from this about how to align AI systems with our interests seems like an interesting one, and worth exploring the implications for current work on AI alignment (such as the work on inverse reward design Anca Dragan presented where reward functions are specified imprecisely but agents can then query and work with humans to figure out the best behaviour in edge cases.) The talk didn’t really get into this as much as I would have liked, but they did only have 12 minutes - I should instead probably just read the paper properly!</p></li><li><p class=""><strong>Sky Croeser and Peter Eckersley presented a paper on </strong><a href="http://www.aies-conference.com/wp-content/papers/main/AIES-19_paper_147.pdf" target="_blank"><span><strong>Theories of parenting and their application to AI</strong></span></a>. I liked this idea simply because it was an angle I’d never thought of. I wasn’t totally convinced by the analogy e.g. there was a claim that “RL agents are like rampant toddlers”, but actually I think there’s a lot of ways they are very <em>unlike </em>toddlers: in particular, toddlers seem to be heavily driven by curiosity, trying out lots of new things, while this is something that is missing from standard RL (there’s no intrinsic desire to explore and try new things), and needs to be explicitly built in. 
That aside, the parenting perspective still raised some really interesting points. For example, Croeser and Eckersley suggested that if developers thought about building AI agents more from the perspective of parenting they might invest more effort in dataset curation relative to architecture design (we tend to care a lot about ensuring our children have the right kinds of experiences to learn from, but architecture design is currently much more popular in ML - that said, we have a lot more control over architecture design in AI systems than we do in children!) They also suggested that a parenting perspective might make people more open to differences in AI development - not necessarily seeking to create AI agents that are just like us - and reconsider the problem of control - perhaps being more open, within certain constraints, to giving up control once we have achieved a certain level of trust. I think it could be really interesting to think a bit more about how far these analogies between parenting and developing AI systems go and what their limits are, especially when it comes to questions about how “like us” we want AI systems to be and how much control we should be aiming for. 
Perhaps it makes sense for us to be much more willing to allow for difference and cede control with our own children, because we already have a relatively high baseline for how “like us” they will be, and good mechanisms for understanding and trusting human children, which we may not have with AI systems.</p></li><li><p class=""><strong>Tom Gilbert and Mckane Andrus gave an interesting talk on their paper </strong><a href="http://www.aies-conference.com/wp-content/papers/main/AIES-19_paper_130.pdf" target="_blank"><span><strong>Towards a Just Theory of Measurement: A Principled Social Measurement Assurance Program</strong></span></a>, where they made the point that AI ethics shouldn’t just be about making ML tools fair within the constraints of existing institutions, but that we should be going a step further and trying to use these tools to make the institutions and processes themselves more just. I’m not exactly sure how we do this in practice, but I think it’s a great point and one that applies to how we think about AI ethics more broadly: a lot of discussion and writing focuses on how we ensure that AI systems don’t worsen the status quo in certain ways, but we should also be thinking about how they may be able to change that status quo.</p></li><li><p class=""><strong>Inioluwa Deborah Raji and Joy Buolamwini presented a really cool paper on what they call </strong><a href="http://www.aies-conference.com/wp-content/uploads/2019/01/AIES-19_paper_223.pdf" target="_blank"><span><strong>Actionable Auditing</strong></span></a><strong> </strong>- in particular, looking at the impact of publicly naming companies whose products were found to have biased performance (in this case, racial bias in facial recognition models.) They found that those companies which were publicly named significantly reduced the bias in their models (without reducing overall performance) relative to those who were not named. 
I really liked this as showing that it’s actually possible to change companies’ behaviour in a direction that’s clearly positive, given a relatively small intervention. </p></li><li><p class=""><strong>David Danks gave a great invited talk on “The Value of Trustworthy AI”.</strong> I didn’t manage to take notes on this so I’m forgetting some of the details, but he gave a pretty in-depth, very clear analysis of what we mean by “trust” in AI and why it matters, drawing on both philosophy and psychology literature. I enjoyed how clear and precise the talk was, and Danks is an amazing speaker. By the end, though, I wasn’t sure if he’d really reached any new conclusions, or just built a much more solid and rigorous foundation below claims that we mostly all already accept and know to be true (e.g. why interpretability is important and in what contexts.) I don’t think this is necessarily a problem, and given how often these claims about trust and interpretability are made uncritically and ambiguously, I think this more rigorous foundation can be incredibly useful - I’m just not quite sure how deep this foundation needs to go, and how useful it is relative to more constructive work. One new-ish thing the talk did make me think about is how we will, almost inevitably, sometimes need to build trust on something other than really understanding how a system works - and how we probably need much more research on what this might look like. This is something I’ve thought about before, but it seemed much clearer to me after Danks’ analysis. </p></li></ul><p data-rte-preserve-empty="true" class=""></p><p class=""><span><strong>A bit of AAAI</strong></span></p><p class="">I also managed to make it to a couple of sessions at AAAI, the bigger AI conference of which AIES was a part. 
</p><p class="">The first was a <strong>panel debate on the “Future of AI”</strong>, which was surprisingly entertaining - the moderator and panelists had decided to try and make it light-hearted given it was in the early evening, and I laughed a <em>lot </em>more than I expected to. The proposition was “The AI community today should continue to focus mostly on ML methods.” Of course, this is frustratingly vague in a few ways - what exactly counts as “mostly”? How long past literally “today” does this extend? - but I’ll resist focusing on this. What was somewhat surprising is that a majority of the audience - 64% - voted <em>against </em>the proposition, and the panel seemed to come out stronger in that direction too. I’m not sure whether people wanted to be contrary or somehow ‘interesting’ or progressive in their answers, but I didn’t expect this. I’d be pretty interested to see a comparison between these votes and the proportion of the audience who <em>themselves </em>work mostly on methods they would refer to as ML (and plan to continue to for the foreseeable future...) I strongly suspect it would be more than 36%...</p><p class="">The main argument on the “for” side (i.e. the AI community should focus mostly on ML) was that ML is where we are suddenly making a lot of progress that shows no sign of slowing, we haven’t been doing this for all that long, and it’s currently the area of AI we understand least - and so here we should continue to focus, for now. But even those arguing this side suggested that “ultimately” we would need a much broader set of approaches, and emphasised the importance of combining ML with symbolic approaches and different kinds of structure. The “against” side began by taking a more, um, humorous approach - comparing the current focus on ML within AI research to the populist movements leading to the election of Trump and Brexit... 
And perhaps my favourite quote of the conference: “If you have any doubt that an AI winter is coming, just look outside: we’ve all come to Hawaii and today was a disaster!” (It had been an unusually cold and rainy day by Hawaii standards...)</p><p class="">One ambiguity in the proposition which I found a bit frustrating was that there was no clear statement or agreement on what counts as “ML methods” and what was considered “other approaches.” Much of the time it seemed like “ML” was actually being used to mean something more like “learning purely from data using deep neural networks.” One side would claim something like “we need to figure out how to incorporate more innate knowledge and cognitive architectures into ML approaches” as an argument <em>against </em>focusing mostly on ML, and the other side would just respond with “but that’s still ML!” This reminds me of a similar frustration I’ve felt when people talk about whether “current methods” in AI will enable us to solve certain problems, but it’s not really clear where the boundaries of current methods lie. Presumably many kinds of new architectures don’t move us that far away from current methods, and nor does incorporating insights from other disciplines to improve current methods... I think what people are trying to get at here is the possibility of new, deep algorithmic insights of some kind on a similar level to training neural networks using gradient descent - but I don’t think there’s any clear line here between what counts as totally novel and what’s just an adaptation of existing approaches. I’d be interested to see more discussion here that explicitly picks apart different types of approach/research in AI, and different kinds of novel insight/progress that might be made, rather than talking in vague terms about “ML” or “current methods.”</p><p data-rte-preserve-empty="true" class=""></p><p class="">I also made it to <strong>Ian Goodfellow’s keynote talk on Adversarial Machine Learning</strong>. 
Goodfellow essentially argued that adversarial approaches underpin (or at least can be very useful for) most of the new and important areas that ML is beginning to branch out into, now that we’ve got the basics down. The basic idea of adversarial ML, as I understand it, is to train two different ML systems (normally neural networks) which have connected and ‘adversarial’ goals, such that they continually force each other to improve. The classic example, generative adversarial networks (GANs), involves training one network (the discriminator) that aims to tell real images apart from generated ones, and another network (the generator) which aims to produce images that will ‘fool’ the first network into classifying them as real. As the discriminator gets better at telling the difference between real and generated images, the generator must get better at producing ‘convincing’ images to achieve its objective, which then means the discriminator has to be able to discriminate more finely, and so on.</p><p class=""><br>Goodfellow then spent an hour going through many of the areas where adversarial ML can be useful: including in generative modelling, security, reliability, model-based optimisation, reinforcement learning, domain adaptation, fairness, accountability and transparency (FATML), and neuroscience. There were some really interesting examples here, and it helped me to better understand what adversarial ML is actually doing, and how it has applications beyond the standard panda image adversarial example. I’m naturally a bit sceptical of anything that attempts to claim that a single method, approach, or theory, can be applied to almost all things we think are important (especially if that method happens to be the speaker’s own specialism), and Goodfellow’s talk felt a <em>little </em>bit like that... but at the same time, it contained a lot of really interesting ideas and I liked the fact it had a really clear message and cohesive structure. 
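As a toy illustration of that push-and-pull dynamic - my own sketch, not anything from the talk - here is a one-dimensional “GAN” in which the generator only has to learn the mean of a Gaussian, and the discriminator is a logistic classifier on scalars. All the names and hyperparameters are invented for illustration.

```python
import numpy as np

# Toy 1-D "GAN": real data ~ N(4, 1); the generator g(z) = mu + z has a
# single parameter mu; the discriminator D(x) = sigmoid(w * x + b) is a
# scalar logistic classifier. Purely illustrative choices.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

target_mean = 4.0   # mean of the "real" data distribution
mu = 0.0            # generator parameter
w, b = 0.0, 0.0     # discriminator parameters
lr = 0.05

for step in range(2000):
    real = rng.normal(target_mean, 1.0, 32)
    fake = mu + rng.normal(0.0, 1.0, 32)

    # Discriminator step: gradient ascent on log-likelihood,
    # labelling real samples 1 and generated samples 0.
    for x, y in [(real, 1.0), (fake, 0.0)]:
        p = sigmoid(w * x + b)
        w += lr * np.mean((y - p) * x)
        b += lr * np.mean(y - p)

    # Generator step: gradient ascent on log D(fake), i.e. try to
    # make the discriminator label generated samples as real.
    p = sigmoid(w * fake + b)
    mu += lr * np.mean((1.0 - p) * w)

# The generator's mean drifts toward the real mean: each time the
# discriminator learns to separate the two, the generator is pushed
# closer, until the discriminator can no longer tell them apart.
print(mu)
```

The same logic scales up to images: replace the scalar parameters with two neural networks and the two gradient steps stay conceptually identical.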
</p>]]></content:encoded></item><item><title>Sensitivity and Resilience</title><dc:creator>Jess Whittlestone</dc:creator><pubDate>Wed, 31 Jan 2018 20:01:31 +0000</pubDate><link>https://jesswhittlestone.com/blog/2018/1/29/sensitivity-resilience</link><guid isPermaLink="false">523a0270e4b01ab6f41e9814:523a039de4b07a227bf7411f:5a6f4782e2c48360134fd37b</guid><description><![CDATA[I’ve always assumed there’s a tradeoff between sensitivity and resilience: 
both have benefits, but each comes at the cost of the other. But I've been 
thinking recently that maybe this is an unnecessary dichotomy. I don’t 
think it’s easy, but it may be possible to be both highly sensitive and 
highly resilient.]]></description><content:encoded><![CDATA[<p class="">Sensitivity and resilience are two character traits which often seem to be at odds with one another. When we think of sensitivity, we think of someone who tends to feel strong emotions, who is more likely to be affected by things that happen to them, someone who has a lot of ups and downs. A resilient person, by contrast, is able to go through life relatively unbothered by their circumstances, and bounces back quickly.&nbsp;</p><p class="">I’ve always assumed there’s a tradeoff between sensitivity and resilience: both have benefits, but each comes at the cost of the other. Maybe sensitive people can appreciate pleasure and beauty more, and have more empathy for others, but this comes at the cost of more ups and downs, of struggling more. Resilient people can do much more, may never go through truly difficult times, but might miss out on some deeper emotional experiences, or find it harder to connect with people.&nbsp;</p><p class="">I think a lot of the people in my life do broadly fit into these two categories, to varying degrees.&nbsp;And for a long time I think I’ve implicitly accepted that I’m always going to be on the sensitive side of the tradeoff.&nbsp;</p><p class="">But I've been thinking recently that maybe this is an unnecessary dichotomy. I don’t think it’s easy, but it may be possible to be both highly sensitive and highly resilient.&nbsp;</p><p class="">Sensitivity describes how much we react to what’s happening to us our around us - how much and how strongly we respond to other people and our circumstances. But resilience is more about how we <em>respond</em> to our feelings, than about what we actually feel. 
Being resilient <em>isn't </em>about never encountering difficult circumstances, or never feeling strong emotions.&nbsp;A person who never or rarely experiences difficult emotions might appear resilient from the outside - but someone who is able to experience difficult emotions and not be consumed by them, who can bounce back and keep going, shows real resilience.&nbsp;</p><p class="">Sometimes, when I feel anxious, I also feel totally consumed by that feeling: like it’s totally controlling me, like I can’t see beyond it, and I need to do whatever I possibly can to make it stop. I’m reacting to the feeling, caught up in it, resisting it with all my might. But other times - more often, recently - I can feel just as anxious and yet somehow I have a little more distance from it. I don’t feel overwhelmed by it, and I feel like I can accept it - of course, I’d rather not be anxious given the choice, but it’s okay. I can look at it a bit more objectively, see that it’s not going to last forever, notice how it’s affecting me physically. In these moments, I feel pretty damn resilient. But it’s not really because the emotions I'm feeling are less strong.&nbsp;</p><p class="">I worry a bit that sometimes, in an attempt to be more “resilient”, people switch off and ignore or push away strong emotions, for fear of being overwhelmed by them. And that people who are naturally sensitive think they cannot also be resilient, think that’s just a price they have to pay. There’s this sense we have to choose - to the extent that it’s under our control - between sensitivity and resilience. I suspect this is partly based on misconceptions of both sensitivity and resilience: that being sensitive means "overreacting" to things, that being resilient means "grin and bear it." 
But I think there's a type of sensitivity - a kind of emotional responsiveness - that is totally compatible with a certain kind of resilience - the ability to feel things but not be overwhelmed or controlled by them.</p><p class="">I realised this is probably part of why I like meditation so much, because it's essentially teaching you to be both more sensitive (to be more mindful of your experiences) and more resilient (to not get caught up in or resist what you’re feeling.)</p>]]></content:encoded></item><item><title>Reflections on confirmation bias</title><dc:creator>Jess Whittlestone</dc:creator><pubDate>Wed, 10 Jan 2018 13:15:17 +0000</pubDate><link>https://jesswhittlestone.com/blog/2018/1/10/reflections-on-confirmation-bias</link><guid isPermaLink="false">523a0270e4b01ab6f41e9814:523a039de4b07a227bf7411f:5a560e9f085229d58d08cdaa</guid><description><![CDATA[This is the postscript/"final reflections" section from my PhD thesis. I 
tried to write it so that it would stand fairly well on its own as a 
high-level summary of the issues I discuss in more detail in the thesis 
itself.]]></description><content:encoded><![CDATA[<p class=""><strong><em>Below is the postscript/"final reflections" section from my </em></strong><a href="https://drive.google.com/file/d/0B7Ogifr6junAMTZaV2ozZmpNVG8/view" target="_blank"><strong><em>PhD thesis</em></strong></a><strong><em>.&nbsp;I tried to write it so that it would stand fairly well on its own as a high-level summary of the issues I discuss in more detail in the thesis itself: why the evidence for confirmation bias is much weaker than most people think, and how this fits with broader narratives about the importance of open-mindedness and changing one's mind.&nbsp;</em></strong></p><p class=""><strong><em>At some point I'd like to write something about this explicitly for a popular audience, but until then...</em></strong></p><p class="">This PhD has been an interesting exercise for me in changing my own mind, and trying to set aside my preconceptions. I chose to study confirmation bias because I genuinely believed it was pervasive, and at the root of many of society’s problems. I hoped that my research could help find a way to ‘debias’ people against it, to reduce this harmful source of irrationality. More generally, I had the impression that people are too slow and reluctant to change their minds, too ‘closed-minded’, and that pushing in the other direction - helping people to be more open-minded, was clearly a good thing.</p><p class="">However, over the course of my research, I’ve come to question all of these assumptions. As I begun exploring the literature on confirmation bias in more depth, I first realised that there is not just one thing referred to by ‘confirmation bias’, but a whole host of different tendencies, often overlapping but not well connected. 
I realised that this is because of course a ‘confirmation bias’ can arise at different stages of reasoning: in how we seek out new information, in how we decide what questions to ask, in how we interpret and evaluate information, and in how we actually update our beliefs. I realised that the term ‘confirmation bias’ was much more poorly defined and less well understood than I’d thought, and that the findings often used to justify it were disparate, disconnected, and not always that robust.</p><p class="">Reasoning that it made sense to start at the beginning of the process, I first focused my attention on selective exposure: this idea that people tend to seek out information they expect to confirm what they already believe. Though I knew that this was not all there was to confirmation bias, I thought that it was a good place to start: if people don’t even engage with different viewpoints at all, how are they ever going to be able to change their minds when they should? My focus therefore shifted from ‘fix confirmation bias’ to the only-mildly-less-ambitious ‘fix selective exposure’. But as I began exploring the selective exposure literature further, and conducting my own experiments, this also began to look misguided: it wasn’t clear from either the existing literature, or from the results of my first few studies, that selective exposure was actually a particularly strong or robust phenomenon. Was I trying to fix a problem that didn’t exist?</p><p class="">Unsurprisingly, at this point I found myself feeling quite confused about what I was really trying to do. I spent several months trying to make sense of the mixed findings in the selective exposure literature, and trying to square this with a belief I still struggled to let go of: that outside of the lab, people do genuinely seem to have a hard time engaging with different perspectives. 
Eventually I realised that the problem was that selective exposure was far too narrow, and that my measures weren’t really capturing the most important aspects of people’s motivation and behaviour. Someone could display no or little selective exposure - reading a balance of arguments from both sides - but still not really be engaging with those arguments in an ‘open-minded’ way. Equally, the arguments a person chose to pay attention to might make them look biased, but actually be chosen for good reason - based on where they genuinely expected to learn more, for example. At this point I felt that further exploring the question of whether and when selective exposure occurs wasn’t really going to help me make progress on the questions I was really interested in: whether people really are biased towards their existing beliefs, and what it really means to be open-minded.</p><p class="">This set me off along two closely related paths that would eventually converge, both involving taking a big step back.</p><p class="">First, I began exploring the broader literature on confirmation bias in more detail, along with the associated normative issues. My investigation of the selective exposure literature had made me realise that if I wanted to understand confirmation bias, I couldn’t look at different aspects of reasoning independently: I needed to understand how bias might arise at all stages of reasoning, and how these stages interacted with one another. It made me wonder whether other findings I’d taken for granted, like selective exposure, might actually be less robust than I’d thought. I also realised that there were a number of normative questions that the selective exposure research did not adequately deal with - whether selective exposure is genuinely a ‘bias’ or ‘irrational’, and what this really means - that other areas of research might address better. 
I had been interested in this broader debate around what it means to be rational, and whether it is possible to improve human reasoning, since the beginning of my PhD, so I decided to look into this further.</p><p class="">Second, I started delving into the question of what it really means to be ‘open-minded’ and how we might measure it. I was dissatisfied with the way that selective exposure was often implicitly taken to be a measure of ‘open-mindedness’: where open-mindedness seemed to me to be a much broader concept, a concept that selective exposure experiments were far from capturing. I also recognised that open-mindedness was closely related to confirmation bias, but that the term seemed to be somewhat vague, and I wasn’t aware of good ways to measure how ‘open-minded’ someone was being. I therefore wanted to explore the literature on open-mindedness to see if I could get some more clarity on the concept and its relationship to confirmation bias, and to see whether there were better ways to measure open-mindedness than simply what arguments people select to read.</p><p class="">On the first path - exploring the confirmation bias literature and associated normative issues - I realised that most of the findings commonly cited as evidence for confirmation bias were much less convincing than they first seemed. In large part, this was because the complex question of what it really means to say that something is a ‘bias’ or ‘irrational’ is unacknowledged by most studies of confirmation bias. Often these studies don’t even state what standard of rationality they were claiming people were ‘irrational’ with respect to, or what better judgements might look like. 
I started to come across more and more papers suggesting that findings classically thought of as demonstrating a confirmation bias might actually be interpreted as rational under slightly different assumptions - and found that often these papers had much more convincing arguments, based on more thorough theories of rationality.</p><p class="">On the second path, I realised that most of the interesting discussion around open-mindedness was taking place in the philosophical, not the psychological, literature. In psychology, discussion of open-mindedness largely took it for granted what it means to be open-minded, and focused on developing measures of open-mindedness as a personality trait based on self-report scales. I was more interested in whether it was possible to measure open-mindedness behaviourally (i.e. how open-minded someone is in their thinking about a given topic), which required pinning down this vague term to something more precise. The philosophical discussion of open-mindedness seemed to be trying harder to elucidate what it means to be open-minded: but in doing so, found itself caught up in this tricky question of whether it’s possible to be too open-minded, and if so, whether it is misguided for us to think we should teach open-mindedness. For a while, I myself got caught up in this elusive quest to define open-mindedness in a way that evades all possible downsides, before realising this was probably neither useful nor necessary.
I have attempted to clarify some of the terminological confusion that arises around these issues: distinguishing between different things we might mean when we say a ‘confirmation bias’ exists (from bias as simply an inclination in one direction, to a systematic deviation from normative standards), and distinguishing between ‘open-mindedness’ as a descriptive, normative, or prescriptive concept. However, some substantive issues remained, leading me to conclusions I would not have expected myself to be sympathetic to a few years ago: that the extent to which our prior beliefs influence reasoning may well be adaptive across a range of scenarios given the various goals we are pursuing, and that it may not always be better to be ‘more open-minded’. It’s easy to say that people should be more willing to consider alternatives and less influenced by what they believe, but much harder to say how one does this. Being a total ‘blank slate’ with no assumptions or preconceptions is not a desirable or realistic starting point, and temporarily ‘setting aside’ one’s beliefs and assumptions whenever it would be useful to consider alternatives is incredibly cognitively demanding, if possible to do at all. There are tradeoffs we have to make, between the benefits of certainty and assumptions, and the benefits of having an ‘open mind’, that I had not acknowledged before.</p><p class="">There’s a nice irony to the fact that over the course of this PhD, I’ve ended up thoroughly questioning my own views about confirmation bias and open-mindedness: questioning my assumptions about the value of making assumptions, as it were. I haven’t changed<br>my mind completely - I am still concerned that in some situations, and for certain topics, people really are too dogmatic and could do with exploring more. But I’m certainly more open-minded about this than I was. Whether my increased open-mindedness is a good thing, of course, is another question.</p>
<p><a href="https://jesswhittlestone.com/blog/2018/1/10/reflections-on-confirmation-bias">Permalink</a></p>]]></content:encoded></item><item><title>Richard Hamming on doing important research</title><dc:creator>Jess Whittlestone</dc:creator><pubDate>Wed, 25 Oct 2017 17:03:04 +0000</pubDate><link>https://jesswhittlestone.com/blog/2017/10/25/richard-hamming-on-doing-important-research</link><guid isPermaLink="false">523a0270e4b01ab6f41e9814:523a039de4b07a227bf7411f:59f050db2278e76ed422ca10</guid><description><![CDATA[I've heard a lot of people talk about Richard Hamming's advice on how to do 
valuable research, but I only just got around to properly reading the 
transcript of his talk "You and Your Research." Here's a few things he 
talks about I found particularly interesting.]]></description><content:encoded><![CDATA[<p class="">I've heard a lot of people talk about Richard Hamming's advice on how to do valuable research, but I only just got around to properly reading the transcript of his talk <a href="http://homepages.inf.ed.ac.uk/wadler/papers/firbush/hamming.pdf" target="_blank">"You and Your Research."</a>&nbsp;Here's a few things he talks about I found particularly interesting.</p><p class=""><strong>Have the courage to pursue independent thoughts, and to believe you can do important work:</strong></p><p class="">“One of the characteristics you see, and many people have it including great scientists, is that usually when they were young they had independent thoughts and the courage to pursue them. For example, Einstein somewhere around 12 or 14, asked himself the question, “What would a light wave look like if I went with the velocity of light to look at it?”... He could see a contradiction at the age of 12, 14, or somewhere around there, that everything was not right and that the velocity of light had something peculiar.</p><p class="">One of the characteristics of successful scientists is having courage. Once you get your courage up and believe that you can do important problems, then you can. If you think you can’t, almost surely you are not going to. ”</p><p class=""><strong>Beware fame:</strong></p><p class="">“When you are famous it is hard to work on small problems. This is what did Shannon in. After information theory, what do you do for an encore? The great scientists often make this error. They fail to continue to plant the little acorns from which the mighty oak trees grow. They try to get the big thing right off. 
And that isn’t the way things go.”</p><p class=""><strong>If you can’t solve a problem, turn it around:</strong></p><p class="">“I think that if you look carefully you will see that often the great scientists, by turning the problem around a bit, changed a defect to an asset. For example, many scientists when they found they couldn’t do a problem finally began to study why not. They then turned it around the other way and said, “But of course, this is what it is” and got an important result.”</p><p class=""><strong>Don’t underestimate the importance of drive and commitment...:</strong></p><p class="">“You observe that most great scientists have tremendous drive. I worked for ten years with John Tukey at Bell Labs. He had tremendous drive... I went storming into Bode’s office and said, “How can anybody my age know as much as John Tukey does?” He leaned back in his chair, put his hands behind his head, grinned slightly, and said, “You would be surprised Hamming, how much you would know if you worked as hard as he did that many years.”... What Bode was saying was this: “Knowledge and productivity are like compound interest.”... The more you know, the more you learn; the more you learn, the more you can do; the more you can do, the more the opportunity.</p><p class="">...</p><p class="">If you are deeply immersed and committed to a topic, day after day after day, your subconscious has nothing to do but to work on your problem. And so you wake up one morning, or on some afternoon, and there’s the answer. For those who don’t get committed to their current problem, the subconscious goofs off on other things and doesn’t produce the big result. So the way to manage yourself is that when you have a real important problem you don’t let anything else get the center of your attention - you keep your thoughts on the problem. 
Keep your subconscious starved so it has to work on your problem, so you can sleep peacefully and get the answer in the morning, free.”</p><p class=""><strong>...but maybe don’t overestimate it either:</strong></p><p class="">“The misapplication of effort is a very serious matter. Just hard work is not enough - it must be applied sensibly.”</p><p class=""><strong>Learn to be comfortable with ambiguity:</strong></p><p class="">“Most people like to believe something is or is not true. Great scientists tolerate ambiguity very well. They believe the theory enough to go ahead; they doubt it enough to notice the errors and faults so they can step forward and create the new replacement theory. If you believe too much you’ll never notice the flaws; if you doubt too much you won’t get started.”</p><p class=""><strong>And of course, work on problems you really believe are important:</strong></p><p class="">“If you do not work on an important problem, it’s unlikely you’ll do important work. It’s perfectly obvious. Great scientists have thought through, in a careful way, a number of important problems in their field, and they keep an eye on wondering how to attack them....&nbsp;The average scientist, so far as I can make out, spends almost all his time working on problems which he believes will not be important and he also doesn’t believe that they will lead to important problems.”</p>]]></content:encoded></item><item><title>The value in vagueness</title><dc:creator>Jess Whittlestone</dc:creator><pubDate>Mon, 23 Oct 2017 18:12:04 +0000</pubDate><link>https://jesswhittlestone.com/blog/2017/10/23/the-value-in-vagueness</link><guid isPermaLink="false">523a0270e4b01ab6f41e9814:523a039de4b07a227bf7411f:59ee2bddda02bc967ffb09af</guid><description><![CDATA[I’ve begun to appreciate that sometimes vagueness has value. If we want 
everything we write, read, and say to be clear and concise, we’re going to 
be limited in what we write, read and talk about.]]></description><content:encoded><![CDATA[<p class="">Lately, I’ve noticed that my taste in reading material has changed slightly. It used to be that almost everything I read was a similar kind of non-fiction: the kind of non-fiction that has a very clear thesis, clear structure, and makes very clear points.&nbsp;I couldn’t be doing with anything that wasn’t clear - lack of clarity to me always seemed unnecessary and pretentious. Even examples and stories used to illustrate points in non-fiction books often frustrated me: I just wanted them to get to the point.</p><p class="">Recently, though, I’ve found myself gravitating towards more autobiographies, more fiction, more narrative non-fiction. Rather than reading blogs that attempt to provide a clear answer to a question or a clear explanation of something, I’ve been enjoying reading those that simply grapple with complex and interesting ideas without necessarily reaching any conclusion.</p><p class="">I think there are a couple of reasons for this. One is that I’ve more explicitly recognised that reading can serve different purposes - sometimes that purpose is to learn things, to absorb facts - in which case, clarity and simplicity can be really useful. But another reason we might read is to evoke feelings, to help us think about something on more of a gut level, or to see someone else’s perspective. And lately, for whatever reason, I’ve been more interested in finding ways to feel different things and see different perspectives than to “learn facts” in the strictest sense.</p><p class="">A slightly different perspective on this, though, is that I’ve begun to appreciate that sometimes vagueness has value. If we want everything we write, read, and say to be clear and concise, we’re going to be limited in what we write, read and talk about. 
If we prioritise clarity, we’re going to miss out on grappling with some of the most interesting ideas out there: those we don’t fully understand yet. We also risk oversimplifying and thinking we understand things much better than we do.</p><p class="">I read something recently about how people who are willing to grapple with and try to express feelings that they don’t quite understand yet apparently do much better in therapy than those who always seem able to express themselves clearly. Those people who feel they can only talk about things that they can express clearly are, perhaps, failing to acknowledge a whole subsection of their feelings and experiences - those they don’t understand yet - which might be the most important. In some ways, it seems obvious when put like this - you’ll only ever improve your understanding of anything (including yourself) if you’re willing to face what’s presently beyond your understanding.</p><p class="">Acknowledging that the world is messy, that our concepts are vague, that we really don’t understand things, is difficult. We all have a strong drive to make sense of the world, to organise things into neat patterns, to put things in boxes, to make things make sense. The feeling that things suddenly fit together and click into place can be incredibly satisfying.</p><p class="">What’s strange though, is that recently I’ve been finding it somehow <em>more</em> rewarding to grapple with complex ideas I don’t yet understand, to read something that’s thought-provoking but doesn’t have any resolution, than to read someone’s neat and simple explanation of how the world works. Given the choice between an article that attempts to lay out a clear model of how something works, and one that explores a number of connected ideas, tries to make sense of them, and raises interesting questions, the latter feels much more appealing to me. 
This seems to conflict with my model of how the brain works when it comes to processing ideas - that we find chaos and unpredictability deeply unsettling, and have a strong drive to organise ideas so that they “make sense.” So how is it that I’m now getting an odd satisfaction - and maybe even more than that, a sense of greater meaning - from thinking about things that don’t yet make sense to me?</p><p class="">Part of the key here might be in that word “yet” - there’s something exciting about encountering an interesting question you don’t know the answer to if you’re anticipating that you might at some point be able to resolve it. In the same way that sometimes the anticipation of a fun event can be more enjoyable than the event itself, maybe my anticipation of understanding something better can feel as good - or better - than actually reaching the understanding. It might also be that, as I become more and more aware of just how damn complicated the world is and how little I understand, those “simple” explanations feel less satisfying, because I’m harbouring some scepticism about whether they actually explain things as well as they seem to.</p><p class="">But I think there’s even more to it than this. Even though, arguably, a great deal of our sense of meaning comes from this making sense of things, seeing patterns, drawing connections, it feels like I get a different - perhaps deeper - sense of meaning from realising how little I understand. This deeper sense of meaning is like a kind of awe - a sudden appreciation of how incredible and incredibly complex the world is, of how little I understand, of how insignificant I am, of how much I will never understand. 
And somehow, weirdly, this feels good, in a “looking up at the stars and realising how crazy it is that anything exists at all” kind of way.&nbsp;</p><p class="">I have a sense that this ability - to let go of needing to make sense of everything, to accept uncertainty without struggling with it, to embrace it and actually see it as good - is incredibly important. It’s what allows us to not get too attached to any one perspective, to be willing to reconsider our views, to listen to viewpoints we disagree with. It’s what allows us to venture out into the unknown and discover new things, to try to understand things about the world that make little or no sense to us.<br>&nbsp;</p>]]></content:encoded></item><item><title>More ways of improving decision-making</title><dc:creator>Jess Whittlestone</dc:creator><pubDate>Tue, 10 Oct 2017 14:55:03 +0000</pubDate><link>https://jesswhittlestone.com/blog/2017/10/10/more-ways-of-improving-decision-making</link><guid isPermaLink="false">523a0270e4b01ab6f41e9814:523a039de4b07a227bf7411f:59dcdcd8a8b2b02864d18cf1</guid><description><![CDATA[There’s one distinction between different methods we might use to try and 
improve decisions, and another distinction between different kinds of 
decisions we might target for improvement:]]></description><content:encoded><![CDATA[<p class="">I wrote the other day about <a href="http://jesswhittlestone.com/blog/2017/9/30/two-ways-of-improving-decision-making">a distinction between two different ways of decision-making</a>: trying to improve lots of small decisions via ‘nudging’ (e.g. changing the presentation of options in cafeterias so more people make healthy choices) vs. trying to improve a few particularly high-stakes decisions by training people in better decision-making techniques (e.g. training members of the National Security Council to make more accurate forecasts.)</p><p class="">A conversation I had the other day made me realise that these examples actually highlight two important distinctions - and therefore, potentially, four different ways of thinking about improving decision-making. There’s one distinction between different methods we might use to try and improve decisions, and another distinction between different kinds of decisions we might target for improvement:</p><ol data-rte-list="default"><li><p class="">Different <strong>methods</strong>: improving decisions via “nudges” vs. teaching people better decision-making strategies</p></li><li><p class="">Different <strong>kinds of decisions</strong>: improving lots of small decisions a small amount (thousands of people eat more healthily) vs. improving a few important decisions a larger amount (govt spends important resources on more effective health interventions.)</p></li></ol><p class="">I only focused on two possible combinations of these two variables - improving lots of small decisions via nudges, and improving a few important decisions by teaching better strategies. 
But the other combinations are also possible - we could try to improve decisions made by the general public via training (by teaching better decision-making strategies in schools, for example), or we could try to improve a few very important decisions via small “nudges” in key environments (by, um, <a href="https://www.wired.com/2010/12/eyes-good-behavior/">putting a picture of a pair of eyes over the Prime Minister’s desk</a>?) To make this clearer, let’s put it in a 2x2 matrix, because everyone likes those:</p>
<table class="tg">
<colgroup>
<col>
<col>
<col>
</colgroup>
  <tr>
    <th class="tg-l2oz"></th>
    <th class="tg-9hbo">'Nudging'</th>
    <th class="tg-9hbo">Training</th>
  </tr>
  <tr>
    <td class="tg-9hbo">Lots of small decisions</td>
    <td class="tg-yw4l">Classic “behavioural insights” work - e.g. changing the wording on a letter so more people pay their taxes, changing the display of food options so more people make healthy choices</td>
    <td class="tg-yw4l">Adding “critical thinking” or other “rationality training” into school curricula, running workshops that help people make better decisions in their lives</td>
  </tr>
  <tr>
    <td class="tg-9hbo">A few big decisions</td>
    <td class="tg-yw4l">Make relevant evidence more easily available and understandable to policymakers, create social rewards for using certain procedures</td>
    <td class="tg-yw4l">Training influential decision makers (e.g. in tech companies or government) to recognise and avoid cognitive biases or other bad thinking habits</td>
  </tr>
</table>
  <p class="">The bottom left square - trying to improve a few very important decisions by “nudges” - seems particularly interesting to me, because it’s perhaps the least obvious or least discussed. It’s not totally clear to me what “nudging” influential decision-makers - e.g. policymakers in government - would look like, but it certainly seems plausible that one could find ways to tweak the environments in which important decisions are made (by changing regulations, processes, or salient ideas/information) in ways that would result in a subtle shift of incentives, thereby really improving the quality of decisions made.</p><p class="">I recently wrote about <a href="https://80000hours.org/problem-profiles/improving-institutional-decision-making/">improving institutional decision-making</a> as a high impact cause area for 80,000 Hours, where I focused mostly on trying to get better decision-making techniques implemented - i.e. the bottom right cell of the above matrix. One piece of pushback I got on this was that this is incredibly difficult to do in practice, because policymakers and other influential decision-makers simply don’t have much incentive to use costly/effortful new strategies, and there are plenty of bureaucratic barriers to doing so. I agree that this is a concern, but I also wasn’t sure what on earth “changing incentives” could look like in practice. One thing it might look like, though, is nudging - making subtle changes to the environment in which decisions are made that make ‘better’ decisions easier. A huge advantage of “nudging” approaches over “training” approaches, long recognised by the behavioural science crowd, is that they don’t require much if any effort from the people whose decisions are being improved.</p><p class="">Of course, the fact that “nudging” often doesn’t even require awareness on the part of people whose decisions are being targeted, is also the reason it’s sometimes ethically dubious. 
However, I don’t think this is as much of a concern as it seems for nudging institutions towards better decisions, for a couple of reasons. First, if we’re going to try and ‘nudge’ important institutions/teams towards better decisions, those institutions/teams will presumably have to be a lot more involved in doing this than the general public are in the kinds of policy nudges currently employed. I think doing this kind of thing would look a lot more like a few key specialists in an organisation coming up with proposals for how the organisation’s processes and environment could be subtly changed to incentivise better decisions. This would inevitably have to be approved by at least some of those who would be affected, while still not requiring much effort from them beyond that. Second, ‘nudging’ for better decisions here would probably focus on improving the processes by which decisions are made, and building the capacity of groups to make better decisions, rather than on improving the outcomes. For example, the focus might be on making it easier for policymakers to make use of relevant evidence when making decisions, or to use certain systematic processes for assessments.</p>]]></content:encoded></item><item><title>Does a good career need to tell a good story?</title><dc:creator>Jess Whittlestone</dc:creator><pubDate>Thu, 05 Oct 2017 11:44:54 +0000</pubDate><link>https://jesswhittlestone.com/blog/2017/10/5/does-a-good-career-need-to-tell-a-good-story</link><guid isPermaLink="false">523a0270e4b01ab6f41e9814:523a039de4b07a227bf7411f:59d61a7eb1ffb632881f50ad</guid><description><![CDATA[I suspect that a lot of people think about their careers in this narrative 
sense - what step makes sense next, given what I’ve done so far? I’ve 
certainly been noticing this kind of thinking in myself. I’m a bit worried 
about this, because I’m not sure “telling a good story” necessarily tracks 
what I care about - having a career that I enjoy and that has an 
impact in the world.]]></description><content:encoded><![CDATA[<p class=""><a href="http://jesswhittlestone.com/blog/2015/2/25/the-story-of-your-life?rq=narrativity" target="_blank">I’ve written before</a> about the difference between people who like to think of their lives as a story (“narratives”), and those who see their lives more as a series of disconnected episodes (“episodics.”)&nbsp;</p><p class="">I’ve been thinking recently about this in the context of career choices. I suspect that a lot of people think about their careers in this narrative sense - what step makes sense next, given what I’ve done so far? Where do I want to end up in 5, 10, or 20 years’ time, what kind of story do I want to be able to tell about what I’ve done, and how I got to where I am?&nbsp;</p><p class="">I’ve certainly been noticing this kind of thinking in myself. I’m thinking about what to do next after my PhD, and I’ve been finding myself drawn to options that feel like a good next step in my story, while feeling some resistance to making choices that don’t seem to produce such a good narrative. I’m a bit worried about this, because I’m not sure “telling a good story” necessarily tracks what I care about - having a career that I enjoy and that has an impact in the world.&nbsp;<br>The best way for me to do valuable, fulfilling work might well be to do something that makes a bit less sense, that doesn’t tell such a good story. And yet I do still feel this strong pull towards doing whatever makes for a good story.</p><p class="">There are certainly some reasons why optimising for good storytelling could be a good way to think about your career. It helps you to sell yourself to other people, to ensure you’re developing expertise in a specific area rather than just jumping around randomly, to use and build on what you’ve learnt at each step. This all makes sense. 
But sometimes this can go too far - wanting to tell a good story might encourage you to stick on a path even if you don’t enjoy it anymore, or lead you to do things that will “justify” past decisions in an irrational way. Having just spent the past few years getting a PhD, I now find myself more attracted to options which require a PhD. These options would help me justify (to myself and others) why spending all that time on a PhD was clearly worth it. But this doesn’t actually make any sense. It’s great that I now have options I wouldn’t have had without a PhD, but if the very best option is something I don’t need a PhD for, it seems crazy to turn it down just for that reason.&nbsp;</p><p class="">I worry a little bit that many people’s career choices are driven too much by the desire to tell a good story, and this prevents them from considering or choosing otherwise great options that don’t fit into such a great narrative. Maybe we can counteract this by finding ways to tell compelling, unconventional stories about not-so-neat career paths - paths that involve exploring a variety of different options and industries, combining unconventional skills, doing lots of small valuable things rather than one hugely influential thing. I’d personally like to hear more stories of people who have had careers that don’t necessarily fit together neatly, people who spent a few years doing something entirely random, but who don’t see that random thing as a mistake or a waste of time.</p><p class="">Relatedly, I’ve also noticed how difficult I’ve found it meeting new people while I’m exploring and figuring out what to do - because I don’t feel like I have a good story I can tell about what I’m doing now and where I’m going. I find myself trying to fit my introduction into some kind of narrative that makes sense depending on who I’m talking to and what background information they have. 
Maybe the biggest reason that stories are so important to us, especially when it comes to work, is that they form part of our identities. Having a neat, clear story I can tell about my life helps others to make sense of me, and perhaps even helps <em>me</em> to make sense of <em>myself</em>. But none of us fall into neat, clearly-defined identity boxes anyway - maybe it would be better if we stopped trying to.</p>]]></content:encoded></item><item><title>Two ways of "improving decision-making"</title><dc:creator>Jess Whittlestone</dc:creator><pubDate>Sat, 30 Sep 2017 16:07:25 +0000</pubDate><link>https://jesswhittlestone.com/blog/2017/9/30/two-ways-of-improving-decision-making</link><guid isPermaLink="false">523a0270e4b01ab6f41e9814:523a039de4b07a227bf7411f:59cfbfbccd39c3d497e28fdb</guid><description><![CDATA[People sometimes talk about “improving decision making” as a way to improve 
the world. I think that there’s promise here, and I’d like more people to be 
focusing on this. But I also think that this project is often stated 
in a way that’s too broad and vague to be tractable.]]></description><content:encoded><![CDATA[<p class="">People sometimes talk about “improving decision making” as a way to improve the world - if we could find ways to overcome the various ‘biases’ and ‘irrationalities’ that people are prone to, we’d be better able to solve some of the world’s most important problems. I think that there’s promise here, and I’d like more people to be focusing on this. But I also think that, as stated above, this project is too broad and vague to be tractable. I’d like to be able to say something a bit more concrete about what working on this problem might look like, and to begin with, I’ve found it helpful to distinguish between two different types of “improving decision-making.”</p><p class="">The idea of improving policy-making using “behavioural insights” has been gaining popularity in government over the last few years - largely due to the work of the UK Behavioural Insights Team (BIT), and other smaller groups and organisations doing similar work. (Disclaimer: I worked for BIT for ~1 year during my PhD.) The basic idea here is that we can use an understanding of behavioural science to design policies that “nudge” citizens’ behaviour in better directions: helping people to eat more healthily, save more for retirement, or get back into work quicker. By improving the design of policies that affect thousands or even millions of people, we can make the world better by improving many, many small decisions in people’s lives.</p><p class="">I think this work is clearly valuable, and I’m glad there’s more focus on it (setting aside potential ethical issues with governments deliberately influencing citizens’ behaviour - I think there are some legitimate worries here but in practice most of this work is defensible.) 
But there’s also a second way to improve decision-making, a second way of applying “behavioural insights” to improve policy, that I think might be even more valuable, and hasn’t gotten as much attention.</p><p class="">In addition to improving the design of specific policies, we could also apply insights from psychology to improve the processes by which policy decisions are made. Rather than trying to improve lots and lots of small decisions, we could focus our efforts on a few very high-stakes decisions, the decisions made by people in powerful positions most likely to affect humanity’s future. This might be more challenging than small nudges, but might also be much more valuable in the long-run. As technology gets more and more advanced, potential worst-case scenarios from conflict are growing in severity - with nuclear weapons, we have the ability to wipe out millions or even billions of humans, and advances in AI and biotechnology may pose new unprecedented threats. This makes the decisions of powerful institutions all the more crucial, and improving their decision-making competence all the more valuable.</p><p class="">These kinds of “high stakes decisions” - deciding how to respond to threats from other countries or terrorist groups, or deciding how to prioritise government’s scarce resources - are of course much more complex than the decisions most individuals make on a day-to-day basis. In the case of improving citizens’ decisions, generally it’s objectively clear what the “better” decision is (and this is part of how we defend the ethics of nudging) - people making healthier food choices, keeping more people in work, or widening participation in higher education, are all pretty uncontroversially good for society. 
When it comes to complex and high-stakes government decisions, it’s less often the case that people struggle to make what’s clearly the best decision - and more often that it’s incredibly difficult to know what the best decision is at all.</p><p class="">Perhaps it’s helpful here to additionally distinguish between two types of human irrationality. In some cases, we sort-of-know reflectively what the best decision or answer is, but short-term focused heuristics and incentives mean we fail to act accordingly - I know that I’ll feel better in the long-run if I exercise and keep my finances organised, but I often feel more motivated in the moment to spend money on fancy ice cream than to go running. But for other kinds of problems, it’s incredibly difficult for us to know what the right answer to a question or best course of action is, even reflectively, even given a lot of time to think about it. How advanced is North Korea’s nuclear weapons programme? How likely is it that there will be a nuclear attack on the US in the next two years? Part of the difficulty with answering these questions is incomplete information, of course, but there’s also the fact that our brains naturally struggle to combine large amounts of information at once, to think probabilistically, to see the implications and inferences one should draw given various different pieces of data, and so on. Even given a great deal of relevant information, and enough time for reflection, intelligent people will fail to make accurate judgements about complex problems, especially those involving predicting the future. This is largely a problem of limited cognitive ability, and so quite a different type of “irrationality.”</p><p class="">This means the best approaches for improving the ability of powerful institutions to make high-stakes decisions are likely to be very different from the best approaches for improving small decisions people make on a day-to-day basis. 
Simple nudges - making the obviously best option easier or more attractive - aren’t going to cut it. There might be some low-hanging fruit in terms of removing impediments to better decision-making: using checklists is surprisingly effective at reducing simple errors, for example. And I think continuing to push for more evidence-based policy - which already has a fair amount of traction - is likely to be very valuable. But ultimately I think better institutional decision-making will be less straightforward - most of the best-established techniques for improving judgements and decisions (e.g. from the literature on forecasting and improving calibration) seem to require a fair amount of conscious effort and training on the part of decision-makers. There’s also the issue of incentives - arguably we already know a lot about how to make better decisions, and the reason such techniques aren’t used is that influential decision-makers face bureaucratic barriers and competing incentives which mean it’s not in their personal interests to do so. This means that if we want to improve decision-making e.g. at high levels of government, it’s not enough to just understand some techniques that have performed well in academic contexts - this research needs to be combined with an in-depth understanding of how bureaucracies work, and we need to find ways to align better decision-making with other incentives. To see better decision-making techniques adopted in government, I think we’ll need to find ways to show decision makers that these techniques will actually help them achieve their more immediate objectives, whatever they are.&nbsp;</p><p class="">None of this is easy, especially compared to changing the wording on a letter to increase response rates. So it’s not surprising that people interested in improving decision-making have focused much more on simple nudges like changing the wording on a letter. 
But I think that psychology research actually has a lot to say about this second type of improving decision-making - improving the processes by which people make complex, high-stakes decisions where there’s no obvious “correct” answer - as long as we acknowledge the practical complexities and avoid oversimplifying the problem. I think it would be really valuable if there were more collaboration between social scientists and people actually making important decisions, and more discussion of ways we could improve the quality of institutional decision-making. And none of this is really meant to criticise the way government currently makes decisions - I certainly don’t know enough about it! - but just to recognise that humans are imperfect, there’s always room for improvement, and focusing our effort on the highest-stakes decisions might be particularly important.<br>&nbsp;</p>]]></content:encoded></item><item><title>Dangerous drives</title><dc:creator>Jess Whittlestone</dc:creator><pubDate>Thu, 14 Sep 2017 11:50:10 +0000</pubDate><link>https://jesswhittlestone.com/blog/2017/9/14/dangerous-drives</link><guid isPermaLink="false">523a0270e4b01ab6f41e9814:523a039de4b07a227bf7411f:59ba6c25bce176cf268d17a4</guid><description><![CDATA[I really like this speech by C.S. Lewis. It’s about the tendency to form 
“Inner Rings” - informal groups and hierarchies, impossible to pin down 
precisely, but which exist everywhere - in all schools, organisations, and 
societies.]]></description><content:encoded><![CDATA[<p class="">I really like <a href="http://www.mit.edu/~hooman/ideas/the_inner_ring.htm">this speech</a> by C.S. Lewis. It’s about the tendency to form “Inner Rings” - informal groups and hierarchies, impossible to pin down precisely, but which exist everywhere - in all schools, organisations, and societies. Lewis argues that the drive to be “on the inside” of some Inner Ring is a more fundamental human drive than most people think. He also thinks it’s a dangerous one.</p><p class=""><em>“Unless you take measures to prevent it, this desire is going to be one of the chief motives of your life, from the first day on which you enter your profession until the day when you are too old to care.”</em></p><p class="">Why does Lewis think this desire is so dangerous? He gives two reasons:</p><p class="">First, that, “of all the passions, the passion for the Inner Ring is most skillful in making a man who is not yet a very bad man do very bad things.” For almost all people, he says, the choice that might lead them down a bad path will not be an obvious or dramatic one. It will be “the hint of something which is not quite in accordance with the technical rules of fair play: something which the public, the ignorant, romantic public, would never understand...” It will be something which, a friend tells you, “we always do.” And even if something feels a bit off, you might ignore that feeling and do it anyway, because if you were to refuse, you’d feel thrown out - no longer part of that “we”, thrust out of that Inner Ring you so desperately want to be part of.</p><p class="">The second reason Lewis gives is subtler - but I think perhaps all the more dangerous. For “as long as you are governed by that desire,” Lewis cautions, “you will never get what you want... 
until you conquer the fear of being an outsider, an outsider you will remain.” If you seek recognition within a group simply for the sake of the boost you get from being an “insider” - and not because that group provides you with something you value - then you will never be satisfied. Once you’re “in”, the circle will quickly lose the charm it had from the outside. You’ll soon find some new, smaller, more alluring, or higher-status clique to pine after.</p><p class="">So what’s the alternative?</p><p class=""><em>“The quest of the Inner Ring will break your heart unless you break it. But if you break it, a surprising result will follow. If in your working hours you make the work your end, you will presently find yourself all unawares inside the only circle in your profession that really matters. You will be one of the sound craftsmen, and other sound craftsmen will know it.</em></p><p class=""><em>And if in your spare time you consort simply with the people you like, you will again find that you have come unawares to a real inside: that you are indeed snug and safe at the center of something which, seen from without, would look exactly like an Inner Ring. But the difference is that its secrecy is accidental, and its exclusiveness a by-product, and no one was led thither by the lure of the esoteric: for it is only four or five people who like one another meeting to do things that they like. This is friendship.”</em></p><p class="">I think, more generally, we have motives that point at things we actually value, and motives that are more illusory - and the latter can be dangerously alluring. We’re driven to impress, achieve, make money, acquire status, be on “the inside” of a particular group. These things feel good, especially in the short-term, but they’re deceptive - they’re never really satisfied, and we’ll just keep chasing them. These drives can be harmful, as Lewis suggests, because they can lead not-bad people to do bad things. 
But I think the even greater harm of these drives is that they distract us from what actually matters - the drive to be impressive distracts us from actually getting good at something, the drive to be “on the inside” gets in the way of forming genuinely rewarding relationships.</p><p class="">It seriously worries me how much of the time most of us spend chasing things like achievement, status, recognition - and how little we spend thinking about what we actually value: what we’d find rewarding in a job, what kinds of people make us feel good, what we really want in a relationship. It’s easy, in a sense, to just go after the job that makes the most money, the friendship group that makes us look coolest, the partner who is most attractive. Asking ourselves what we actually want and value in life, and then trying to find that, is much harder.</p>]]></content:encoded></item><item><title>Aphorisms</title><dc:creator>Jess Whittlestone</dc:creator><pubDate>Sun, 06 Aug 2017 08:58:22 +0000</pubDate><link>https://jesswhittlestone.com/blog/2017/8/6/aphorisms</link><guid isPermaLink="false">523a0270e4b01ab6f41e9814:523a039de4b07a227bf7411f:5986d76759cc6828fa29528e</guid><description><![CDATA[Some aphorisms that I like, from Vectors by James Richardson.]]></description><content:encoded><![CDATA[<p class="">A while back a friend introduced me to <a href="https://www.amazon.co.uk/Vectors-Aphorisms-Ten-Second-James-Richardson/dp/0967266882" target="_blank"><em>Vectors </em>by James Richardson</a>&nbsp;- a nice little book of aphorisms and very short essays. He's a wise guy, I think (both Richardson and my friend :)). Here are a few that I especially liked:</p><p class=""><em>Later you will not want it. Does that mean you should take it now, or let it go?</em></p><p class=""><em>I trick myself into sins I could not forgive myself for intending. 
If I could depend on myself for a little mercy, I would perhaps not have grown so expert in the self-deception that makes it so difficult minute to minute to know what I am really doing.</em></p><p class=""><em>You keep track of your worth on some wildly cyclic stock market that will soar in fantasy, crash at a cold glance. Other people think you never change.</em></p><p class=""><em>Say too soon what you think and you will say what everyone else thinks.</em></p><p class=""><em>More dangerous than the worst is the pretty good you can no longer tell from the best.</em></p><p class=""><em>So many times I've made myself stupid with the fear of being outsmarted.</em></p><p class=""><em>I am not unambitious. I am just too ambitious for what you call ambitions.</em></p><p class=""><em>The days are exactly alike in that they repeat nothing exactly.</em></p><p class=""><em>I'm sitting here bored, trying to understand. No: trying to remember that everything is a mystery.</em></p>]]></content:encoded></item><item><title>My favourite books of 2016</title><dc:creator>Jess Whittlestone</dc:creator><pubDate>Sun, 19 Mar 2017 19:36:12 +0000</pubDate><link>https://jesswhittlestone.com/blog/2017/3/18/my-favourite-books-of-2016</link><guid isPermaLink="false">523a0270e4b01ab6f41e9814:523a039de4b07a227bf7411f:58cd75ddebbd1a54a33aa953</guid><description><![CDATA[Here are three books that really stood out for me last year.]]></description><content:encoded><![CDATA[<p class="">Ok, so this is a bit late - but here are three books that really stood out for me last year. Part of the reason I've delayed this post is I wanted to write in more detail about each of these books, but since I haven't gotten round to that, I at least wanted to briefly share these:</p><p class=""><strong>1. 
</strong><a href="https://www.amazon.co.uk/d/cka/Impro-Performance-Books-Improvisation-Theatre-Keith-Johnstone/0713687010" target="_blank"><strong>Impro</strong></a><strong> -- Keith Johnstone</strong></p><p class="">A book about theatre and improvisation that's about so much more than theatre and improvisation -&nbsp;a book about social interaction and identity, how we perceive ourselves and others, and the things that constrain us. I found this incredibly dense with insights and wisdom - I'm not sure I've ever highlighted/commented in a book so much, or so strongly felt I needed to read a book multiple times to even begin to make the most of it. <a href="https://www.youtube.com/watch?v=bz9mo4qW9bc" target="_blank">Keith Johnstone's TED talk</a> is also excellent - definitely my favourite TED talk ever.</p><p class="">I don't even really know where to begin explaining what I got from this book, and I hope to write more about it at some point. I think the biggest thing for me was just realising how much I'm holding back all the time out of fear of being judged, how afraid I am to express myself, of doing the wrong thing, of allowing myself to look silly.</p><p class="">Thanks to <a href="http://www.uribram.com/" target="_blank">Uri Bram</a> for buying this book for me <em>years </em>ago, and somehow knowing it was exactly what I needed well before I was able to see it myself. I somehow couldn't get into it until this year - I think I just wasn't quite in the right place.&nbsp;</p><p class=""><strong>2. </strong><a href="https://www.amazon.co.uk/dp/B06XCD1C29/ref=sr_1_1?s=books&amp;ie=UTF8&amp;qid=1489861455&amp;sr=1-1&amp;keywords=i+am+a+strange+loop" target="_blank"><strong>I Am A Strange Loop</strong></a><strong> -- Douglas Hofstadter</strong></p><p class="">This is the book Hofstadter wrote years after <em>Gödel, Escher, Bach</em> - in response to the fact he felt no-one really got what he was trying to say with GEB (despite the fact it was so popular.) 
I'm actually only reading GEB <em>now</em>, after having read this - I figured that if <em>Strange Loop </em>was supposed to be a clearer exposition of Hofstadter's ideas, it made sense to read it first. I feel like this was a good decision - I enjoyed it even more than I expected to, and I think I'm finding GEB easier to get into now - I tried reading it a few years ago and it somehow just didn't grab me enough.</p><p class="">I think part of the reason I liked this book so much is it really pushes all my nerd buttons.&nbsp;I've always been fascinated by philosophical questions of consciousness and identity, as well as really enjoying mathematical logic and weird semantic paradoxes and self-reference. These had always seemed quite separate to me - so someone coming along and proposing to link them together, to explain consciousness using logic and self-reference, feels <em>so </em>satisfying to me. Hofstadter is also just an excellent, engaging writer - I think he uses stories and analogies in a really skilful way to make tricky concepts accessible and engaging.</p><p class="">I don't necessarily think Hofstadter manages to crack consciousness completely - but he certainly gave me a different way of thinking about it that I find really useful. Again, hope to write more about this at some point...</p><p class=""><strong>3. </strong><a href="https://www.google.co.uk/search?hl=en&amp;q=the+center+cannot+hold&amp;meta=&amp;gws_rd=ssl" target="_blank"><strong>The Center Cannot Hold</strong></a><strong> -- Elyn Saks</strong></p><p class="">I got really into reading autobiographies this year - particularly memoirs of mental illness. This was my favourite: an autobiography of a woman with schizophrenia. Something I really enjoy when reading is feeling like I'm getting to understand a totally different perspective (which I think is why I've enjoyed autobiographies so much.) 
I particularly enjoy memoirs of mental illness just because they give me insight into the different ways that people struggle - it's not even that they're about mental illness <em>per se,</em> but just some ways in which the author has found life difficult. I like the honesty and openness that comes with this kind of writing, and I find it helpful to hear others acknowledge just how difficult it is to be human sometimes.</p><p class="">I particularly loved this book, I think, because (a) I knew very little about schizophrenia before, so it was really interesting to learn about, and (b) I was surprised just how...&nbsp;<em>identifiable </em>a lot of the author's experiences sounded, despite the fact I've never experienced anything close to psychotic symptoms. Hearing someone describe how they ended up sitting on a floor in a mental hospital just rocking back and forth moaning for days on end, and thinking,&nbsp;<em>I can totally see how she ended up there,&nbsp;</em>made me extra-aware of just how grey the line between "mentally healthy" and "mentally ill" really is. It made me more aware of how fragile the mind is and how fragile our grip on reality can be. Our brains are generally very good at making sense of the world, of filtering out nonsense, of organising our experiences in a coherent way.&nbsp;I think we take for granted how much hard work is going on here, and it's just not that surprising to me that sometimes these things can break down. 
So much of what Saks describes just sounds to me like these basic abilities failing in small ways - her brain struggling to filter out what's irrelevant, to zone out silly thoughts that make no sense, to organise reality in a neat way.&nbsp;And it just doesn't seem that weird or surprising to me that this could happen - which in turn I think makes 'madness' look much less mad.</p><p class="">Thanks to <a href="https://gruntledandhinged.com/" target="_blank">Kate Donovan</a> for the recommendation!</p><p class="">(I'm conscious of not wanting to sound like I'm oversimplifying schizophrenia - I'm aware I still really have no idea what causes psychosis or what it feels like - but this is just the impression I took from this book.)</p><p class="">---</p><p class="">I've also found it interesting to notice myself drawing connections between these three books, despite the fact they seem really different on the surface. On some level, they're all about how we organise and make sense of the world, and particularly how we conceptualise <em>ourselves.</em>&nbsp;<em>Strange Loop </em>is about where our sense of identity comes from -&nbsp;how this arises from our ability to abstract away from the basic elements of reality, forming higher-level concepts including a concept of <em>ourselves </em>- and how the weird feedback loop this creates (my perception of myself feeds back into the things that I think and do which then feed back into my perception of myself...) may help explain what gives rise to conscious experience.</p><p class="">A lot of what I took from <em>Impro </em>had to do with how this ability to perceive ourselves, and think about how others might perceive us, can constrain us. 
A lot of what improvisation is aiming to do seems to be teaching people to 'let go' of maintaining a certain self-image, or the need to organise reality in certain neat ways.&nbsp;</p>
the beginning. I feel like what I've already done is enough, that I could 
stop right now and still feel I'd done a decent run. I think this tendency 
is pervasive in other areas of my life too.]]></description><content:encoded><![CDATA[<p class="">When I go for a long run, often I find the latter parts much easier than the beginning. This seems strange in a way: you'd think that as my body got more tired, it would become more of a struggle. But it's not really about how my body feels at all - my mindset is much more important. When I've been running for half an hour or so, my mindset shifts. I feel like what I've already done is enough, that I could stop right now and still feel I'd done a decent run. Strangely, this makes it much easier to continue, because anything I do beyond this point is a bonus. With the feeling of having done enough, a lot of pressure and anxiety I didn't even acknowledge I was putting on myself is lifted. I'm no longer worrying that I might get too tired: I'm already tired, but it's ok. I'm not counting the minutes until I can say I've reached a reasonable time. I just run, one stride at a time, and start to enjoy the rhythm of it.&nbsp;</p><p class="">I think this tendency is pervasive in other areas of my life too, especially work: a feeling of anxiety about whether I'm doing "enough" gets in the way of my natural motivation to do things. It's common for me to worry about whether I'm being productive enough, working enough hours, getting through enough tasks. I'll often have a specific goal in mind I feel I need to meet in order to be satisfied, and until I reach that point, my motivation is coming primarily from a place of anxiety. Fear that if I don't keep going and reach a certain point I'll feel a failure, my day wasted. But once I get to the point of "enough", I can let go - and ironically, it's often then that I do my best work. It's often then that I really <em>enjoy </em>what I'm doing, and feel better able to identify and focus on what's important.&nbsp;</p><p class="">It’s when I’ve already met all my goals for the day that I feel like I want to get ahead for tomorrow. 
It’s when I’ve already written a blog post for the day that all these other ideas I want to write about start streaming in. It’s when I’ve already been to the gym in the morning and ticked off my “exercise” goal for the day that I really feel like going for a swim in the evening.</p><p class="">It’s like all of this anxiety about whether I can meet a given standard is getting in the way of my intrinsic motivation to do things. I’ve realised recently how even things I genuinely want to do can end up feeling aversive, like a burden on me – because my brain quickly and naturally develops a feeling of “should” around any goal I set myself. It feels like this stems from a deep, vague, fear that I’m somehow not good enough – not until I’ve worked enough hours, run far enough, achieved enough.</p><p class="">I wonder what it would feel like not to have this – to simply wake up and feel like I’ve already met this standard of ‘enough’, to always feel free to do things because I want to, because they feel important – not because I <em>should</em>.&nbsp;</p><p class="">It’s interesting to ask where this bar for what’s “enough” comes from, and what might shift it. I think it's partly influenced by societal norms and culture. When I was working an office job, for example, I started to internalise the idea that as long as I sat at my desk doing vaguely productive work from 9 til 6ish, I was doing enough. That’s what others around me were doing, and what they thought was enough, after all. Doing a PhD, there’s risk that it <em>never</em> feels like I'm doing enough – there’s always something else that needs doing, my incomplete thesis looming in my mind. And the more I spend time around super ambitious and hardworking people, the higher my standards for what’s 'enough' get. 
I find myself frequently asking: what do I imagine [absurdly-competent-and-productive-person] would do in this situation, how high would <em>their </em>bar be?</p><p class="">My standards also shift as my expectations for myself change, based on what seems 'good' for me at the moment. Having struggled with motivation a bit recently, I got to the point where even managing a couple of good productive hours a day felt ‘enough’ – because that was the best I’d been achieving recently. But as soon as I had a few good days, my standards started to rise – and suddenly what had been good enough a few days ago no longer was. In a sense, the fact that my concept of what’s enough shifts so easily, and is so relative, should be enough to convince me that it’s not really rooted in anything real – nothing beyond my own self-judgement.</p><p class="">I so badly want to live more of my life in a state of ‘enoughness’, where my motivation comes from things I genuinely care about and want to do, not fear of failing to live up to some standard. The anxiety I feel when I’m scared I might not do enough is what so often gets in the way of achieving more. The anxiety that I might get tired before I’ve run enough is most of what makes the running unpleasant and hard, which is what makes me want to stop. The anxiety that I might not be able to finish a project to a good enough standard, or fast enough, is what makes me procrastinate. I've sometimes said to friends that I know I could achieve so much more if I wasn't doubting myself all the time.</p><p class="">I don’t really know yet how to deal with this, to be honest. The short-term solution is to try and set low standards for what’s ‘enough’, and find ways to make sure I can meet them. 
For example, I’ve recently been starting work earlier in the day before doing other things, so that I end up feeling like I’ve done ‘enough’ earlier in the day – and sooner get to a place where I’m free from that pressure.</p><p class="">But this really feels like just a bandaid: working effectively within the constraints of feeling not good enough, while continuing to feed the feeling. Maybe it's naive, but I do believe that I could free myself from these constraints entirely: completely lose the anxiety, the self-judgement, the feeling that I’m not good enough until I’ve achieved enough. I don't like waking up every day thinking I need to prove myself. Perhaps the biggest barrier is the fact that part of me is still afraid: afraid that if I let go of the anxiety and pressure, I might just not achieve anything. I'm scared that if I find some way to feel good enough without achieving anything, then, well - I might not amount to anything. And maybe that wouldn't be enough.</p>]]></content:encoded></item><item><title>Questioning yourself</title><dc:creator>Jess Whittlestone</dc:creator><pubDate>Mon, 25 Jul 2016 18:47:24 +0000</pubDate><link>https://jesswhittlestone.com/blog/2016/7/24/questioning-yourself</link><guid isPermaLink="false">523a0270e4b01ab6f41e9814:523a039de4b07a227bf7411f:5794bf9bb3db2bd9ef449f26</guid><description><![CDATA[Sometimes the most useful ‘advice’ someone else can give you isn’t advice 
at all - it’s asking you the right question. But of course, it’s not only 
other people who can ask you questions - you can also question yourself!]]></description><content:encoded><![CDATA[<p class="">Sometimes the most useful ‘advice’ someone else can give you isn’t advice at all - it’s asking you the right question. Good questions can help alert you to things you already knew but hadn’t quite seen the implications of, help you to consider a different angle on a problem, or simply help you better structure your thinking.</p><p class="">As I suggested in my <a href="http://jesswhittlestone.com/blog/2016/7/4/asking-for-advice" target="_blank">last post</a>, often when we’re struggling with a problem or a decision, what we need isn’t other people’s <em>opinions </em>exactly. Instead, what we need is help making sense of everything we already know, help figuring out how to balance our own conflicting intuitions and feelings. And a good question can really help to do this, to re-orient your thinking.</p><p class="">But of course, it’s not only other people who can ask you questions - you can also question yourself!&nbsp;I think it often doesn’t occur to us to explicitly question ourselves, because it feels a bit strange. There’s this sense we have,&nbsp;that if we could ask ourselves a question and answer it, then we should be able to jump straight to the answer. Since all of this is going on within the confines of our own minds, the idea of being able to 'uncover' something we didn't already know can intuitively seem a bit odd.</p><p class="">But our minds aren’t totally transparent to us. This isn’t some deep Freudian-esque claim about unconscious motivations and desires. It’s just a fact that our brains can only hold, and process, a limited amount of information and considerations at once - we can’t immediately see the logical consequences of everything we believe, or see which of our beliefs are inconsistent with one another. We don’t even really know what we’re feeling, or why, a lot of the time. 
But in some sense, all of this information is there for us to access, and asking the right questions can help us to do this.</p><p class="">Realising this, I’ve started collecting together lists of questions that I might find it helpful to ask myself in different scenarios:&nbsp;when I’m struggling with a decision;&nbsp;when I’ve lost motivation and don’t know why;&nbsp;when I’m setting goals;&nbsp;when I’m feeling low for no clear reason. Part of the reason other people can often ask us better questions than we can ask ourselves is they have a different perspective. Sometimes when you’re in the throes of a tricky problem it’s hard to think clearly enough about what you need to ask yourself (especially if you’re also experiencing difficult emotions.) So it seems really useful to have a bank of questions prepared in advance that you can turn to in these moments, when you're not really sure where to turn next.</p><p class="">I’ve started collecting some questions like this <a href="https://drive.google.com/folderview?id=0B7Ogifr6junAN2Q5U1lLeVVpSlk&amp;usp=sharing" target="_blank">here</a> - feel free to take a look and suggest additions, I’d love to hear what other people feel are the most useful questions they ask themselves or have been asked by others.</p><p class="">Of course, these questions are fairly general, and often the most useful question for a given problem is something quite specific to the scenario. So perhaps the most useful question you can get into the habit of asking yourself is, “What’s the most useful question I could be asking myself right now?”</p><p class="">&nbsp;</p>]]></content:encoded></item></channel></rss>