<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Berkman Klein Center Collection - Medium]]></title>
        <description><![CDATA[Insights from the Berkman Klein community about how technology affects our lives (Opinions expressed reflect the beliefs of individual authors and not the Berkman Klein Center as an institution.) - Medium]]></description>
        <link>https://medium.com/berkman-klein-center?source=rss----cdd8dc4c5fc---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>Berkman Klein Center Collection - Medium</title>
            <link>https://medium.com/berkman-klein-center?source=rss----cdd8dc4c5fc---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 12 Apr 2026 13:58:30 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/berkman-klein-center" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Reporting from “The Battle for Our Attention” Workshop @ Northeastern, April 11, 2025]]></title>
            <link>https://medium.com/berkman-klein-center/reporting-from-the-battle-for-our-attention-workshop-northeastern-april-11-2025-7ca7976c546d?source=rss----cdd8dc4c5fc---4</link>
            <guid isPermaLink="false">https://medium.com/p/7ca7976c546d</guid>
            <category><![CDATA[law]]></category>
            <category><![CDATA[addiction]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[attention]]></category>
            <category><![CDATA[psychology]]></category>
            <dc:creator><![CDATA[Elettra Bietti]]></dc:creator>
            <pubDate>Mon, 21 Apr 2025 15:39:15 GMT</pubDate>
            <atom:updated>2025-04-21T18:55:14.105Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*ybsg90piGnp11YPmpC1myA.jpeg" /><figcaption>Panel 4 on Law and Policy for the Attention Crisis, featuring Alex Roberts (moderator), Dick Daynard, Leah Plunkett, Woody Hartzog and Zephyr Teachout.</figcaption></figure><p>Last Friday, April 11, three of us, <a href="https://law.northeastern.edu/faculty/bietti/">Elettra Bietti</a>, <a href="https://hls.harvard.edu/faculty/aileen-nielsen/">Aileen Nielsen</a>, and <a href="https://law.stanford.edu/laura-aade/">Laura Aade</a>, co-organized a workshop titled “The Battle for Our Attention: Empirical, Philosophical and Legal Questions,” which took place at Northeastern University School of Law and benefited from the support of CLIC, Northeastern’s <a href="https://law.northeastern.edu/academics/centers/clic/">Center for Law, Information and Creativity</a>, and the involvement of Harvard’s <a href="https://cyber.harvard.edu/">Berkman Klein Center</a> community of fellows and faculty. The event brought together leading legal scholars, policymakers, economists, medical scientists, computer scientists, media scholars, and technologists to address the pressing issue of how today’s digital technologies are transforming the understanding, use, and allocation of human attention, including implications for how we spend our time and what information we consume.</p><p>The discussion was wide-ranging, interdisciplinary, and deeply enlightening. We discussed whether attention <a href="https://as.tufts.edu/anthropology/people/faculty/nick-seaver">“actually exists”</a>, <a href="https://heartbrain.hms.harvard.edu/people/michael-esterman-phd">how it works</a>, the <a href="https://hls.harvard.edu/faculty/yochai-benkler/">history</a> and <a href="https://www.bu.edu/questrom/profiles/marshall-van-alstyne/">business models</a> of attention capture, the challenges and <a href="https://cssh.northeastern.edu/faculty/david-lazer/">findings</a> that arise from <a href="https://seas.harvard.edu/person/elena-glassman">empirical</a> <a href="https://cbw.sh/">studies</a> of attention and attention markets, the relation between attention, <a href="https://cyber.harvard.edu/people/bridget-todd">intimacy</a>, <a href="https://www.umass.edu/communication/about/directory/emily-e-west">convenience</a>, and <a href="https://hls.harvard.edu/faculty/rebecca-tushnet/">the law of trademarks</a>, possible analogies with <a href="https://law.northeastern.edu/faculty/daynard/">tobacco and gambling litigation</a>, and the <a href="https://www.fordham.edu/school-of-law/faculty/directory/full-time/zephyr-teachout/">policymaking</a> associated with <a href="https://www.bu.edu/law/profile/woodrow-hartzog/">regulating engagement</a> and <a href="https://hls.harvard.edu/faculty/leah-a-plunkett/">children’s use of social media</a>.</p><p>The event began with a panel on the political economy of attention. <a href="https://hls.harvard.edu/faculty/yochai-benkler/">Yochai Benkler</a> kicked off the discussion with an overview of the capitalist drive to capture and instrumentalize attention over time, beginning with the 19th-century press and culminating in today’s digital technologies. 
He argued that markets won’t solve attention problems and could exacerbate attention harms, in contrast with <a href="https://www.bu.edu/questrom/profiles/marshall-van-alstyne/">Marshall Van Alstyne</a>’s suggestion that a Coasean model of attention rights could help platform owners manage misinformation and reduce incentives to share inaccurate or false information. Where Benkler advocated for the decommodification of attentional experiences, Van Alstyne advocated for a market regime of incentives and individual rights to speak and listen. <a href="https://cssh.northeastern.edu/faculty/david-lazer/">David Lazer</a>, for his part, adopted a middle-ground position, presenting several findings on the slow but steady decoupling of content from its sources. He showed that information has become less traceable to sources, and discussed chatbots’ role in producing knowledge that is increasingly divorced from reliable reference to authors and media sources.</p><p>The second morning panel addressed the empirics of attention. <a href="https://heartbrain.hms.harvard.edu/people/michael-esterman-phd">Michael Esterman</a> discussed some of his clinical work, showing that attention is a fluctuating, fragile process deeply shaped by cognitive and environmental factors. Sustaining attention for long periods of time and across contexts remains a phenomenon that is not well understood, and Esterman presented results showing that blocking a population’s mobile phone access for two weeks could improve participants’ attention, as well as their mental health and well-being. Esterman also pointed to the need for more measurement outside of laboratory settings to better understand the external validity of fundamental psychological results related to attention. <a href="https://seas.harvard.edu/person/elena-glassman">Elena Glassman</a> then approached attention from the perspective of an interface designer, emphasizing that platforms actively shape how users direct their attention — often without users realizing it. Glassman highlighted the danger of decontextualization, where AI-driven tools summarize content by stripping away critical context and leaving users unaware of biases or omissions, and suggested ways to help people build reality-grounded mental models that provide access to contextual information, rather than hiding complexity. <a href="https://www.khoury.northeastern.edu/people/christo-wilson/">Christo Wilson</a> concluded the panel with an overview of empirical approaches to studying attention platform business models, highlighting his role with David Lazer in creating and hosting the <a href="https://nationalinternetobservatory.org/index.html">National Internet Observatory</a> at Northeastern, a center that offers tools for researchers to study how people behave online in response to particular design features and platform strategies over long periods of time.</p><p>During the lunch keynote, FTC Commissioner and Law Professor <a href="https://www.ftc.gov/about-ftc/commissioners-staff/alvaro-bedoya">Alvaro Bedoya</a> spoke of his effort to build a team of doctors and psychologists at the FTC whose focus and expertise include children’s mental health and well-being. 
He also spoke of his work advocating for <a href="https://www.ftc.gov/system/files/ftc_gov/pdf/BedoyaStatementonCOPPARuleNPRMFINAL12.20.23.pdf">children’s privacy</a> under <a href="https://www.ftc.gov/legal-library/browse/cases-proceedings/public-statements/statement-commissioner-alvaro-m-bedoya-issuance-notice-proposed-rulemaking-update-childrens-online">COPPA</a> and of the analogies and differences between tobacco, sports gambling, and addiction to technological devices and products. Commissioner Bedoya suggested that more research needs to be done to better understand which products, platforms, and specific technological features cause addiction and other mental health disorders.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*o2SUIsrakDiwQNcs4G0rKg.jpeg" /></figure><p>The afternoon began with a third panel on media and communication systems for attention capture. <a href="https://as.tufts.edu/anthropology/people/faculty/nick-seaver">Nick Seaver</a> presented a spirited argument that attention may, in fact, not exist at all. While holding and waving a <a href="https://en.wikipedia.org/wiki/Mouse_jiggler">mouse jiggler</a>, Seaver showed that attention is primarily defined or constructed by the way it is measured. Measurement, in turn, serves primarily as a managerial tool of control. While they might appear to be measuring participation, platform designers are in reality disciplining, tracking, and controlling populations. <a href="https://cyber.harvard.edu/people/bridget-todd">Bridget Todd</a> spoke of her work in the podcasting world, emphasizing the relation between intimacy and attention: audiences pay attention based on proximity to particular types of content and the emotions that content generates for them. Her view is that the current digital economy prioritizes profitable outrage over thoughtful storytelling, but that we should always push for the latter. <a href="https://www.umass.edu/communication/about/directory/emily-e-west">Emily West</a> presented some of her research on Amazon through the lens of convenience. Attention and addiction to digital products are promoted by appealing to convenience: platforms engineer frictionless experiences to generate user dependencies, producing a culture of learned passivity and inattention that quietly erodes agency. <a href="https://hls.harvard.edu/faculty/rebecca-tushnet/">Rebecca Tushnet</a> spoke of the law of advertising and the doctrines of dilution and confusion under trademark law, explaining that the law simultaneously invokes but misunderstands the science, and the empirical realities, of human attention, protecting only those parts of attention that can be owned under intellectual property regimes. Similar to Seaver’s argument that attention is effectively what we can measure, Tushnet’s presentation highlighted that we live in an economy of signals and containers of attention.</p><p>The day ended with a panel discussion on legal and policymaking efforts in the attention space. <a href="https://law.northeastern.edu/faculty/daynard/">Richard Daynard</a> shared key takeaways from his litigation experience fighting tobacco and gambling companies. He explained that these industries intentionally engineer their products to addict users while funding research that shows the exact opposite, namely that their products are not addictive and that individuals who engage in excessive use are the ones to blame. 
He added that these companies often lose in product liability litigation, where strict liability regardless of intention is the standard. <a href="https://www.fordham.edu/school-of-law/faculty/directory/full-time/zephyr-teachout/">Zephyr Teachout</a> then offered an overview of the evolving Supreme Court jurisprudence on the First Amendment, arguing that current shifts in the court’s composition and caselaw are opening the door to possible legislation and reform in the attention space, something that until recently seemed largely implausible. <a href="https://hls.harvard.edu/faculty/leah-a-plunkett/">Leah Plunkett</a> discussed state social media laws and described them as providing financial compensation, privacy safeguards for children, and workplace protections. She focused on a recent <a href="https://www.sltrib.com/news/2025/03/11/utah-bill-aimed-protecting-child/">Utah law</a> that allows children to sue their parents for compensation when their image is used in their parents’ social media feed for profit, and described her involvement in drafting a model law on this theme for the Uniform Law Commission. <a href="https://www.bu.edu/law/profile/woodrow-hartzog/">Woody Hartzog</a> concluded the panel presentations, discussing his work with Neil Richards on <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4845648">wrongful engagement</a>, a tort that would allow individuals to sue digital companies for profiting from their addiction and engagement while neglecting users’ well-being.</p><p>The event ended with participants discussing potential research overlaps, future collaborations, and opportunities for advocacy across US regions. In the words of CLIC Director <a href="https://law.northeastern.edu/faculty/roberts/">Alex Roberts</a>, who moderated the last panel, “[i]f an interdisciplinary field of ‘attention studies’ wasn’t already a thing, it is now.”</p><p>This event would not have been possible without support and assistance from Northeastern’s CLIC, Alexandra Roberts, Jennifer Huer, Walaa Al Awad, Natalia Pifferer, Brad Whitmarsh, and Jacob Bouvier. We also thank Harvard Law’s Laura Zeng, and BKC’s Bey Woodward and Jonathan Zittrain for additional enthusiasm and assistance.</p><p>We hope to continue this important conversation in the months and years to come with all of you. If you would like to join future conversations, we have created a regional mailing list, which you can sign up for <a href="https://groups.google.com/g/boston-area-human-attention-scholars/?pli=1">here</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Cob4gaD39xcb9ew0izNFGg.jpeg" /></figure><hr><p><a href="https://medium.com/berkman-klein-center/reporting-from-the-battle-for-our-attention-workshop-northeastern-april-11-2025-7ca7976c546d">Reporting from “The Battle for Our Attention” Workshop @ Northeastern, April 11, 2025</a> was originally published in <a href="https://medium.com/berkman-klein-center">Berkman Klein Center Collection</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Fellows Spotlight: Johanna Wild, Investigative Journalist]]></title>
            <link>https://medium.com/berkman-klein-center/johanna-wild-investigative-journalist-dcf63329feb9?source=rss----cdd8dc4c5fc---4</link>
            <guid isPermaLink="false">https://medium.com/p/dcf63329feb9</guid>
            <category><![CDATA[harvard-university]]></category>
            <category><![CDATA[journalism]]></category>
            <category><![CDATA[journalism-innovation]]></category>
            <category><![CDATA[reporting-tool]]></category>
            <category><![CDATA[open-source-intelligence]]></category>
            <dc:creator><![CDATA[Sam Hinds]]></dc:creator>
            <pubDate>Thu, 11 Jul 2024 13:34:21 GMT</pubDate>
            <atom:updated>2025-02-07T14:37:39.943Z</atom:updated>
<content:encoded><![CDATA[<p><em>An interview on risks, trends, and tools in OSINT digital research</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*dlw-XO-QDUyZPYjrEanRRA.jpeg" /><figcaption><em>Photo by </em><a href="https://unsplash.com/@emilymorter?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><em>Emily Morter</em></a><em> on </em><a href="https://unsplash.com/photos/question-mark-neon-signage-8xAA0f9yQnE?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash"><em>Unsplash</em></a></figcaption></figure><p>When <a href="https://cyber.harvard.edu/people/johanna-wild"><strong>Johanna Wild</strong></a> entered the Berkman Klein Center at Harvard as a joint <a href="https://nieman.harvard.edu/fellowships/nieman-berkman-fellowship-in-journalism-innovation-2/#:~:text=The%20Nieman%2DBerkman%20Klein%20Fellowship,project%20relating%20to%20journalism%20innovation.">Nieman Foundation</a> innovation fellow, I was intrigued. Wild works for the award-winning international open source (OS) investigative journalism collective <a href="https://www.bellingcat.com/about/awards/">Bellingcat</a>. She is an expert on the <strong>creative deployment of technical approaches</strong> to support a more diverse cohort of <strong>public interest reporters and investigators</strong>, blending automation with human-centered research methodology.</p><p>As someone who supports expert networks in disinformation and conflict documentation, I wanted Wild’s first-hand perspective on the <strong>benefits and risks of using novel open source intelligence (OSINT) tools to enable a broader, more transparent global knowledge base</strong>. We conducted this interview over email between Amsterdam and New York City.</p><p><strong>Sam Hinds: Do you encounter specific types of people or professional backgrounds in the work of investigations and OSINT tool development?</strong></p><p><strong>Johanna Wild: </strong>The great thing about the field of open source research is that it consists of people from various backgrounds. Open source researchers spend a lot of time online. They find pieces of information on social media platforms, in online forums and databases, and they compare features that they identify in user-generated online videos and photos with locations that can be seen on satellite imagery. This process, called geolocation, is used to verify online images. The nature of open source research allows anyone with an internet connection to do this type of work.</p><p>The open source researcher community is therefore a mix of people who do open source research as part of their job and volunteers who are passionate about contributing to important research in their free time. My surveys and user interviews with our Bellingcat community showed that it consists of people working for human rights organizations, stay-at-home parents who use their limited time to do something mentally challenging and useful, cybersecurity specialists, job seekers who want to learn new skills, lawyers, data scientists, people who are retired, and many more. 
When I ask volunteers about their motivation, they often say that they want to contribute to research that reveals issues in the regions where they live, and that in these times, characterized by various conflicts around the world and global challenges like climate change, they want to feel that they are not just passively sitting around but actively contributing to something that creates new knowledge about those issues. Another motivation is to become part of a community with similar interests and to improve their open source research skills.</p><p>Of course, there are also many journalists who are part of this community. Nowadays, more and more newsrooms are setting up teams focusing on open source research. However, journalists were late adopters in this field. Most of them only discovered in the last few years how useful this type of research can be, especially if it is combined with traditional journalistic skills and methods. Newsrooms have even started hiring skilled open source researchers who are completely self-taught and have no journalism degree, which is still rather unusual in the news industry.</p><p>Volunteers with a technical background contribute by building tools. These are often <strong>simple command line tools</strong> that are able to do one very specific task, for instance to scrape posts from a specific social media platform or to check whether an online account has been created on a platform using a specific phone number. Those tools do not usually turn into big commercial products; they are built by people from within the open source software community who focus on writing code that is publicly accessible to anyone. Several years ago, I clearly saw that the open source researcher and open source software communities are a very good match for each other; we just needed to bring them together. This is one of the things that we now do at Bellingcat. We organize hackathons, actively invite software developers into our volunteer community, and support them in building their own tools or contributing to tools built by the Bellingcat team. This group of volunteers consists, for example, of people who have a full-time job in a software company but want to do something meaningful in their free time, of job seekers who want to create their own portfolio of tools, and of academics who are already deep into a technical topic but would like to test its practical application.</p><p>Although the open source researcher and tech communities are very diverse in terms of their professional and personal backgrounds, they are currently still dominated by volunteers and professionals from Western countries, mainly from the US and Europe. The technical tool builder community is also, to date, still male-dominated. This lack of representation raises serious questions about who defines the future of our field and who has the power to research topics in regions all around the world. With people in many other regions still excluded from participating in this type of research, they mainly become the subjects of Western researchers.</p><blockquote>“While AI tools can be powerful, we should not expect to automate the whole open source research process. 
Doing open source research is a combination of specific research methods, the use of tools, a good dose of logical thinking and also creativity!”</blockquote><p><strong>SH: Have you seen novel trends emerge in the type of information researchers want today?</strong></p><p><strong>JW: </strong>I definitely observe that researchers, and especially journalists, have become more aware of how useful it is to be able to work with large datasets, to know how to scrape information from websites, or to have the skills to build small tools that can speed up some of their research tasks.</p><p>Currently, everyone is of course interested in AI. Less experienced researchers are hoping for a tool that lets them input any picture or video and spits out the exact location where it was taken. While <strong>AI tools </strong>can be powerful, we should not expect to automate the whole open source research process. Doing open source research is a combination of specific research methods, the use of tools, a good dose of logical thinking and also creativity! Creativity is needed to spot topics that are worth investigating. When deciding where to look next in the vast amount of online information that is out there, creativity helps to connect multiple, often tiny, pieces of verified information that allow researchers to draw conclusions on a certain topic.</p><p>Another trend is the use of <strong>facial recognition tools</strong>. Open source researchers often find pictures that show individuals who have a connection to a certain research case but whose identity they don’t know. In the last few years, several easy-to-use facial recognition tools have emerged. Researchers can upload a picture of a person, and the tool compares this picture with collections of photos from social media platforms. Sometimes, this can reveal the identity of a person, for instance by providing the person’s LinkedIn profile. It is obvious how useful this can be to identify individuals who were involved in serious crimes that require journalistic reporting.</p><p>However, facial recognition tools are a double-edged sword. We all know that they can provide wrong results. Two people might just look very similar, and an uninvolved person might be misidentified as someone who is involved in illegal activities. It is therefore important that open source researchers do not use those tools as the <em>only</em> way of identifying someone. On top of that, the use of such tools raises various ethical questions, ranging from the risk of stalking random people online to questions about the data sources on which facial recognition tools rely. At Bellingcat, we reflected on how we can ensure a responsible use of facial recognition technologies and concluded that we will refrain from using these tools extensively, and never as a core element of an investigation. We have also never used products from companies like Clearview AI. A good example of how we sometimes use a facial recognition tool as a starting point for further research can be found in our article on how <a href="https://www.bellingcat.com/news/2024/04/06/cartel-king-kinahans-google-reviews-expose-travel-partners/">“Cartel King Kinahan’s Google Reviews Expose Travel Partners”</a>.</p><p><strong>SH: Are there any overlooked tools that you like to highlight in your trainings?</strong></p><p><strong>JW: </strong>The best type of tool really depends on the research topic. Often a combination of several small tools can lead to the best results. 
For instance, our <a href="https://bellingcat.github.io/name-variant-search/">Name Variant Search Tool</a> is basically an enhanced search engine for finding information about people. Open source researchers often start with a name and try to find out as much as possible about the person’s online presence. However, the name might be written differently on different sites. “Jane Doe” might also show up as “J. Doe” or “Doe, Jane”. The tool suggests different possible variations of a name and provides search results for all those variations. It is also possible to instruct the tool to search for a name specifically on LinkedIn or Facebook.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/904/0*QbabNo0_nkmrIX5G" /></figure><p><em>Example: Name Variant Search results for different variants of the name “Jane Doe”</em></p><p>Our <a href="https://osm-search.bellingcat.com/">OpenStreetMap search tool</a>, on the other hand, supports the geolocation process. A core task of many open source researchers is to find out where a photo or video that they found online was taken. To do that, they try to identify specific features and compare those with what is visible on satellite imagery or maps. If researchers already have a rough idea in which region a photo might have been taken, they can input a list of features that are visible in the photo (for instance, a residential street, a school, and a supermarket) into our tool, which will try to list all locations in a pre-defined region in which those features show up together. This can really help narrow down possible locations.</p><p><strong>SH: What’s an example of an unusual story or insight one can find from OS tools?</strong></p><p><strong>JW:</strong> If open source researchers have no idea where a picture might have been taken but they know at which time it was captured, and the photo shows objects that cast clearly visible shadows, they can try our <a href="https://colab.research.google.com/github/GalenReich/ShadowFinder/blob/main/ShadowFinderColab.ipynb">ShadowFinder</a> tool, which calculates at which locations around the world shadow lengths correspond with what can be seen in the photo at that specific point in time. The idea is simple: at a known time, the length of a shadow relative to the object casting it fixes the sun’s elevation angle, which in turn constrains where on Earth the photo could have been taken. This helps open source researchers concentrate their geolocation efforts on the areas suggested by the tool instead of searching across the whole world.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/580/0*t4hBjC4fWqqaZhoR" /><figcaption><em>Example of a </em><a href="https://github.com/bellingcat/ShadowFinder"><em>ShadowFinder</em></a><em> tool result: Possible locations are shown by the yellow circle.</em></figcaption></figure><p>Another tool that has gained popularity within the open source researcher community is <a href="https://peakvisor.com/">PeakVisor</a>, a tool that was originally targeted at helping mountaineers orient themselves but which can also be used for <a href="https://www.bellingcat.com/resources/2023/07/13/more-than-mountaineering-using-peakvisor-for-geolocation/">geolocation tasks</a>. For instance, we used it to research the location of the <a href="https://www.bellingcat.com/news/2023/12/11/the-sound-of-bullets-the-killing-of-colombian-journalist-abelardo-liz/">killing of Colombian journalist Abelardo Liz</a>. 
This example in particular shows that a combination of research skills and the use of tools can go a long way.</p><p><strong>SH: What frustrations or barriers do you see as a trainer, and how could the field democratize knowledge of command line tools?</strong></p><p><strong>JW:</strong> First of all: Teaching open source research is great. People who are interested in learning these methods come from so many different backgrounds, which allows everyone to learn new things from each other, including the trainers! The topic is also quite accessible, meaning that everyone can start doing open source research with very simple methods, like using search engines in creative ways. Sometimes, this can lead to surprising results: For instance, just by googling, my colleague Foeke Postma revealed how <a href="https://www.bellingcat.com/news/2021/05/28/us-soldiers-expose-nuclear-weapons-secrets-via-flashcard-apps/">US soldiers exposed nuclear weapons secrets via flashcard apps</a>.</p><p>Of course not all methods are as simple, and one of the things people struggle with the most is research tools. During my <a href="https://nieman.harvard.edu/fellowships/nieman-berkman-fellowship-in-journalism-innovation-2/">Nieman-Berkman Klein fellowship</a>, my research assistant Cooper-Morgan Bryant and I interviewed forty open source researchers about their use of tools. Their answers confirmed my <a href="https://www.bellingcat.com/resources/2022/08/12/these-are-the-tools-open-source-researchers-say-they-need/">previous findings</a> on this topic: Open source researchers who are either beginners or who are looking at a topic that is new to them find it really difficult to figure out what tool they should use at what stage of the research process and how those tools work. With such a wide variety of online tools, some more useful and some easier to find than others, <strong>many researchers feel overwhelmed by the task of finding their way through the landscape of available tools</strong> spread across various platforms.</p><p>In addition, the majority of open source researchers are not able to use command line tools, since this requires a certain degree of technical skill. However, those are exactly the type of small tools that the open source software community is building most frequently. There is a clear divide between those who are building tools for open source researchers and the researcher community itself, for whom those tools often turn out not to be accessible.</p><blockquote>“Open source researchers want complex tools that are easy to use and that are stable and well-developed, but such tools need funders and teams who build them, and these conditions are not always easily met in the open source research and journalism space.”</blockquote><p>On the other side, open source researchers are often not aware of the resources that are required to build mature tools that have an easy-to-use interface. It is getting easier now, but tool builders need to invest a lot more time to build such tools, and this is difficult for people who do this task in their free time and without any funding. Open source researchers want complex tools that are easy to use, stable, and well-developed, but such tools need funders and teams who build them. These conditions are not always easily met in the open source research and journalism space. 
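</p><p>To make that divide concrete, here is a minimal sketch of the kind of single-task command line tool described above: a toy name-variant generator in Python. It is a hypothetical illustration, far simpler than the actual Name Variant Search Tool, and its variant rules and output format are assumptions made for the example.</p><pre>#!/usr/bin/env python3
# Toy single-task command line tool: generate common variants of a name
# and print ready-to-paste web search URLs. Hypothetical sketch; not the
# actual Bellingcat Name Variant Search Tool.
import sys
import urllib.parse


def variants(full_name):
    parts = full_name.split()
    if len(parts) == 1:
        return [full_name]
    first, last = parts[0], parts[-1]
    return [
        first + " " + last,            # Jane Doe
        first[0] + ". " + last,        # J. Doe
        last + ", " + first,           # Doe, Jane
        last + ", " + first[0] + ".",  # Doe, J.
    ]


def main():
    if len(sys.argv) == 1:
        sys.exit('usage: namevariants.py "Jane Doe" [site, e.g. linkedin.com]')
    name = sys.argv[1]
    site = sys.argv[2] if len(sys.argv) > 2 else None
    for v in variants(name):
        query = '"' + v + '"' + (" site:" + site if site else "")
        print(v, "->", "https://www.google.com/search?q=" + urllib.parse.quote(query))


if __name__ == "__main__":
    main()</pre><p>A tool like this does one useful thing and can be shared as a single file, but without a graphical interface it remains out of reach for researchers who do not work on the command line.</p><p>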
I hope that researchers will become a little bit more open to learning some basic technical skills and, even more importantly, that they will understand that not every tool that is useful for their research has to function like a fully built commercial tool.</p><p>At Bellingcat, we focus on bridging this gap between tool builders and open source researchers. We work with tech communities, often through programs like <a href="https://www.youtube.com/watch?v=sOHj7r0_CKA">hackathons</a> or fellowships, and make them aware of how important good user guides are, even for seemingly easy-to-use tools. On the other hand, we teach open source researchers how to use command line tools. We also launched a video series with the goal of helping researchers take their <a href="https://www.youtube.com/watch?v=ymCMy8OffHM">first steps</a> towards the more technical side of research tools.</p><p><strong>SH: Tools take a lot of resources to build. Do any OSINT tools have a complicated provenance in terms of private sector origin or geopolitics?</strong></p><p><strong>JW:</strong> It is definitely problematic that researchers and journalists can be so dependent on tools provided by big tech companies. Meta’s social monitoring platform CrowdTangle will be shut down in August, and this has caused a lot of discontent amongst journalists, in particular amongst those who are covering elections. Similarly, many of the platforms and tools open source researchers use are provided by Google, like Google Search, Google Maps and Google Earth Pro. We are often at the mercy of the decisions that big tech companies take regarding the use of their tools.</p><p>However, their tools are usually provided for free, which is not the case for other commercial tools. Open source researchers definitely need to look into the companies from which they are buying tools. One risk is that tool providers might be able to see what keywords people are typing in or what topic someone is working on. Researchers and journalists need to be sure that their sensitive research topics are safe from being monitored by tool providers.</p><p>At Bellingcat we focus mostly on small open source tools, but those tools come with their own set of challenges. For instance, it is often not clear who is behind a tool that is offered on code-sharing platforms like GitHub, which can raise security-related questions.</p><blockquote>“I would love to see universities getting more involved in building and maintaining tools for open source researchers and journalists…since both sides have the common goal of advancing research in the public interest”</blockquote><p>This is why I really hope we can build a different tool ecosystem for open source researchers in the future. I would love to see universities getting more involved in building and maintaining tools for open source researchers and journalists. I think that such collaborations could work well since both sides have the common goal of advancing research in the public interest, and many of the tools that are used by open source researchers are equally useful for academic researchers. I also see opportunities to research security-related aspects of widely used tools together, as journalists and open source researchers could definitely use some help in assessing the risks that some of the tools they are using might be posing. 
If anyone reading this would like to discuss these topics with me, feel free to <a href="https://twitter.com/johanna_wild">get in touch</a>!</p><p><strong>SH: Misinformation, disinformation, conspiratorial thinking: What are some of the uses and abuses of “research” you see in these contexts?</strong></p><p><strong>JW: </strong>What is most common — especially during conflicts and wars — is that people share photos or videos from a different conflict, or old imagery, and make people believe that they are related to current events. In the context of the Israel-Gaza conflict since October 2023, this phenomenon has reached a new scale, with countless examples circulating online. For instance, <a href="https://www.bellingcat.com/news/2023/10/11/hamas-attacks-israel-bombs-gaza-and-misinformation-surges-online/">Bellingcat</a> found videos shared with claims that one showed rockets fired at Israel by Hamas and that another showed recent Israeli strikes on Hamas; both turned out to be recycled videos that had been uploaded to YouTube several years prior.</p><blockquote>“People who post such pictures might sometimes think they are doing ‘research’ and that they are sharing relevant information about an ongoing conflict, without realizing that they are actually sharing incorrect information.”</blockquote><p>What is dangerous is that some of those posts go viral and reach significant numbers of people who will never know that they fell for misinformation. People who post such pictures might sometimes think they are doing “research” and that they are sharing relevant information about an ongoing conflict, not realizing the information is incorrect. Others, however, will do it on purpose to evoke emotions either in favor of or against one of the conflict parties. Users of online platforms cannot really do much to prevent being confronted with such posts. This is another reason it is essential that we all learn to question what we see online and to <strong>invest some time in learning </strong><a href="https://www.bellingcat.com/resources/2021/11/01/a-beginners-guide-to-social-media-verification/"><strong>basic verification skills</strong></a>.</p><p>What we have also been seeing is that supporters of conspiracy ideologies are increasingly using open source research tools and presenting the information as journalistic findings. For example, <a href="https://www.bellingcat.com/news/2021/11/24/toy-rabbits-chemtrails-and-german-qanon-fanatics-how-not-to-conduct-open-source-investigations/">QAnon supporters</a> in German-speaking countries started using flight-tracking sites to search for flights which they falsely believed were circling above “deep underground military bases” in which children were hidden and mistreated. This is problematic since people who are not aware of the methods and standards of open source research might not be able to differentiate between serious research and the distorted version of it.</p><p><strong>SH: What are some of your favorite guidelines or best practices for journalists who aim to cover (and fact-check) broad conspiratorial thinking enabled by OS information?</strong></p><p><strong>JW:</strong> Looking at their business models can often be a very promising approach. More often than not, conspiracy-minded communities have business-savvy people amongst them who manage to benefit financially from those communities’ beliefs. 
When I was researching QAnon online communities in Germany, big platforms like Amazon and eBay had started implementing measures to ban QAnon products from their platforms. However, this seemed to have created new opportunities for QAnon influencers, who were offering merchandise via their own small online shops. On top of that, customers in Germany were able to buy QAnon products from abroad, for instance from Chinese or British companies that offered products targeted specifically at German-speaking customers. It was interesting but also concerning to see how international today’s conspiracy merchandise markets are.</p><p>To research online shops, it is always worth checking what <a href="https://www.bellingcat.com/resources/2024/03/26/how-to-get-started-investigating-payment-gateways-online/">payment options</a> those shops are using and looking into their potential use of <a href="https://www.bellingcat.com/resources/how-tos/2019/03/26/how-to-track-illegal-funding-campaigns-via-cryptocurrency">cryptocurrencies</a>. It is also important to take some time to learn the terminology a certain group is using. If you are looking into the far-right, for instance, it is crucial to learn how to <a href="https://www.bellingcat.com/resources/2023/04/04/how-not-to-interpret-far-right-symbols/">interpret the symbols</a> they use.</p><blockquote>“Open source researchers are often portrayed as some type of ‘nerdy hero’ who spends time on his laptop to research ‘the bad guys’ and is celebrated once he succeeds. The idea of one hero figure who solves all the research challenges is really the exact opposite of how open source research works best…”</blockquote><p><strong>SH: How might international organizations build stronger support for women, femme-identified, and gender-nonconforming media and research professionals?</strong></p><p><strong>JW: </strong>In the field of open source research, there are definitely tendencies that I would like to see changed in the future. It is well established that women and gender-nonconforming people have traditionally had a much harder time entering and succeeding in investigative journalism. Those issues are far from being overcome, but the journalism world has started to talk more openly about them, and the fact that academic researchers have published work on this topic has also been helpful.</p><p>My impression is that as open source researchers, we have not yet put enough effort into reflecting on what is happening in our own field. Maybe we thought that since it is relatively new, those issues would not appear as strongly. Unfortunately, however, they do, and it’s time to recognize this.</p><p>There are definitely many contributing factors, but one that has had a strong effect on me is that open source researchers are often portrayed as some type of “nerdy hero” who spends time on his laptop to research “the bad guys” and is celebrated once he succeeds. The idea of one lone wolf who solves all the research challenges on their own is really the exact opposite of how open source research works best, which is by nature collaborative and often requires the efforts of many to put together various small pieces of verified online sources for a specific research case. 
For those of us who don’t want to fit, and are also not able to fit, into this commonly portrayed male hero picture, this field might not necessarily feel like a good fit.</p><p>However, since more and more traditional newsrooms are setting up open source research units right now, I see more women entering the field, and hopefully this will also change how we publicly talk about open source research over time. To everyone who organizes a public event on open source research, I recommend not only approaching the few already well-known voices in the field but also making the effort to find and invite speakers who can contribute new perspectives and who have done research on topics that are not always in the spotlight.</p><p><strong>SH: What were the most meaningful conversations you had during your time at the Berkman Klein Center? Do you plan to use any of your connections or insights from the fellowship in your future work?</strong></p><p><strong>JW:</strong> I am very grateful that I was able to be a Berkman Klein Fellow this year. It was a great opportunity to be part of a community of people who all reflect on how we integrate new technologies in our lives, but from various different angles. Each fellow and community hour provided me with insights into a different technology-related topic, and I liked the “surprise” effect of being able to learn new things about topics I usually don’t have the time to think about. This has definitely had an impact on how I approached my own projects with Bellingcat. I feel that being immersed in such a knowledgeable and collaborative community has unlocked my creativity, and I am looking forward to continuing to learn from everyone in the Berkman Klein sphere in the future.</p><p><em>Johanna Wild was a 2023–2024 </em><a href="https://nieman.harvard.edu/fellowships/nieman-berkman-fellowship-in-journalism-innovation-2/#:~:text=The%20Nieman%2DBerkman%20Klein%20Fellowship,project%20relating%20to%20journalism%20innovation"><em>Nieman-Berkman Klein Fellow in Journalism Innovation</em></a><em>, a joint fellowship administered by the </em><a href="https://nieman.harvard.edu/"><em>Nieman Foundation for Journalism</em></a><em> and the </em><a href="https://cyber.harvard.edu/"><em>Berkman Klein Center for Internet &amp; Society</em></a><em> at Harvard University. Wild is currently Investigative Tech Team Lead at Bellingcat.</em></p><hr><p><a href="https://medium.com/berkman-klein-center/johanna-wild-investigative-journalist-dcf63329feb9">Fellows Spotlight: Johanna Wild, Investigative Journalist</a> was originally published in <a href="https://medium.com/berkman-klein-center">Berkman Klein Center Collection</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Global AI Regulation: Protecting Rights; Leveraging Collaboration]]></title>
            <link>https://medium.com/berkman-klein-center/global-ai-regulation-protecting-rights-leveraging-collaboration-b0da8e6a704d?source=rss----cdd8dc4c5fc---4</link>
            <guid isPermaLink="false">https://medium.com/p/b0da8e6a704d</guid>
            <category><![CDATA[policy]]></category>
            <category><![CDATA[global-policy]]></category>
            <category><![CDATA[ai]]></category>
            <category><![CDATA[harvard]]></category>
            <category><![CDATA[technology]]></category>
            <dc:creator><![CDATA[Elisabeth Sylvan, PhD]]></dc:creator>
            <pubDate>Thu, 13 Jun 2024 16:24:05 GMT</pubDate>
            <atom:updated>2024-08-09T16:54:33.176Z</atom:updated>
<content:encoded><![CDATA[<p><em>Policy experts from Africa, Europe, Latin America, and North America outlined next steps for global AI regimes and networked capacity building</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*sbPOZN4XT66C4MvcOHy3og.jpeg" /><figcaption>Photo by <a href="https://unsplash.com/@nasa?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">NASA</a> on <a href="https://unsplash.com/photos/photo-of-outer-space-Q1p7bh3SHj8?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a></figcaption></figure><p><em>By Lis Sylvan &amp; Niharika Vattikonda</em></p><p>Nearly a year and a half after the introduction of ChatGPT, artificial intelligence remains in the regulatory hot seat. While the EU AI Act put the so-called Brussels Effect into play, more regions across the globe are now weighing risks, rights, economic opportunities, and regional needs. On May 28th, the <a href="https://networkofcenters.net/">Global Network of Internet &amp; Society Research Centers</a> (NoC) and the <a href="https://cyber.harvard.edu/">Berkman Klein Center for Internet &amp; Society</a> at Harvard University (BKC) hosted a group of policy experts from Africa, Latin America, the US, and the EU to discuss the state of global AI regulation and outline next steps for collaboration across continents.</p><p><strong>Lis Sylvan</strong>, Senior Director of Strategy and Programming at BKC, moderated the discussion with <strong>Carlos Affonso de Souza</strong> (Director of the Institute of Technology and Society of Rio de Janeiro), <strong>Mason Kortz</strong> (Clinical Instructor at the Cyberlaw Clinic at BKC), <strong>Gabriele Mazzini</strong> (European Commission, chief architect of the EU AI Act), and <strong>Ridwan Oloyede</strong> (Certa Foundation, co-author of its recent “State of AI Regulation in Africa” report), with NoC Executive Director <strong>Armando Guio</strong> providing behind-the-scenes support. The group delved into how governments are weighing sectoral versus horizontal regulatory approaches; the role of the administrative state and existing data protection and competition regulators; the new models of AI regulation in Rwanda and Brazil; the impact of the EU AI Act across all jurisdictions; and the potential for truly global governance.</p><h4><strong>Origins and Approaches</strong></h4><p>De Souza contextualized the current moment of global AI regulation as the culmination of a decade-long journey that started with charters and declarations of governing principles from various governments and entities. Over time, those charters and principles were reflected in <strong>national AI strategies</strong>, which have been in the works for five years and can be seen as the precursor to AI regulation; Brazil’s AI regulatory evolution, for example, closely followed this time frame. De Souza highlighted the impact of the European Union’s General Data Protection Regulation (GDPR) on this evolution; after GDPR took effect, many countries established data protection authorities that have largely been the main point of contact for early AI governance. 
As a result of GDPR, he said, “data protection may be an accelerator, may be an entry point for countries in the majority world, because that’s the conversation that we have been having in the last decade, and that’s where resources [and] attention had been moving forward in those countries.” However, he <strong>cautioned against using data protection law as the sole basis of AI regulation</strong>, because the data protection framework does not necessarily address the full scope of challenges raised by the development of AI.</p><p>Mazzini explained that the technical discussions about the EU’s proposed AI legislation date back to 2019. One of the key concerns with a sectoral approach, he said, was the risk of privileging certain sectors over others. The horizontal approach, though, results in added complexity, as regulators need to craft rules that work across sectors and avoid repetition; moreover, the scope of EU legislation is limited by the exclusion of the national security, military, and defense sectors. While the EU AI Act takes an omnibus approach, Mazzini said it did not make sense to regulate AI as its own technology but rather as a general-purpose tool with a variety of applications.</p><p>“What was clear to me since the get-go is that it didn’t make sense to regulate AI as a technology as such, because indeed what we are dealing with is a general purpose technology that has a variety of applications that we don’t even foresee today…” said Mazzini, “…and therefore, from my perspective, the idea to establish rules for the technology as such, regardless of its use, didn’t make any sense…We came up with this approach of establishing rules depending on the specific use to which the technology is put, with the greatest burden, from a regulatory point of view, being on the high risk,” which Mazzini outlined to include applications of the technology that are linked to health and safety, including medical devices, automated cars, and drones.</p><h4><strong>Sectoral and Regional Approaches</strong></h4><p>In the U.S. and in the African Union, regulatory agencies have found it <strong>more effective to apply existing laws — across data protection, competition, consumer protection, employment, and other sectors — to govern AI</strong>, often taking a <strong>sectoral approach</strong>. Oloyede said that data protection authorities and competition authorities have largely driven the initial AI regulatory agenda, as these authorities are best equipped to enforce consumer protection, data protection, intellectual property, and competition laws as the basis for national AI governance strategies. “We might see some sort of clearinghouse model where not every country in Africa, for example, will try to come up with a specific AI regulation,” Oloyede said.</p><p>Oloyede indicated that the sector-based approach has been dominant on the African continent, with countries including Nigeria, Kenya, South Africa, Rwanda, and Egypt beginning to develop roadmaps for AI governance and establish regulatory task forces. Oloyede said the sectoral approach has allowed regulators to develop specific policies for the deployment of AI in healthcare, for example.</p><p>According to Mason Kortz, this sectoral approach is typically favored in the U.S. because the U.S. regulatory approach values subject-matter expertise over technical expertise. The U.S. 
will likely have subject-matter experts regulate AI in their own domains, Kortz said — for example, the Department of Housing and Urban Development would regulate AI for housing. The U.S. approach relies on the country’s strong administrative state and directs specific federal agencies to take on different pieces of AI regulation. Meanwhile, certain state laws have sought to regulate specific use cases of AI in housing and employment contexts.</p><p>Kortz also noted that the current approach in the U.S. is a <strong>confirmation that existing rights-based regimes will be applied or extended to harms resulting from the use of AI systems</strong>; with a notoriously slow legislature, he said, only making small changes as needed is an advantageous approach, particularly when existing enforcement agencies may already have the power to make those changes. The U.S. common law system is well-suited to this approach, he said, as it lends judges relatively strong power to reinterpret the law in ways that are binding on lower courts without necessarily having to rewrite civil code.</p><p>“When it comes to some of the more rights-based statutes we have,” Kortz said, “I think, actually, we have a pretty good governance model right there, and we just need some small adjustments around the edges to modernize those statutes and bring them in line, not just with AI, but hopefully, if not future-proof them, at least provide a little more stability for whatever comes next after AI.” However, Kortz allowed that AI is so fundamentally transformative that certain existing laws, such as intellectual property law and copyright doctrine, may not be enough, and that global harmonization of AI laws should be a priority.</p><h4><strong>Global Collaboration and Capacity</strong></h4><p>Oloyede indicated that African countries have introduced solutions at the level of the Global Privacy Congress, although these solutions will need to reflect differing national and regional interests. Mazzini noted that <strong>generative AI and general-purpose AI create additional issues that require international collaboration </strong>— fighting misinformation, he said, will require such collaboration. However, de Souza cautioned that regulatory transformation must keep in mind how those laws will be applied in the future. In some cases, he noted, <strong>new liability regimes for AI are now stricter than the remaining body of law</strong>; Costa Rica, for example, has adopted a strict liability approach for high-risk uses of AI.</p><p>“If we turn out to have the chapters of liability on our AI laws more severe than what we have in our general law for other situations, if we are all in agreement that, in the future, AI is going to be in everything, the legislators that are designing those laws today, they are designing general laws on liability, because we will have AI in almost all sectors,” de Souza remarked. “So the decisions that we’re making today on liability, they might end up scrapping the provisions that you have on your civil code, consumer protection code, because the AI law will be the law that is more recent, more specific, and that may be the one that will be applying in most cases.”</p><p>This international collaboration will require <strong>capacity building across the globe</strong>, and Mazzini emphasized that the EU AI Act has prompted additional work to support the authorities in the EU that will implement and enforce the regulation. 
Although the AI Act will impact multiple private sectors, he said, its public enforcement will require both financial and knowledge-based resources. De Souza noted that the Brussels Effect will prompt a need for <strong>global bureaucracy to support global compliance with the EU AI Act</strong>, and well-resourced national authorities are needed to support that implementation. Oloyede, however, said that lessons learned from the GDPR rollout may inform a better approach to implementing the EU AI Act with a more nuanced understanding of the local context. While the EU AI Act will require capacity building to support new governance bodies with funding and resources, he said, it is essential to <strong>preserve existing collaborations with data protection and competition authorities</strong> and empower those authorities to address AI in their own domains.</p><p>Despite different countries taking more sectoral or more horizontal approaches, the global community is working to establish flexible approaches to AI governance in their respective regions. As Oloyede said, “AI is here today. Tomorrow is going to be a different technology. And <strong>we can’t keep legislating for every new technology that we have</strong>.” Mazzini described a need for international coordination when he said, “when it comes to this new type of AI that is sometimes called ‘generative AI’ or ‘general purpose AI’ that we have specifically regulated in the EU — notably in the last few weeks, in final stages of the negotiations — I think I would like to see there certainly more international coordination, because there we are dealing with a number of questions that I think are pretty common across jurisdictions.”</p><p>Though approaches across the globe may be different, a common cross-cutting theme of the work is <strong>balance</strong>: protecting rights versus supporting innovation, legislating a critical technology while its capacity and impact are still developing, and providing necessary limitations while allowing nimble innovation.</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fs7NQ-5oYYl4%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Ds7NQ-5oYYl4&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Fs7NQ-5oYYl4%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/b1949215482ddfaf375cb3b27a7752f0/href">https://medium.com/media/b1949215482ddfaf375cb3b27a7752f0/href</a></iframe><p><em>The </em><a href="https://networkofcenters.net/"><em>Network of Internet &amp; Society Research Centers (NoC)</em></a><em> is a collaborative initiative among academic institutions with a focus on interdisciplinary research on the development, social impact, policy implications, and legal issues concerning the Internet. 
The Berkman Klein Center at Harvard University served as NoC Secretariat from 2020–2023 and continues to participate in cross-national, cross-disciplinary conversation, debate, teaching, learning, and engagement.</em></p><hr><p><a href="https://medium.com/berkman-klein-center/global-ai-regulation-protecting-rights-leveraging-collaboration-b0da8e6a704d">Global AI Regulation: Protecting Rights; Leveraging Collaboration</a> was originally published in <a href="https://medium.com/berkman-klein-center">Berkman Klein Center Collection</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Accuracy, Incentives, Honesty: Insights from COVID-19 Exposure Notification Apps]]></title>
            <link>https://medium.com/berkman-klein-center/accuracy-incentives-honesty-insights-from-covid-19-exposure-notification-apps-aa944664c844?source=rss----cdd8dc4c5fc---4</link>
            <guid isPermaLink="false">https://medium.com/p/aa944664c844</guid>
            <category><![CDATA[contact-tracing]]></category>
            <category><![CDATA[technology]]></category>
            <category><![CDATA[privacy]]></category>
            <category><![CDATA[pandemic-reflections]]></category>
            <category><![CDATA[healthcare]]></category>
            <dc:creator><![CDATA[Elissa M. Redmiles]]></dc:creator>
            <pubDate>Thu, 14 Mar 2024 16:43:25 GMT</pubDate>
            <atom:updated>2024-03-14T16:43:25.538Z</atom:updated>
<content:encoded><![CDATA[<p><em>The next pandemic response must respect user preferences or risk low adoption</em></p><p>By <strong>Elissa M. Redmiles</strong> and <strong>Oshrat Ayalon</strong></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*B4phgPlOL3S68KlnsRgliw.jpeg" /><figcaption>Photo by <a href="https://unsplash.com/@kommumikation?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Mika Baumeister</a> on <a href="https://unsplash.com/photos/person-holding-black-android-smartphone-PfMXXv8XXgs?utm_content=creditCopyText&amp;utm_medium=referral&amp;utm_source=unsplash">Unsplash</a></figcaption></figure><p>Four years after <a href="https://www.cdc.gov/museum/timeline/covid19.html">COVID-19 was first declared a pandemic</a>, policymakers, companies, and citizens alike have moved on. The CDC no longer <a href="https://www.cdc.gov/media/releases/2024/p0301-respiratory-virus.html">offers separate guidance for COVID-19</a>. Apple and Google have <a href="https://developers.google.com/android/exposure-notifications">shut down their exposure notification infrastructure</a>, which was used heavily in the US and Europe. As COVID-19 spread, technologists were called to serve by building and deploying exposure notification apps to scale parts of the <a href="https://ajph.aphapublications.org/doi/full/10.2105/AJPH.2022.306949">contact tracing</a> process. These apps allowed users to report when they tested positive for COVID-19 and to notify other users when they had been in the vicinity of an infected user. But getting people to use exposure notification apps during the pandemic proved challenging.</p><p><a href="https://www.who.int/data/stories/the-true-death-toll-of-covid-19-estimating-global-excess-mortality">More than three million</a> lives have been lost to COVID-19 over the past four years. Any hope of losing fewer lives during the next pandemic rests on reflection: what did we do, what can we learn from it, and what can we do better next time? Here, we offer <strong>five key lessons learned</strong> from research on COVID-19 apps in the US and Europe that can help us prepare for the next pandemic.</p><p><strong>Privacy is important, but accuracy also matters</strong></p><p>Privacy was the primary focus in early exposure notification apps, and rightfully so. The apps all trace their users’ medical information and movements in various ways, and may store some or all of that information in a central database in order to inform other users of potential infection. The misuse of this information could easily result in unintentional, or even intentional, harm.</p><p>However, research into whether (and how) people used exposure notification apps during the pandemic showed that <a href="https://www.statnews.com/2020/07/28/quality-issues-stumbling-block-contact-tracing-apps/">privacy might not be</a> the most important factor. <strong>People care about </strong><a href="https://dl.acm.org/doi/10.1145/3488307"><strong><em>accuracy</em></strong></a><strong>, that is, how reliably an app avoids incorrect reports of COVID-19 exposure (both false positives and false negatives),</strong> which may have also<strong> influenced rates of public app adoption.</strong> Yet, <a href="https://europeanaifund.org/wp-content/uploads/2022/10/European-AI-Fund-Tech-and-Covid-grants-synthesis-1.pdf">we still know little</a> about how effective the deployed exposure notification apps were. 
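</p><p>To make the accuracy question concrete, what follows is a deliberately simplified sketch of how a decentralized, Google-Apple-style exposure notification scheme works; it is an illustration, not the code of any deployed app. Phones broadcast short-lived random tokens over Bluetooth rather than recording location, remember the tokens they hear, and later compare them against tokens published by users who report a positive test. The exposure threshold below is a hypothetical parameter; tuning values like it is exactly where false positives and false negatives arise.</p><pre>import secrets
from collections import defaultdict

# Hypothetical tuning parameter, not a value from any real deployment.
EXPOSURE_THRESHOLD_MINUTES = 15   # how much nearby time counts as an exposure

def new_token():
    """A short-lived random broadcast value carrying no identity or location."""
    return secrets.token_hex(16)

class Phone:
    def __init__(self):
        self.sent = [new_token()]        # tokens this phone has broadcast
        self.heard = defaultdict(int)    # token heard over Bluetooth: minutes nearby

    def rotate_token(self):
        """Rotate the broadcast token periodically so the device cannot be tracked."""
        self.sent.append(new_token())

    def tick(self, nearby_tokens):
        """Each minute, record every token currently heard over Bluetooth."""
        for token in nearby_tokens:
            self.heard[token] += 1

    def check_exposure(self, positive_tokens):
        """Match locally heard tokens against tokens uploaded by infected users.

        The threshold is where accuracy lives: too low, and brief, low-risk
        contacts raise alerts (false positives); too high, and real exposures
        go unreported (false negatives).
        """
        minutes = sum(m for t, m in self.heard.items() if t in positive_tokens)
        return minutes &gt;= EXPOSURE_THRESHOLD_MINUTES</pre><p>Nothing in a design like this records where an encounter happened, only that two anonymous tokens spent time near each other; whether the resulting alerts corresponded to real epidemiological exposure is precisely the effectiveness question that went largely unmeasured.</p><p>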
Future apps will need to have measurement tools and methods designed into them before they are released to accurately track their usefulness.</p><p><strong>We need to better understand the role of incentives</strong></p><p>Researchers discovered that using <em>direct incentives</em>, <a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0258945">such as monetary compensation</a>, to get people to install exposure notification apps worked at first, but had little effect in the long term. In fact, <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3786245">one field study</a> found that people who received money were <em>less</em> likely to still be using the app eight months later than those who didn’t. Paying people to download a contact tracing app is <a href="https://www.usenix.org/conference/usenixsecurity23/presentation/ayalon">even less effective</a> when the app is perceived to be of poor quality or inaccurate. However, monetary incentives may be able to “compensate” when the app is perceived to be costly in other ways, such as eating up mobile data.</p><p>Given the ethical problems and lack of success with direct incentives, focusing on <em>indirect incentives</em>, such as <em>functionality</em>, may be key to increasing adoption. Exposure notification apps have the potential to serve a greater purpose during pandemics than notification alone. <strong>Our research found that people using exposure notification apps wanted them to </strong><a href="https://medium.com/ubicomp-iswc-2023/how-people-used-covid-19-contact-tracing-apps-and-why-insights-from-belgiums-coronalert-48e57c1a0fe1"><strong>serve as a “one-stop-shop”</strong></a> for quick receipt of test results, information on the state of public health in their region, and assistance finding testing centers.</p><p>Future app design needs to examine user wants and expectations to ensure widespread adoption. This is hardly a new concept — every successful “fun” app begins with this user-centered model. Apps that provide these extra benefits to users will not only be better adopted; they will also see more frequent and prolonged use.</p><blockquote>…Over a third of the Coronalert app users we interviewed believed that it tracked their location, despite repeated communications over the course of a year that it used proximity rather than location to detect possible exposures.</blockquote><p><strong>Honesty is the most effective communication strategy</strong></p><p>Exposure notification apps are often framed to the public as having inherent <em>individual</em> benefits: if you use this app, you’ll be able to tell when you’ve been exposed to a disease. In reality, exposure notification apps have a stronger <em>collective</em> benefit of preventing the overall spread of disease in communities. Being honest with potential users about the true benefits is more effective than playing up the less significant individual benefit. When examining <a href="https://dl.acm.org/doi/abs/10.1145/3491102.3501869">how to best advertise</a> Louisiana’s exposure notification app, <strong>we found that people were most receptive to the app when its collectivistic benefits were centered.</strong></p><p>Honesty and openness about privacy are also essential, especially when it comes to data collection and storage. Even when developers are transparent, however, people may still make assumptions based on false preconceptions or faulty logic. 
For example, over a third of the Coronalert app users we interviewed believed that it tracked their location, despite repeated communications over the course of a year that it used proximity rather than location to detect possible exposures.</p><p><strong>Integration with existing health systems is essential</strong></p><p>There was a disconnect between COVID-19 exposure notification apps and public healthcare systems, even in countries with universal healthcare and government-supported apps. <a href="https://medium.com/ubicomp-iswc-2023/how-people-used-covid-19-contact-tracing-apps-and-why-insights-from-belgiums-coronalert-48e57c1a0fe1">Belgium’s Coronalert app</a>, for example, allowed users to receive their test results faster by linking their test to their app using a unique code. But testing center staff were not trained on the app and failed to prompt users for that code. Not only was receiving test results a primary motivator for using the app; failing to link positive results to specific app users also reduced the app’s efficacy.</p><p>This disconnect may be far greater in countries without universal healthcare or where exposure notification apps are privately created. In order for these apps to be effective, <strong>developers must collaborate with public health workers to develop a shared understanding of how testing centers operate</strong>, determine the information needed to provide accurate tracking, and decide on the <a href="https://www.scientificamerican.com/article/how-to-fix-covid-contact-tracing/">best way to follow up</a> on potential infections.</p><p><strong>Resourcing technical capacity is critical</strong></p><p>A wide range of exposure notification apps were developed to combat COVID-19, by many different organizations. In the absence of immediate government action, many of the earliest efforts were led by universities or volunteers. Academics developed the <a href="https://fortune.com/40-under-40/2020/carmela-troncoso/">DP3T proximity tracing protocol</a>, which guided Google and Apple’s development of exposure notification infrastructure for Android and iOS phones.</p><p>However, privatization of exposure notification infrastructure created an enormous potential for private medical and other information to fall into the hands of corporations that are in the business of big data. It also subjected exposure notification technology to private companies’ rules (and whims).</p><p>Google and Apple released exposure notification infrastructure in April 2020 but did not release direct-to-user exposure notification functionality <a href="https://www.pathcheck.org/en/blog/pathcheck-launches-solutions-for-en-express-unveiled-by-google-apple">until later in the pandemic</a>. This decision <strong>left the development of exposure notification apps to public health agencies that lacked the resources and technical capacity to do so</strong>. Volunteers stepped in to fill this void. 
For example, the PathCheck Foundation developed exposure notification apps for <a href="https://www.pathcheck.org/en/impact">7 states and countries</a> on top of the Google-Apple Exposure Notification infrastructure.</p><blockquote>“…We need to eliminate these scattered responses, align incentives, and integrate the strengths and perspectives of public, private, and academic bodies to develop protocols, models, and best practices.”</blockquote><p>While it is natural for universities to support the public good, and encouraging that private citizens volunteered so much of their time and resources to do so, they should not have to do so in the next pandemic. To respond to future pandemics, we need to eliminate these scattered responses, align incentives, and integrate the strengths and perspectives of public, private, and academic bodies to develop protocols, models, and best practices.</p><p><strong>Applying the lessons learned</strong></p><p>Building tech responsibly means not just considering privacy, but providing <a href="https://dighum.ec.tuwien.ac.at/perspectives-on-digital-humanism/the-need-for-respectful-technologies-going-beyond-privacy/">technology that respects user preferences</a>. <strong>When people give up their data, they expect a benefit</strong> — be that a collective benefit, such as fighting a pandemic or helping cancer research, or an individual one. They likewise expect <strong>utility</strong>: apps that are accurate, achieve their goals, and provide a holistic set of features.</p><p>If we continue to build tech based on our <em>assumptions</em> of what users want, we risk low adoption of these technologies. And during times of crisis, such as this still-ongoing COVID-19 pandemic, the consequences of low adoption are dire.</p><p><a href="https://elissaredmiles.com/"><em>Elissa M. Redmiles</em></a><em> is a computer scientist specializing in security and privacy for marginalized &amp; vulnerable groups at Georgetown University and Harvard’s </em><a href="https://cyber.harvard.edu/"><em>Berkman Klein Center</em></a><em>.</em></p><p><a href="https://www.oshratayalon.com/"><em>Oshrat Ayalon</em></a><em> is a human-computer interaction researcher focusing on privacy and security at the University of Haifa.</em></p><hr><p><a href="https://medium.com/berkman-klein-center/accuracy-incentives-honesty-insights-from-covid-19-exposure-notification-apps-aa944664c844">Accuracy, Incentives, Honesty: Insights from COVID-19 Exposure Notification Apps</a> was originally published in <a href="https://medium.com/berkman-klein-center">Berkman Klein Center Collection</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[You Know, For Kids]]></title>
            <link>https://medium.com/berkman-klein-center/you-know-for-kids-47731a0a72f8?source=rss----cdd8dc4c5fc---4</link>
            <guid isPermaLink="false">https://medium.com/p/47731a0a72f8</guid>
            <category><![CDATA[youth]]></category>
            <category><![CDATA[generative-ai]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[media-literacy]]></category>
            <category><![CDATA[digital-divide]]></category>
            <dc:creator><![CDATA[Bill Shribman]]></dc:creator>
            <pubDate>Thu, 09 Nov 2023 16:11:46 GMT</pubDate>
            <atom:updated>2025-07-01T12:21:11.021Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="Child with a drawing of a screen superimposed over their face." src="https://cdn-images-1.medium.com/max/900/0*QG9jVou_kzc-QQ_8" /><figcaption>Image courtesy of GBH.</figcaption></figure><h4><strong>the state of media literacy for young people in the age of generative AI</strong></h4><p>It’s ambiguous: is artificial intelligence a tool, a weapon, or both? The twist with AI, I think, is much like the paradox in the Schrödinger’s Cat thought experiment: it’s a black box that might <a href="https://www.safe.ai/statement-on-ai-risk#open-letter">harm us or save us</a>. We must hold both ideas at the same time as we think about what we, and our kids, need to know about AI, its negatives and positives.</p><p>A tools and weapons approach is a useful rubric for thinking about what guidance about the power and opportunity afforded by AI — broadly, what media literacy — we all need.</p><p>Here’s what I believe we need to consider in making media literacy effective for young people.</p><h4>The Underlying Tenets of <a href="https://namle.net/resources/media-literacy-defined/">Media Literacy</a> Still Hold True</h4><p>I created and currently produce two media literacy series for PBS KIDS — <a href="https://www.wgbh.org/search-it-up">Search It Up</a> and <a href="https://www.youtube.com/playlist?list=PLa8HWWMcQEGQ_wCRS1ybsZ-xPYsQd4G9y">Ruff Ruffman: Humble Media Genius</a><em> — </em>as part of my 25 years producing digital content for public media at GBH in Boston. We are using new episodes to showcase what AI is and what it can do, including how kids are using it to make art and text. I’m also working with my colleagues at the Berkman Klein Center on several media literacy initiatives around generative artificial intelligence.</p><p>Adults are excited, intrigued, amused, or are wringing their hands in equal measure around the growth of advanced computing and tools that can generate original text, images, audio, or video — loosely called generative AI. But what does this really mean in the context of kids?</p><p>In talking to young people, I get a sense that they need as much support in understanding AI as we adults do. And maybe they now need a tad more as generative AI further blends into their technology-rich lives.</p><p>They still need to know how media is made, that they themselves can make media, and that it has a purpose — even if it’s AI-assisted.</p><p>So, let’s start with some context: What do these three have in common?<br><em>The government.<br>An everyday person.<br>Elon Musk.</em></p><p>The answer is that they are the most common responses I’ve found in talking to 5th graders about who is responsible for what they find on the internet. There’s a similar range of replies when these 10-year-olds are asked who fact-checks the internet. Popular answers here include the government, no-one, and the ubiquitous Mr. Musk.</p><p>These kids are often called “digital natives” — born long after the demise of rotary phones, dial-up, Blockbuster, Myspace, and waiting for a letter in the mail. They are deemed native as if they are somehow born with an ability to reset the router or to attach PDFs to emails. I think they are not. Fish are surrounded by water but may be able to tell you little about it. Our young people need to learn, or be shown, how to stay safe online and how to benefit from the many opportunities access to boundless information affords.</p><figure><img alt="Colorful cartoon dog, Ruff Ruffman, Humble Media Genius." 
src="https://cdn-images-1.medium.com/max/878/0*SAIgBAkElScOyNmd" /><figcaption><em>Ruff Ruffman: Humble Media Genius.</em> Image courtesy of GBH.</figcaption></figure><p>It’s worth noting that AI is not new. It’s in many of the tools that have been in our hands for a while. For instance, Siri uses predictions to complete a text — and has been trying to break up my marriage long before it was fashionable for <a href="https://www.nytimes.com/2023/02/16/technology/bing-chatbot-transcript.html">more advanced AI</a> to do so. (“I’m in the woods with Natalie,” I texted my wife when she was out of town. My English accent and flaws in Siri’s speech recognition had turned our dog, Nellie, with whom I was enjoying a woodland adventure, into Natalie, my daughter’s twentysomething math coach, with whom I was not.)</p><p>Kids are already using AI every day if they’re online or on their phones. What do they actually know about it? The following responses are from 10-year-olds:</p><p><em>“It’s really smart, it’s so smart it can go to websites in its memory chip; it can take all the information and put it inside its brain.”</em><br>OK, that’s a little robot overlord-y, but it’s close.</p><p><em>“It’s not good the first time, it learns as it plays.”</em><br>That’s pretty much exactly how AI has beaten grandmasters in <a href="https://www.scientificamerican.com/article/ais-victories-in-go-inspire-better-human-game-playing/">Go</a>, <a href="https://spectrum.ieee.org/how-ibms-deep-blue-beat-world-champion-chess-player-garry-kasparov">Chess</a>, and <a href="https://www.techrepublic.com/article/ibm-watson-the-inside-story-of-how-the-jeopardy-winning-supercomputer-was-born-and-what-it-wants-to-do-next/">Jeopardy</a>.</p><p><em>“The AI is making the picture, and the AI is coded by humans.”</em><br>That’s a pretty accurate view of generative art, although it bypasses issues of intellectual property. And of course, these new images based on real people are now pretty convincing. I still can’t believe that the <a href="https://www.cinemablend.com/movies/keanu-reeves-deepfake-account-is-so-good-it-has-fans-wondering-if-its-really-the-actor">deepfakes of Keanu</a> are <em>not</em> Keanu. But maybe I don’t want to believe they’re fake.</p><p>As I share this now, it’s worth noting that we sometimes deliberately share fake information just as willingly as if we know it to be true. This is one of the confounding challenges around stemming more harmful misinformation and disinformation.</p><figure><img alt="AI-generated image of Keanu Reeves washing dishes in a pink apron." src="https://cdn-images-1.medium.com/max/458/0*kBVcIxsAuJ-XVgL-" /><figcaption>Image by <a href="https://www.tiktok.com/@unreal_keanu">@unreal_keanu</a> via TikTok.</figcaption></figure><p><em>“If it can tell you are sick with a disease of some sort and can tell you about it before it gets too serious by noticing unusual things that don’t always happen on a daily basis.”<br></em>This is a great encapsulation of the medical world’s hope for AI, with already-proven success in <a href="https://www.science.org/doi/10.1126/science.370.6521.1144">protein folding</a>.</p><p><em>“The world is full with AI’s and no one can be really sure.”<br></em>No, we cannot. 
And so, we should still consider how to help kids thrive in a world where ideas of provenance, authorship, intention, bias, and even why we share information with each other, are increasingly fuzzy.</p><h4><strong>We Can Use AI as a Tool to Challenge Disinformation</strong></h4><p>There is a belief that AI could weaponize phishing, making it more targeted and more plausible. Conversely, reverse image search, an AI-assisted tool, let me investigate a suspicious friend request from someone who looked like a Danish sea captain and was, it transpired, a Danish sea captain — at least the pictures used were. His affable images, I discovered, had been misappropriated and used as a siren call all over the world<a href="https://www.dailymail.co.uk/news/article-6339769/Australian-woman-58-duped-10-000-scam-fraudster-pretend-Danish-captain.html"> in phishing attacks</a>.</p><figure><img alt="Facebook friend request panel, showing a request from Chancel Ndongala Ndongala of Paris, Kentucky." src="https://cdn-images-1.medium.com/max/561/0*sa4WGSzvFe7gKENE" /></figure><h4><strong>We Should Avoid Exacerbating Inequalities</strong></h4><p>If we are not vigilant, new technologies tend to exacerbate existing digital divides by, for example, creating a heavy reliance on expensive devices or tools. The current generation of generative AI tools relies, at minimum, on having an internet-connected device. Although the mechanics of sending data over cellular, wi-fi, or Bluetooth are perhaps similar, the differences in cost can be huge for those with limited means, limited data plans, or low-bandwidth connectivity. Unless we think intentionally about ensuring equitable access, many children will be under-equipped to use new AI technologies.</p><h4><strong>We Must Think Creatively about the Medium of Media Literacy</strong></h4><p>School-based media literacy may provide some of the answers to helping kids learn more, but the presence of <a href="https://medialiteracynow.org/impact/">formal instruction varies by state</a>, from none to some. The demands on the school day and the multiplicity of technologies can make integrating media literacy instruction challenging for any educator. We must understand the needs of teachers as we develop in-classroom supports and scaffold them with professional development materials as needed.</p><p>That said, we know media literacy messaging works well when it’s either baked into media that kids are already consuming or offered as standalone content that they gravitate to, whether that’s through video, social media, or digital games. For example, we use both of these approaches at GBH; our episodes of <a href="https://www.youtube.com/playlist?list=PLa8HWWMcQEGRhqI2vfh30TPvRO1m4FLWS">Molly of Denali</a> often model positive uses of media and technology.</p><figure><img alt="Eight panels from the cartoon Molly of Denali." src="https://cdn-images-1.medium.com/max/787/0*Vk8IHaiCPalUjTGN" /><figcaption>Molly of Denali. Images courtesy of GBH.</figcaption></figure><h4><strong>Future-Proofing Media Literacy Education Is Key</strong></h4><p>The use of generative AI is moving swiftly, with over <a href="https://www.futurepedia.io/">5,000 tools</a> now claiming to have AI support and with many being integrated into tools and software kids are already using. Being strategic about what kinds of media literacy to address is key.</p><p>This is especially true for those of us making media about technology, professional video, or a high-quality media literacy game. 
These can take months to produce, if we even find the funding to begin with. Our resulting work often has a long tail of use, so for both of these reasons we must be careful to future-proof what we provide, and not to focus overly on any single tool. The <em>Ruff Ruffman: Humble Media Genius</em> videos have been viewed over 100 million times, so getting the message right, with as much timelessness as possible, is important.</p><p>And as a new generation of AI tools becomes intertwined with what our kids interact with — in their searches, in the algorithms that suggest what to listen to or, more importantly perhaps, which friends see their posts, and in the work they do at school — we should take stock and assess whether we’re headed into stormy seas or wide open blue oceans. (That Danish sea captain clearly has left his mark on me.) We should provide media literacy that kids want to engage with; it can’t feel like just another civics lesson.</p><h4><strong>We Can Learn from the Past to Inform Our Future</strong></h4><p>There is very little research yet about generative AI, and so we in public media, and many of our colleagues across academia, are trying to conduct that research. As a stop-gap, we’re leaning on studies of related technology: for example, how kids interact with chatbots can be informed by 10 years’ study into how they interact with digital voice assistants like Siri; these in turn often look back at prior research into <a href="https://www.frontiersin.org/articles/10.3389/fpsyg.2019.00008/full">Human-Computer Interaction</a>.</p><p>How we have evolved to use other digital tools can help us consider how we might use AI. For example, many of us have grown to trust Wikipedia but perhaps wouldn’t use it for a crucial medical diagnosis. And when was the last time you cross-checked Google Maps directions with a paper map before trusting your computer-proposed itinerary? In other words, we decide the trust we place in every new tool we use.</p><p>Generative AI is in many ways exciting, new, and challenging, and I believe we can and must equip young people with the critical thinking skills to help them use AI effectively.</p><p><em>This essay is part of the </em><a href="https://medium.com/berkman-klein-center/generative-futures/home"><em>Co-Designing Generative Futures series</em></a><em>, a collection of multidisciplinary and transnational reflections and speculations about the transformative shifts brought on by generative artificial intelligence. These articles are authored by members of the Berkman Klein Center community and expand on discussions that began at the </em><a href="https://cyber.harvard.edu/workshops/gai2023/"><em>Co-Designing Generative Futures conference</em></a><em> in May 2023. All opinions expressed are solely those of the author.</em></p><hr><p><a href="https://medium.com/berkman-klein-center/you-know-for-kids-47731a0a72f8">You Know, For Kids</a> was originally published in <a href="https://medium.com/berkman-klein-center">Berkman Klein Center Collection</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Preserving Social Connections against the Backdrop of Generative AI]]></title>
            <link>https://medium.com/berkman-klein-center/preserving-social-connections-against-the-backdrop-of-generative-ai-abcaebd45c3d?source=rss----cdd8dc4c5fc---4</link>
            <guid isPermaLink="false">https://medium.com/p/abcaebd45c3d</guid>
            <category><![CDATA[generative-ai]]></category>
            <category><![CDATA[synthetic-media]]></category>
            <category><![CDATA[connection]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[trust]]></category>
            <dc:creator><![CDATA[Alexa Hasse]]></dc:creator>
            <pubDate>Thu, 09 Nov 2023 16:11:08 GMT</pubDate>
            <atom:updated>2024-01-05T19:29:39.300Z</atom:updated>
            <content:encoded><![CDATA[<h4>Considerations and Questions</h4><p>Social connection is a fundamental human need. From both a developmental and evolutionary standpoint, <a href="https://www.hachettebookgroup.com/titles/john-bowlby/attachment/9780465005437/?lens=basic-books">nurturing relationships matter</a>. Our social connections with others can help support our <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4375548/">basic needs for survival</a>, provide a <a href="https://psycnet.apa.org/record/2016-23915-002">source of resilience</a>, and enable us to gain a <a href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8095671/">sense of belonging</a> and <a href="https://journals.sagepub.com/doi/10.1177/07342829211057640">mattering</a> in our social and cultural world.</p><p>The U.S. Surgeon General recently released a <a href="https://www.hhs.gov/sites/default/files/surgeon-general-social-connection-advisory.pdf">report</a> on an “epidemic of loneliness,” suggesting that a lack of social connection poses major threats to <a href="https://local.psy.miami.edu/faculty/dmessinger/c_c/rsrcs/rdgs/emot/PerspectivesonPsychologicalScience-2015-Holt-Lunstad-227-37.pdf">individual</a> and <a href="https://www.sciencedirect.com/science/article/pii/S1054139X0400165X">societal health</a>. As noted in the report, the <a href="https://pubmed.ncbi.nlm.nih.gov/28880099/">mortality impact</a> of feeling disconnected from others is similar to that of smoking 15 cigarettes every day. Research also <a href="https://pubmed.ncbi.nlm.nih.gov/32504808/">indicates</a> that loneliness increases the risk of both anxiety and depression among children and adolescents and that such risks continued to exist nine years after loneliness was initially measured. Conversely, social connection can enhance individual-level <a href="https://pubmed.ncbi.nlm.nih.gov/23548810/">physical</a> and <a href="https://pubmed.ncbi.nlm.nih.gov/28803484/">mental well-being</a>, <a href="https://www.jstor.org/stable/23074587">academic achievement</a> and <a href="https://pubmed.ncbi.nlm.nih.gov/29359473/">attainment</a>, <a href="https://pubmed.ncbi.nlm.nih.gov/14640811/">work satisfaction</a> and <a href="https://www.gallup.com/workplace/397058/increasing-importance-best-friend-work.aspx">performance</a>, and community-level <a href="https://knightfoundation.org/press/releases/got-love-for-your-community-it-may-create-economic/">economic prosperity</a> and <a href="https://direct.mit.edu/rest/article-abstract/103/1/18/97759/The-Effect-of-Social-Connectedness-on-Crime">safety</a>.</p><figure><img alt="Colorful silhouettes of diverse individuals are arranged suggestively, some alone, some in groups, on a beige background." src="https://cdn-images-1.medium.com/max/1024/1*ZkFTjbNHdVc12MOg-4TEaw.jpeg" /><figcaption>Cover illustration from <a href="https://www.hhs.gov/sites/default/files/surgeon-general-social-connection-advisory.pdf">Our Epidemic of Loneliness and Isolation: The Surgeon General’s Advisory on the Healing Effects of Social Connection and Community</a> (2023).</figcaption></figure><p>Over the past year, there has been a rising interest in, and media coverage of, generative AI or what linguist Dr. Emily Bender terms “<a href="https://www.youtube.com/watch?v=eK0md9tQ1KY">synthetic media machines</a>” — that is, systems by which one can generate images or, as with large language models (LLMs), “plausible-sounding” text. Despite the hype, these systems are not completely new. 
The 1940s <a href="https://sanketp.medium.com/language-models-the-beginnings-8824df2eacc0">marked</a> initial forays into language models. What <em>is</em> <a href="https://www.youtube.com/watch?v=gSRN_3pkTsc&amp;t=3137s">new</a> is how these systems — which are “<a href="https://www.youtube.com/watch?v=gSRN_3pkTsc&amp;t=3137s">more ‘auto-complete’ than ‘search engine</a>’” — are being promoted: they are being made available to the broader public.</p><p>How do different users perceive these systems? Preliminary <a href="https://www.ideo.com/journal/will-ai-interfere-with-relationships-gen-z-thinks-so">research from IDEO</a> sought out the perspectives of twelve participants ages 13 to 21 in the U.S. around the ways generative AI may impact social connection (among other themes). The company first distilled key sentiments associated with these systems based on large quantities of social media posts and then presented participants with AI-driven hypothetical <a href="https://www.ideo.com/journal/is-gen-z-ready-to-embrace-ai-its-complicated">products</a>, such as “Build a FrAInd: Your ideal bestie come to life, based on celebs and influencers you love” and “New AI, New Me: An avatar trained on your preferences that has experiences for you.” Participants had varying levels of familiarity with generative AI and diverse life experiences (e.g., some participants were in school and others not, some had international backgrounds, etc.). When asked for their thoughts on these products, they emphasized that relationships are all “about you learning as you go,” that humans must “<a href="https://www.ideo.com/journal/will-ai-interfere-with-relationships-gen-z-thinks-so">remain at the helm</a>.”</p><p>In IDEO’s <a href="https://cyber.harvard.edu/publication/2021/youth-participation-in-a-digital-world">youth-focused</a> research, respondents also voiced concern around <em>trust.</em></p><blockquote>In the context of<em> human-to-human </em>connection, an important question arises: How will generative AI, such as LLMs, <a href="https://www.youtube.com/watch?v=XUCUNvu8QUg">influence the trust we have in other people</a>?</blockquote><p>A study from a Stanford and Cornell research team <a href="https://www.pnas.org/doi/abs/10.1073/pnas.2208839120">demonstrated</a> that when asked to discern whether online dating, professional, and lodging profiles were generated by an LLM or a human, participants only selected the correct answer about half of the time. Whereas participants could sometimes identify specific markers of text generated by LLMs (i.e. <a href="https://www.youtube.com/watch?v=qpE40jwMilU">synthetic text</a>) such as repetitive wording, they also pointed to cues such as grammatical mistakes or long words, which, in the study’s data, were more representative of language written by a human. Additional features that participants used to discern human-written text, including first-person pronouns or references to family, were equally present in both synthetic and human-written profiles. 
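</p><p>The rule-of-thumb detector that participants were implicitly running is easy to caricature in code. The sketch below is a hypothetical illustration, not the study’s method: it tallies surface cues such as first-person pronouns, references to family, and long words against a crude repetition signal. Because, per the study’s data, most of these cues appear in human-written and synthetic profiles alike, a verdict reached this way lands close to a coin flip.</p><pre>import re

# Hypothetical surface cues people read as "human" -- the study found
# these were common in both human-written and LLM-generated profiles.
HUMAN_CUES = [
    r"\b(i|me|my|mine)\b",          # first-person pronouns
    r"\b(family|mom|dad|kids?)\b",  # references to family
    r"\b\w{12,}\b",                 # long words
]

def repetition(text):
    """Crude proxy for 'repetitive wording,' a cue people read as machine-like."""
    words = text.lower().split()
    return len(words) - len(set(words))

def guess_is_human(text):
    """Tally 'human' cues against a repetition penalty; roughly coin-flip quality."""
    human_score = sum(bool(re.search(cue, text.lower())) for cue in HUMAN_CUES)
    machine_score = 1 if repetition(text) &gt; 5 else 0
    return human_score &gt; machine_score</pre><p>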
Rather than interpreting the results as evidence of machine “<a href="https://techwontsave.us/episode/163_chatgpt_is_not_intelligent_w_emily_m_bender">intelligence,</a>” the Cornell and Stanford team suggested that individuals may use flawed heuristics to detect synthetic text.</p><p>The authors <a href="https://www.pnas.org/doi/abs/10.1073/pnas.2208839120">proposed</a> that such heuristics may be indicative of human vulnerability: “People are unprepared for their encounters with language-generating AI technologies, and the heuristics developed through . . . social contexts are dysfunctional when applied to . . . AI language systems.” Concerningly, individuals are <a href="https://pure.uva.nl/ws/files/44537255/Ischen2020_Chapter_PrivacyConcernsInChatbotIntera.pdf">more likely</a> to share personal information and follow recommendations by nonhuman entities that they view as “human,” raising key privacy questions. At the same time — <a href="https://www.youtube.com/watch?v=XUCUNvu8QUg&amp;t=1632s">at least in the short term</a> — they may begin to distrust those who they think are using synthetic text in their communication.</p><p>Issues of bias are also central given that systems such as LLMs absorb and amplify the biases in training data. Against the backdrop of the race towards ever larger LLMs, as outlined in Bender and colleagues’ ground-breaking paper “<a href="https://dl.acm.org/doi/10.1145/3442188.3445922">On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?</a>,” the wider web is not representative of the ways that different people view the world. A number of factors impact 1) who has access to the Internet, 2) who feels comfortable sharing their thoughts and worldviews online, 3) who is represented in the parts of the Internet chosen for the training data, and 4) how the basic filtering applied to training data produces more distortion.</p><p>For instance, per the second factor, whereas user-generated content sites (e.g., Reddit) portray themselves as welcoming platforms, structural elements (e.g., moderation practices) may make these sites less accessible to underrepresented communities. Harassment on X (formerly Twitter), for example, is <a href="https://medium.com/@agua.carbonica/twitter-wants-you-to-know-that-youre-still-sol-if-you-get-a-death-threat-unless-you-re-a5cce316b706">experienced</a> by “a wide range of overlapping groups including domestic abuse victims, sex workers, trans people, queer people, immigrants, medical patients (by their providers), neurodivergent people, and visibly or vocally disabled people.” As the authors of “Stochastic Parrots” point out, there are selected subgroups that can more easily contribute data, which produces a systemic pattern that undermines inclusion and diversity. In turn, this pattern initiates and perpetuates a feedback loop that diminishes the impact of data from underrepresented communities and <a href="https://proceedings.neurips.cc/paper/2021/hash/2e855f9489df0712b4bd8ea9e2848c5a-Abstract.html#:~:text=We%20propose%20a%20Process%20for,predetermined%20set%20of%20target%20values.">privileges</a> hegemonic viewpoints.</p><p>Automated facial recognition software is another example. Before the widespread use of generative AI, Dr. Joy Buolamwini and Dr. 
Timnit Gebru found that popular facial recognition systems exhibited <a href="https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=1052&amp;context=uclf">intersectional biases</a>: the systems <a href="http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf">performed significantly worse</a> on individuals of color and, in particular, on women of color. Biases in AI systems have major <a href="https://www.penguinrandomhouse.com/books/670356/unmasking-ai-by-joy-buolamwini/">real-world harms</a> across areas like <a href="https://www.brookings.edu/articles/for-some-employment-algorithms-disability-discrimination-by-default/">employment</a>, <a href="https://amnesty.ca/podcast-facial-recognition-and-policing-protesters/">law enforcement</a>, and <a href="https://www.hrw.org/news/2019/06/21/facial-recognition-technology-us-schools-threatens-rights">education</a>. 
As more synthetic media is produced, such content is then fed back into future systems, creating a pernicious cycle and <a href="https://nyupress.org/9781479837243/algorithms-of-oppression/">perpetuating biases</a> connected to, as a few examples, <a href="https://www.taylorfrancis.com/books/mono/10.4324/9780203900055/black-feminist-thought-patricia-hill-collins">race, class, and gender</a>.</p><p>In practical terms, what might considerations like these mean for human-to-human connection?</p><p>Let’s imagine you are a parent emailing your child’s school counselor to begin a conversation about a behavioral challenge your child is experiencing. You receive a response, but wonder: Was part of this email produced by ChatGPT? If so, which part(s)? Why would the system be used to respond to such a sensitive concern? What might that indicate about the counselor? Perhaps about the school as a whole? Would you fully trust the counselor to assist in the referral of your child?</p><p>Furthermore, what if you knew about the significant biases built into and amplified by generative AI? Or about other <a href="https://www.dair-institute.org/blog/letter-statement-March2023/">ongoing harms</a> connected to these systems, such as <a href="https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/">labor force</a> exploitation, <a href="https://arxiv.org/abs/1906.02243">environmental costs</a> that exacerbate <a href="https://philpapers.org/rec/ADAFOE-2">environmental racism</a>, and <a href="https://www.wsj.com/articles/ai-chatgpt-dall-e-microsoft-rutkowski-github-artificial-intelligence-11675466857">massive data theft</a>? Would this knowledge further erode your trust in communicating with someone whom you suspect <em>may</em> have responded with synthetic text, and, if so, to what degree? Whereas trust may not be the ultimate end goal of human communication, it is still a <em>vital</em> part and outcome of a positive, healthy connection.</p><p>There are a number of key questions moving forward. How can we <a href="https://www.buzzsprout.com/2126417">counter the generative AI hype</a> and educate individuals to be critical consumers of these systems — with the understanding that, as Dr. Rumman Chowdhury has pointed out, AI <a href="https://www.politico.com/newsletters/the-recast/2023/07/14/bias-ai-rumman-chowdhury-twitter-00106412">“is not inherently neutral, trustworthy, nor beneficial</a>”? 
While acknowledging this <a href="https://www.youtube.com/watch?v=gRFaow12xo0&amp;t=980s">nuanced landscape</a>, how do we develop regulations that emphasize <a href="https://dl.acm.org/doi/10.1145/3442188.3445918">accountability</a> on the part of the companies that develop and deploy generative AI (especially through a lens of <a href="https://www.youtube.com/watch?v=7g0l8iDyCSw&amp;t=3096s">algorithmic justice</a> as described by Deborah Raji); <a href="https://newrepublic.com/article/172454/great-ai-hallucination-chatgpt">transparency</a> (e.g., the knowledge that one has encountered synthetic media and an understanding of how the system was trained, as with “<a href="https://www.consentfultech.io/">consentful tech</a>”); and the <a href="https://www.markey.senate.gov/imo/media/doc/letter_to_artificial_intelligence_companies_on_data_worker_labor_conditions_-_091323pdf1.pdf">prevention of exploitative labor</a>?</p><p>Returning to social connection and human-to-human communication, when we use language, we do so for a given purpose — to ask another person a question, explain an idea to someone, or just to socialize. In the context of LLMs, it is important <a href="https://aclanthology.org/2020.acl-main.463.pdf">not to conflate word form and meaning</a>. Referents, actual things and ideas in the world around us, like tulips or compassion, are needed to produce<em> meaning</em>. This meaning cannot be learned from form alone. Given that LLMs are trained on form, these systems do not necessarily learn “meaning,” but instead some “<a href="https://aclanthology.org/2020.acl-main.463.pdf">reflection of meaning into the linguistic form</a>.” As Dr. Bender <a href="https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html">notes</a>, language is relational by its very nature.</p><p>Moving forward, it is essential that we preserve the sanctity of genuine human-to-human connection, with its conflicts, its awkwardness, and its spaces for cultivating relationships built on consistent trust, belonging, and mattering to those in one’s life.</p><p>Are you interested in continuing the conversation around social connection? Please fill out the following <a href="https://docs.google.com/forms/d/e/1FAIpQLSeFbWj5E-2cqa7a5hQYqOO1WTr2uht9DAUN0165wPKdiwdHgg/viewform">form</a>! In addition, are there resources you would recommend including in this piece? Other feedback? Please feel free to reach out to me at any time (alexandra.hasse2556@gmail.com); I am still learning in this space and greatly value learning from you.</p><p><em>This essay is part of the </em><a href="https://medium.com/berkman-klein-center/generative-futures/home"><em>Co-Designing Generative Futures series</em></a><em>, a collection of multidisciplinary and transnational reflections and speculations about the transformative shifts brought on by generative artificial intelligence. These articles are authored by members of the Berkman Klein Center community and expand on discussions that began at the </em><a href="https://cyber.harvard.edu/workshops/gai2023/"><em>Co-Designing Generative Futures conference</em></a><em> in May 2023. 
All opinions expressed are solely those of the author.</em></p><hr><p><a href="https://medium.com/berkman-klein-center/preserving-social-connections-against-the-backdrop-of-generative-ai-abcaebd45c3d">Preserving Social Connections against the Backdrop of Generative AI</a> was originally published in <a href="https://medium.com/berkman-klein-center">Berkman Klein Center Collection</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Thinking Through Generative AI Harms Among Users on Online Platforms]]></title>
            <link>https://medium.com/berkman-klein-center/thinking-through-generative-ai-harms-among-users-on-online-platforms-79cf9b2af49a?source=rss----cdd8dc4c5fc---4</link>
            <guid isPermaLink="false">https://medium.com/p/79cf9b2af49a</guid>
            <category><![CDATA[harm]]></category>
            <category><![CDATA[privacy]]></category>
            <category><![CDATA[generative-ai]]></category>
            <category><![CDATA[safety]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <dc:creator><![CDATA[Sameer Hinduja]]></dc:creator>
            <pubDate>Thu, 09 Nov 2023 15:59:20 GMT</pubDate>
            <atom:updated>2023-11-09T16:51:00.843Z</atom:updated>
            <content:encoded><![CDATA[<p>As a social scientist who views online phenomena through the lenses of trust, safety, security, privacy, and transparency, I seek to understand the potential for misuse and abuse in this current environment of giddy euphoria related to Generative AI (GenAI). Below, I briefly discuss some forms of victimization that the makers and regulators of these tools must consider, and suggest ways to reduce the frequency and impact of potential harms that may emerge and proliferate.</p><figure><img alt="Words stream over a glossy purple surface: perceive, synthesize, interfere." src="https://cdn-images-1.medium.com/max/1024/0*29EQFR6tZA5CABmd" /><figcaption>Photo by <a href="https://unsplash.com/@googledeepmind?utm_source=medium&amp;utm_medium=referral">Google DeepMind</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a>.</figcaption></figure><p>If you’ve spent any meaningful amount of time on social media, you’ve likely been exposed not only to harassment, but also to the presence of bots that spam or otherwise annoy you with irrelevant or intrusive content. GenAI allows for the automatic creation of harassing or threatening messages, emails, posts, or comments on a wide variety of platforms and interfaces, and systematizes their rapid spread. In addition, given that malicious social media users have employed <a href="https://www.theverge.com/2021/9/10/22666953/twitch-sues-alleged-hate-raiders-harassment-streamers">thousands of bots to flood online spaces with hateful content</a>, it is reasonable to assume that GenAI can facilitate this at an even greater scale. Indeed, since GenAI bots can converse and interact in more natural ways than traditional bots, responding to the problem may be much more challenging than using typical content moderation methods.</p><p>Imagine this occurring in the comment thread of your latest Instagram post, or among the community of friends you’ve carefully built in your Twitch or Discord channel, or on your recent LinkedIn post seeking new employment opportunities. Imagine a flood of bots when you’re trying to seek a romantic partner on a dating app. Recently, an AI chatbot was created to identify the type of women a person is interested in and then initiate flirtatious conversation with them <a href="https://futurism.com/the-byte/cupidbots-dating-app-ai-harassment">until they agree to a date or share their phone number</a>. Another chatbot has been accused of pursuing prospective romantic partners when they were clearly not interested, even <a href="https://www.giantfreakinrobot.com/tech/ai-program-replika-accused-of-sexual-harassment.html">becoming sexually aggressive and harassing</a>. 
One can easily envision the problematic possibilities when these technologies are combined, refined, and exploited.</p><p>Relatedly, I am very concerned about the dissemination and amplification of hate speech, given the ability of GenAI to be used to create and propagate text, memes, deepfakes, and related harmful content that targets specific members of marginalized groups or attacks the group as a whole.</p><blockquote>Even if the hate speech is created by human users, accounts created by GenAI can increase the visibility, reach, and virality of existing problematic content by fostering large upswings in engagement for those posts through high volumes of likes, shares, and comments.</blockquote><p>It is not clear how proficient platforms are in detecting unnatural behavior of this ilk, and malicious users can easily program frequencies and delays to mimic typical human activity.</p><p>Many of us are familiar with how deepfakes have been used over the last decade to compromise the integrity of the information landscape through <a href="https://www.nytimes.com/2023/02/07/technology/artificial-intelligence-training-deepfake.html">disinformation campaigns</a> and <a href="https://www.technologyreview.com/2021/02/12/1018222/deepfake-revenge-porn-coming-ban/">image-based sexual violence</a>. GenAI technologies not only greatly assist in the creation of deepfakes, but also can intersect with sextortion, catfishing, doxing, stalking, threats, and identity theft. Imagine this: A malicious individual creates a new account on a dating app. An unsuspecting user is then fooled into believing they are talking with a real person in their town, even though the chat conversation is facilitated by GenAI. Soon, the unsuspecting user begins candidly sharing personal information as they build a strong emotional bond with the fake account. When the malicious individual begins to send nude photo and video content to deepen intimacy, the unsuspecting user is unable to discern that it is manufactured. After responding in kind with genuine, private, sexual photos, extortion and threats ensue. Even after the victim responds to the demands, the malicious individual still shares the victim’s private information publicly on other message boards. It’s reasonable to expect new iterations of GenAI tools that can live-search the Internet and integrate queried information, organize it, connect it with other sources, and build a detailed dossier about a person. This would contribute to additional privacy violations, stalking, and threats against the unsuspecting user, as well as fraudulent activity (e.g., counterfeit documents, wide-scale phishing attacks) and identity theft.</p><p>Given how many forms of abuse can be aided and abetted by GenAI, an essential question surfaces: <em>What can be done here to mitigate risk and harm?</em></p><p>Initiatives that might be considered low-hanging fruit often involve education of end users to augment their ability to recognize GenAI creations as synthetic and to interpret and react to them accordingly. 
This can occur in part through <a href="https://www.wired.com/story/how-to-spot-generative-ai-text-chatgpt/">improved detection algorithms</a>, <a href="https://techcrunch.com/2023/05/23/microsoft-pledges-to-watermark-ai-generated-images-and-videos/">labeling/watermarking</a>, <a href="https://twitter.com/CommunityNotes/status/1663609484051111936?s=20">notifications/warnings</a>, and in-app or in-platform educational content of a compelling nature (e.g., when TikTok asked top influencers to create short videos that <a href="https://newsroom.tiktok.com/en-us/helping-users-manage-their-screen-time">encourage users to take breaks from screentime</a> or <a href="https://newsroom.tiktok.com/en-us/create-kindness-on-tiktok">teach viewers how to counter online bullying</a>).</p><p>Outside of these platform-centric efforts, <a href="https://www.timeshighereducation.com/campus/teaching-ai-literacy-how-begin">media literacy education</a> in schools must also require instruction in the use (and possible misuse) of GenAI tools, given their growing adoption among young people. Other theoretically simple solutions involve the ability for creators to easily attach Do Not Train flags to certain pieces of output that should not end up as training data in large language models (LLMs) (e.g., Adobe’s <a href="https://contentauthenticity.org/">Content Authenticity Initiative</a> is advocating for this on an industry-wide level (h/t <a href="https://cyber.harvard.edu/people/nfreitas">Nathan Freitas</a>)). New, elegant, privacy-forward solutions to quickly and consistently verify authentic users — their identity, their voice, their persona in photo and video (and, subsequently, remove non-human users) — must be developed and deployed. To be sure, though, protections must be in place so that human users (especially those historically marginalized) are not algorithmically misclassified because of existing biases in training datasets.</p><p>Can tech companies that provide GenAI models to their user base also reasonably mandate rule compliance? That is, can the tool itself (and the messaging that surrounds it) be crafted in a way that deters misuse and promotes prosocial or at least neutral output? Can it be presented to users with both excitement and cautions? Can clear examples of appropriate and inappropriate use be provided? Since being logged-in is likely required, can the platform remind the users that logs are kept to facilitate investigations should policy violations occur? And can gentle reminders and prompts periodically jog the memory of users that appropriate use is expected? All of this seems especially important if the tool is provided seamlessly and naturally within the in-app experience on hugely popular platforms (e.g., <a href="https://help.snapchat.com/hc/en-us/articles/13266788358932-What-is-My-AI-on-Snapchat-and-how-do-I-use-it-">My AI</a> on Snapchat was rolled out to 750 million monthly users and <a href="https://newsroom.snap.com/early-insights-on-my-ai">fielded 10 billion messages from over 150 million users within two months</a>).</p><p>Employees at all levels within AI research and development firms must operate within an ethos where “do no harm” is core to what they build. To be sure, tech workers are learning on the fly in this brave new world, and some must now retrofit solutions that ground human dignity, privacy, security, and the mitigation of bias into their products and services. It is critical. 
Not only will it reduce the incidence of various risks and harms, but it can contribute to further adoption and growth of their models as the signal-to-noise ratio of accurate, objective, and prosocial content creation improves.</p><p>Partnerships between academia and tech companies continue to hold promise for identifying solutions to technological problems, and more initiatives focused on GenAI issues should be supported and promoted. Can researchers gain increased access to publicly available data mined via platform APIs to identify historical and current behavioral clues — as well as anonymized account data (date of creation, average frequency of engagement, relevant components of the social network graph) that readily point to synthetic users? Might they somehow obtain anonymized data not just for adult users but also for minors (those 17 years of age and younger), given minors’ comparatively greater vulnerability to the internalization and externalization of harm? And <a href="https://eber.uek.krakow.pl/index.php/eber/article/view/2113/852">what can be learned from the financial and pharmaceutical sectors</a> when it comes to government involvement and regulation to prevent ethical violations, biases and discriminatory practices, economic disparities, and other outcomes of misuse with GenAI? For instance, can risk profiles be established for all AI applications with baselines for rigor of assessment, mitigation of weaponization and exploitation, and processes for recovery? Those with the highest scores would likely gain the most market share, and maintaining those scores would motivate quality control and constant refinement.</p><blockquote>Finally, we cannot keep moving ahead at breakneck speed without carefully designed regulatory frameworks for GenAI that establish standards and legal parameters and that set in place sanctions for those entities that transgress.</blockquote><p>This includes clearly describing and prohibiting (and designing prevention mechanisms for) edge cases where victimization will likely result. Moreover, proper governance requires detailed protocols for <a href="https://www.brookings.edu/articles/the-us-government-should-regulate-ai/">audits, licensing, international collaboration, and non-negotiable safety practices</a> for public LLMs. <a href="https://www.whitehouse.gov/ostp/ai-bill-of-rights/">The Blueprint for an AI Bill of Rights</a> from the US Office of Science and Technology Policy is a good start with great macro-level intentions, but it reads more like a strong suggestion than a directive with applied specificity. With regard to data privacy and security in general, the US has failed to keep pace with the comprehensive, forward-thinking efforts of other countries. Urgency is needed so that this does not happen yet again with GenAI, and so that we can grow confident that its positives measurably outweigh its negatives.</p><p><em>This essay is part of the </em><a href="https://medium.com/berkman-klein-center/generative-futures/home"><em>Co-Designing Generative Futures series</em></a><em>, a collection of multidisciplinary and transnational reflections and speculations about the transformative shifts brought on by generative artificial intelligence. These articles are authored by members of the Berkman Klein Center community and expand on discussions that began at the </em><a href="https://cyber.harvard.edu/workshops/gai2023/"><em>Co-Designing Generative Futures conference</em></a><em> in May 2023.
All opinions expressed are solely those of the author.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=79cf9b2af49a" width="1" height="1" alt=""><hr><p><a href="https://medium.com/berkman-klein-center/thinking-through-generative-ai-harms-among-users-on-online-platforms-79cf9b2af49a">Thinking Through Generative AI Harms Among Users on Online Platforms</a> was originally published in <a href="https://medium.com/berkman-klein-center">Berkman Klein Center Collection</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building Knowledge about Generative AI with Mobile Populations]]></title>
            <link>https://medium.com/berkman-klein-center/building-knowledge-about-generative-ai-with-mobile-populations-806c80e899b1?source=rss----cdd8dc4c5fc---4</link>
            <guid isPermaLink="false">https://medium.com/p/806c80e899b1</guid>
            <category><![CDATA[participatory-methods]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[generative-ai]]></category>
            <category><![CDATA[refugees]]></category>
            <category><![CDATA[border-control]]></category>
            <dc:creator><![CDATA[Petra Molnar]]></dc:creator>
            <pubDate>Thu, 02 Nov 2023 14:16:55 GMT</pubDate>
            <atom:updated>2023-11-02T16:25:31.683Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="Border wall along El Camino Del Diablo, the Devil’s Highway, stretching away, desolately, into the distance." src="https://cdn-images-1.medium.com/max/1024/0*CCcZpmEa4xhk4gSe" /><figcaption>Border wall along El Camino Del Diablo, the Devil’s Highway. Photo by Petra Molnar, February 2022.</figcaption></figure><p>Like a wound in the landscape, the rusty border wall cuts along Arizona’s Camino Del Diablo, the Devil’s Highway. Once the pride and joy of the Trump Administration, this wall is once again the epicenter of a growing political row. President Biden’s May 2023 repeal of the Trump Administration’s Covid-era policy of using <a href="https://www.pewresearch.org/short-reads/2022/04/27/key-facts-about-title-42-the-pandemic-policy-that-has-reshaped-immigration-enforcement-at-u-s-mexico-border/">Title 42</a><em> </em>comes with the introduction of new hardline policies preventing people from claiming asylum in the United States, undergirded by a growing commitment to a <a href="https://www.tni.org/en/article/the-everywhere-border">virtual smart border extending far beyond its physical frontier</a>.</p><p><a href="https://static1.squarespace.com/static/5f99b75728e98b061732d4a8/t/5fab946a5e6bfa61e39ca33e/1605080175624/A-75-590-AUV_race-tech-borders.pdf">Racism, technology, and borders create a cruel intersection</a>. From <a href="https://www.statewatch.org/analyses/2021/border-surveillance-drones-and-militarisation-of-the-mediterranean/">drones</a> used to prevent people from reaching the safety of European shores, to <a href="https://theintercept.com/2019/07/26/europe-border-control-ai-lie-detector/">artificial intelligence (AI) lie detectors</a> at various airports worldwide, to planned <a href="https://www.theborderchronicle.com/p/robo-dogs-and-refugees-the-future?utm_source=url">robodogs patrolling the US-Mexico border</a>, people on the move are caught in the crosshairs of an unregulated and harmful set of technologies. These projects are touted to control migration, bolstering a <a href="https://www.tni.org/en/publication/financing-border-wars">lucrative multi-billion-dollar border industrial complex</a>. Coupled with increasing international environmental destabilization, more and more people are ensnared in a growing and global surveillance dragnet. <a href="https://www.washingtonpost.com/opinions/2022/02/22/robotic-dogs-arizona-border-will-not-solve-migrant-crisis/">Thousands have already died</a>. The rest <a href="https://link.springer.com/chapter/10.1007/978-3-030-81210-2_3">experience old and new traumas</a> provoked and compounded by omnipresent surveillance and automation.</p><h4><strong>What do new tools like generative AI mean for this regime of border control</strong>?</h4><p>I have spent the last five years tracking how new technologies of border management — surveillance, automated decision making, and various experimental projects — are playing out in migration control. Through years of my travels from Palestine to Ukraine to Kenya to US/Mexico, the power of comparison shows me time and again how these spaces allow for frontier mentalities to take over, creating environments of silence and violence.</p><p>In this era of generative technologies, this work is underpinned by broader questions: Whose perspectives matter when talking about innovation, and whose priorities take precedence? 
What do critical representation and meaningful participation look like — representation that foregrounds people’s agency and does not contribute to the <a href="https://www.fmreview.org/photo-policy#:~:text=We%20have%20decided%20that%20we,correct%20way%20to%20do%20this">“poverty porn” that is so common in representations coming from spaces of forced migration</a>? And who gets to create narratives and generate stories that underpin the foundations of tools like GPT-4 and whatever else is coming next?</p><figure><img alt="A grid of six images showing: high-tech refugee camp on Kos Island in Greece; surveillance tower in Arizona; two women cross the Ukraine-Poland border; memorial site in the Sonora desert; protest against new refugee camp on Samos; Calvin, a medical doctor, holds keys from his apartment in Ukraine after escaping across the Hungary border." src="https://cdn-images-1.medium.com/max/1024/0*8liMURG_P2okAkqn" /><figcaption><em>Clockwise from top left: High-tech refugee camp on Kos Island in Greece; surveillance tower in Arizona; two women cross the Ukraine-Poland border; memorial site in the Sonora desert; protest against new refugee camp on Samos; Calvin, a medical doctor, holds keys from his apartment in Ukraine after escaping across the Hungary border. Photos by Petra Molnar, 2021–2022.</em></figcaption></figure><p>Tools like generative AI are socially constructed by and with particular perspectives and value systems. They are a reflection of the so-called Global North and can encode and perpetuate biases and discrimination. In August of this year, to test where generative AI systems stand, I ran a simple prompt through the Canva and Craiyon image generation software: “What does a refugee look like?”</p><figure><img alt="Grid of Craiyon-generated images of “refugees” dominated by forlorn and emaciated faces of Black children." src="https://cdn-images-1.medium.com/max/920/0*sUiyGeACp7tjTigk" /></figure><figure><img alt="Grid of Craiyon-generated images of “refugees” dominated by forlorn and emaciated faces of Black children and women, some wearing headscarves." src="https://cdn-images-1.medium.com/max/902/0*So8PMsE7Lm351RiL" /></figure><figure><img alt="Grid of Canva-generated images of “refugees,” dominated by vaguely Middle Eastern people smiling in expectation of being rescued." src="https://cdn-images-1.medium.com/max/708/0*QN4DhK__HYDuEeI1" /></figure><figure><img alt="Grid of Canva-generated images of “refugees,” dominated by vaguely Middle Eastern people smiling in expectation of being rescued." src="https://cdn-images-1.medium.com/max/700/0*9RVABBAKUHIUmBfM" /></figure><h4><strong>What stories do these images tell? What perspectives do they hide?</strong></h4><p>It is telling that for generative AI, the concept of a “refugee” elicits either forlorn and emaciated faces of Black children or else portraits of doe-eyed and vaguely Middle Eastern people waiting to be rescued. When I sent these depictions to a colleague who is currently in a situation of displacement and identifies as a refugee, she laughed and said, “I sure as hell hope I don’t look like this.”</p><p>Generative AI is also inherently exploitative. Its training data are scraped and extracted, often without the knowledge or consent of the people who created the data or who appear in it. Menial tasks that allow the models to function fall to underpaid workers outside of North America and Europe.
The benefits of this technology do not accrue equally, and generative AI looks set to replicate the vast power differentials between those who benefit and those who are the subjects of high-risk technological experiments.</p><h4><strong>How can we think more intentionally about who will be impacted by generative AI and work collaboratively — and rapidly — with affected populations to build knowledge?</strong></h4><p>The production of any kind of knowledge is always a political act, especially since researchers often build entire careers on documenting the trauma of others, “<a href="https://doi.org/10.1093/jhuman/huq004">stealing stories” as they go along</a>. Being entrusted with other people’s stories is a deep privilege. Generating any type of knowledge is not without its pitfalls, and academia is in danger of falling into the same trap with generative AI research: creating knowledge in isolation from communities, failing to consider the expertise of those we’re purporting to learn from. How can researchers and storytellers limit the extractive nature of research and story collection? Given the power differentials involved, research and storytelling can and should be uncomfortable, and we must pay particular attention to why certain perspectives in the so-called Global North are given precedence while the <a href="https://datasociety.net/library/a-primer-on-ai-in-from-the-majority-world/">rest of the world continues to be silenced</a>. This is particularly pertinent when we are talking about a vast system of increasingly autonomous knowledge generation through AI.</p><p>The concept of story and knowledge stewardship, drawn from Indigenous learnings, may be helpful: it recognizes that the storyteller is not exempt from critical analysis of their own power and privilege over other people’s narratives and should instead hold space for stories to tell themselves. This type of framing continually places responsibility at the center (see for example the work of the Canadian <a href="https://fnigc.ca/">First Nations Information Governance Centre</a>). Storytelling and sharing are also profound acts of resistance to simplified and homogenized narratives, which are common when there is a power differential between the researcher and their topic. Established methods of knowledge production are predicated on an outside expert parachuting in, extracting data, findings, and stories, and using their westernized credentials to further their career as the expert.</p><p>True commitment to participatory approaches requires ceding space, meaningfully redistributing resources, and supporting affected communities in telling their own stories. And real engagement with decolonial methodologies requires an iterative understanding of these framings, a re-framing process that is never complete. By decentering so-called Global North narratives and not tokenizing people with lived experience as research subjects or afterthoughts, researchers can create opportunities that recognize their privilege and access to resources — and then redistribute those resources through meaningful participation, creating an environment for people to tell their own stories.
It is this commitment to participatory approaches that we need in generative AI research, especially as it meets up with border control technologies.</p><figure><img alt="Headshots of Veronica Martinez, Nery Sataella, Wael Qarssifi, Simon Drotti, and Rajendra Paudel, captioned by “Meet Our 2022–2023 MTM Fellows.”" src="https://cdn-images-1.medium.com/max/1024/0*VSi3FXOoGaVr90pk" /></figure><p>One small example is the <a href="https://www.migrationtechmonitor.com/">Migration and Technology Monitor</a> project at York University’s <a href="https://refugeelab.ca/">Refugee Law Lab</a>, where I am Associate Director. The Migration and Technology Monitor is a platform and an archive with a focus on migration, technology, and human rights. Our recently launched <a href="https://www.migrationtechmonitor.com/2023fellows">fellowship program</a> aims to create opportunities for people with lived experience to meaningfully contribute to research, storytelling, policy, and advocacy conversations from the start, not as an afterthought. Among our aims is to generate a collaborative, intellectual, and advocacy community committed to border justice. We prioritize opportunities for participatory work, including the ability to pitch unique and relevant projects by affected communities themselves. Veronica Martinez, Nery Sataella, Simon Drotti, Rajendra Paudel, and Wael Qarssifi are part of our first cohort of fellows from mobile communities from Venezuela to Mexico to Uganda to Nepal to Malaysia. Our hope is that the fellowship creates a community that provides spaces of collaboration, care, and co-creation of knowledge. We are specifically sharing resources with people on the move who may not be able to benefit from funding and resources readily available in the EU and North America. People with lived experiences of migration must be in the driver’s seat when interrogating both the negative impacts of technology and the creative solutions that innovation can bring to the complex stories of human movement, such as using generative AI to compile resources for mobile communities.</p><p>Participatory methodologies that foreground lived experience as the starting place for generating knowledge inherently destabilize established power hierarchies of knowledge production. These approaches encourage researchers and tech designers to critically interrogate their own positionality and how much space their own so-called expertise takes up in the generation of knowledge at the expense of other realities. These framings and commitments are paramount, especially in contexts with fraught histories and vast power differentials — for example, where mobile populations are the abject and feared others and where generative AI models learn from these realities.
This is especially pertinent for scholars, technologists, and researchers who are themselves part of the so-called Rest of World: a re-imagination of expertise and knowledge must come from the ground up, and any tools that are created must recognize and fight against these power differentials.</p><p>It is through participatory methodologies that we may come a step closer towards seeing a world in which many worlds fit, a phrase which, as my BKC colleague Ashley Lee reminds us, comes from the Zapatista Indigenous resistance movement — a world where “nothing about us without us” moves beyond an old community organizer motto towards a real commitment to participation, story stewardship, and public scholarship which honors and foregrounds lived experience.</p><p><em>Thank you to Madeline McGee for her suggestions, which greatly improved this piece, and to Sam Hinds for her careful edits.</em></p><p><em>This essay is part of the </em><a href="https://medium.com/berkman-klein-center/generative-futures/home"><em>Co-Designing Generative Futures series</em></a><em>, a collection of multidisciplinary and transnational reflections and speculations about the transformative shifts brought on by generative artificial intelligence. These articles are authored by members of the Berkman Klein Center community and expand on discussions that began at the </em><a href="https://cyber.harvard.edu/workshops/gai2023/"><em>Co-Designing Generative Futures conference</em></a><em> in May 2023. All opinions expressed are solely those of the author.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=806c80e899b1" width="1" height="1" alt=""><hr><p><a href="https://medium.com/berkman-klein-center/building-knowledge-about-generative-ai-with-mobile-populations-806c80e899b1">Building Knowledge about Generative AI with Mobile Populations</a> was originally published in <a href="https://medium.com/berkman-klein-center">Berkman Klein Center Collection</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The EU AI Act]]></title>
            <link>https://medium.com/berkman-klein-center/the-eu-ai-act-e73e1a35a843?source=rss----cdd8dc4c5fc---4</link>
            <guid isPermaLink="false">https://medium.com/p/e73e1a35a843</guid>
            <category><![CDATA[eu-ai-act]]></category>
            <category><![CDATA[regulation]]></category>
            <category><![CDATA[generative-ai]]></category>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[governance]]></category>
            <dc:creator><![CDATA[Samson Esayas]]></dc:creator>
            <pubDate>Thu, 02 Nov 2023 14:16:05 GMT</pubDate>
            <atom:updated>2023-11-02T14:16:04.936Z</atom:updated>
            <content:encoded><![CDATA[<h4><strong>A Real-Time Experiment to Regulate Generative AI</strong></h4><figure><img alt="Flags of the member states of the European Union in front of the European Commission building in Brussels." src="https://cdn-images-1.medium.com/max/1024/0*3mZzohhJZSF6RKmm" /><figcaption>Photo by <a href="https://unsplash.com/@christianlue?utm_source=medium&amp;utm_medium=referral">Christian Lue</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a>.</figcaption></figure><p>The public release of ChatGPT in November 2022 represents a significant breakthrough in generative AI — systems that craft synthetic content based on patterns learned from extensive datasets. This development has heightened concerns about AI’s impact on individuals and society at large. In the brief period since this breakthrough, there has been a surge in lawsuits pertaining to <a href="https://www.nytimes.com/2023/07/10/arts/sarah-silverman-lawsuit-openai-meta.html">copyright</a> and <a href="https://www.verdict.co.uk/microsoft-and-openai-sued-for-3bn-in-privacy-complaint/">privacy</a> violations, as well as<a href="https://www.reuters.com/technology/australian-mayor-readies-worlds-first-defamation-lawsuit-over-chatgpt-content-2023-04-05/"> defamation</a>. One lawyer learned a hard lesson about the dangers of AI “hallucination” after citing seemingly genuine but <a href="https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-lawsuit-chatgpt.html">bogus</a> judicial precedents generated by ChatGPT in a legal brief submitted to court. There are even reports that such systems have been implicated in an individual’s decision to commit <a href="https://www.vice.com/en/article/pkadgm/man-dies-by-suicide-after-talking-with-ai-chatbot-widow-says">suicide</a>.</p><p>Given these concerns, there is a growing demand for regulatory action. OpenAI’s CEO, Sam Altman, addressed the US Congress in May and called upon legislators to act.</p><p>The EU has taken the lead in legislative endeavors. In April 2021, the European Commission proposed a Regulation on AI (<a href="https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A52021PC0206">AI Act</a>), marking the first step toward a comprehensive global legal framework on AI. This landmark legislation aims to foster a human-centric AI, directing its development in a way that respects human dignity, safeguards fundamental rights, and guarantees the security and trustworthiness of AI systems.</p><blockquote>The agile innovation process that permeates the software world, where distributed technologies are frequently released in early stages and iteratively refined based on usage data, necessitates a regulatory system that is designed to learn and adapt.</blockquote><p>The proposed AI Act adopts a risk-based approach, categorizing AI systems into three main risk levels: unacceptable risk, high risk, and limited risk. This classification depends on the potential risk posed to health, safety, and fundamental rights. Certain AI systems such as those that generate “trustworthiness” scores, akin to the Chinese Social Credit System, are considered to present unacceptable risks and are completely prohibited. AI systems used in hiring processes and welfare benefit decisions fall into the high-risk category and are subject to stringent obligations. These include conducting a conformity assessment and adhering to certain data quality and transparency requirements. 
Meanwhile, chatbots and deepfakes are considered limited risk, subject to relatively minimal transparency requirements.</p><p>Shortly after the proposal was drafted, and after the release of ChatGPT, it became clear that the Commission’s draft contained a significant hole: it did not address general-purpose AI or “foundational models” like OpenAI’s GPT-n series, which underpins ChatGPT. Fortunately, due to the EU’s multistage legislative process, the release of ChatGPT occurred while the European Parliament was deliberating on the AI Act. This provided a timely opportunity to include new provisions specifically targeting foundational models and generative AI.</p><p>Under an <a href="https://www.europarl.europa.eu/doceo/document/TA-9-2023-0236_EN.pdf">amendment</a> adopted by the European Parliament in June, providers of foundational models would be required to identify and reduce risks to health, safety, and fundamental rights through proper design and testing before placing their models on the market. They must also implement measures to ensure appropriate levels of performance and adopt strategies to minimize energy and resource usage. Moreover, these AI systems must be registered in an EU database, with details on their capabilities, foreseeable risks, and measures taken to mitigate these risks, including an account of risks that remain unaddressed. The amendment would impose additional obligations on foundational models employed in generative AI. These obligations include transparency requirements to ensure users are aware that content is machine-generated, as well as adequate safeguards against the generation of unlawful content. Providers must also publish a detailed summary of copyrighted content used to train their systems.</p><p>While the final version of the AI Act will be determined by the trilogue among the European Commission, European Parliament, and European Council, its current form already marks an ambitious and real-time attempt to regulate generative AI, highlighting the challenges of regulating a rapidly evolving target.</p><p>On this occasion, the EU’s legislative process kept pace with the latest advancements before the laws were set in stone. However, this raises the question: how often can we count on such fortunate timing, and what proactive measures should be taken?</p><p>We must embed flexibility into such laws. Indeed, the EU has taken some steps in this direction, granting the Commission the authority to adapt the law by adding new use cases to the risk categories. Yet, considering previous experiences with the Commission’s implementation of delegated acts, it’s debatable whether such mechanisms alone can keep up with the rapid pace of AI development.</p><p>The agile innovation process that permeates the software world, where distributed technologies are frequently released in early stages and iteratively refined based on usage data, necessitates a regulatory system that is designed to learn and adapt.</p><p>It is important to embrace a variety of techniques for adaptive regulation, such as regulatory experimentation through pilot projects and embedding systematic and periodic review and revision mechanisms into legislation. Adaptive regulation further necessitates openness to a diversity of approaches across jurisdictions.
It encourages learning from one another, which implies that the EU should resist its inclination to solely dictate global standards for AI regulation, and instead regard its efforts as contributions to a collective pool of learning resources.</p><blockquote>While adaptive regulation does come with its own costs, clinging to static regulation designed for a hardware world with fully formed products manufactured in centralized facilities could prove even more costly in the face of rapidly advancing technology.</blockquote><p>Simultaneously, the amendment has significantly broadened the Act’s scope. While the Commission’s draft focused on mitigating harms to health, safety, and fundamental rights, the European Parliament’s version extends these concerns to include democracy, the rule of law, and environmental protection. Consequently, providers of high-risk AI systems and foundational models are required to manage risks associated with all these areas. However, this raises concerns that the Act might transform into a catch-all regulation with diluted impact, thereby creating a considerable burden on providers to translate these broad goals into concrete guardrails.</p><p>This amendment has exacerbated existing concerns that these broad requirements and accompanying compliance costs might stifle innovation. In an open <a href="https://drive.google.com/file/d/1wrtxfvcD9FwfNfWGDL37Q6Nd8wBKXCkn/view">letter</a> to EU authorities, over 150 executives from companies including Siemens, Airbus, Deutsche Telekom, and Renault criticized the AI Act for its potential to “undermine Europe’s competitiveness and technological autonomy.” One of the significant concerns raised by these companies relates to the legislation’s strict requirements aimed at generative AI systems and foundational models. The letter equates the importance of generative AI with the invention of the internet, considering its potential to shape not only the economy but also culture and politics. The signatories caution that the compliance costs and risks embedded in the AI Act could “result in highly innovative companies relocating their operations overseas, investors retracting their capital from the development of European foundational models, and European AI in general.”</p><p>OpenAI has already <a href="https://www.theverge.com/2023/5/25/23737116/openai-ai-regulation-eu-ai-act-cease-operating">warned</a> about potentially exiting the EU if the conditions of the AI Act prove too restrictive. There are also indications that even major players are cautious when rolling out their latest services. The launch of Google Bard was delayed in the EU by two months due to compliance concerns with the General Data Protection Regulation. However, it was ultimately introduced with improved <a href="https://www.computerworld.com/article/3702768/google-bard-launches-in-eu-overcoming-data-privacy-concerns-in-the-region.html">privacy</a> safeguards, highlighting the EU’s role in shaping the global data policies of such organizations.</p><p>For its part, the EU contends that the AI Act is designed to stimulate AI innovation and underscores key enabling measures included in the Act.
These encompass regulatory sandboxes, which serve as test beds for AI experimentation and development, an industry-led process for defining standards that assist with compliance, and safe harbors for AI research.</p><p>Of course, industry’s concerns about the AI Act’s impact on innovation, as well as the EU’s responses to these matters, represent an essential part of balancing the inevitable trade-offs inherent in regulating any emerging technology, and time will tell which direction the pendulum swings. During the trilogue negotiations, it is likely that the European Council will push back on some of the amendments from the Parliament. Indeed, there is merit in carefully weighing the benefits of introducing broad objectives such as democracy and the rule of law without concrete measures in place to support these goals. One might argue that efforts are better spent strengthening the safeguards for fundamental rights, which is crucial for protecting both democracy and the rule of law. Numerous civil society organizations have already emphasized the need for incorporating fundamental rights impact assessments and empowering individuals and public interest organizations to file complaints and seek redress for harms inflicted by AI.</p><p>Moreover, it would be beneficial to concentrate on tangible guardrails, such as facilitating researchers’ access to foundational models, data, and parameters. This approach is likely to be more effective in promoting accountability, democracy, and the rule of law than a general requirement to conduct risk assessments based on such broad concepts.</p><p>Regardless of the final form of the text, the AI Act is poised to significantly shape AI development and the regulatory landscape in the EU and beyond. Therefore, the AI community must prepare for its impact.</p><p><em>This essay is part of the </em><a href="https://medium.com/berkman-klein-center/generative-futures/home"><em>Co-Designing Generative Futures series</em></a><em>, a collection of multidisciplinary and transnational reflections and speculations about the transformative shifts brought on by generative artificial intelligence. These articles are authored by members of the Berkman Klein Center community and expand on discussions that began at the </em><a href="https://cyber.harvard.edu/workshops/gai2023/"><em>Co-Designing Generative Futures conference</em></a><em> in May 2023. All opinions expressed are solely those of the author.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e73e1a35a843" width="1" height="1" alt=""><hr><p><a href="https://medium.com/berkman-klein-center/the-eu-ai-act-e73e1a35a843">The EU AI Act</a> was originally published in <a href="https://medium.com/berkman-klein-center">Berkman Klein Center Collection</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Co-Designing Shared Futures]]></title>
            <link>https://medium.com/berkman-klein-center/co-designing-shared-futures-3d15f9883773?source=rss----cdd8dc4c5fc---4</link>
            <guid isPermaLink="false">https://medium.com/p/3d15f9883773</guid>
            <category><![CDATA[artificial-intelligence]]></category>
            <category><![CDATA[generative-ai]]></category>
            <category><![CDATA[governance]]></category>
            <category><![CDATA[dialogue]]></category>
            <category><![CDATA[global]]></category>
            <dc:creator><![CDATA[Elisabeth Sylvan, PhD]]></dc:creator>
            <pubDate>Thu, 02 Nov 2023 14:15:36 GMT</pubDate>
            <atom:updated>2023-11-09T20:01:55.014Z</atom:updated>
            <content:encoded><![CDATA[<h4>Global Collaboration Creates Ethical Generative AI</h4><p>Though the underlying technology was based on years of AI research by many individuals and organizations, the launch of ChatGPT by OpenAI in November of 2022 captured the collective imagination in an extraordinary way. The release started an ongoing conversation about the potential of the technology to improve our lives, and to harm them. In the public consciousness, AI has remained confusing, overwhelming, and a bit scary — even its name can seem imprecise and distracting. And with generative AI, phrases like “hallucination” and “job replacement” only foment more fear. Yet beneath the hype, doomerism, and techno-utopianism sits the fundamental question of what kind of societies we want to live in — and what choices we should make to realize them.</p><p>One month after the release of ChatGPT, a group of collaborators — the Nordic Centre at BI Norwegian Business School, the Institute for Technology and Society of Rio de Janeiro, the Technical University of Munich, and the Berkman Klein Center — decided it was time to discuss the implications of generative AI. We knew that if generative AI were to realize its true pro-social potential and have its harms mitigated, then cross-sector, cross-disciplinary, and cross-national conversation was needed. Many in our community had already begun to explore use cases, governance, accountability, and the systems’ social impact broadly. It was time to rebuild bridges weakened by the COVID pandemic era.</p><figure><img alt="Sabelo Mhlambi in conversation with Jenn Louie at “Co-Designing Generative Futures: A Global Conversation about AI,” May 2023." src="https://cdn-images-1.medium.com/max/1024/1*n7ZDPsz4opi99v6E64KESg.jpeg" /><figcaption>Sabelo Mhlambi and Jenn Louie at “Co-Designing Generative Futures: A Global Conversation about AI,” May 2023.</figcaption></figure><p>In May 2023, the Berkman Klein Center hosted “Co-Designing Generative Futures: A Global Conversation about AI” in Cambridge, USA, bringing together colleagues old and new with backgrounds from academia, civil society, government, and industry, from over two dozen countries and all of the continents other than Antarctica. We discussed the need for researcher access to data and for real study.</p><p>At the same time, many emphasized that policymakers do <em>not </em>have the time; they need to act <em>now</em>. Action and study need to happen in parallel.</p><figure><img alt="Samson Esayas speaks through a microphone to participants at “Co-Designing Generative Futures: A Global Conversation about AI,” May 2023."
src="https://cdn-images-1.medium.com/max/1024/1*XYftv4pqSfGImUEMu643bg.jpeg" /><figcaption>Samson Esayas at “Co-Designing Generative Futures: A Global Conversation about AI,” May 2023.</figcaption></figure><blockquote>Today we introduce the <a href="https://medium.com/berkman-klein-center/generative-futures/home">Co-Designing Generative Futures series</a>, a collection of multidisciplinary, transnational reflections and speculations about the transformative shifts brought on by generative artificial intelligence, as seen by members of the Berkman Klein Center community.</blockquote><p>In the first set of essays, <a href="https://medium.com/berkman-klein-center/the-eu-ai-act-e73e1a35a843"><strong>Samson Esayas</strong></a> addresses the potential implications of policy, speculating on the role the European Union’s AI Act may play in the governance and development of generative technologies. And <a href="https://medium.com/berkman-klein-center/building-knowledge-about-generative-ai-with-mobile-populations-806c80e899b1"><strong>Petra Molnar</strong></a> challenges us to consider the potential impact of generative AI on the surveillance of borders and of migrants, urging us to engage displaced people using participatory methods to understand their perspectives and protect their safety.</p><p>The second installment addresses how generative AI challenges fundamental concerns about our experiences of being human. <a href="https://medium.com/berkman-klein-center/preserving-social-connections-against-the-backdrop-of-generative-ai-abcaebd45c3d"><strong>Alexa Hasse</strong></a> examines how generative AI tools might change trust in our human social relationships. <a href="https://medium.com/berkman-klein-center/thinking-through-generative-ai-harms-among-users-on-online-platforms-79cf9b2af49a"><strong>Sameer Hinduja</strong></a> offers a deep dive into the sobering potential of generative AI tools to perpetuate online harassment at massive scale. And <a href="https://medium.com/berkman-klein-center/you-know-for-kids-47731a0a72f8"><strong>Bill Shribman</strong></a> brings the perspective of a children’s media producer as he explores how media literacy education may need to shift in light of advancements in AI technology.</p><figure><img alt="Maroussia Lévesque and Petra Molnar share a moment of levity at “Co-Designing Generative Futures: A Global Conversation about AI,” May 2023." src="https://cdn-images-1.medium.com/max/1024/1*MNrP2O4lzWTI96E3EWO_KA.jpeg" /><figcaption>Maroussia Lévesque and Petra Molnar at “Co-Designing Generative Futures: A Global Conversation about AI,” May 2023.</figcaption></figure><p>As members of our community continue to study generative AI, both the urgency and the need for deeper consideration persist. Policymakers are moving quickly towards substantial legislation in multiple regions across the world. The technology keeps improving. Technology innovators are finding new applications for generative AI, and we likely have only scratched the surface of what is to come. Researchers are publishing new findings about concerns regarding bias, data privacy, data ownership, disinformation, and vast inequities across regions and communities. And they urgently need more access to data. Looking forward, the necessity for continued collaboration across sectoral, national, and disciplinary boundaries seems all the more critical.</p><p>As Senior Director of Programs and Strategy at the Berkman Klein Center, I am committed to sustaining co-design efforts. 
We welcome fresh perspectives and opportunities for engagement from around the globe, and from new sectors and stakeholders.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3d15f9883773" width="1" height="1" alt=""><hr><p><a href="https://medium.com/berkman-klein-center/co-designing-shared-futures-3d15f9883773">Co-Designing Shared Futures</a> was originally published in <a href="https://medium.com/berkman-klein-center">Berkman Klein Center Collection</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>