tag:blogger.com,1999:blog-195867192024-03-07T10:19:50.163-08:00The Visual LinguistThe blog of Neil Cohn, PhD, exploring the structure and cognition of drawing, visual communication, and the visual language of comics.Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.comBlogger560125tag:blogger.com,1999:blog-19586719.post-34909099110755136702021-12-31T12:00:00.003-08:002021-12-31T12:00:49.817-08:00Changing blog address!<p>My blog has moved! I'm no longer using this blog address, and have moved the whole blog to its new address at <a href="http://www.visuallanguagelab.com/blog">http://www.visuallanguagelab.com/blog</a></p><p>I'll soon be issuing redirects, but please update your links and feeds!</p>Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-38544291060691342452021-10-29T07:55:00.003-07:002021-10-29T07:57:28.313-07:00Creating new face emoji<p>It's been a while since I've done a blog post, but here's some fun news... I helped create several of the new emoji that have recently been added to phones, and it seems people got excited about it!
</p><p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtm7d4YNOjsi0jSif9sy7cMLThYEbHxVIks-IEn1IkPQSjXiQldqkBrUOaYJ7UW5-E1lc47ubWOIayktHK8IAYm_Wdr1gQQDT5AMm0YvVuOIB2EE4XRPY8DZvrTel4MsKaWE4N/s554/83d51851-fbff-4901-99b6-c0a216d8c938.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="318" data-original-width="554" height="184" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgtm7d4YNOjsi0jSif9sy7cMLThYEbHxVIks-IEn1IkPQSjXiQldqkBrUOaYJ7UW5-E1lc47ubWOIayktHK8IAYm_Wdr1gQQDT5AMm0YvVuOIB2EE4XRPY8DZvrTel4MsKaWE4N/s320/83d51851-fbff-4901-99b6-c0a216d8c938.jpg" width="320" /></a></div><br />I was contacted several years ago by <a href="https://www.httpcolonforwardslashforwardslashwwwdotjenniferdanieldot.biz" target="_blank">Jennifer Daniel</a> at Google, who works on their emoji and is the Unicode Subcommittee Chair for emoji. Together we proposed several new face emoji, many of which have now been approved to be added to the emoji vocabulary set. Ours include the breath face 😮‍💨, the melting face (also designed by Erik Carter), holding back tears, and dotted line emoji. Our approved emoji seem to have gotten some people excited, because the melting face emoji was then <a href="https://www.nytimes.com/2021/09/29/style/melting-face-emoji-unicode.html" target="_blank">written up by the <i>New York Times</i></a>, which prompted Stephen Colbert to include a segment about it in his opening monologue:<p><br />
<iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/C9brKqeopUA" title="YouTube video player" width="560"></iframe>
</p><p></p><br />Then, people over here in the Netherlands found out about it, and my contributions to the emoji were <a href="https://www.bd.nl/tilburg-e-o/smeltend-gezicht-en-wolkje-voor-de-mond-nieuwe-emojis-voor-alle-smartphones-hebben-een-tilburgs-tintje~abad7271/" target="_blank">written up in the Brabants Dagblad newspaper</a>, and the story then (very surprisingly!) appeared on the front page of most of the newspapers in the country. The newspaper article was also accompanied by a video interview on the <a href="https://www.ad.nl/video/genre/news/productie/neil-onderzoekt-en-verzint-nieuwe-emoji-s-259259" target="_blank">AD news website</a>. <div><br /></div><div>This led to a flurry of additional interviews with media around the Netherlands. I was on a segment of <a href="https://www.omroepbrabant.nl/tv/programma/3332139/Brabant-Vandaag/aflevering/3920575/Brabant-Vandaag" target="_blank">Omroep Brabant</a> (4 minutes in), and this nice segment on the Khalid and Sophie show on the channel NPO1:<p></p></div><div><br /></div>
<iframe allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/Jyyq4o0DDpg" title="YouTube video player" width="560"></iframe> <div><br /></div><div> Plus, I was interviewed in a story on whether <a href="https://www.rtlnieuws.nl/editienl/artikel/5263349/nieuwe-emoji-behoefte-aan-smartphones-whatsapp" target="_blank">we have "enough" emoji </a>for RTL news, which also had a video segment on EditieNL (I'll post it if it becomes available).</div><div><br /></div><div>It's been a wild and fun few weeks of emoji media, and I hope people get as much use out of the emoji as we hope. The experience is perhaps summed up nicely by one of them: 😮‍💨</div>Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-57565723507657755682021-03-23T13:47:00.001-07:002021-03-23T15:29:50.156-07:00From "learning to draw" to "acquiring a visual vocabulary"<p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"></p><div class="separator" style="clear: both; text-align: right;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhEzcgKxFSZmLaGilR6DmjPNZYNOyot_gDoeLn79luUoVBdu7enpxGyj1aNdHf0EQ4zfYq0ZiiUOuZrbezmKADZ0djz8bNJmvqisjWTV5A4HEUUUEoyCVbDnaTO7BpJg2I2tnH/s2048/lw355.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1704" data-original-width="2048" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhhEzcgKxFSZmLaGilR6DmjPNZYNOyot_gDoeLn79luUoVBdu7enpxGyj1aNdHf0EQ4zfYq0ZiiUOuZrbezmKADZ0djz8bNJmvqisjWTV5A4HEUUUEoyCVbDnaTO7BpJg2I2tnH/w320-h266/lw355.jpg" width="320" /></a></div><span style="font-family: Helvetica;">Many people feel they "can't draw", which seems odd given assumptions about drawing as a direct pathway to visual concepts.
Most of us can see, so why can't we draw? This was originally <a href="https://twitter.com/visual_linguist/status/1373665959790190611?s=20">a thread on Twitter</a>, but here I've turned it into a blog post about why everything you know about learning to draw is wrong.</span><p></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;"> </span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;">First off, here are some of the predominant beliefs about learning to draw:</span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;">1. Drawing is about what you see, either by eye or in your "imagination"</span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;">2. People have talent or they don't</span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;">3. Having your "own" style is good</span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;">4. 
Thus copying is bad</span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;"> </span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;">Do these sound familiar?</span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;"> </span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;"></span></p><div class="separator" style="clear: both; text-align: left;"><span style="font-family: Helvetica;">These beliefs are a relatively recent invention, and date back to the philosophy of Jean-Jacques Rousseau, who proposed that culture might taint our more "natural" instincts. This is where the "copying is bad" part comes in.</span></div><span style="font-family: Helvetica;"><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgehzM5jwCLW_3UdZZTTM8sMU5j0uErZEVA_Cy5MNl7Ajuga8P8qbcYW35PFtZuFf6iXM3s2RtvG5Df_Y2OAF3bPo2yPJjTYaLCo7dJ2BFPVPNZ55jKOq8gpIQR0_cfn1GjPQ88/s225/Unknown-1.jpeg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="225" data-original-width="225" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgehzM5jwCLW_3UdZZTTM8sMU5j0uErZEVA_Cy5MNl7Ajuga8P8qbcYW35PFtZuFf6iXM3s2RtvG5Df_Y2OAF3bPo2yPJjTYaLCo7dJ2BFPVPNZ55jKOq8gpIQR0_cfn1GjPQ88/w200-h200/Unknown-1.jpeg" width="200" /></a></div></span><p></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;">Rousseau's ideas were invoked for drawing in the 1800s by the Austrian painter and educator Franz Cižek, who proposed that children's true "inner artistic
creativity" could only emerge if we prevented them from copying others, because imitation let in that "bad" cultural influence. </span><span style="font-family: Helvetica;">Cižek's framework was quickly taken up and spread through art education, which pointed to the skilled drawings produced by Cižek's students. It pushed people to develop their own "unique" styles, and treated copying as a barrier to individuality.</span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;"> </span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;">This anti-imitation mindset also reinforced a "never-ending avant garde", which became popular at the time, since styles would vary by individual. Art education has since viewed drawing proficiency in terms of these fairly unmeasurable traits.</span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;"> </span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;">Yet… there was really no *evidence* for this idea. In the 1970s, <a href="https://www.tandfonline.com/doi/pdf/10.1080/00043125.1977.11649876">art educators Brent and Marjorie Wilson</a> started studying children's drawings and found that nearly all of them copied! 
And the children who were most skilled copied more, and were more creative, than those who didn't copy.</span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;"> </span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;">Children's drawings from Japan were shown to be the most proficient of all the children they studied, without the "drop off" in the progression of drawing ability around puberty shown elsewhere, since all children there read and copy Japanese manga, which have a consistent visual vocabulary.</span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMxv7yUXtiTFo_yDHvf4HH6k7Ny43RcmDEpGFyowAjN7mwLFCCB2Xly-irvEst9Og3YE7cVeSc7hLMjbEZDNO6wpCoyxH6BmK2GSssopNcMSNeVSFbDO-O05R7QhrOidwySs64/s1066/Screen+Shot+2021-03-21+at+4.55.52+PM.png" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="606" data-original-width="1066" height="228" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjMxv7yUXtiTFo_yDHvf4HH6k7Ny43RcmDEpGFyowAjN7mwLFCCB2Xly-irvEst9Og3YE7cVeSc7hLMjbEZDNO6wpCoyxH6BmK2GSssopNcMSNeVSFbDO-O05R7QhrOidwySs64/w400-h228/Screen+Shot+2021-03-21+at+4.55.52+PM.png" width="400" /></a></div><p></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;">In fact, reanalysis of the creations of Cižek's own students showed that they too copied: from each other! 
So, though he pushed an ideology, it was mostly based on an internally developed "house style," not pure, uninfluenced talent.</span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;"> </span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;"></span></p><div class="separator" style="clear: both; text-align: center;"><span style="font-family: Helvetica;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFZercBbDjJDbjlWwRlH2X7v_b6Vz6ZZwx8Tj7xk7tTXciShukiPo9YwTKAtO35kYp5cJKXl5raipt5gu01YneEBxeqGX29Gk7zbB1BcyEBfDKo0veXAVwJ9tG6h-tFr6Rm2Jz/s1473/Figure_1.jpg" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1356" data-original-width="1473" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhFZercBbDjJDbjlWwRlH2X7v_b6Vz6ZZwx8Tj7xk7tTXciShukiPo9YwTKAtO35kYp5cJKXl5raipt5gu01YneEBxeqGX29Gk7zbB1BcyEBfDKo0veXAVwJ9tG6h-tFr6Rm2Jz/s320/Figure_1.jpg" width="320" /></a></span></div><span style="font-family: Helvetica;"><br />This supports an alternate framing of how drawing works, which I've outlined throughout my <a href="http://visuallanguagelab.com/papers.html">papers</a>. I argue that drawing is structured, and learned, the same way as language. We develop a visual vocabulary that we pull from when we draw, rather than just drawing what we see by eye or in the mind. </span><span style="font-family: Helvetica;">If drawing works like language, then it should be learned the same way: by acquiring the visual vocabulary in your environment. So, the whole idea of "learning to draw" is framed wrong. 
It's not "learning to draw"; it's actually "acquiring a visual vocabulary."</span><p></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;"> </span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;">How do we learn a vocabulary? By copying! Imitation is the engine of language learning, whether it's speaking or drawing. Yet, because we now have a cultural conception of drawing that says *not to copy*, the result is: "I can't draw."</span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;"> </span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;">So, effectively, this notion that "copying others' drawings is bad because it limits creativity" actually suppresses people's ability to learn to draw in the first place! 
This is why people "can't draw": because our cultural notions of drawing suppress their acquisition of a visual vocabulary.</span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;"> </span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;">Some closing references: I talk about all this in my pair of papers:</span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;"> </span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;"><a href="https://www.karger.com/Article/Fulltext/341842">Explaining "I can't draw"</a></span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;"><a href="https://bit.ly/3qwSdKj">Framing "I can't draw"</a></span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;"> </span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;">Background on this comes from the excellent work of Marjorie and Brent Wilson, who have a nice practical book on <i><a href="https://www.amazon.com/Teaching-Children-Draw-Marjorie-Wilson/dp/1615280057">Teaching Children to Draw</a>.</i></span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;"> </span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;">Lots more background comes from the academic work of John Willats in <a 
href="https://www.routledge.com/Making-Sense-of-Childrens-Drawings/Willats/p/book/9780805845389"><i>Making Sense of Children's Drawings</i></a>, which reviews the history of Cižek's art school and its detrimental effects on art education.</span><br clear="all" /></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;"><br /></span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;"></span></p><p class="MsoNormal" style="font-family: Calibri, sans-serif; margin: 0in 0in 0.0001pt;"><span style="font-family: Helvetica;">Final thought: All this is to say that learning to draw is NOT about who does or does not have "talent". Everyone starts out with the same potential for drawing, but it requires nurturing through the acquisition of a visual vocabulary.</span></p>Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-44218387729460423862019-12-21T08:20:00.000-08:002019-12-21T08:20:00.427-08:002019: My publications in reviewIt's now become an annual tradition for me to summarize my publications from the past year (<a href="http://www.thevisuallinguist.com/2016/12/2016-my-publications-in-review.html">2016</a>, <a href="http://www.thevisuallinguist.com/2017/12/2017-my-publications-in-review.html">2017</a>, <a href="http://www.thevisuallinguist.com/2018/12/2018-my-publications-in-review.html">2018</a>). Well, 2019 has been an exciting year of papers for me, mostly because almost all of them are review papers, many of which I'd been working on for years! So, here's what came out in 2019...<br />
<br />
<b>Your brain on comics</b> (<a href="http://www.thevisuallinguist.com/2019/04/new-paper-your-brain-on-comics.html">blog</a>, <a href="https://onlinelibrary.wiley.com/doi/10.1111/tops.12421">open access paper</a>) - This paper presents a model of the mechanisms the brain uses to process a sequence of narrative images, informed by my studies on (neuro)cognition over the past 10 years. It argues that there are two levels of representation involved in comprehension (semantics and narrative structure) and thus proposes the Parallel Interfacing Narrative-Semantics (PINS) Model. These neurocognitive mechanisms are then compared with those used in other domains, such as language processing.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjO1Ys07trCwOLHT1ReSJqR9KUTLSss3fiKe2KtSYjsvkN-n7m-ODFNcvlaJG_7eDcFbmPz4eg5om7ggnY57kMoYzihV2qhCS5o3mFjk-b5fWBNjLNagoxbJCOUfucRq2NO-6rG/s1600/VNS3_Inference_techniques2.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1188" data-original-width="1600" height="237" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjO1Ys07trCwOLHT1ReSJqR9KUTLSss3fiKe2KtSYjsvkN-n7m-ODFNcvlaJG_7eDcFbmPz4eg5om7ggnY57kMoYzihV2qhCS5o3mFjk-b5fWBNjLNagoxbJCOUfucRq2NO-6rG/s320/VNS3_Inference_techniques2.jpg" width="320" /></a></div>
<b>Being explicit about the implicit</b> (<a href="http://www.thevisuallinguist.com/2019/05/new-paper-being-explicit-about-implicit.html">blog</a>, <a href="https://www.cambridge.org/core/journals/language-and-cognition/article/being-explicit-about-the-implicit-inference-generating-techniques-in-visual-narrative/AEBDBD7A09A3892D860463AB57588112/core-reader">open access paper</a>) - Inference often comes up in discussions of how comics communicate, but scholarship about it often remains very general. This paper categorizes specific patterns used in visual narratives to evoke inferences in a reader. Because such techniques are used (as in the image to the right), inferences don't just happen by chance, but are directed in specific ways by an author's choices and narrative patterns.<br />
<br />
<b>Visual narratives and the mind</b> (<a href="http://www.thevisuallinguist.com/2019/06/new-paper-visual-narratives-and-mind.html">blog</a>, <a href="http://www.visuallanguagelab.com/P/2019.PLM.NC.pdf">pdf preprint</a>) - This review paper explores the stages of processing involved in comprehending a sequence of images. It then considers the degree to which these mechanisms might overlap with those from other domains, such as language, and traces the stages of development that kids go through in learning to comprehend visual narratives. It's a bit less technical than the "Your brain on comics" article, making it good for a wider audience.<br />
<br />
<b>The neurophysiology of event processing in language and visual events</b> (<a href="http://www.thevisuallinguist.com/2019/06/new-paper-neurophysiology-of-event.html" target="_blank">blog</a>, <a href="http://www.visuallanguagelab.com/P/2019.OHOE.NCMP.pdf" target="_blank">pdf paper</a>) - This book chapter explores what neurocognitive research tells us about how we comprehend events. Specifically, it notes the similarities between the neurocognitive mechanisms used to comprehend language, perceived visual events, and events drawn in visual narratives like comics.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5aQ2yyyeFiwdC9VOnMVLRRlUHegUE7mksqBSLyggjczPkcxXEhqYwqO8QQQrZd88V1CNKFX8kQStO1LQQ6ocB95cXCn8BuGqEzW5D44ztUYzweYDCGCcibO60S7L4B5sxbz3-/s1600/NCX_Conj%2526Ref_trees.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="918" data-original-width="801" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi5aQ2yyyeFiwdC9VOnMVLRRlUHegUE7mksqBSLyggjczPkcxXEhqYwqO8QQQrZd88V1CNKFX8kQStO1LQQ6ocB95cXCn8BuGqEzW5D44ztUYzweYDCGCcibO60S7L4B5sxbz3-/s320/NCX_Conj%2526Ref_trees.jpg" width="279" /></a></div>
<b>Structural complexity in visual narratives</b> (<a href="http://www.thevisuallinguist.com/2019/06/new-paper-structural-complexity-in.html">blog</a>, <a href="http://www.visuallanguagelab.com/P/2019.NCM.NC.pdf" target="_blank">pdf preprint</a>) - This chapter in the book <i><a href="https://www.amazon.com/gp/product/080329686X/ref=as_li_qf_asin_il_tl?ie=UTF8&tag=vislanlab-20&creative=9325&linkCode=as2&creativeASIN=080329686X&linkId=95e086e7ba82583368556c0c4c2eb1f8" target="_blank">Narrative Complexity</a> </i>explores questions of complexity regarding the structure of narrative patterns. I examine how various narrative schemas combine to create complex patterns (image to the right), and then do a cross-cultural analysis showing that Western and Asian comics differ in how much they use those patterns. I then close with a review of the neurocognition of visual narratives.<br />
<br />
<b>Visual narrative comprehension: Universal or not?</b> (<a href="http://www.thevisuallinguist.com/2019/12/new-paper-visual-narrative.html" target="_blank">blog</a>, <a href="https://link.springer.com/article/10.3758/s13423-019-01670-1" target="_blank">open access paper</a>) - This review paper asks to what degree visual narrative sequences are universally transparent to comprehend. I review cross-cultural work on people who have difficulty comprehending sequences of images, developmental work on when children start comprehending image sequences, and clinical work examining the limits of comprehension in autism, developmental language disorder, and aphasia. These results all show that understanding sequential images requires a fluency acquired from exposure to comics and drawn visual narratives.<br />
<br />
<br />
Besides these papers, I was ecstatic to learn I'd received an <a href="http://www.thevisuallinguist.com/2019/09/erc-starting-grant-for-visual-language.html">ERC Starting Grant</a> along with some other funding for projects related to visual narratives and autism (with Emily Coderre and co.) and developmental language disorder (with Annika Anderson and co.). So, here's looking forward to an exciting 2020!Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-50192725001914033432019-12-18T15:31:00.001-08:002019-12-18T15:31:56.431-08:00New paper: Visual narrative comprehension: Universal or not?My latest paper has now been published in <a href="https://link.springer.com/article/10.3758/s13423-019-01670-1" target="_blank"><i>Psychonomic Bulletin and Review</i>, entitled "Visual Narrative Comprehension: Universal or not?"</a> This paper explores to what degree sequences of images are universally transparent, and questions the basic assumption that everyone can easily understand a sequence of images with no learning or decoding.<br />
<br />
This has been a pervasive assumption amongst many scholars, particularly ones who object to my notion of a visual language, and yet I continually found evidence against this view. Several researchers told me of experiences where participants in their cross-cultural research could not understand a sequence of images. Then, in developmental research with kids, it became apparent that they don't understand image sequences until around ages 4 to 6.<br />
<br />
This research was troubling because many researchers were using visual narratives in their experiments as stimuli, without questioning how they worked or whether they were understood. This was especially true in research with young kids, where visual narratives were used as stimuli to study the developmental trajectory of different abilities, yet the kids were often too young to understand the stimuli themselves!<br />
<br />
Another place where visual narratives are used as stimuli is clinical research. Visual narratives are frequent stimuli in studies of neurodivergent populations, like individuals with autism or developmental language disorder. They are also used in studies of people who have brain damage that affects language, as in aphasia.<br />
<br />
So, I decided to research all of these topics, and found that each of these contexts has results where people <b><i>do not</i></b> comprehend a sequence of images in a "universal" or transparent way. This paper is the result of over five years of research on this topic, and it actually left out quite a lot! (It will thus be the topic of my next book, out next year.)<br />
<br />
You can find my open access paper <a href="https://link.springer.com/article/10.3758/s13423-019-01670-1" target="_blank">online here.</a><br />
<br />
Abstract: <br />
<br />
<i>Visual narratives of sequential images â as found in comics, picture stories, and storyboards â are often thought to provide a fairly universal and transparent message that requires minimal learning to decode. This perceived transparency has led to frequent use of sequential images as experimental stimuli in the cognitive and psychological sciences to explore a wide range of topics. In addition, it underlines efforts to use visual narratives in science and health communication and as educational materials in both classroom settings and across developmental, clinical, and non-literate populations. Yet, combined with recent studies from the linguistic and cognitive sciences, decades of research suggest that visual narratives involve greater complexity and decoding than widely assumed. This review synthesizes observations from cross-cultural and developmental research on the comprehension and creation of visual narrative sequences, as well as findings from clinical psychology (e.g., autism, developmental language disorder, aphasia). Altogether, this work suggests that understanding the visual languages found in comics and visual narratives requires a fluency that is contingent on exposure and practice with a graphic system.</i><br />
<br />
Full reference (in Early View):<br />
<br />
<b><i>Cohn, Neil. 2019. "Visual narrative comprehension: universal or not?" Psychonomic Bulletin & Review. 1-20. doi: 10.3758/s13423-019-01670-1.</i></b>Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-5943023433755546862019-10-27T07:58:00.000-07:002019-10-27T07:58:13.829-07:00Interview with A. David LewisI had the pleasure of being interviewed on a streaming video with the comics scholar A. David Lewis recently, and he's now posted the video online! His primary line of questioning is whether my neurocognitive research could be considered a complementary side of Graphic Medicine (the field that uses graphics and comics to communicate and explore health-related concerns). Here's our discussion...<br />
<br />
<iframe allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/i3DsMuGUqnU" width="560"></iframe>Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-3580975264874787102019-09-03T05:58:00.000-07:002019-09-03T05:58:05.078-07:00ERC Starting Grant for Visual Language researchI'm very happy to officially announce that I have received an <a href="https://erc.europa.eu/news/StG-recipients-2019" target="_blank">ERC Starting Grant</a>! This is my first major individual research grant (after many many tries), and I'm very excited to have the chance to work on a project I've been planning for over 10 years.<br />
<br />
My project "Visual narra<u>ti</u>ves as a wi<u>n</u>dow into language and cogni<u>ti</u>o<u>n</u>" (nicknamed "TINTIN") is going to build tools for analyzing visual and multimodal information, and then incorporate that information into a corpus of data. All of these tools and data will be made publicly accessible for other researchers to explore, though we'll be using them to study whether there are cross-cultural patterns in the visual languages used in comics of the world, and whether those patterns connect to the spoken languages of their authors. In the coming months I'll be hiring a team of students and researchers to put this project into motion.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://visuallanguagelab.com/images/cc.png" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://visuallanguagelab.com/images/cc.png" data-original-height="254" data-original-width="338" height="240" width="320" /></a></div>
This project is a follow-up and expansion of my previous corpus work in the <a href="http://visuallanguagelab.com/vlrc/" target="_blank">Visual Language Research Corpus</a>, which capped out at around 300 comics (plus 4,000 Calvin and Hobbes strips). We're finishing writing up this data, which has already appeared in papers about <a href="http://www.thevisuallinguist.com/2018/11/new-paper-cultural-pages-of-comics.html" target="_blank">cross-cultural page layouts</a>, and American <a href="http://www.thevisuallinguist.com/2016/11/new-paper-changing-pages-of-comics.html" target="_blank">page layouts</a> and <a href="http://www.thevisuallinguist.com/2017/06/new-paper-picture-is-worth-more-words.html" target="_blank">storytelling</a> over time. However, since the TINTIN project will be launching a new, more sophisticated coding scheme and methods, I plan on making the data of the VLRC publicly available soon as well.<br />
<br />
Here's my official description of the TINTIN project:<br />
<br />
"Drawn sequences of images are a fundamental aspect of human communication, appearing from instruction manuals and educational material to comics. Despite this, only recently have scholars begun to examine these visual narratives, making this an untapped resource to study the cognition of sequential meaning-making. The emerging field analysing this work has implicated similarities between sequential images and language, which raises the question: Just how similar is the structure and processing of visual narratives and language? I propose to explore this query by drawing on interdisciplinary methods from the psychological and linguistic sciences. First, in order to examine the structural properties of visual narratives, we need a large-scale corpus of the type that has benefited language research. Yet, no such databases exist for visual narrative systems. I will thus create innovative visual annotation tools to build a corpus of 1,500 annotated comics from around the world (Stage 1). With such a corpus, I will then ask, do visual narratives differ in their properties around the world, and does such variance influence their comprehension (Stage 2)? Next, we might ask why such variation appears, particularly: might differences between visual narratives be motivated by patterns in spoken languages, thereby implicating cognitive processes across modalities (Stage 3)? Thus, this proposal aims to investigate the domain-specific (Stage 2) and domain-general (Stage 3) properties of visual narratives, particularly in relation to language, by analysing both production (corpus analyses) and comprehension (experimentation). This research will be ground-breaking by challenging our knowledge about the relations between drawing, sequential images, and language. 
The goal is not simply to create tools to explore a limited set of questions, but to provide resources to jumpstart a budding research field for visual and multimodal communication in the linguistic and cognitive sciences."<br />
<br />
Be ready to hear a lot more about this project over the next 5+ years!Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-88105351127195688432019-06-01T08:05:00.000-07:002019-06-01T08:05:49.309-07:00New paper: Structural complexity in visual narratives<div class="separator" style="clear: both; text-align: center;">
<a href="https://images-na.ssl-images-amazon.com/images/I/51OCEq17yKL._SX331_BO1,204,203,200_.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="499" data-original-width="333" height="320" src="https://images-na.ssl-images-amazon.com/images/I/51OCEq17yKL._SX331_BO1,204,203,200_.jpg" width="213" /></a></div>
2019 so far has been a flurry of published papers for me, and here's yet another. My paper "Structural complexity in visual narratives: Theory, brains, and cross-cultural diversity" is now published in the book collection <i><a href="https://www.amazon.com/gp/product/080329686X/ref=as_li_qf_asin_il_tl?ie=UTF8&tag=vislanlab-20&creative=9325&linkCode=as2&creativeASIN=080329686X&linkId=95e086e7ba82583368556c0c4c2eb1f8" target="_blank">Narrative Complexity and Media: Experiential and Cognitive Interfaces</a></i>. The book is an extensive resource (468 pages!) including many chapters about the cognitive study of narrative. Mine is one of several that discuss visual narratives, along with complementary chapters by Joe Magliano and James Cutting. So, the book is highly recommended!<br />
<br />
In this paper, I tackle the issue of "narrative complexity" in three ways. First, I describe the way in which sequences of images are built in terms of their underlying structure. This complexity comes from the narrative structure, where various schematic principles combine to create patterns whose architectural "complexity" is similar to that found in the syntactic structure of sentences.<br />
<br />
The second level of complexity comes in how these narrative patterns manifest in different types of comics from around the world. We coded the properties of various comics to see how comics from Europe, the United States, and Asia might differ in their narrative patterns. We found that they indeed vary, with comics from Asia (Japan, Korea, Hong Kong) using more complex sequencing patterns than those from Europe or the United States. This is important because such diversity is systematic, implying that they are <a href="http://www.thevisuallinguist.com/2017/05/new-paper-whats-your-neural-function.html" target="_blank">encoded in the minds of their authors and readers</a>.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyOs1ZRe_7-ygP4eaYnXbIyOiQaBOBtPFzHThD7jS7gi52az9BOBqoHZZcPFNtv5QoWXLodtolr53c4dIUymqtBgQFm5ivlTv-I_M-n0Um9ZrlWoF43I99fXGN7q1QalwiRjXl/s1600/NCX_Conj%2526Ref_trees.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="918" data-original-width="801" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjyOs1ZRe_7-ygP4eaYnXbIyOiQaBOBtPFzHThD7jS7gi52az9BOBqoHZZcPFNtv5QoWXLodtolr53c4dIUymqtBgQFm5ivlTv-I_M-n0Um9ZrlWoF43I99fXGN7q1QalwiRjXl/s400/NCX_Conj%2526Ref_trees.jpg" width="346" /></a></div>
The third level of complexity comes in how visual narratives like comics are processed. Many theories posit that we understand comics by simply linking meanings between panels. This implies a fairly uniform process guided only by updating meaning from image to image. However, <a href="http://www.thevisuallinguist.com/2019/04/new-paper-your-brain-on-comics.html" target="_blank">neurocognitive research implies that the brain actually uses several interacting mechanisms in the processing of narrative image sequences</a>, balancing both meaning and a narrative structure of the type described in the previous sections.<br />
<br />
Altogether, this paper outlines a balance between theoretical, cross-cultural, and neurocognitive research that identifies complexity at multiple levels.<br />
<br />
The paper is available in <i><a href="https://www.amazon.com/gp/product/080329686X/ref=as_li_qf_asin_il_tl?ie=UTF8&tag=vislanlab-20&creative=9325&linkCode=as2&creativeASIN=080329686X&linkId=95e086e7ba82583368556c0c4c2eb1f8" target="_blank">the book itself</a>, </i>but a <a href="http://www.visuallanguagelab.com/P/2019.NCM.NC.pdf" target="_blank">downloadable preprint version is available here</a> or on my <a href="http://www.visuallanguagelab.com/papers.html" target="_blank">downloadable papers page</a>.<br />
<br />
<br />
<b>Cohn, Neil. 2019. Structural complexity in visual narratives: Theory, brains, and cross-cultural diversity. In Grishakova, Marina and Maria Poulaki (Ed.). <i><a href="https://www.amazon.com/gp/product/080329686X/ref=as_li_qf_asin_il_tl?ie=UTF8&tag=vislanlab-20&creative=9325&linkCode=as2&creativeASIN=080329686X&linkId=95e086e7ba82583368556c0c4c2eb1f8" target="_blank">Narrative Complexity and Media: Experiential and Cognitive Interfaces</a></i> (pp. 174-199). Lincoln: University of Nebraska Press</b>Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-36411624187256868712019-06-01T03:16:00.002-07:002019-06-01T03:16:38.080-07:00New paper: The neurophysiology of event processing in language and visual eventsIn yet another one of my recent publications, here is a book chapter that's been awaiting publication for <i>many </i>years. My paper with my dear departed friend, <a href="http://www.thevisuallinguist.com/2018/01/my-friend-martin-paczynski.html" target="_blank">Martin Paczynski</a>, "The neurophysiology of event processing in language and visual events" is now finally published in the <i>Oxford Handbook of Event Structure</i>.<br />
<br />
Our chapter gives an overview of research on the understanding of events from the perspective of cognitive neuroscience, particularly research using EEG. We actually wanted the original paper to be titled "Events electrified" but the book collection wanted less punchy titles. Our focus is on the N400 and P600 ERP effects, as they manifest in both language about events and in the perception of visual events themselves.<br />
<br />
The paper can be downloaded <a href="http://www.visuallanguagelab.com/P/2019.OHOE.NCMP.pdf" target="_blank">here </a>or at my <a href="http://www.visuallanguagelab.com/papers.html" target="_blank">downloadable papers page</a>.<br />
<br />
First paragraph:<br />
<br />
"Events are a fundamental part of human experience. All actions that we undertake, discuss, and view are embedded within the understanding of events and their structure. With the increasing complexity of neuroimaging over the past several decades, we have been able for the first time to examine how this tacit knowledge is processed and stored in people's minds and brains. Among the techniques used to study the brain, electroencephalography (EEG) offers one of the few ways in which we can directly study information processed by the brain. Unlike functional imaging, whether PET or fMRI, which rely on metabolic consequences of neural activity, the EEG signal is generated by post-synaptic potentials in pyramidal cells which make up approximately 80% of neurons within the cerebral cortex. As such, EEG offers a temporal resolution measured in milliseconds, rather than seconds, making it well suited for exploring the rapid nature of language processing. Though there are numerous ways in which the EEG signal can be analyzed, in the current chapter we will focus our attention on the most common measure: event-related potentials (ERPs), the portion of the EEG signal time-locked to an event of interest, such as a word, image, or the start of a video clip."<br />
<br />
<b>Cohn, Neil and Martin Paczynski. 2019. The neurophysiology of event processing in language and visual events. In Truswell, Robert (Ed.). <i>Handbook of event structure</i>. (pp. 624-637). Oxford: Oxford University Press.</b>Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-1500495085739216482019-06-01T02:13:00.004-07:002019-06-01T02:16:06.990-07:00New paper: Visual narratives and the Mind<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQ_333aEY4XpcQvwCAC72GelUcWLd9uV1AAvMX6gt5AzXsaehKZQjrgvDI_hNWOP-yXpn2QNM3XVPcPlCm1DDXerXuzo22otJnBsqbV2a70MBLE1k4qqq4XaVfegwyqxkBiSPX/s1600/Pang_VNS.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="933" data-original-width="1600" height="232" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgQ_333aEY4XpcQvwCAC72GelUcWLd9uV1AAvMX6gt5AzXsaehKZQjrgvDI_hNWOP-yXpn2QNM3XVPcPlCm1DDXerXuzo22otJnBsqbV2a70MBLE1k4qqq4XaVfegwyqxkBiSPX/s400/Pang_VNS.jpg" width="400" /></a></div>
My <a href="http://www.visuallanguagelab.com/P/2019.PLM.NC.pdf" target="_blank">latest paper</a>, "Visual narratives and the mind: Comprehension, cognition, and learning" is published in the collection <i><a href="https://www.sciencedirect.com/science/article/pii/S0079742119300027" target="_blank">Psychology of Learning and Motivation</a></i>. This paper integrates a few threads of research that I've been working on lately.<br />
<br />
The first section presents the cognitive processes that go into understanding a sequence of images, integrating two of the most recent psychological models on the issue. These include my own neurocognitive model of sequential image understanding that integrates both semantic and narrative structures, and an approach from some of my colleagues emphasizing aspects of scene perception and event cognition.<br />
<br />
The second section then asks: given these cognitive processes related to visual narrative understanding, how many of them are specialized for that task specifically? Are these general mechanisms that also apply to other aspects of cognition, like language? I argue for two levels of this: the more specialized processing mostly has to do with the modalities themselves, in that how you engage written text might be different from how you engage pictures. However, the "back end" processes, i.e., how you compute meaning and order it into sequences, are likely more connected across other domains.<br />
<br />
Finally, I then examine the relation between these cognitive processes and how children learn to understand a sequence of images. A wide literature points to children only starting to understand the sequential aspects of visual narratives between ages 4 and 6. So, I discuss the stages in children's development of understanding sequential images, and link this to the cognitive processes discussed in the first section.<br />
<br />
You can find a direct <a href="http://www.visuallanguagelab.com/P/2019.PLM.NC.pdf" target="_blank">preprint pdf version of the paper here</a>, as well as on my <a href="http://www.visuallanguagelab.com/papers.html" target="_blank">downloadable papers page</a>. Here's the abstract:<br />
<i><br />
The way we understand a narrative sequence of images may seem effortless, given the prevalence of comics and picture stories across contemporary society. Yet, visual narrative comprehension involves greater complexity than is often acknowledged, as suggested by an emerging field of psychological research. This work has contributed to a growing understanding of how visual narratives are processed, how such mechanisms overlap with those of other expressive modalities like language, and how such comprehension involves a developmental trajectory that requires exposure to visual narrative systems. Altogether, such work reinforces visual narratives as a basic human expressive capacity carrying great potential for exploring fundamental questions about the mind.</i><br />
<br />
<br />
<b>Cohn, Neil. 2019. Visual narratives and the mind: Comprehension, cognition, and learning. In Federmeier, Kara D. and Diane M. Beck (Eds). <i>Psychology of Learning and Motivation: Knowledge and Vision. Vol. 70</i>. (pp. 97-128). London: Academic Press</b>Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-52288783192409349722019-05-04T06:48:00.003-07:002019-05-04T06:48:36.260-07:00New paper: Being explicit about the implicitMy cascade of recent new papers continues with my latest paper, "<a href="https://www.cambridge.org/core/journals/language-and-cognition/article/being-explicit-about-the-implicit-inference-generating-techniques-in-visual-narrative/AEBDBD7A09A3892D860463AB57588112/core-reader" target="_blank">Being explicit about the implicit: inference generating techniques in visual narrative</a>", which has recently been published open access in <i>Language and Cognition</i>. This is a paper that was gestating for quite awhile, and it's fun to finally see it published.<br />
<br />
This paper is about how inference is generated in visual narratives like comics, i.e., how you get meaning when it is not provided overtly. This has been a primary focus of studies of how comics communicate at least since McCloud's notion of "closure" in <i>Understanding Comics</i>, and many other scholars have posited how we "fill the gaps" for knowing what we don't see.<br />
<br />
However, much of this work has posited vague principles (closure, arthrology, etc.) for saying <i>that</i> people generate inference, but without discussing the specific cues and techniques that are used to motivate that inference in the first place. As I hope I demonstrate in this paper, inference is not a happenstance thing, and it also doesn't occur "in the gaps between panels," as most in comics studies seem to argue.<br />
<br />
Rather, specific techniques motivate readers to create inference. These techniques are patterned ways of showing, or not showing, information that in turn signals to readers that they need to make an inference. The figure below provides a handy-dandy summary of <i>some</i> of these techniques mentioned in the paper (though it isn't a figure in the paper). A high-res version for printing is available <a href="http://visuallanguagelab.com/images/NC_Inference_techniques.tif" target="_blank">here</a> if you want to use it for personal use.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRXWiM7TzisdzjUXQYkmsVO2z3t_JMwgCWLePqcGhOUV56KDsJ325Zl6jX3-OX1I_sE8IkQ6XlfAr-cM1ClW0O_FVeVM89NIftdUjKLCQk6SyLA3zhTM000J_Z1QjDz-nrBV3t/s1600/VNS3_Inference_techniques2.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1188" data-original-width="1600" height="474" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRXWiM7TzisdzjUXQYkmsVO2z3t_JMwgCWLePqcGhOUV56KDsJ325Zl6jX3-OX1I_sE8IkQ6XlfAr-cM1ClW0O_FVeVM89NIftdUjKLCQk6SyLA3zhTM000J_Z1QjDz-nrBV3t/s640/VNS3_Inference_techniques2.jpg" width="640" /></a></div>
<br />
<br />
The overarching argument thus is that it's not enough to posit broad generalities for how visual narratives like comics are comprehended, but rather research should explore the specific methods and techniques that motivate that comprehension.<br />
<br />
Not only does this paper list off these various techniques, but I also provide an analytical framework for characterizing their underlying features. This analysis actually goes back to about 5 years ago when my former students <a href="https://youtu.be/4hd9mp2QyA0" target="_blank">Kaitlin Pederson</a> and <a href="https://youtu.be/Rf7zg2OMscU" target="_blank">Ryan Taylor</a> met with me in my office at UCSD to brainstorm about inference, resulting in this scrawling whiteboard which laid the foundation for the table at the end of the article:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizQumFlGLx6RKBiua48zEwCSnYXkTTjsGbaVXEyBIJOwU873gqinZag93pO9JPI2C4RYWo8IQT5jn7AQ8JVzRtI4zpb_Ar2p1TlE-tnyANi_xUmnSqwfYEsf60XbU8cdPlShwr/s1600/2015-08-26+17.36.22.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="480" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEizQumFlGLx6RKBiua48zEwCSnYXkTTjsGbaVXEyBIJOwU873gqinZag93pO9JPI2C4RYWo8IQT5jn7AQ8JVzRtI4zpb_Ar2p1TlE-tnyANi_xUmnSqwfYEsf60XbU8cdPlShwr/s640/2015-08-26+17.36.22.jpg" width="640" /></a></div>
<br />
<br />
You can find <a href="https://www.cambridge.org/core/journals/language-and-cognition/article/being-explicit-about-the-implicit-inference-generating-techniques-in-visual-narrative/AEBDBD7A09A3892D860463AB57588112/core-reader" target="_blank">the full article online here</a>, or a <a href="http://www.visuallanguagelab.com/P/2019.LC.NC.pdf" target="_blank">pdf file here</a> and via <a href="http://www.visuallanguagelab.com/papers.html" target="_blank">my downloadable papers</a> page.<br />
<br />
Abstract<br />
<br />
<i>Inference has long been acknowledged as a key aspect of comprehending narratives of all kinds, be they verbal discourse or visual narratives like comics and films. While both theoretical and empirical evidence points towards such inference generation in sequential images, most of these approaches remain at a fairly broad level. Few approaches have detailed the specific cues and constructions used to signal such inferences in the first place. This paper thereby outlines several specific entrenched constructions that motivate a reader to generate inference. These techniques include connections motivated by the morphology of visual affixes like speech balloons and thought bubbles, the omission of certain narrative categories, and the substitution of narrative categories for certain classes of panels. These mechanisms all invoke specific combinatorial structures (morphology, narrative) that mismatch with the elicited semantics, and can be generalized by a set of shared descriptive features. By detailing specific constructions, this paper aims to push the study of inference in visual narratives to be explicit about when and why meaning is "filled in" by a reader, while drawing connections to inference generation in other modalities.</i><br />
<br />
<br />
<b>Cohn, Neil. 2019. Being explicit about the implicit: inference generating techniques in visual narrative. <i>Language and cognition</i>.</b>Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-41111906808175938652019-04-13T05:22:00.000-07:002019-04-13T05:25:51.792-07:00New paper: Your brain on comics<div class="separator" style="clear: both; text-align: center;">
<a href="https://wol-prod-cdn.literatumonline.com/cms/attachment/ca4103a1-85bd-46a9-beec-a562da2d8929/tops12421-fig-0005-m.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="627" data-original-width="800" height="312" src="https://wol-prod-cdn.literatumonline.com/cms/attachment/ca4103a1-85bd-46a9-beec-a562da2d8929/tops12421-fig-0005-m.jpg" width="400" /></a></div>
I'm very excited to announce the publication of my newest paper, "Your brain on comics: A cognitive model of visual narrative comprehension" in <i><a href="https://onlinelibrary.wiley.com/doi/10.1111/tops.12421" target="_blank">Topics in Cognitive Science</a></i>. This journal issue is actually a themed issue edited by me about visual narratives, and this paper is my personal contribution.<br />
<br />
This paper in many ways is a culmination of about 10 years of experimental research asking "how do we comprehend a sequence of images?" Much of this work comes from my studies measuring people's brainwaves while they read comics, but it integrates this work with research from fields of discourse, event cognition, and other related disciplines. Here, I tie this work together in a cognitive model to provide an explanation for what happens in the brain when you progress through a sequence of images. My emphasis on brain studies gives the overall endeavor a neurocognitive focus, although the model itself is not specific to the brain.<br />
<br />
The primary paper focuses on the evidence for two levels of representation in processing a sequence of images: a semantic structure, that computes the meaning, and a narrative structure, which organizes and presents that meaning in sequencing. In addition, I discuss how these mechanisms are connected to other aspects of cognition, like language and music processing, and I discuss the role of expertise and fluency in comprehending sequential images.<br />
<br />
Overall, this is the first full processing theory of visual narrative comprehension, making it a significant marker in the growth of this research field.<br />
<br />
The paper is <a href="https://onlinelibrary.wiley.com/doi/10.1111/tops.12421" target="_blank">readable online with Open Access</a>, though a downloadable pdf is available <a href="http://www.visuallanguagelab.com/P/2019.TopiCS.NC.pdf" target="_blank">here</a>, and via my <a href="http://visuallanguagelab.com/papers.html" target="_blank">downloadable papers page</a>. Here's the abstract:<br />
<br />
<br />
<i>The past decade has seen a rapid growth of cognitive and brain research focused on visual narratives like comics and picture stories. This paper will summarize and integrate this emerging literature into the Parallel Interfacing Narrative-Semantics Model (PINS Model), a theory of sequential image processing characterized by an interaction between two representational levels: semantics and narrative structure. Ongoing semantic processes build meaning into an evolving mental model of a visual discourse. Updating of spatial, referential, and event information then incur costs when they are discontinuous with the growing context. In parallel, a narrative structure organizes semantic information into coherent sequences by assigning images to categorical roles, which are then embedded within a hierarchic constituent structure. Narrative constructional schemas allow for specific predictions of structural sequencing, independent of semantics. Together, these interacting levels of representation engage in an iterative process of retrieval of semantic and narrative information, prediction of upcoming information based on those assessments, and subsequent updating based on discontinuity. These core mechanisms are argued to be domain-general, spanning across expressive systems, as suggested by similar electrophysiological brain responses (N400, P600, anterior negativities) generated in response to manipulation of sequential images, music, and language. Such similarities between visual narratives and other domains thus pose fundamental questions for the linguistic and cognitive sciences.</i><br />
<br />
<br />
<b><br />
Cohn, N. (2019). Your brain on comics: A cognitive model of visual narrative comprehension. <i>Topics in Cognitive Science</i>. doi:10.1111/tops.12421</b>Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-70882941806408424302019-04-05T07:56:00.000-07:002019-04-05T07:56:08.618-07:00Knowing the rules of comic page layouts<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfdJ6usLh4eXlQHTYao68yo56PXtUanhYXwllXx1ofERy_EgkJdOwIRTI0u83cYKpPJ64CyOZNjXGKZhAq9wN47CA04LAJwHNrjIUv4Nq9ap6wHkRcSBgiH9TN8ny1BYB24A9v/s1600/Blockage.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="277" data-original-width="360" height="152" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgfdJ6usLh4eXlQHTYao68yo56PXtUanhYXwllXx1ofERy_EgkJdOwIRTI0u83cYKpPJ64CyOZNjXGKZhAq9wN47CA04LAJwHNrjIUv4Nq9ap6wHkRcSBgiH9TN8ny1BYB24A9v/s200/Blockage.jpg" width="200" /></a></div>
One of my more engaged-with <a href="http://www.thevisuallinguist.com/2016/08/dispelling-myths-about-comics-page.html">blog post</a>s of recent memory reviewed the data for whether the panel arrangement on the right was "confusing." So, here's a post with some additional thoughts on this and the "rules" of comic page layouts**…<br />
<br />
First off, let me remind people that I've given this layout a name: When you have a vertical stack of panels next to a tall panel, I call it "blockage." You can find terms (and science!) related to page layout in <a href="http://visuallanguagelab.com/vloc.html" target="_blank">my book </a>and <a href="http://visuallanguagelab.com/papers.html" target="_blank">my scientific papers</a> (also linked throughout).<br />
<br />
Most of the claims I make about page layouts are based on the experiments that I and others have done about them. For this layout, the key<a href="http://www.thevisuallinguist.com/2015/03/new-paper-on-comic-page-layouts.html" target="_blank"> experimental findings</a> came from two studies presenting people with empty page layouts, and then asking them to choose the order that they would read the panels.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZMaVBX1TTMvGdbJs6XyNUYxHtQ5aNDol56wGtIYPu8UOJdJ477Vh6W4MJFZvUgRL4qZ4lUGJrtBXX1bpoOZp41AQQBGe6JorHP2sscFjHFJNfKHwnn7KGCtq4M3tw2enjSYgm/s1600/NCHC_blockagedata.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="519" data-original-width="660" height="313" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjZMaVBX1TTMvGdbJs6XyNUYxHtQ5aNDol56wGtIYPu8UOJdJ477Vh6W4MJFZvUgRL4qZ4lUGJrtBXX1bpoOZp41AQQBGe6JorHP2sscFjHFJNfKHwnn7KGCtq4M3tw2enjSYgm/s400/NCHC_blockagedata.jpg" width="400" /></a>We found that for blockage layouts, around 90% say "down". Or, conversely put, less than 10% of choices in these situations followed the "left-to-right-and-down" Z-path that follows the order of written text. As I said in my previous blog post, this rate is essentially the inverse of what we find for pure grids. In simple grids, we find 90% of responses choose to follow the Z-path (i.e., go right) instead of choosing other paths.<br />
<br />
<br />
Now, one criticism people have about these studies is that they don't have content in the layouts. Yes, these experiments presented empty panels, which might be different than if content is included. But, there's a good reason for this: the question we were asking wasn't "how do people read these layouts?" but rather "what are people's preferences for ordering these layouts?" Having no content works just fine for doing good science and factoring out confounding variables, and it answers our question of whether people have preferences for orders: yes they clearly do.<br />
<br />
So, these results show that readers have a preference for the proper reading direction. In other words, the "rule" in their minds is that they should read downward in blockage layouts. You might think that the "rule" of reading comic page layouts is "left-to-right and down", like text, and thus this layout is confusing. But that's not the rule. I'll explain this more in a bit…<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikiWDITm4y8xJ56WBqlb-YnGw1gfbpYJXX6CIiehf-XoTJMnwjoVWdxw7tL3B8trl8aL7wc0YaAKIpR8PiCmIoPYOrfMTLBlQwJmd8nzAxbBxVVUwP_3aGGxDDk_Wj22-JzXe9/s1600/Ram_V.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="1200" data-original-width="780" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEikiWDITm4y8xJ56WBqlb-YnGw1gfbpYJXX6CIiehf-XoTJMnwjoVWdxw7tL3B8trl8aL7wc0YaAKIpR8PiCmIoPYOrfMTLBlQwJmd8nzAxbBxVVUwP_3aGGxDDk_Wj22-JzXe9/s320/Ram_V.jpg" width="208" /></a>When I say that "this layout is not confusing", I mean that readers have these clear intuitions for what to do in these situations. The layout <i>itself</i> is not confusing, since people know what to do with it. What creates confusion, then, is when creators don't know or don't obey this "go downward" rule, and still use layouts where blockage is read to the right. This could feasibly create confusion, since it treats this layout as "neutral" or as if there isn't a rule for its order.<br />
<br />
However, there is a clear rule for it, and thinking it's neutral contradicts the experimental results on people's stated preferences. Grids aren't used as if right and down are equal choices (though they're even more physically ambiguous), and nor should this layout be.<br />
<br />
Certainly a creator can manipulate the reading path by using the content or balloons to go in a different direction. They do this all the time in effective and creative ways even with grids, like in the layout to the left. But doing it against the downward path in blockage layouts has to be recognized as "breaking the rule" with artistic intent.<br />
<br />
So, why isn't "left to right and down" the real rule of layout? Well, it's *one* rule in comic page layouts, but <a href="https://www.frontiersin.org/articles/10.3389/fpsyg.2013.00186/full" target="_blank">it's just a surface choice within a broader overarching set of rules/principles</a>.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrQkkoa92YZXQ1ADsX9jhyphenhyphenBJjTeM2PF7TnkIUgWgodxFvQlpw1-U6jjoCsnk03lEw9WLhwTVdIYJ7wPfO3hEtlhbSPEXt1j02RxnRwQrZ7bFBRCih6996SNnPWkzBPwUc0_KW2/s1600/fpsyg-04-00186-g012.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="439" data-original-width="685" height="256" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrQkkoa92YZXQ1ADsX9jhyphenhyphenBJjTeM2PF7TnkIUgWgodxFvQlpw1-U6jjoCsnk03lEw9WLhwTVdIYJ7wPfO3hEtlhbSPEXt1j02RxnRwQrZ7bFBRCih6996SNnPWkzBPwUc0_KW2/s400/fpsyg-04-00186-g012.jpg" width="400" /></a></div>
Readers don't just go from panel to panel along the "surface" of the canvas, making choices like right, down, etc. While it is likely that surface features like balloons and bubbles can "direct" the eye, layouts themselves have rules that do not depend on these surface features, as demonstrated by the consistent results using empty layouts.<br />
<br />
(Note: To my knowledge, there are no controlled experimental results showing that content directs readers' eyes through layouts. There is one <a href="http://www.thevisuallinguist.com/2007/11/eye-movements-reading-comic-pages.html" target="_blank">non-controlled study </a>that has some hints about this though.)<br />
<br />
Here are the actual rules of layout: readers go through layouts guided by a desire to create grouped structures out of panels. The surface decisions that they make are based on alignments between the edges of panels, but these choices are subservient to the larger goal of making hierarchical groupings.<br />
<br />
I argue that these grouping mechanisms are what underlie readers' choices when they move from panel to panel. This may involve some surface-level rules, but there is an overarching principle I call "Assemblage" that has four basic sub-principles:<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0NhZ8xze_UnfTfu6AlY00DcQA2SY7EGM2qKL_1bElaWpr-g2eAhxmbcDRMOzeXf2HzDoru7EJIDr0gbNXq4W5srIRgEOXs38Xb9VKhc5TYQ90zDh4l2KCbuPMxAZs6Mj8na6Y/s1600/fpsyg-04-00186-g006.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="403" data-original-width="496" height="260" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg0NhZ8xze_UnfTfu6AlY00DcQA2SY7EGM2qKL_1bElaWpr-g2eAhxmbcDRMOzeXf2HzDoru7EJIDr0gbNXq4W5srIRgEOXs38Xb9VKhc5TYQ90zDh4l2KCbuPMxAZs6Mj8na6Y/s320/fpsyg-04-00186-g006.jpg" width="320" /></a></div>
<br />
1. Grouped areas > non-grouped areas<br />
2. Smooth paths > broken paths<br />
3. Do not jump over units<br />
4. Do not leave gaps<br />
<br />
The reason so many people agree on going down in this layout is that it facilitates chunking the page into grouped structures, while a rightward path does not: it violates the Assemblage principles. This is why it's a "rule." <br />
<br />
So, if you're a comic creator, knowing what readers are trying to do while they read can help you design layouts, including how to break those rules with intent if you need to do so artistically.<br />
<br />
<br />
<br />
**This originally appeared as a <a href="https://twitter.com/visual_linguist/status/1113480731219714048" target="_blank">Twitter thread</a>, and has now been expanded for blog format.<br />
<br />Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-73358081250872337442018-12-20T05:06:00.000-08:002019-02-22T14:59:02.261-08:002018: My publications in reviewThe last few years I've closed out the year by summarizing all of my papers that came out (<a href="http://www.thevisuallinguist.com/2016/12/2016-my-publications-in-review.html" target="_blank">2016</a>, <a href="http://www.thevisuallinguist.com/2017/12/2017-my-publications-in-review.html" target="_blank">2017</a>), and so this year I'm doing the same. It's been a diverse year of papers, with some theoretical papers, a few brainwave papers carried out by colleagues, and a corpus study. So, here are the papers that I published in 2018...<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHdzL5T0EsohhCyWYrq4dKDHJ0LgxZw-EDp-NlJppJW7P9zpXrfMgZfaBxlZJQAIqBSyc5sEIejzqpYX2J3952feL6I3jjjSIrVBaxccS_GkMIGxOdliY4yd2ecbzxBVgg7Ozb/s1600/CCECS_Data_arrangements.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="714" data-original-width="1093" height="209" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjHdzL5T0EsohhCyWYrq4dKDHJ0LgxZw-EDp-NlJppJW7P9zpXrfMgZfaBxlZJQAIqBSyc5sEIejzqpYX2J3952feL6I3jjjSIrVBaxccS_GkMIGxOdliY4yd2ecbzxBVgg7Ozb/s320/CCECS_Data_arrangements.jpg" width="320" /></a></div>
<a href="http://www.thevisuallinguist.com/2018/11/new-paper-cultural-pages-of-comics.html" target="_blank"><b>The cultural pages of comics</b></a> (<a href="http://www.visuallanguagelab.com/P/2017.JGNC.NCJAMDRYKP.pdf" target="_blank">PDF</a>) - This paper coauthored with my student assistants followed up our analysis of page layouts in superhero comics by comparing page layouts in 60 comics, 10 each from US superhero comics, US Indy comics, Japanese shonen manga, Hong Kong manhua, French bande desinĂŠe, and Swedish comics. Overall, we found that cultures differ in their page layout features in patterned and systematic ways. For example, layouts in Asian comics use more vertical segments, while those from Europe and US Indy comics use more staggering of panels within horizontal rows. <br />
<br />
<a href="http://www.thevisuallinguist.com/2018/02/new-paper-in-defense-of-grammar-in.html" target="_blank"><b>In defense of a âgrammarâ in the visual language of comics</b></a> (<a href="http://www.visuallanguagelab.com/P/2018.JoP.NC.pdf" target="_blank">PDF</a>) - This theoretical paper reviewed my theory of narrative structure, and defended it against critiques that sequential image comprehension requires only meaningful connections between panels. I review and compare the theories, and lay out arguments for why a narrative structure is both necessary and supported by the experimental evidence. I also take the hard line that any proposal for how visual narrative sequences are understood must account for the cognitive results in experimentation.<br />
<br />
<a href="http://www.thevisuallinguist.com/2018/04/new-paper-combinatorial-morphology-in.html" target="_blank"><b>Combinatorial morphology in visual languages</b> </a>(<a href="http://visuallanguagelab.com/P/2018.CoW.NC.pdf" target="_blank">PDF</a>) - In this chapter from the recent book<i> <a href="https://link.springer.com/chapter/10.1007/978-3-319-74394-3_7" target="_blank">The Construction of Words: Advances in Construction Morphology</a></i>, I try to formalize the linguistic structure of the morphology ("symbology") of visual representations like hearts or lightbulbs above the head, motion lines, and impact stars. It discusses both how these forms use systematic strategies to combine elements, and the ways they derive meaning through symbolic and metaphorical techniques.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2wwHviqwlFNJvXSr2pRW8jvAUisilwTT4UAag0QsCKXm29ueH0pOI2MN70bYvD9S231Y8kWeyAurnxekkaMnjVGuC0Usv_rF14VoLzphxzr9bNaSwuCyCfpMeiiyeXVamHNEm/s1600/AVC1_Stimuli2.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="950" data-original-width="1600" height="190" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi2wwHviqwlFNJvXSr2pRW8jvAUisilwTT4UAag0QsCKXm29ueH0pOI2MN70bYvD9S231Y8kWeyAurnxekkaMnjVGuC0Usv_rF14VoLzphxzr9bNaSwuCyCfpMeiiyeXVamHNEm/s320/AVC1_Stimuli2.jpg" width="320" /></a></div>
<a href="http://www.thevisuallinguist.com/2018/07/new-paper-listening-beyond-seeing.html" target="_blank"><b>Listening beyond seeing</b></a> (<a href="http://visuallanguagelab.com/P/2018.BL.MMNCMDAAPSB.pdf" target="_blank">PDF</a>) - My coauthor <a href="https://www.researchgate.net/profile/Mirella_Manfredi" target="_blank">Mirella Manfredi</a> carried out this cool study which showed people comics, and at the critical panel also played sounds to people. The panel showed an action, while either playing people a spoken onomatopoeia that matched or mismatched the action, or an actual sound effect that matched/mismatched the action. We measured people's brainwaves, and found that their processing of these multimodal meanings partially overlapped, but partially did not. Brainwaves to words and sounds differed at the start of their processing, but in later parts of the processing seemed to not differ, implying some sort of integrative process.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEglwOm8-c1xgbIcQ5i6gwi5Bu4YwmITiYWOJaq5Wx3WHClERuGuFf7oP0Mz3qlft45Q209TIVwgOBB8zAChYoPlzmB4jkoVBz6KnN4_OZl-DKND1ODb8gvL4rjJrF5ck04hbLT9/s1600/images.jpeg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="906" data-original-width="600" height="200" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEglwOm8-c1xgbIcQ5i6gwi5Bu4YwmITiYWOJaq5Wx3WHClERuGuFf7oP0Mz3qlft45Q209TIVwgOBB8zAChYoPlzmB4jkoVBz6KnN4_OZl-DKND1ODb8gvL4rjJrF5ck04hbLT9/s200/images.jpeg" width="131" /></a></div>
<b><a href="http://www.thevisuallinguist.com/2018/08/new-paper-visual-language-theory-and.html" target="_blank">Visual Language Theory and the scientific study of comics</a> </b>(<a href="http://visuallanguagelab.com/P/2018.ESiC.NC.pdf" target="_blank">PDF</a>) - This chapter appeared in the recent book <i><a href="https://amzn.to/2XscPpU" target="_blank">Empirical Comics Research</a></i>, which has a wide survey of studies using empirical methods (corpus, computational, cognitive) to study comics. My paper provides a review of my Visual Language Theory, and its structures of vocabulary, layout, and narrative structure. I describe how theories of their structure combines with corpus analysis and psychological experimentation to give us a converging view of how visual languages in comics are built. I think it's a relatively decent introductory paper for people who are unfamiliar with my theories.<br />
<br />
<b>Are emoji a poor substitute for words?</b> (<a href="https://mindmodeling.org/cogsci2018/papers/0295/0295.pdf" target="_blank">PDF</a>, <a href="http://www.visuallanguagelab.com/P/2018.MCSS.NCTRRSJE_poster.pdf" target="_blank">Poster</a>) - Our conference paper from the 2018 Meeting of the Cognitive Science Society looked at how people process sentences when emoji are substituted for words. We found that people read emoji more slowly than words in sentences, and even more slowly when an emoji mismatches its part of speech (e.g., a "noun-ish" emoji in a verb position). When people read the next word after seeing a congruous emoji, they process it just as easily as in an all-text sentence, but words after incongruous emoji are still read more slowly. This suggests that congruous emoji substituted for words can readily be integrated into the syntax of sentences. We also compared logos and emoji substituted into text, and found they didn't differ in their processing.<br />
<br />
<b><a href="http://www.thevisuallinguist.com/2018/09/new-paper-visual-and-linguistic.html" target="_blank">Visual and linguistic narrative comprehension in autism spectrum disorders</a> </b>(<a href="http://visuallanguagelab.com/P/2018.BL.ECNC.pdf" target="_blank">PDF</a>) - My first paper with my colleague <a href="https://www.emilycoderre.com/" target="_blank">Emily Coderre</a> compares the brainwaves of neurotypical individuals with individuals with autism while they comprehended both verbal and visual narratives. People have often claimed that autistic individuals do better with visual materials, but we show similar processing deficits for both verbal and visual materials, hinting at a more general issue processing meaning across modalities. This is the first of my papers on autism and visual narratives with Emily, and we've got lots more on tap coming soon.<br />
<br />
<a href="http://www.thevisuallinguist.com/2018/04/workshop-how-we-make-and-understand.html" target="_blank">Workshop: How we make and understand drawings</a> - Finally, not a publication, but back in April I gave two workshops at the University of Connecticut with philosopher Gabe Greenberg where we examine the structure and meaning of individual and sequential images. My portion (first day) examines how drawings are structured and how people learn to draw, which starts midway (02:18:15) through this video:<br />
<br />
<iframe allow="autoplay; encrypted-media" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/1rMo49FRmN8?start=138" width="560"></iframe><br />
<br />
On the second day, my portion reviewed my findings about how visual narratives are processed, particularly the combination of narrative structure and meaning. I then presented my multimodal model of language and cognition. That's in the second half of this video (02:04:20), which unfortunately has poorer sound quality:<br />
<br />
<iframe allow="autoplay; encrypted-media" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/66sAvYLrQ_k?start=120" width="560"></iframe><br />
<br />
<br />
Forecasting ahead to next year, I can already say that it's going to be a big year. I have a special issue of a journal that I'm editing that has some great looking papers. I also have two big review papers that should be coming out, one on processing and one on "fluency" of sequential images. Plus, we've now run five (!) brainwave studies in my operational EEG lab here in Tilburg, all of which are being written up. So, here's looking forward to a good 2019...<br />
<br />
These and all my papers are available on my website <a href="http://www.visuallanguagelab.com/papers.html" target="_blank">here</a>.Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-89211398817351202272018-12-17T04:00:00.002-08:002018-12-17T04:00:16.138-08:00Review: Metaphoricity of Conventionalized Diegetic Images in Comics<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjTTTkRg-OufVWOeL0IY5peudEnSBpAQmkVuWWTzjR2rNhuBq27nLDn-9tIbvRRY6bem9ixlC3SuJK74u2SWUYsdmND50YxYBJnZ_nFe_2hFooSLI77U1PHa267vC2qtSFci0xd/s1600/41k7zZAUNQL._SR600%252C315_PIWhiteStrip%252CBottomLeft%252C0%252C35_SCLZZZZZZZ_.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="316" data-original-width="222" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjTTTkRg-OufVWOeL0IY5peudEnSBpAQmkVuWWTzjR2rNhuBq27nLDn-9tIbvRRY6bem9ixlC3SuJK74u2SWUYsdmND50YxYBJnZ_nFe_2hFooSLI77U1PHa267vC2qtSFci0xd/s1600/41k7zZAUNQL._SR600%252C315_PIWhiteStrip%252CBottomLeft%252C0%252C35_SCLZZZZZZZ_.jpg" /></a></div>
Michał Szawerna's recent book <i>Metaphoricity of Conventionalized Diegetic Images in Comics: A Study in Multimodal Cognitive Linguistics</i> analyzes a variety of structural aspects of the visual languages of comics by taking a deep dive into Peircean semiotics and cognitive linguistics, particularly conceptual metaphor theory and cognitive grammar. The book seems to have flown largely under the radar of most discussions of comics theory, but it is interesting in several regards.<br />
<br />
The book opens with an analysis of the history of scholarship on comics, emphasizing the structuralist and <a href="http://www.visuallanguagelab.com/P/NC_Comics&Linguistics.pdf" target="_blank">linguistic analyses</a>. Included in this is a discussion of Polish research, which I had not previously seen discussed in other publications. It also extensively covers the semiotic theories of C.S. Peirce and the developments of conceptual metaphor theory over the past 30 years.<br />
<br />
The substantive chapters then each delve into a different aspect of the structure of comics. This starts with a chapter on the abstract properties of panels and how they convey time across sequences, then progresses to a discussion of depictions of motion (motion lines, polymorphic panels). Chapters then discuss the depictions of sound (balloons), and "mental experiences" (like thought bubbles, upfixes). A concluding chapter then summarizes the overall arguments.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEioLT6M12chntGrJajQ03w-ue1_5-g_bvJ8r_zglBGy7Vc8IjfhqtTv_UjWnlPbIY-XNjozRdc7kmfMEX6TidfFoCfOnWNPndHffP38Lp9xmd16sei4IGDz4Ud_WbeZrRbLsQZe/s1600/perlinwerewolfbynight2.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="775" data-original-width="1107" height="224" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEioLT6M12chntGrJajQ03w-ue1_5-g_bvJ8r_zglBGy7Vc8IjfhqtTv_UjWnlPbIY-XNjozRdc7kmfMEX6TidfFoCfOnWNPndHffP38Lp9xmd16sei4IGDz4Ud_WbeZrRbLsQZe/s320/perlinwerewolfbynight2.jpg" width="320" /></a></div>
The book throughout contains several insightful examples and analyses, and at the least makes one consider the complexity of various visual conventions. For example, the chapter on motion discusses what I've called "polymorphic" representations, where a single panel shows a character repeated across an action to imply motion. Here Szawerna observes that this overall pattern extends beyond motion and can also depict transformations, like a werewolf's shift from man to wolf-man. I don't think I've seen this representation discussed in any other paper, and it's a nice observation of its similarities to other polymorphic panels.<br />
<br />
Other observations seem a little too strong. For example, in the chapter on comic panels, Szawerna adopts the strong McCloudian position that the width of panels corresponds directly to time duration. He also claims that images in sequence map directly onto a timeline of episodic events (a space = time metaphor), even comparing comics to the grid pattern of days on a calendar. I've long <a href="http://www.thevisuallinguist.com/2010/04/new-essay-limits-of-time-and.html" target="_blank">pointed out problems with this view</a>, and evidence against it has been provided by <a href="http://www.thevisuallinguist.com/2017/05/new-paper-whats-your-neural-function.html" target="_blank">several</a> <a href="http://www.thevisuallinguist.com/2012/03/new-article-comics-and-brain.html" target="_blank">experiments</a>.<br />
<br />
This relates to my first critique of the book. Though the book has many good insights, it ultimately feels like a case of "if all you have is a hammer, then everything looks like a nail." That is, metaphorical interpretations run so rampant throughout that no alternative interpretations are offered or considered. <a href="http://www.visuallanguagelab.com/P/2018.CoW.NC.pdf" target="_blank">I don't disagree with metaphorical interpretations of various conventions</a>, but it seems a metaphorical interpretation should be a "last resort" when a simpler explanation is possible. For example, experimentation on motion lines has implied that their comprehension is <a href="http://www.visuallanguagelab.com/P/NC_motionlines.pdf" target="_blank">not metaphorical or based on our perception of moving objects</a>, but driven largely by conventionalization.<br />
<br />
Also, while the work is clearly well-researched, at times the references seem selective or miss important arguments. For example, in the introductory chapter, Szawerna critiques my notion of visual language on the basis of Hockett's design features for language, claiming that visual languages cannot be languages because they do not exhibit things like duality of patterning or arbitrariness. However, these issues are addressed in the second chapter of <a href="http://visuallanguagelab.com/vloc.html" target="_blank">my book</a>, which he cites, and, perhaps more importantly, he does not acknowledge that those features do not hold up for sign languages, nor are they even consistent descriptors of spoken languages.<br />
<br />
My second main critique of the book relates to cognition. Mostly, the book seeks to <i>describe</i> what is happening in the visual language of comics, often in intense detail. But these descriptions often amount to just giving labels to things, falling short of <i>explaining</i> the mechanisms and cognitive processes involved in these representations. Granted, description is important too, but I would have hoped for more of a balance.<br />
<br />
More concerning is the repeated invocation of the "psychological reality" of the proposed analyses, despite no evidence being provided for such interpretations. There are no theoretical diagnostic tests, nor is any empirical literature discussed, even though there have been relevant psychological experiments on many of the issues under analysis. Claims of "psychological reality" need to engage the actual experimental cognitive literature, as should <a href="http://www.visuallanguagelab.com/P/NC_comictheory.pdf" target="_blank">any theoretical claims about how "comics work."</a><br />
<br />
For example, the experimental literature would be especially useful for examining Szawerna's claim that people transparently understand images and conventions in visual languages (which he attributes to <a href="http://www.thevisuallinguist.com/2013/10/review-comics-and-language-by-hannah.html" target="_blank">Miodrag</a>). The empirical literature actually shows cultural differences for many conventions that occur in comics (and even basic drawings). Also, developmental psychology has shown trajectories for learning to understand basic images, image sequences, and morphemes like motion lines and carriers. Szawerna uses the assumption of transparency to ground claims of metaphoric knowledge motivated by universal and embodied understanding, but the literature does not seem to support this (although non-transparency does not rule out a metaphoric interpretation).<br />
<br />
Finally, it should be noted that stylistically this book is not an easy read, particularly for those who don't often read linguistics research. It is weighed down by heavy jargon and exceedingly long sentences. Some serious copyediting could beneficially cut at least a third of the book's 490 pages. This would have been useful, as I fear the book's insights are sometimes buried beneath the prose. <br />
<br />
Criticisms aside, the book seems important for scholars to engage with if they are interested in the understanding of these elements of visual vocabulary and/or visual metaphor. In addition, this book seems to be a landmark in the study of the visual language of comics for what it does. It is, to my knowledge, the first book devoted extensively to rigorously analyzing just a few structural features of the visual domain. Such depth of analysis is indicative of the growing seriousness and sophistication of the linguistic and cognitive approach to visual languages, hopefully making Szawerna's book a harbinger of further works to come.<br />
<br />
<br />
<br />
<b>Szawerna, Michał. 2017. <i>Metaphoricity of Conventionalized Diegetic Images in Comics: A Study in Multimodal Cognitive Linguistics</i>. Łódź Studies in Language 54. Peter Lang Publishing.</b>Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-42088903566139790292018-11-27T10:35:00.000-08:002019-04-03T03:24:30.492-07:00New paper: The cultural pages of comics<div class="separator" style="clear: both; text-align: center;">
</div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxg5nriATgGvtyREpMox7axynmDaERXU5zpfqIGdsD-MGsaLQnsY2KRyM108sZcG75uj-sMRKe5wUw1lcDOEpYbutEhB2AQDHP7D3bB1zVrwUzyNIEwKSYDK6Q8gw6v8XjSvtY/s1600/Cx3PXiUWgAAjVV3.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="648" data-original-width="952" height="271" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhxg5nriATgGvtyREpMox7axynmDaERXU5zpfqIGdsD-MGsaLQnsY2KRyM108sZcG75uj-sMRKe5wUw1lcDOEpYbutEhB2AQDHP7D3bB1zVrwUzyNIEwKSYDK6Q8gw6v8XjSvtY/s400/Cx3PXiUWgAAjVV3.jpg" width="400" /></a></div>
I'm excited to announce that our paper, "The cultural pages of comics: cross-cultural variation in page layouts", has been published in the <i><a href="http://www.tandfonline.com/doi/abs/10.1080/21504857.2017.1413667?journalCode=rcom20" target="_blank">Journal of Graphic Novels and Comics</a></i>! It actually came out back around a year ago, but I was waiting for it to leave "early view." Since it's still unchanged, I figured better to just post it and get it out rather than waiting around.<br />
<br />
This paper is a follow up to our <a href="http://www.thevisuallinguist.com/2016/11/new-paper-changing-pages-of-comics.html" target="_blank">prior paper looking at how page layout has changed in American superhero comics across time</a>. This project, largely undertaken by my student co-authors, instead compared the page layouts in six different types of comics from around the world.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOWhOmwj6jQjpLtJH-KQUgzcJNey5b_oSYjQMAC0hTtKDkq41Ar0JYrRd2IIZVmhAr-13BQoqax23WCevJYo41U3_vnOyLjWlRUp9UZ5bSsiXe3OmtpGN8Q5eTNzwsRuG0hyzh/s1600/CCECS_Data_arrangements.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" data-original-height="714" data-original-width="1093" height="260" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOWhOmwj6jQjpLtJH-KQUgzcJNey5b_oSYjQMAC0hTtKDkq41Ar0JYrRd2IIZVmhAr-13BQoqax23WCevJYo41U3_vnOyLjWlRUp9UZ5bSsiXe3OmtpGN8Q5eTNzwsRuG0hyzh/s400/CCECS_Data_arrangements.jpg" width="400" /></a></div>
Our overall findings showed that page layout could be a factor that characterizes different types of comics, since different cultures' layouts differed in consistent ways. In particular, Asian layouts (like those in Japanese manga and Hong Kong manhua) use more vertical segments than Western comics. Indy comics from the US and European comics tend to use more horizontal staggering, while American mainstream comics use more "pure" grids.<br />
<br />
These findings further contribute to showing that there are systematic cross-cultural differences between the "visual languages" used in the comics of the world. We've shown in many studies (several of which are still on their way to being published) that cultures' comics differ across nearly every possible dimension, and often vary within cultures (such as between genres). To some degree, such diversity calls into question just how coherent an abstract notion of the "comics medium" is in the first place. More on this to come in the future, for sure.<br />
<br />
The full paper is <a href="http://www.visuallanguagelab.com/P/2017.JGNC.NCJAMDRYKP.pdf" target="_blank">downloadable here</a>, along with <a href="http://visuallanguagelab.com/papers.html">all my papers here</a>.<br />
<br />
Abstract:<br />
<br />
<blockquote>
Page layouts are a salient feature of comics, which have only recently begun to be studied using empirical methods. This preliminary study uses corpus analysis to investigate the properties of page layouts in comics from Europe (Sweden, France), Asia (Japan, Hong Kong), and America (Mainstream, Indy genres). Pages from Asian books used more vertical segments and bleeding panels, while European and American Indy pages used more horizontal staggering. Pages from American mainstream comics used widescreen panels spanning a whole row, and more variable distances between panels (separation, overlap). These results suggest that pages from different types of comics have different systematic characteristics, which can be studied by empirical methods.</blockquote>
<br />
Full reference:<br />
<br />
<b>Cohn, Neil, Jessika AxnĂŠr, Michaela Diercks, Rebecca Yeh, and Kaitlin Pederson. 2017. The cultural pages of comics: Cross-cultural variation in page layouts. <i>Journal of Graphic Novels and Comics</i>. doi: 10.1080/21504857.2017.1413667.</b>Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-80485334829364051802018-09-13T08:42:00.002-07:002018-09-13T08:42:28.288-07:00New paper: Visual and linguistic narrative comprehension in autism spectrum disordersMy new paper with my collaborator, Emily Coderre, is finally out in <i><a href="https://www.sciencedirect.com/science/article/pii/S0093934X17300408" target="_blank">Brain and Language</a></i>. Our paper,"Visual and linguistic narrative comprehension in autism spectrum disorders: Neural evidence for modality-independent impairments," examines the neurocognition of how meaning is processed in verbal and visual narratives for individuals with autism and neurotypical controls.<br />
<br />
We designed this study because there are many reports that individuals with autism do better with visual than with verbal information. In the brain literature, we also see reduced brainwaves indicative of semantic processing during language processing in these individuals. So, we asked: are these observations about semantic processing due to differences between visual and verbal information, or to differences in how meaning is processed across a sequence?<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyDGgqC-s2Yie-EMN3JmcpAqbxDbiubJhVu48ujh1ATrULuBVlqBDyfBmPtY4eEBLZ4KzgrXGMHeII5w8xJ0Yb_95e6dBynMxSEtGib5Ef6oIYXAto5Ge1MhPQmQPNPAjkScpJ/s1600/ECNC_Stim.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="316" data-original-width="666" height="188" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiyDGgqC-s2Yie-EMN3JmcpAqbxDbiubJhVu48ujh1ATrULuBVlqBDyfBmPtY4eEBLZ4KzgrXGMHeII5w8xJ0Yb_95e6dBynMxSEtGib5Ef6oIYXAto5Ge1MhPQmQPNPAjkScpJ/s400/ECNC_Stim.jpg" width="400" /></a>Thus, we presented both individuals with autism and neurotypical controls with either verbal or visual narratives (i.e., comics, or comics "translated" into text) and then introduced anomalous words/images at their end to see how incongruous information would be processed in both types of stimuli.<br />
<br />
We found that individuals with autism had reduced semantic processing (the N400 brainwave) for the incongruities in <i>both</i> the verbal and visual narratives. This implies that the deficit is not in processing a particular modality, but in a more general type of information processing.<br />
<br />
The full paper is available at my <a href="http://visuallanguagelab.com/papers.html" target="_blank">Downloadable Papers</a> page, or at <a href="http://visuallanguagelab.com/P/2018.BL.ECNC.pdf" target="_blank">this link (pdf)</a>.<br />
<br />
Abstract<br />
<br />
Individuals with autism spectrum disorders (ASD) have notable language difficulties, including with understanding narratives. However, most narrative comprehension studies have used written or spoken narratives, making it unclear whether narrative difficulties stem from language impairments or more global impairments in the kinds of general cognitive processes (such as understanding meaning and structural sequencing) that are involved in narrative comprehension. Using event-related potentials (ERPs), we directly compared semantic comprehension of linguistic narratives (short sentences) and visual narratives (comic panels) in adults with ASD and typically-developing (TD) adults. Compared to the TD group, the ASD group showed reduced N400 effects for both linguistic and visual narratives, suggesting comprehension impairments for both types of narratives and thereby implicating a more domain-general impairment. Based on these results, we propose that individuals with ASD use a more bottom-up style of processing during narrative comprehension.<br />
<br />
<br />
<b>Coderre, Emily L., Neil Cohn, Sally K. Slipher, Mariya Chernenok, Kerry Ledoux, and Barry Gordon. 2018. "Visual and linguistic narrative comprehension in autism spectrum disorders: Neural evidence for modality-independent impairments." <i>Brain and Language</i> 186:44-59.<br />
</b>Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-34599335959778961452018-08-02T09:45:00.002-07:002019-02-22T14:59:17.706-08:00New paper: Visual Language Theory and the scientific study of comics<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvotMe1SbZAJfsrqiV-A0tD4PhVUkzOpd7kTLDNVn-ggSPGjjTtfvNRY_zewNWQkJQm_rBkqkML6CXdE3Z8XiV2lVqOMa4tPO9w1sZjBMk90cga__cJXd-CIy1-GXeDaoMQtpY/s1600/images.jpeg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="906" data-original-width="600" height="400" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgvotMe1SbZAJfsrqiV-A0tD4PhVUkzOpd7kTLDNVn-ggSPGjjTtfvNRY_zewNWQkJQm_rBkqkML6CXdE3Z8XiV2lVqOMa4tPO9w1sZjBMk90cga__cJXd-CIy1-GXeDaoMQtpY/s400/images.jpeg" width="263" /></a>My latest paper is a chapter in the exciting new book collection, <i>Empirical Comics Research: Digital, Multimodal, and Cognitive Methods, </i>edited by Alexander Dunst, Jochen Laubrock, and Janina Wildfeuer. The book is a collection of empirical studies about comics, summarizing many of the works presented at the Empirical Studies of Comics conference at Bremen University in 2017.<br />
<br />
It's fairly gratifying to see a collection like this combining various scholars' work using empirical methods to analyze comics. I've been doing this kind of work for almost two decades at this point, and for most of that time few other people were doing such research, and certainly not coming together in a collaborative way. So, a publication like this is a good marker for what is hopefully an emerging field.<br />
<br />
My own contribution to the collection is the last chapter, "<a href="http://visuallanguagelab.com/P/2018.ESiC.NC.pdf" target="_blank"><b>Visual Language Theory and the scientific study of comics.</b></a>" I provide an overview of my visual language research across the fields of the visual vocabulary of images, narrative structure, and page layout.<br />
<br />
I also give some advice on how to go about such research, and argue for the necessity of an interdisciplinary perspective balancing theory, experimentation, and corpus analysis. The emphasis here is that all three of these techniques are necessary to make progress, and using any one technique alone is limiting.<br />
<br />
You can find a <a href="http://visuallanguagelab.com/P/2018.ESiC.NC.pdf" target="_blank">preprint version of my chapter here</a>, though I recommend checking out the whole book:<br />
<i><br /></i>
<i><a href="https://amzn.to/2XscPpU" target="_blank">Empirical Comics Research: Digital, Multimodal, and Cognitive Methods</a><img alt="" border="0" height="1" src="//ir-na.amazon-adsystem.com/e/ir?t=vislanlab-20&l=am2&o=1&a=1138737445" style="border: none !important; margin: 0px !important;" width="1" /></i><br />
<br />
Abstract of <a href="http://visuallanguagelab.com/P/2018.ESiC.NC.pdf" target="_blank">my chapter</a>:<br />
<br />
<i>The past decades have seen the rapid growth of empirical and experimental research on comics and visual narratives. In seeking to understand the cognition of how comics communicate, Visual Language Theory (VLT) argues that the structure of (sequential) images is analogous to that of verbal language, and that these visual languages are structured and processed in similar ways to other linguistic forms. While these visual languages appear prominently in comics of the world, all aspects of graphic and drawn information fall under this broad paradigm, including diverse contexts like emoji, Australian aboriginal sand drawings, instruction manuals, and cave paintings. In addition, VLT's methods draw from those of the cognitive and language sciences. Specifically, theoretical modeling has been balanced with corpus analysis and psychological experimentation using both behavioral and neurocognitive measures. This paper will provide an overview of the assumptions and basic structures of visual language, grounded in the growing corpus and experimental literature. It will cover the nature of visual lexical items, the narrative grammar of sequential images, and the compositional structure of page layouts. Throughout, VLT emphasizes that these components operate as parallel yet interfacing structures, which manifest in varying "visual languages" of the world that temper a comprehender's fluency for such structures. Altogether, this review will highlight the effectiveness of VLT as a model for the scientific study of how graphic information communicates.</i><br />
<br />
<br />
<b>Cohn, Neil. 2018. Visual Language Theory and the scientific study of comics. In Wildfeuer, Janina, Alexander Dunst, Jochen Laubrock (Ed.). <i>Empirical Comics Research: Digital, Multimodal, and Cognitive Methods</i>. (pp. 305-328) London: Routledge.</b>Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-84868855044986257282018-07-08T11:45:00.001-07:002018-07-08T11:45:09.755-07:00New paper: Listening beyond seeingOur new paper has just been published in <i><a href="https://www.sciencedirect.com/science/article/pii/S0093934X17303528" target="_blank">Brain and Language</a></i>, titled "Listening beyond seeing: Event-related potentials to audiovisual processing in visual narrative." My collaborator Mirella Manfredi carried out this study, which builds on <a href="http://www.thevisuallinguist.com/2017/02/new-paper-when-hit-sounds-like-kiss.html" target="_blank">her previous work looking at different types of words (Pow! vs. Hit!) substituted into visual narrative sequences</a>. <br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2LGSrFrJfGKYO9YVD8m0G3LGs4xDdwlsay9p12Iv5D8U7K2Q-s5eLrMGd99ovipLSLlbV_Gnsoc6-7IGGR6y8TiDGL9g0TJanSaSsmy3NlPRSopH808E73V7P8Sc4YCAzgHhx/s1600/AVC1_Stimuli2.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="950" data-original-width="1600" height="238" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2LGSrFrJfGKYO9YVD8m0G3LGs4xDdwlsay9p12Iv5D8U7K2Q-s5eLrMGd99ovipLSLlbV_Gnsoc6-7IGGR6y8TiDGL9g0TJanSaSsmy3NlPRSopH808E73V7P8Sc4YCAzgHhx/s400/AVC1_Stimuli2.jpg" width="400" /></a></div>
Here, Mirella showed visual narratives in which the climactic event either matched or mismatched an accompanying auditory sound or word. So, as in the figure to the right, a panel showing Snoopy spitting would be paired with the sound of spitting or the word "spitting". Or, we played incongruous sounds, like the sound of something getting hit, or the word "hitting."<br />
<br />
We measured participants' brainwave responses (ERPs) to these panels/sounds. These stimuli elicited an "N400 response", which occurs during the processing of meaning in any modality (words, sounds, images, video, etc.). Though the overall semantic processing response (N400) was similar for both stimulus types, the incongruous sounds evoked a slightly different response across the scalp than the incongruous words. This suggests that, despite the overall process of computing meaning being similar, these stimuli may be processed in different parts of the brain.<br />
<br />
In addition, these patterned responses closely resembled those typically evoked by words or sounds presented in isolation, and did not resemble those that often appear to images. This suggests that, although the multimodal image-sound/word interaction determined whether stimuli were congruent or incongruent, the semantic processing of the images did not seem to factor into the responses (or was equally subtracted out across stimulus types).<br />
<br />
So, overall, this implies that semantic processing across different modalities uses a similar response (N400), but may differ in neural areas.<br />
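For readers unfamiliar with how an N400 "effect" is quantified, here is a minimal sketch with simulated data: the effect is conventionally the incongruent-minus-congruent difference in mean ERP amplitude in a window around 300-500 ms. All numbers below (trial counts, sampling rate, amplitudes) are invented for illustration; this is not the paper's actual analysis pipeline.

```python
import numpy as np

# Simulated ERP epochs (trials x timepoints), sampled at 500 Hz,
# running from -100 ms to +900 ms around stimulus onset.
# All values here are invented noise for illustration.
rng = np.random.default_rng(0)
sfreq = 500
times = np.arange(-0.1, 0.9, 1 / sfreq)

congruent = rng.normal(0.0, 1.0, (40, times.size))
incongruent = rng.normal(0.0, 1.0, (40, times.size))

# Simulate a more negative-going deflection for incongruent trials
# in the classic N400 window (~300-500 ms post-onset).
window = (times >= 0.3) & (times <= 0.5)
incongruent[:, window] -= 2.0

# The N400 "effect" is the incongruent-minus-congruent difference
# in mean amplitude over that window.
n400_effect = incongruent[:, window].mean() - congruent[:, window].mean()
print(f"N400 effect: {n400_effect:.2f} (negative = N400-like)")
```

A scalp-distribution comparison like the one in the paper would repeat this computation per electrode and compare the topographies across conditions.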
<br />
You can find the paper here (<a href="http://visuallanguagelab.com/P/2018.BL.MMNCMDAAPSB.pdf" target="_blank">pdf</a>) or along with my <a href="http://visuallanguagelab.com/papers.html" target="_blank">other downloadable papers</a>.<br />
<br />
Abstract<br />
<blockquote>
Every day we integrate meaningful information coming from different sensory modalities, and previous work has debated whether conceptual knowledge is represented in modality-specific neural stores specialized for specific types of information, and/or in an amodal, shared system. In the current study, we investigated semantic processing through a cross-modal paradigm which asked whether auditory semantic processing could be modulated by the constraints of context built up across a meaningful visual narrative sequence. We recorded event-related brain potentials (ERPs) to auditory words and sounds associated to events in visual narratives—i.e., seeing images of someone spitting while hearing either a word (Spitting!) or a sound (the sound of spitting)—which were either semantically congruent or incongruent with the climactic visual event. Our results showed that both incongruent sounds and words evoked an N400 effect, however, the distribution of the N400 effect to words (centro-parietal) differed from that of sounds (frontal). In addition, words had an earlier latency N400 than sounds. Despite these differences, a sustained late frontal negativity followed the N400s and did not differ between modalities. These results support the idea that semantic memory balances a distributed cortical network accessible from multiple modalities, yet also engages amodal processing insensitive to specific modalities.</blockquote>
<br />
Full reference:<br />
<br />
<b>Manfredi, Mirella, Neil Cohn, Mariana De AraĂşjo Andreoli, and Paulo Sergio Boggio. 2018. "<a href="http://visuallanguagelab.com/P/2018.BL.MMNCMDAAPSB.pdf" target="_blank">Listening beyond seeing: Event-related potentials to audiovisual processing in visual narrative</a>." <i>Brain and Language</i> 185:1-8. doi: https://doi.org/10.1016/j.bandl.2018.06.008.</b>Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-38569515446705740792018-04-22T06:34:00.000-07:002018-04-22T06:34:03.262-07:00New paper: Combinatorial morphology in visual languages<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh__pK0j9qyil72LJQ-znQ4KBLv1JNKlVc5qtNzLRa3h73o8K8-0tR8m0LSB5-rDFiUsR5Nr5vSBGnF1BrzEq2M27YeLcCbCvJge61pNm-DuiV4V4ScWShKmgsL8z7ImNL5PM21/s1600/Normal_upfixes.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="342" data-original-width="360" height="304" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh__pK0j9qyil72LJQ-znQ4KBLv1JNKlVc5qtNzLRa3h73o8K8-0tR8m0LSB5-rDFiUsR5Nr5vSBGnF1BrzEq2M27YeLcCbCvJge61pNm-DuiV4V4ScWShKmgsL8z7ImNL5PM21/s320/Normal_upfixes.jpg" width="320" /></a></div>
I'm very pleased to announce that my newest paper, "<a href="http://visuallanguagelab.com/P/2018.CoW.NC.pdf" target="_blank">Combinatorial morphology in visual languages</a>" has now been published in a book collection edited by Geert Booij, <i><a href="https://link.springer.com/chapter/10.1007/978-3-319-74394-3_7" target="_blank">The Construction of Words: Advances in Construction Morphology</a></i>. The overall collection looks excellent and is a great resource for work in linguistics on morphology across domains.<br />
<br />
My own contribution makes a first attempt to formalize the structure of combinatorial visual morphology—how visual signs like motion lines or hearts combine with their "stems" to create a larger additive meaning.<br />
<br />
This paper also introduces a new concept for these types of signs. Since various visual morphemes are affixes—like the "upfixes" that float above faces (right)—it raises the question: what are these affixes attaching to? In verbal languages, affixes attach to "word" units. But visual representations don't have words, so this paper discusses what type of structure would be required to fill that theoretical gap, and formalizes this within <a href="http://visuallanguagelab.com/P/NC_multimodality.pdf" target="_blank">the parallel architecture model of language</a>.<br />
<br />
You can download a pre-print of the chapter here (<a href="http://visuallanguagelab.com/P/2018.CoW.NC.pdf" target="_blank">pdf</a>) or on my <a href="http://visuallanguagelab.com/papers.html#draw" target="_blank">downloadable papers page</a>.<br />
<br />
Abstract<br />
<br />
Just as structured mappings between phonology and meaning make up the lexicons of spoken languages, structured mappings between graphics and meaning comprise lexical items in visual languages. Such representations may also involve combinatorial meanings that arise from affixing, substituting, or reduplicating bound and self-standing visual morphemes. For example, hearts may float above a head or substitute for eyes to show a person in love, or gears may spin above a head to convey that they are thinking. Here, we explore the ways that such combinatorial morphology operates in visual languages by focusing on the balance of intrinsic and distributional construction of meaning, the variation in semantic reference and productivity, and the empirical work investigating their cross-cultural variation, processing, and acquisition. Altogether, this work draws these parallels between the visual and verbal domains that can hopefully inspire future work on visual languages within the linguistic sciences.<br />
<br />
<br />
<b>Cohn, Neil. 2018. Combinatorial morphology in visual languages. In Booij, Geert (Ed.). <i>The Construction of Words: Advances in Construction Morphology</i>. (pp. 175-199). London: Springer.</b>Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-88254542598930792722018-04-10T10:58:00.002-07:002018-04-10T10:58:56.975-07:00Workshop: How We Make and Understand DrawingsA few weeks back I had the pleasure of doing a <a href="https://harry-van-der-hulst.uconn.edu/pictorial-representations-event/" target="_blank">workshop</a> with <a href="http://gjgreenberg.bol.ucla.edu/" target="_blank">Gabriel Greenberg</a> (UCLA) about the understanding of drawings and visual narratives at the University of Connecticut. The workshop was hosted by <a href="https://harry-van-der-hulst.uconn.edu/" target="_blank">Harry van der Hulst</a> from the Linguistics Department, and we explored the connections between graphic systems and the structure of language. UConn has now been nice enough to put our talks online for everyone, and I've posted them below.<br />
<br />
On Day 1, Gabriel first talked about his theory of pictorial semantics. Then, I presented my theory about the structure of the "visual lexicon(s)" of drawing systems, and then about how children learn to draw. This covered what it means for people to say "I can't draw," as was the topic of <a href="http://visuallanguagelab.com/papers.html" target="_blank">my papers on the structure of drawing</a>. <br />
<br />
<iframe allow="autoplay; encrypted-media" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/1rMo49FRmN8?start=138" width="560"></iframe><br />
<br />
On Day 2, we covered the understanding of sequential images. Here our views diverged, with Gabriel taking more of a "discourse approach", while I presented my theory of Visual Narrative Grammar and several of the studies supporting it. I finished by presenting my "grand theory of everything" about a multimodal model of language and communication. Unfortunately, the mic ran out of batteries on the second day and we didn't know it, so the sound is very soft. But, if you crank up the volume and listen carefully, you should be able to hear it (hopefully).<br />
<br />
<iframe allow="autoplay; encrypted-media" allowfullscreen="" frameborder="0" height="315" src="https://www.youtube.com/embed/66sAvYLrQ_k?start=120" width="560"></iframe>Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-57623633526996253052018-02-15T14:28:00.001-08:002018-02-15T14:28:35.941-08:00New Paper: In defense of a "grammar" in the visual language of comics<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgs9OOHzleR0r5YTQLkOtwc-AXQfgBb-D7Q1K7GLMe-ha69ZoxOko5Z4zAzytEXz9AP68IfzbQpGILlPDZ-LESMsbNLU1PJMYYLtj_I5IQM5yedTJPkZABhbn32oKHSqvByHpND/s1600/vng_example.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="754" data-original-width="1600" height="187" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgs9OOHzleR0r5YTQLkOtwc-AXQfgBb-D7Q1K7GLMe-ha69ZoxOko5Z4zAzytEXz9AP68IfzbQpGILlPDZ-LESMsbNLU1PJMYYLtj_I5IQM5yedTJPkZABhbn32oKHSqvByHpND/s400/vng_example.jpg" width="400" /></a></div>
I'm excited to announce that my new paper, "In defense of a 'grammar' in the visual language of comics" is now published in the <i><a href="https://www.sciencedirect.com/science/article/pii/S0378216617300693" target="_blank">Journal of Pragmatics</a></i>. This paper provides an overview of my theory of narrative grammar, and rigorously compares it against other approaches to sequential image understanding.<br />
<br />
Since my proposal that a "narrative grammar" operates to guide meaningful information in (visual) narratives, there have been several critiques and misunderstandings about how it works. Some approaches have also been proposed as a counterpoint. I feel all of this is healthy in the course of development of a theory and (hopefully) a broader discipline.<br />
<br />
In this paper I address some of these concerns. I detail how my model of Visual Narrative Grammar operates and I review the empirical evidence supporting it. I then compare it in depth to the specifics and assumptions found in other models. Altogether I think it makes for a good review of the literature on sequential image understanding, and outlines what we should expect out of a scientific approach to visual narrative.<br />
<br />
The paper is available on my<a href="http://visuallanguagelab.com/papers.html" target="_blank"> Downloadable Papers</a> page, or direct through this link (<a href="http://www.visuallanguagelab.com/P/2018.JoP.NC.pdf" target="_blank">pdf</a>).<br />
<br />
Abstract:<br />
<br />
Visual Language Theory (VLT) argues that the structure of drawn images is guided by similar cognitive principles as language, foremost a "narrative grammar" that guides the ways in which sequences of images convey meaning. Recent works have critiqued this linguistic orientation, such as Bateman and Wildfeuer's (2014) arguments that a grammar for sequential images is unnecessary. They assert that the notion of a grammar governing sequential images is problematic, and that the same information can be captured in a "discourse"-based approach that dynamically updates meaningful information across juxtaposed images. This paper reviews these assertions, addresses their critiques about a grammar of sequential images, and then details the shortcomings of their own claims. Such discussion is directly grounded in the empirical evidence about how people comprehend sequences of images. In doing so, it reviews the assumptions and basic principles of the narrative grammar of the visual language used in comics, and it aims to demonstrate the empirical standards to which theories of comics' structure should adhere.<br />
<br />
<br />
Full reference:<br />
<br />
<b>Cohn, Neil. 2018. In defense of a "grammar" in the visual language of comics. <i>Journal of Pragmatics</i>. 127: 1-19</b>Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-66660854441415340272018-01-23T12:06:00.001-08:002018-01-23T12:06:40.754-08:00My friend, Martin Paczynski<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOwmZc4hLpu6hsriPT9I5us2pinjm6A8MAzVGgow7fAO0BssIEdHLYOYH4Of4OwUmiUjOwKJvWiGObSigEiT4IDBy8qBc7lNf3rly_JQzCpxxmjx2oQJtLITtuXtow0PRpckcv/s1600/IMG_8064.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1067" data-original-width="1600" height="266" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjOwmZc4hLpu6hsriPT9I5us2pinjm6A8MAzVGgow7fAO0BssIEdHLYOYH4Of4OwUmiUjOwKJvWiGObSigEiT4IDBy8qBc7lNf3rly_JQzCpxxmjx2oQJtLITtuXtow0PRpckcv/s400/IMG_8064.jpg" width="400" /></a></div>
It was with much surprise and a heavy heart that I learned last week that my friend and colleague <a href="http://paczynski.org/" target="_blank">Dr. Martin Paczynski </a>suddenly passed away. Martin and I met in 2006 when I entered graduate school at Tufts University, and he was the first graduate student working with our mentor <a href="https://projects.iq.harvard.edu/kuperberglab" target="_blank">Gina Kuperberg</a> (I was her second). He quickly grew to be a close collaborator, a mentoring senior student, my first choice for brainstorming, and my best friend throughout graduate school. Here, I'll honor his place in the sciences and my work.<br />
<br />
It's always a nice benefit when your closest colleagues are smarter than you, and that meant Martin's influence on me and my research is everywhere. He essentially trained me in using EEG, and helped me formulate and analyze countless studies. Though he started the program a year before me, we graduated together, which I think made it all the more special.<br />
<br />
Though he initially studied computer science and worked in that field, Martin's graduate work at Tufts focused on the neurocognition of linguistic semantics, although he was knowledgeable in many more fields. His early work focused on aspects of <a href="https://pdfs.semanticscholar.org/1cd4/a01a905a7a6062b29c81f372c0fbf85e5182.pdf" target="_blank">animacy and event roles</a>. He later turned to issues of inference like <a href="https://pdfs.semanticscholar.org/1d86/4b3c8f1cdcc76e262427bc48882b915423b5.pdf" target="_blank">aspectual coercion</a>—where we construe an additional meaning about time that isn't in a sentence, such as the sense of repeated action in sentences like "For several minutes, the cat pounced on the toy." His experiments were elegant and brilliant. <br />
<br />
Our collaborative work on my visual language research started with <a href="http://www.visuallanguagelab.com/P/NC_(Pea)nuts&bolts.pdf" target="_blank">my first brain study</a>, for which Martin was the second author. After graduate school we co-authored our work on <a href="http://www.visuallanguagelab.com/P/NC_agents&patients.pdf" target="_blank">semantic roles of event building</a>, which united our research interests. This continued until just recently, as my <a href="http://www.visuallanguagelab.com/P/2017.BC.NCMPMK.pdf" target="_blank">most recent paper</a> again had Martin as my co-author, directly following our earlier work, almost 6 years after we left graduate school together. And it wasn't just me: <a href="https://scholar.google.nl/citations?user=VrpCyiwAAAAJ&hl=en" target="_blank">he is a co-author on many many people's work from our lab</a>, which speaks to both his generosity and insightfulness.<br />
<br />
Authorship wasn't his only presence in my work. If you've ever seen me give a talk that mentions film, you'll see him starring in the video clips I created as examples (him walking barefoot down our grad office hallway... a frequent sight). If you look at page 85 of <a href="http://visuallanguagelab.com/vloc.html" target="_blank">my book</a>, there's Martin, shaking hands with another <a href="http://evawittenberg.com/i/start.html" target="_blank">friend</a>:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmKBWtS2L2p-zW3RsvWIWoUtCGmwEqjcqt-2sIAG1l8v_PeA0YEqOLTyCB9F7zQVVAwM-4xobLYFJue_quQlAW58Upv8vKyhoOkUEUWpejsPk5lkymgIu4C6xD2e75NW1K_q1K/s1600/Martin_Refiner.jpeg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" data-original-height="404" data-original-width="1257" height="127" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjmKBWtS2L2p-zW3RsvWIWoUtCGmwEqjcqt-2sIAG1l8v_PeA0YEqOLTyCB9F7zQVVAwM-4xobLYFJue_quQlAW58Upv8vKyhoOkUEUWpejsPk5lkymgIu4C6xD2e75NW1K_q1K/s400/Martin_Refiner.jpeg" width="400" /></a></div>
<br />
After graduation, Martin's interests moved away from psycholinguistics, more towards research on mindfulness, stress, and other clinical and applied aspects of neurocognition. For many years he talked about one day studying architecture and design using EEG, but hadn't implemented those ideas just yet. There seemed to be no topic that he couldn't excel at when he applied himself.<br />
<br />
He was warm, kind, creative, funny, brilliant, and intellectually generous. I like to especially remember him with a mischievous grin, foreshadowing a comment which would inevitably be both hilarious and astute.<br />
<br />
The sciences have lost a spark of insight in Dr. Martin Paczynski, and the world has lost a progressive and compassionate soul. I've lost that and more. Safe travels my friend.Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-69687190470020255322017-12-18T10:23:00.001-08:002017-12-18T10:23:05.719-08:002017: My publications in reviewLast year I <a href="http://www.thevisuallinguist.com/2016/12/2016-my-publications-in-review.html" target="_blank">summarized all the papers I published in 2016</a>, and I thought it worked out so well I might as well keep it going. This year wasn't quite the flurry of <a href="http://www.visuallanguagelab.com/books.html" target="_blank">books</a> and <a href="http://visuallanguagelab.com/papers.html" target="_blank">papers</a> as last year (due largely to setting up a new EEG lab and submitting multiple grants), but we had several significant papers come out balancing both brainwave studies and corpus analyses.<br />
<br />
So, here are the papers that I published in 2017...<br />
<br />
<a href="http://www.thevisuallinguist.com/2017/02/new-paper-drawing-linein-visual.html" target="_blank"><b>Drawing the Line Between Constituent Structure and Coherence Relations in Visual Narratives</b></a> (<a href="http://visuallanguagelab.com/P/2017.JXPLMC.NCPB.pdf" target="_blank">pdf</a>) - This project with my former assistant Patrick Bender looked at people's intuitions for how to "segment" visual narratives into different subsections. Contrary to work on events and discourse, we found that breaks in categories of my model of narrative grammar were better predictors of segmentation than just changes in meaning between images (like spatial or character changes).<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-3N5hEL0Gso3oFWR3t_YEt17MKZoGmz2m9H6d5CaiCZV1-PHRkvwNJ8zvrzyADVHe0wdAeWGHruXX_63w4JLxlCJPvP5WEcPIhKm5-iFSETZe7nqlETapKaTdQhffqbW917uu/s1600/NeilCohn_econjunction.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="538" data-original-width="732" height="293" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh-3N5hEL0Gso3oFWR3t_YEt17MKZoGmz2m9H6d5CaiCZV1-PHRkvwNJ8zvrzyADVHe0wdAeWGHruXX_63w4JLxlCJPvP5WEcPIhKm5-iFSETZe7nqlETapKaTdQhffqbW917uu/s400/NeilCohn_econjunction.jpg" width="400" /></a><a href="http://www.thevisuallinguist.com/2017/02/new-paper-when-hit-sounds-like-kiss.html" target="_blank"><b>When a hit sounds like a kiss</b></a> (<a href="http://visuallanguagelab.com/P/2017.BL.MMNCMK.pdf" target="_blank">pdf</a>) - This project with <a href="https://www.researchgate.net/profile/Mirella_Manfredi" target="_blank">Mirella Manfredi</a> and <a href="http://kutaslab.ucsd.edu/" target="_blank">Marta Kutas</a> examined how the brain processes words that replace panels, like <i>Pow!</i> or <i>Hit!</i> replacing a climactic event. We found that the context of the sequence modulated the semantic processing of the words, and that descriptive words (<i>Hit!</i>) generated brain responses consistent with lower probability words than onomatopoeia (<i>Pow!</i>).<br />
<br />
<a href="http://www.thevisuallinguist.com/2017/05/new-paper-whats-your-neural-function.html" target="_blank"><b>What's your neural function, narrative conjunction? </b></a>(<a href="http://cognitiveresearchjournal.springeropen.com/articles/10.1186/s41235-017-0064-5" target="_blank">online article</a>, <a href="https://link.springer.com/epdf/10.1186/s41235-017-0064-5?author_access_token=pHXWOrTRZtXoqIyDvteFXm_BpE1tBhCbnbw3BuzI2RNLwR57YsG2BUFShM0IthPQbmWJhu71opkQ5uva_fdgA1kAo6r1YX1S8mH_8zWkET8Vxdv06qmDKBgcxabgCIZkQwjxCpu6YCcR9h3ghLJ48w==" target="_blank">pdf</a>) - I consider this to be one of my coolest and most interesting studies to date. With <a href="http://kutaslab.ucsd.edu/" target="_blank">Marta Kutas</a>, I examined the brain response to a narrative pattern called Environmental-Conjunction. We found that it elicits two types of brain responses consistent with grammatical processing in language. Other work has shown that Environmental-Conjunction appears more in Japanese manga than Western comics, and indeed we found that readership of manga modulated this brain response. So: the brain uses grammatical processing for narrative patterns, and people familiar with this pattern process it in ways that are different from people who are less familiar with it. In other words, <i><b>the way you process the sequences in comics depends on which ones you read</b></i>.<br />
<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLwsYHLKB6YG88zClGUCkVXFzADv7Mrw3elP8GIip3fJyN8ukVtEwa3rrPaa2q91qUFHQTRIORPL10EwIlWrS0XiI7JcQnIWytkTyYZHQsjG1hln3WIPBD9yy7ksl-ZgaY6Izg/s1600/NeilCohn_framingovertime.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="915" data-original-width="1600" height="182" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjLwsYHLKB6YG88zClGUCkVXFzADv7Mrw3elP8GIip3fJyN8ukVtEwa3rrPaa2q91qUFHQTRIORPL10EwIlWrS0XiI7JcQnIWytkTyYZHQsjG1hln3WIPBD9yy7ksl-ZgaY6Izg/s320/NeilCohn_framingovertime.jpg" width="320" /></a><a href="http://www.thevisuallinguist.com/2017/06/new-paper-picture-is-worth-more-words.html" target="_blank"><b>A picture is worth more words over time</b></a> (<a href="http://www.visuallanguagelab.com/P/2017.MC.NCRTKP.pdf" target="_blank">pdf</a>) - This project with co-authors Ryan Taylor and Kaitlin Pederson is the companion to last year's paper by Kaitlin on <a href="http://www.thevisuallinguist.com/2016/11/new-paper-changing-pages-of-comics.html" target="_blank">how page layouts have changed in superhero comics over the past 80 years</a>. Here, we look at how text-image interactions and storytelling methods have changed from the 1940s to 2010s in American superhero comics. Here's also a <a href="https://youtu.be/Rf7zg2OMscU" target="_blank">link to Ryan presenting</a> this work at Comic-Con International a few years ago.<br />
<br />
<b>Path salience in motion events from verbal and visual languages</b> (<a href="https://mindmodeling.org/cogsci2017/papers/0348/paper0348.pdf" target="_blank">pdf</a>) - In this corpus study we examined how paths are depicted in 35 different comics from 6 different countries around the world. We found that the patterns of paths differed along dimensions similar to what is found in distinctions of those authors' <i>spoken </i>languages, hinting at possible connections between a visual language that one draws and the spoken language one speaks or writes.<br />
<br />
<a href="http://www.thevisuallinguist.com/2017/09/new-paper-not-so-secret-agents.html" target="_blank"><b>Not so secret agents</b></a> (<a href="http://visuallanguagelab.com/P/2017.BC.NCMPMK.pdf" target="_blank">pdf</a>) - This paper with Marta Kutas looked at the brain processes of certain postures of characters in events. We found that preparatory postures (like reaching back to throw a ball or to punch) differed from those that did not hint at such subsequent events.<br />
<br />
Not a bad collection, if I do say so myself. I'm already excited about the new work set to come out next year, so stay tuned. All these papers and more are available <a href="http://visuallanguagelab.com/papers.html" target="_blank">online here</a>.<br />
<br />
<br />Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0tag:blogger.com,1999:blog-19586719.post-77563382927649839742017-09-23T09:43:00.001-07:002017-09-23T09:43:46.296-07:00New paper: Not so secret agents<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.visuallanguagelab.com/images/forum/marvelway.gif" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" src="http://www.visuallanguagelab.com/images/forum/marvelway.gif" data-original-height="231" data-original-width="324" height="228" width="320" /></a></div>
I'm excited to announce a new paper, "Not so secret agents: Event-related potentials to semantic roles in visual event comprehension," in the journal <i><a href="http://www.sciencedirect.com/science/article/pii/S0278262617301562" target="_blank">Brain and Cognition</a></i>. This paper was done during my time in the lab of my co-author, <a href="http://kutaslab.ucsd.edu/" target="_blank">Marta Kutas</a>, and collaborating with my friend from grad school, co-author <a href="https://www.researchgate.net/profile/Martin_Paczynski3" target="_blank">Martin Paczynski</a>. <br />
<br />
This paper is <a href="http://www.visuallanguagelab.com/P/NC_agents&patients.pdf" target="_blank">a follow up of a study Martin and I did previously</a> that found that agents-to-be, the doers of actions, elicit more predictions about subsequent events than patients-to-be, the receivers of actions. For example, an agent-to-be would be a person reaching back their arm to punch (like in this image from the classic <i>How to Draw Comics the Marvel Way</i>), which will convey more information about that upcoming event than the patient-to-be (who is about to be punched).<br />
<br />
In this follow-up, we measured participants' brainwaves to see whether this type of "agent advantage" appears when comparing agents shown in preparatory postures both against patients and against agents where we took away the preparatory postures. So, instead of reaching back to punch, the agent's arm would instead hang at their side, not indicating an upcoming punch. We found that preparatory postures indeed appear to be more costly to process prior to an action, and appear to have a downstream influence on processing the subsequent action.<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgprLaxUlZ0pztdoT6ylnNsCMX_is-6uKBAbx-9nl0qQRS7zS_RJG33KY_Rkly06msaKBgNV6vUUZo1fff896dnAIrm5B-sdeP963gn_i7mdcTMkYOcWQ-uVSNWppimgXD98VO9/s1600/IMG_3351.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="1200" data-original-width="1600" height="240" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgprLaxUlZ0pztdoT6ylnNsCMX_is-6uKBAbx-9nl0qQRS7zS_RJG33KY_Rkly06msaKBgNV6vUUZo1fff896dnAIrm5B-sdeP963gn_i7mdcTMkYOcWQ-uVSNWppimgXD98VO9/s320/IMG_3351.jpg" width="320" /></a></div>
The paper is available on my <a href="http://visuallanguagelab.com/papers.html" target="_blank">downloadable papers page</a> or at this <a href="http://visuallanguagelab.com/P/2017.BC.NCMPMK.pdf" target="_blank">direct pdf link</a>, and is summarized concisely in Experiment 1 of this <a href="http://visuallanguagelab.com/posters/cns2016.html" target="_blank">poster</a>, which has subsequently made for some keen pillows on my couch (right).<br />
<br />
<u>Abstract:</u><br />
<br />
<i>Research across domains has suggested that agents, the doers of actions, have a processing advantage over patients, the receivers of actions. We hypothesized that agents as “event builders” for discrete actions (e.g., throwing a ball, punching) build on cues embedded in their preparatory postures (e.g., reaching back an arm to throw or punch) that lead to (predictable) culminating actions, and that these cues afford frontloading of event structure processing. To test this hypothesis, we compared event-related brain potentials (ERPs) to averbal comic panels depicting preparatory agents (ex. reaching back an arm to punch) that cued specific actions with those to non-preparatory agents (ex. arm to the side) and patients that did not cue any specific actions. We also compared subsequent completed action panels (ex. agent punching patient) across conditions, where we expected an inverse pattern of ERPs indexing the differential costs of processing completed actions as a function of preparatory cues. Preparatory agents evoked a greater frontal positivity (600–900 ms) relative to non-preparatory agents and patients, while subsequent completed action panels following non-preparatory agents elicited a smaller frontal positivity (600–900 ms). These results suggest that preparatory (vs. non-) postures may differentially impact the processing of agents and subsequent actions in real time.</i><br />
<br />
<br />
<u>Full reference:</u><br />
<br />
<b>Cohn, Neil, Martin Paczynski, and Marta Kutas. 2017. <a href="http://visuallanguagelab.com/P/2017.BC.NCMPMK.pdf" target="_blank">Not so secret agents: Event-related potentials to semantic roles in visual event comprehension</a>. <i>Brain and Cognition</i> 119: 1–9.<br />
</b>Neil Cohnhttp://www.blogger.com/profile/03705933006220475644noreply@blogger.com0