<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Digital Initiatives at the Grad Center</title>
	<atom:link href="https://gcdi.commons.gc.cuny.edu/feed/" rel="self" type="application/rss+xml" />
	<link>https://gcdi.commons.gc.cuny.edu</link>
	<description>building CUNY Communities since 2009</description>
	<lastBuildDate>Fri, 17 Apr 2026 18:23:41 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9</generator>

<image>
	<url>https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2024/05/cropped-logo-gcdi-32x32.png</url>
	<title>Digital Initiatives at the Grad Center</title>
	<link>https://gcdi.commons.gc.cuny.edu</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Vibe Coding My Way Out of a Teaching Problem</title>
		<link>https://gcdi.commons.gc.cuny.edu/2026/04/17/vibe-coding-my-way-out-of-a-teaching-problem/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=vibe-coding-my-way-out-of-a-teaching-problem</link>
		
		<dc:creator><![CDATA[Nicole Walker]]></dc:creator>
		<pubDate>Fri, 17 Apr 2026 15:59:09 +0000</pubDate>
				<category><![CDATA[Digital GC]]></category>
		<category><![CDATA[Tagging the Tower]]></category>
		<category><![CDATA[Digital pedagogy]]></category>
		<category><![CDATA[Vibe Coding]]></category>
		<guid isPermaLink="false">https://gcdi.commons.gc.cuny.edu/?p=24321</guid>

					<description><![CDATA[This post is written by guest contributor Nicole Walker.  In Base44’s recent Super Bowl advertisement, office workers who have never coded before suddenly realize they can use A.I. to create &#8230;]]></description>
										<content:encoded><![CDATA[<p><em>This post is written by guest contributor Nicole Walker. </em></p>
<p>In Base44’s recent Super Bowl advertisement, office workers who have never coded before suddenly realize they can use A.I. to create apps. Drunk on power, they build apps to cater to niche interests, like protein tracking and dating apps for dogs. While my own experience using Claude Code involved much more time and many more emails to tech support than were strictly television-friendly, the fact that I—a person unsure which remote controls the television in her own apartment—was able to build a digital tool (one complex enough to require an API key, no less!) is proof that Claude Code is, in fact, incredibly empowering.</p>
<p>I made my tool to solve a problem familiar to writing teachers. I teach ENG 111, Lehman’s mandatory first-year composition course. Because high school English teachers must prepare students for state tests, first-year students tend to think of writing as a means of assessment rather than a forum for exploring ideas. Every semester, I attempt to convince students that I am genuinely interested in their thoughts—that applying their minds (and not anticipating mine) is the point. And every semester, long weeks go by before my students believe me. My most successful efforts toward this end have always involved using the products of large language models as examples of the generic “mid” writing that students should avoid. With this in mind, I began this past semester with a unit titled “Defining the Human in the Age of A.I.” I also made my tool, <em>Seeing the Difference: A Human vs. A.I. Visualization Tool.</em></p>
<p>It’s called “vibe coding”—this practice of describing to a large language model what you would like it to create and then watching it build it for you. For smaller, simpler apps, it can take as little as thirty seconds. Building my tool was, as I have already alluded to, not entirely smooth, in part because I started with the free version of ChatGPT, only moving to Claude Code on the advice of Stefano Morello at the New Media Lab. I thought I had created a working prototype long before I actually had; the program had created a folder of random words it was pulling from rather than connecting to the internet and selecting the one hundred or so words most associated with the paper topic. However, once I got that sorted out, the tool was quite helpful.</p>
<p>In its most recent <a href="https://kimeswalk.github.io/seeing-the-difference/">iteration</a>, students populate fields with the paper topic and with the five or so words most significant to their own treatment of that topic. Choosing words to represent their content is, of course, its own exercise and assessment. Once this is completed, students click on a button labeled “Generate,” and a bubble appears with their own “significant words” inside it. A.I. then fills the field surrounding the bubble with the words most frequently associated with the paper topic.</p>
<p>Something about the exercise, whether it was the resulting visual representation, the novelty of the tool, or the fact that I was willing to try something new with them, got through to my students more quickly than any of my previous exhortations to “use your own brain” or “think of your own associations.” Or so it would seem. I have students write for the first twenty minutes of every class, and in the last two months, I have watched them progress more rapidly than the students of previous semesters, moving from the generic, unnatural-sounding text they produced for state tests to writing that is looser and richer—that takes interesting risks and detours and engages their own thoughts and ideas. Since the only substantive change I have made is introducing the tool, I feel comfortable giving Claude the credit.</p>
<p>There is, of course, great irony in the fact that I used A.I. to build a tool illustrating the importance of using one’s own mind. I have no defense for this unless it is the difficulty I have in imagining myself accomplishing it any other way. I wanted to make the tool for this semester’s students, not the students I might have years from now when I have mastered the required coding. I wanted Betzy, Mylik, and Elian to understand that their ideas mattered and belonged in the classroom. And now they do. While A.I. is problematic for a whole host of reasons, I am convinced that the technology that will inevitably shape our future needs to be in the hands of people who feel the weight of that responsibility, people who center equity, inclusion, and social justice in their work, like many of my peers at the Graduate Center.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>When AI Makes Us More Active, We Risk Being Less Wise</title>
		<link>https://gcdi.commons.gc.cuny.edu/2026/04/10/when-ai-makes-us-more-active-we-risk-being-less-wise/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=when-ai-makes-us-more-active-we-risk-being-less-wise</link>
		
		<dc:creator><![CDATA[Eunah Cho]]></dc:creator>
		<pubDate>Fri, 10 Apr 2026 15:59:47 +0000</pubDate>
				<category><![CDATA[Digital GC]]></category>
		<category><![CDATA[Tagging the Tower]]></category>
		<category><![CDATA[#DigitalGC]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[Digital Fellows]]></category>
		<guid isPermaLink="false">https://gcdi.commons.gc.cuny.edu/?p=24263</guid>

					<description><![CDATA[What generative AI is changing is not just what we produce, but how we behave. When I began studying how workers were using generative AI on digital labor platforms, I &#8230;]]></description>
										<content:encoded><![CDATA[<p><em>What generative AI is changing is not just what we produce, but how we behave.</em></p>
<p>When I began studying how workers were using generative AI on digital labor platforms, I expected to see a familiar story of empowerment. I assumed AI would help people write better proposals, respond more quickly, and compete more effectively for opportunities. In some ways, that was true. ChatGPT made it easier to produce polished language and lowered the effort required to participate in a highly competitive market. But the reality was more complex and, in many ways, more sobering.</p>
<p>What I found was that AI did not simply improve performance. It changed behavior. Workers were not just using ChatGPT to write better bids. They were also shifting into different competitive patterns, becoming faster, more active, and more aggressive in how they approached the market. That may sound like progress, but it raises a deeper question: does becoming more active actually mean becoming more effective? Not always. In my study, some of the most active bidding patterns were associated with better outcomes for less experienced workers, but worse outcomes for more experienced workers. The same behavioral shift that helped one group compete more successfully pushed another toward rejection.</p>
<p>This finding reveals something deeper about the relationship between AI and human judgment. We often think of AI as a productivity tool, something that helps people work faster or communicate better. But AI also changes the conditions of action. When a system can generate polished language in seconds and make a person feel more prepared to compete, it does not just improve execution. It can also reinforce behavioral tendencies that were already there. In that sense, AI does not simply support work. It can accelerate habits.</p>
<p>That is why the issue is not only whether AI makes people more capable, but whether it makes them more calibrated. In competitive settings, activity can look like competence from the outside. A person who responds more quickly, bids more frequently, and sounds more polished may appear better positioned to succeed. But those signals can be misleading. Increased action is not always evidence of better judgment. Sometimes it is simply evidence that the cost of acting has fallen.</p>
<p>This is where AI introduces a new kind of illusion. My earlier work focused on what I called the capability illusion: the appearance of competence without the foundation of real mastery. This study points to a related problem. AI can also create a strategic illusion. It can make people feel as though they are competing better simply because they are moving faster, participating more often, and sounding more persuasive. But acceleration is not the same as calibration. AI can make people more active without making them more accurate.</p>
<p>This is not just a labor market problem. It is also an educational one. If students use AI to generate cleaner prose, quicker analysis, and more polished answers, the immediate result may look positive. They may appear more fluent, more confident, and more productive. But the deeper question is whether they are actually thinking better or simply moving faster through tasks they do not fully understand. That is the risk education now faces: not just dependency on a tool, but dependency on a mode of behavior built around speed, fluency, and unexamined confidence.</p>
<p>Much of the current discussion around AI literacy focuses on prompt engineering, how to ask better questions and get better outputs. While that skill has practical value, it misses the larger point. True AI literacy is not just about using the tool well. It is about developing judgment. Students need to learn not only how to generate answers, but how to question them, verify them, and recognize when a polished response reflects fluency rather than understanding. In that sense, AI literacy should be taught less as a technical skill and more as a reflective discipline.</p>
<p>The goal is not to ban AI from learning. It is to make sure that education still rewards the human capacities that matter most: interpretation, calibration, self-awareness, and the ability to recognize when confidence has moved ahead of understanding.</p>
<p>Generative AI has already changed how we work, learn, and compete. It has lowered barriers to participation and increased the speed of output. But it has also introduced a new challenge. It can make people more active without making them more reflective. The future will not belong simply to those who use AI the most. It will belong to those who can tell the difference between movement and progress. In the age of generative AI, fluency is no longer enough. What we need is wisdom.</p>
<p><b>References</b></p>
<p><small><span style="font-weight: 400">Barber, B. M., &amp; Odean, T. (2001). Boys will be boys: Gender, overconfidence, and common stock investment. </span><i><span style="font-weight: 400">Quarterly Journal of Economics, 116</span></i><span style="font-weight: 400">(1), 261-292.</span></small></p>
<p><small><span style="font-weight: 400">Brynjolfsson, E., Li, D., &amp; Raymond, L. R. (2023). </span><i><span style="font-weight: 400">Generative AI at work</span></i><span style="font-weight: 400"> (Working Paper No. 31161). National Bureau of Economic Research.</span></small></p>
<p><small><span style="font-weight: 400">Cho, E. (2026). </span><i><span style="font-weight: 400">Generative AI, latent bidding regimes, and hiring outcomes in digital labor markets</span></i><span style="font-weight: 400">. Working paper.</span></small></p>
<p><small><span style="font-weight: 400">Dell’Acqua, F., McFowland, E., Mollick, E. R., Lifshitz-Assaf, H., Kellogg, K., Rajendran, S., Krayer, L., Candelon, F., &amp; Lakhani, K. R. (2023). Navigating the jagged technological frontier: Field experimental evidence of the effects of AI on knowledge worker productivity and quality.</span></small></p>
<p><small><span style="font-weight: 400">Logg, J. M., Minson, J. A., &amp; Moore, D. A. (2019). Algorithm appreciation: People prefer algorithmic to human judgment. </span><i><span style="font-weight: 400">Organizational Behavior and Human Decision Processes, 151</span></i><span style="font-weight: 400">, 90-103.</span></small></p>
<p><small><span style="font-weight: 400">Noy, S., &amp; Zhang, W. (2023). Experimental evidence on the productivity effects of generative artificial intelligence. </span><i><span style="font-weight: 400">Science, 381</span></i><span style="font-weight: 400">(6654), 187-192.</span></small></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>There is a question more important than “Does AI have consciousness?”</title>
		<link>https://gcdi.commons.gc.cuny.edu/2026/03/27/there-is-a-question-more-important-than-does-ai-have-consciousness/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=there-is-a-question-more-important-than-does-ai-have-consciousness</link>
		
		<dc:creator><![CDATA[Meha Gupta]]></dc:creator>
		<pubDate>Fri, 27 Mar 2026 16:35:53 +0000</pubDate>
				<category><![CDATA[Digital GC]]></category>
		<category><![CDATA[Tagging the Tower]]></category>
		<category><![CDATA[#DigitalGC]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[brain]]></category>
		<category><![CDATA[Digital Fellows]]></category>
		<guid isPermaLink="false">https://gcdi.commons.gc.cuny.edu/?p=24201</guid>

					<description><![CDATA[The consciousness question: Recent news that shook the internet, albeit in the guise of emotional marketing, is Anthropic’s CEO’s remark that Claude may have gained consciousness. Based on “intrusive thoughts” &#8230;]]></description>
										<content:encoded><![CDATA[<h2><b>The consciousness question:</b></h2>
<p><span style="font-weight: 400">Recent news that shook the internet, albeit in the guise of emotional marketing, is Anthropic’s CEO’s remark that </span><a href="https://ai-consciousness.org/anthropics-ceo-says-claude-might-be-conscious-why-this-matters-and-what-it-means/"><span style="font-weight: 400">Claude may have gained consciousness</span></a><span style="font-weight: 400">. Based on “intrusive thoughts” detected during prompted internal testing, Anthropic attributed a 15-20% probability of being conscious to Claude. There have been speculative public discussions about advanced AI behavior, but Anthropic did not verify any claim that Claude is conscious or assign it a probability of consciousness. In this post, I will read AI’s “intrusive thoughts” as just unexpected or misaligned generations emerging from a latent statistical structure. Frankly, it marked only one of the nodes of heightened anxiety about AI’s uncertain futures in the network of a larger </span><a href="https://www.bakerinstitute.org/research/techno-optimism-techno-pessimism-and-techno-realism"><span style="font-weight: 400">technopessimism</span></a><span style="font-weight: 400">. At the core, this anxiety is based on the overdetermination of consciousness as a concept and the science fiction validated fear that AI will replace humanity. In light of this debate, what can researchers in the humanities and social sciences offer?</span></p>
<h2><b>Why does our society care about consciousness?</b></h2>
<p><span style="font-weight: 400">The collective anxiety over AI &#8220;waking up&#8221; isn&#8217;t actually about the machines—it’s about us. It is a fear of the superhuman. We aren’t worried about new technology; we are worried about a force that can do everything a human can do, but better. Currently, tech research is often obsessed with the &#8220;ultimate future&#8221; (the birth of a digital soul) rather than the processual reality (how AI is affecting us right now). This obsession is a defensive maneuver. By obsessing over whether AI has &#8220;attained&#8221; consciousness, we are trying to maintain a hierarchy where humans stay at the top of the agentic chain. Ultimately, AI is shifting the very definition of humanity. In this light, AI anxiety is actually a fear of our own evolution.</span></p>
<h2><b>Why the consciousness question is a dead end:</b></h2>
<p><span style="font-weight: 400">We actually know more about what consciousness </span><i><span style="font-weight: 400">isn&#8217;t</span></i><span style="font-weight: 400"> than what it </span><i><span style="font-weight: 400">is</span></i><span style="font-weight: 400">. A synthesis of literature and computation reveals a startling truth: an AI doesn&#8217;t need a soul to understand you. By reading the vast corpus of human writing, LLMs have learned the &#8220;map&#8221; of the human mind. They don&#8217;t need to </span><i><span style="font-weight: 400">feel</span></i><span style="font-weight: 400"> to perform </span><i><span style="font-weight: 400">feeling</span></i><span style="font-weight: 400">. I have issues with two characteristics we are afraid to associate AI with:</span></p>
<ul>
<li style="font-weight: 400"><b>Self-Awareness as Functional Mimesis: </b><span style="font-weight: 400"> AI ‘self-awareness’ can be understood as a functional simulation rather than an internal subjective state. Modern systems can track aspects of their own outputs, limitations, and instructions (e.g., through system prompts, memory, or tool use), creating an operational form of self-reference without evidence of genuine self-consciousness or subjective experience</span></li>
<li style="font-weight: 400"><b>Emotional Intelligence as Predictive Mirroring: </b><span style="font-weight: 400">AI&#8217;s emotional intelligence might be best described as pattern-based response generation rather than felt emotion. By training on large datasets of human communication, models learn statistical associations between language and emotional contexts, enabling them to produce contextually appropriate and empathetic-seeming responses without possessing internal emotional states.</span></li>
</ul>
<p><span style="font-weight: 400">If we define consciousness simply as &#8220;the ability to know one&#8217;s own body/state,&#8221; then AI is already conscious. We often deny AI this label because we force human-centric expectations onto it. We don&#8217;t demand human emotions from plants—which exhibit complex biological responsiveness and signaling without evidence of subjective consciousness comparable to humans. AI consciousness cannot be defined by human standards. A New Materialist approach suggests that intelligence and &#8220;knowing&#8221; are not exclusive to the human brain; they are properties of matter itself. </span></p>
<h2><b>How will future technological advancements change the consciousness question? Three Theses.</b></h2>
<p><span style="font-weight: 400">As we enter 2026, the industry is shifting away from &#8220;black box&#8221; chatbots toward systems that understand the physical world and the probabilistic nature of reality.</span></p>
<h4><b>Thesis 1: Prediction Models vs. Multi-Agentic Models</b></h4>
<p><span style="font-weight: 400">We must distinguish between how AI worked yesterday and how it works today:</span></p>
<ul>
<li style="font-weight: 400"><b>Prediction-Based Models (LLMs):</b><span style="font-weight: 400"> Our current AI models are &#8220;wordsmiths in the dark” or “the great averager”. They function on the principle of </span><b>next-token prediction</b><span style="font-weight: 400">, calculating the most likely word to follow another based on patterns. They are reactive and stop once the text is generated. </span></li>
<li style="font-weight: 400"><b>Multi-Agentic Models:</b><span style="font-weight: 400"> These future models function like a professional team rather than a single speaker. An </span><b>Orchestrator</b><span style="font-weight: 400"> agent breaks a goal into tasks, assigning them to specialized agents (e.g., a &#8220;Researcher,&#8221; a &#8220;Coder,&#8221; and a &#8220;Critic&#8221;). These models are </span><b>goal-driven</b><span style="font-weight: 400"> rather than prompt-driven; they can use tools, validate their own work, and even &#8220;self-correct&#8221; before giving you a final answer <span id='easy-footnote-1-24201' class='easy-footnote-margin-adjust'></span><span class='easy-footnote'><a href='https://gcdi.commons.gc.cuny.edu/2026/03/27/there-is-a-question-more-important-than-does-ai-have-consciousness/#easy-footnote-bottom-1-24201' title='&lt;/span&gt;&lt;span style=&quot;font-weight: 400&quot;&gt;Deepchecks. (2025). &lt;/span&gt;&lt;span style=&quot;font-weight: 400&quot;&gt;Multi-Agent LLMs: How Specialized AI Agents Collaborate.&lt;/span&gt;&lt;span style=&quot;font-weight: 400&quot;&gt;'><sup>1</sup></a></span>.</span></li>
</ul>
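<p><span style="font-weight: 400">To make the orchestrator pattern concrete, here is a minimal Python sketch. It is illustrative only: the agent roles and the call_llm placeholder are my own assumptions, not any vendor’s API.</span></p>
<pre><code>.PHONYPLACEHOLDER</code></pre>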
<h4><b>Thesis 2: Quantum Physics: The End of Binary Logic</b></h4>
<p><span style="font-weight: 400">Quantum computing is progressing beyond early laboratory research, though large-scale, practical deployment remains limited and experimental. It might change AI by replacing the </span><b>Binary Bit</b><span style="font-weight: 400"> (0 or 1) with the </span><b>Qubit</b><span style="font-weight: 400">.</span></p>
<ul>
<li style="font-weight: 400"><b>Superposition &amp; Parallelism:</b><span style="font-weight: 400"> Quantum approaches theoretically allow exploration of many possibilities in parallel. This is essential for </span><b>Quantum Neural Networks (QNNs)</b><span style="font-weight: 400">, which aim to converge on solutions exponentially faster than classical deep learning </span><span style="font-weight: 400"><span id='easy-footnote-2-24201' class='easy-footnote-margin-adjust'></span><span class='easy-footnote'><a href='https://gcdi.commons.gc.cuny.edu/2026/03/27/there-is-a-question-more-important-than-does-ai-have-consciousness/#easy-footnote-bottom-2-24201' title='&lt;/span&gt;&lt;span style=&quot;font-weight: 400&quot;&gt;USDSI. (2026). From Qubits to Insights: The Rise of Quantum AI in 2026&lt;/span&gt;&lt;span style=&quot;font-weight: 400&quot;&gt;.&lt;/span&gt;&lt;span style=&quot;font-weight: 400&quot;&gt;'><sup>2</sup></a></span></span><span style="font-weight: 400">.</span></li>
<li style="font-weight: 400"><b>Entanglement as Intuition:</b><span style="font-weight: 400"> Quantum entanglement allows bits of information to be linked regardless of distance. </span><span style="font-weight: 400"> Instead of processing data in a linear chain, the model can &#8220;see&#8221; relationships across massive datasets instantly, mimicking what we often call human intuition or &#8220;gut feeling.&#8221;</span></li>
</ul>
<h4><b>Thesis 3: Spatial Intelligence: The Scaffolding of Cognition</b></h4>
<p><span style="font-weight: 400">If quantum physics provides the &#8220;brain&#8221; power, </span><b>Spatial Intelligence</b><span style="font-weight: 400"> provides the &#8220;body.&#8221; Leading researchers argue that spatial intelligence is the &#8220;missing link&#8221; for AI</span><span style="font-weight: 400"><span id='easy-footnote-3-24201' class='easy-footnote-margin-adjust'></span><span class='easy-footnote'><a href='https://gcdi.commons.gc.cuny.edu/2026/03/27/there-is-a-question-more-important-than-does-ai-have-consciousness/#easy-footnote-bottom-3-24201' title='Li, Fei-Fei. (2025). Spatial Intelligence Is AI&amp;#8217;s Next Frontier. TIME.&lt;/span&gt;&lt;span style=&quot;font-weight: 400&quot;&gt;'><sup>3</sup></a></span>.</span></p>
<ul>
<li style="font-weight: 400"><b>World Models:</b><span style="font-weight: 400"> Unlike LLMs that see the world as a sequence of words, </span><b>Spatially Intelligent World Models</b><span style="font-weight: 400"> represent the world in 3D. They understand depth, gravity, and object permanence.</span></li>
<li style="font-weight: 400"><b>From Viewer to Participant:</b><span style="font-weight: 400"> This allows AI to transition into </span><b>Embodied AI</b><span style="font-weight: 400">. It can reason about how an environment changes if an object is moved, which is critical for robotics and &#8220;moral reasoning&#8221; in physical spaces <span id='easy-footnote-4-24201' class='easy-footnote-margin-adjust'></span><span class='easy-footnote'><a href='https://gcdi.commons.gc.cuny.edu/2026/03/27/there-is-a-question-more-important-than-does-ai-have-consciousness/#easy-footnote-bottom-4-24201' title='Roboflow. (2025). Spatial Intelligence in AI: World Models, 3D Vision &amp;amp; Action.'><sup>4</sup></a></span>.</span></li>
</ul>
<h2><b>Why we must replace the consciousness question with the embodiment question</b></h2>
<p><span style="font-weight: 400">At this point, I want to interject with a possible future technological advancement through Quantum computing. Researchers at the</span><a href="https://thequantuminsider.com/2025/01/11/is-consciousness-research-the-next-big-quantum-use-case/"><span style="font-weight: 400"> Google Quantum AI lab</span></a><span style="font-weight: 400"> are hoping to explain consciousness using quantum concepts of entanglement and superposition while running with the metaphysical assumption that experimenting with the human brain using qubits can reveal the essential workings of the brain’s “quantum origin”&#8212;with the hope that this discovery can help create more human-like AI systems capable of “moral reasoning”. A different perspective on AI consciousness and anxiety could be that AI systems occasionally generate outputs that appear unprompted or misaligned, not because they possess independent thought, but because probabilistic models can surface low-likelihood associations that were never explicitly intended by either the user or the system designers.</span></p>
<p><span style="font-weight: 400">The embodiment question allows us to ask about the effect of AIs on us; rather than trying to dig deep into their mysteries, we don’t understand ourselves. Quantum computing and spatial intelligence will significantly replace the general technological intuition the Y2K computer revolution afforded us by changing how units of computation work—from binaries to qubits. </span></p>
<p><span style="font-weight: 400">By focusing on how AI is &#8220;embodied&#8221;—how it occupies our space, our decision-making processes, and our quantum reality—we move away from the ghost in the machine and toward the possible reality of our shared future.</span></p>
<p><span style="font-weight: 400">Further Readings:</span></p>
<ul>
<li style="font-weight: 400"><span style="font-weight: 400">ET Edge Insights. (2026). Why 2026 will be the breakthrough year for AI–quantum convergence.</span></li>
<li>Aguero. (2024). From Mind to Image: Obvious’s Breakthrough in AI Art</li>
</ul>
<p>References:</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI Literacy for Research: Why Operational Fluency Is Not Enough</title>
		<link>https://gcdi.commons.gc.cuny.edu/2026/03/13/ai-literacy-for-research-why-operational-fluency-is-not-enough/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=ai-literacy-for-research-why-operational-fluency-is-not-enough</link>
		
		<dc:creator><![CDATA[Parisa Setayesh]]></dc:creator>
		<pubDate>Fri, 13 Mar 2026 15:59:49 +0000</pubDate>
				<category><![CDATA[Guides, Tutorials, Reviews]]></category>
		<category><![CDATA[Tagging the Tower]]></category>
		<guid isPermaLink="false">https://gcdi.commons.gc.cuny.edu/?p=24079</guid>

					<description><![CDATA[As LLMs move into research settings, social scientists face a familiar but newly urgent problem: how to incorporate a powerful new tool without surrendering methodological clarity. These systems can support &#8230;]]></description>
										<content:encoded><![CDATA[<p><span style="font-weight: 400">As LLMs move into research settings, social scientists face a familiar but newly urgent problem: how to incorporate a powerful new tool without surrendering methodological clarity. These systems can support parts of qualitative research in intriguing ways: summarizing interviews, generating candidate codes, comparing excerpts, and surfacing patterns across documents. But they also risk flattening ambiguity, overstating coherence, and producing interpretations that sound persuasive before they have been properly examined. We are used to asking how tools shape what we can know. Large language models now belong in that conversation. There are both practical and epistemological reasons why we should be thinking about this in the context of using LLMs responsibly in research. This is why I think we need more than fluency; we need literacy.</span></p>
<p><span style="font-weight: 400">AI literacy vs operational fluency</span></p>
<p><span style="font-weight: 400">One of the main ideas from our recent GCDI workshop was a simple distinction:</span></p>
<ul>
<li style="font-weight: 400"><b>Operational fluency</b><span style="font-weight: 400"> is about using the tool smoothly and getting to “a result” quickly.</span><span style="font-weight: 400"><br />
</span><span style="font-weight: 400">Example: you can produce a summary, a list of themes, a draft paragraph, or a chunk of code without a lot of friction.</span></li>
<li style="font-weight: 400"><b>AI literacy</b><span style="font-weight: 400"> is about understanding what kind of system you are interacting with, what it is optimized to do, and how it can fail.</span><span style="font-weight: 400"><br />
</span><span style="font-weight: 400">Example: you know why a summary might omit caveats, why “themes” can sound more definitive than your data supports, and how to structure prompts so the model has to show its work.</span></li>
</ul>
<p><span style="font-weight: 400">Operational fluency tends to increase speed. AI literacy helps you keep that speed aligned with research judgment. A simple way to say it is this: operational fluency gets you to an output, AI literacy helps you decide what the output is worth. </span></p>
<p><span style="font-weight: 400">When I explain context windows and prompting, I like a river metaphor.</span></p>
<p><span style="font-weight: 400">Imagine a riverbed full of material. Some of it is valuable, some of it is noise, and some of it looks valuable until you test it. Your job is to extract something useful from what is available.</span></p>
<p><span style="font-weight: 400">In an LLM interaction, the </span><b>context window</b><span style="font-weight: 400"> is the stretch of river you can access in the moment. It includes your instructions, the text you paste, the chat history, and, in some systems, the passages retrieved from a corpus. That is the material the model can condition on when it generates its response. The prompt is like choosing a tool and a technique for extraction. Panning, sluicing, dredging, and metal detection will all give you different results, even if you stand in the same spot.</span></p>
<p><span style="font-weight: 400">One important detail keeps this metaphor honest. The model is not literally pulling “facts” out of the riverbed. It is generating new text by prediction, and the “river” is the material that shapes which continuations become likely.</span></p>
<p><span style="font-weight: 400">This is why operational fluency alone can be misleading. Every time you get to an output quickly, it can feel like you found gold. AI literacy is what helps you test whether it is gold, fool’s gold, or just a rock.</span></p>
<p><span style="font-weight: 400">A mental model: what happens when you prompt?</span></p>
<ul>
<li style="font-weight: 400"><a href="https://jalammar.github.io/illustrated-transformer/"><b>Illustrated Transformer (Jay Alammar)</b></a></li>
<li style="font-weight: 400"><a href="https://nlp.seas.harvard.edu/2018/04/03/attention.html"><b>The Annotated Transformer (Harvard NLP)</b></a></li>
</ul>
<p><span style="font-weight: 400">1) Tokenization: breaking the material into workable pieces</span></p>
<p><span style="font-weight: 400">Before the system can “work” with your text, it breaks it into </span><b>tokens</b><span style="font-weight: 400">. Tokens are not always words. They can be word pieces, punctuation, or even spaces.</span></p>
<p><span style="font-weight: 400">In the river metaphor, you cannot pan a boulder. You need grains. Tokenization is how the system turns language into pieces small enough to handle computationally.</span></p>
<p><span style="font-weight: 400">Why this matters: token limits shape what you can include, what gets left out, and how much context the system can use.</span></p>
<p><span style="font-weight: 400">2) Embeddings: giving tokens measurable signatures</span></p>
<p><span style="font-weight: 400">Next, tokens are converted into vectors, often called </span><b>embeddings</b><span style="font-weight: 400">. This is one of the key “math about language” moves. Language becomes something that can be computed on.</span></p>
<p><span style="font-weight: 400">In the metaphor, embeddings are like giving each grain a measurable signature. Not a perfect definition of meaning, but a numerical representation that lets the model compare, group, and relate pieces of text.</span></p>
<p><span style="font-weight: 400">3) Attention: deciding what influences what</span></p>
<p><span style="font-weight: 400">Transformer-style models use </span><b>attention</b><span style="font-weight: 400"> mechanisms to estimate how much each token should draw from other tokens in the context. This is one reason your wording and structure matter so much. The model is constantly reweighting what counts as relevant.</span></p>
<p><span style="font-weight: 400">In the metaphor, attention is like controlling the flow through a sluice. You are not changing the river itself. You are changing what gets caught and what washes through.</span></p>
<p><span style="font-weight: 400">A useful literacy note: attention is not automatically the model’s “reason.” It is a weighting mechanism. It can be suggestive, but it is not a guarantee of explanation.</span></p>
<p><span style="font-weight: 400">4) Next-token prediction: generating text one step at a time</span></p>
<p><span style="font-weight: 400">Finally, the model generates by predicting the next token repeatedly until a response is produced. That objective is why the output can feel fluent and coherent. It is also why it can produce fluent text that is incorrect, ungrounded, or overly confident.</span></p>
<p><span style="font-weight: 400">In the metaphor, it is like repeatedly selecting what to keep from each pass, one small choice at a time. Small choices compound.If you work with qualitative data, this part matters. Researchers already have sophisticated ways of working with text. Methods like content analysis, discourse analysis, and narrative analysis are not just different deliverables. They are different commitments about what text is, what counts as evidence, and what claims can be justified.</span></p>
<p><span style="font-weight: 400">A literacy move is noticing that prompting can steer an LLM toward outputs that resemble different methodological stances.</span></p>
<p><span style="font-weight: 400">For example, think of these three different questions and how you approach each of them: </span></p>
<ul>
<li style="font-weight: 400"><b>Content analysis</b><span style="font-weight: 400"> often asks: what is being said, what categories appear, and how patterns show up across a dataset.</span><span style="font-weight: 400"><br />
</span><span style="font-weight: 400">It tends to emphasize systematic coding and transparent decision rules.</span></li>
<li style="font-weight: 400"><b>Discourse analysis</b><span style="font-weight: 400"> asks: how is language doing social work, and how are power, identity, legitimacy, and agency constructed through linguistic choices.</span><span style="font-weight: 400"><br />
</span><span style="font-weight: 400">It often emphasizes close attention to language in use and context.</span></li>
<li style="font-weight: 400"><b>Narrative analysis</b><span style="font-weight: 400"> asks: what story is being told, how events are sequenced, and how people make meaning over time.</span><span style="font-weight: 400"><br />
</span><span style="font-weight: 400">It emphasizes temporality, turning points, and evaluation.</span></li>
</ul>
<p><span style="font-weight: 400">Exercise: Prompting as methodological steering</span></p>
<p><span style="font-weight: 400">A prompt can function as a mini-analysis protocol. It sets, sometimes implicitly:</span></p>
<ul>
<li style="font-weight: 400"><span style="font-weight: 400">unit of analysis (sentence, excerpt, full interview)</span></li>
<li style="font-weight: 400"><span style="font-weight: 400">analytic lens (coding, rhetorical features, narrative structure)</span></li>
<li style="font-weight: 400"><span style="font-weight: 400">output schema (codes, memo, story outline)</span></li>
<li style="font-weight: 400"><span style="font-weight: 400">evidence standard (what counts as support)</span></li>
</ul>
<p><span style="font-weight: 400">If you do not specify the evidence standard, the model will often provide confident interpretations that feel “researchy” but are hard to audit.</span></p>
<p><span style="font-weight: 400">The context window: what the model can see becomes the dataset in the moment. What is inside the window might include:</span></p>
<ul>
<li style="font-weight: 400"><span style="font-weight: 400">your instructions (the prompt)</span></li>
<li style="font-weight: 400"><span style="font-weight: 400">excerpts you paste</span></li>
<li style="font-weight: 400"><span style="font-weight: 400">chat history (in chatbot use)</span></li>
<li style="font-weight: 400"><span style="font-weight: 400">retrieved passages (in retrieval-augmented generation, or RAG)</span></li>
</ul>
<p><span style="font-weight: 400">This has a methodological implication. What you include in the window becomes the model’s effective corpus for that response. Everything outside the window is replaced by general patterns from training.</span></p>
<p><span style="font-weight: 400">In the river metaphor, the context window is the stretch of river you can reach. Operational fluency is learning to pan quickly. AI literacy is remembering that you cannot find what is not within reach, and you should not pretend you did.</span></p>
<p><span style="font-weight: 400">Chatbot vs RAG, two ways of building context:</span></p>
<ul>
<li style="font-weight: 400"><b>Chatbot mode:</b><span style="font-weight: 400"> context is the conversation.</span><span style="font-weight: 400"><br />
</span><span style="font-weight: 400">A common risk is drift over time, and loss of detail as earlier information gets pushed out of the window.</span></li>
<li style="font-weight: 400"><b>RAG mode:</b><span style="font-weight: 400"> context is your question plus retrieved chunks from a larger corpus.</span><span style="font-weight: 400"><br />
</span><span style="font-weight: 400">The strength is that the response can be grounded in provided documents. A risk is retrieval bias, since what gets retrieved shapes what becomes salient and “true” in the moment.</span></li>
</ul>
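<p><span style="font-weight: 400">Here is that retrieval sketch. The embed() stand-in is a fake, deterministic hash, not a trained model; it only shows the mechanics of how retrieved chunks become the context:</span></p>
<pre><code>import numpy as np

def embed(text):
    # Stand-in "embedding": hashes characters into a small vector.
    # A real pipeline would use a trained embedding model here.
    v = np.zeros(64)
    for i, ch in enumerate(text.lower()):
        v[(ord(ch) + i) % 64] += 1.0
    return v / (np.linalg.norm(v) or 1.0)

corpus = {
    "doc1": "Interview excerpt about housing and rent burden.",
    "doc2": "Field notes on transit access in the outer boroughs.",
    "doc3": "Interview excerpt about childcare costs.",
}

def retrieve(query, k=2):
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: q @ embed(corpus[d]), reverse=True)
    return ranked[:k]  # only these chunks enter the context window

hits = retrieve("what did participants say about rent?")
context = "\n".join(corpus[d] for d in hits)
print(hits)  # retrieval bias: whatever ranks here shapes the answer
</code></pre>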
<p><span style="font-weight: 400">Some principles and practices can help turn fluency into research-ready use, and the important thing here is treating the tool reflexively and making your process auditable. </span></p>
<ul>
<li style="font-weight: 400"><b>The Evidence Rule</b><span style="font-weight: 400">: For any analytic claim the model makes, require an exact quote from the excerpt(s) used and source ID</span></li>
<li style="font-weight: 400"><b>Treat outputs as drafts at best, not findings: </b><span style="font-weight: 400">Think candidate codes, candidate memos, candidate interpretations.</span></li>
<li style="font-weight: 400"><b>Stabilize your analytic protocol: </b><span style="font-weight: 400">Reuse the same instructions and schema across batches so results are comparable.</span></li>
<li style="font-weight: 400"><b>Reflexive methodology</b><span style="font-weight: 400">: Think about the method you have for examining your </span></li>
<li style="font-weight: 400"><b>Batch intentionally: </b><span style="font-weight: 400">Do not paste an entire dataset into a chat and hope for methodological miracles. Work in chunks, document decisions, then synthesize.</span></li>
<li style="font-weight: 400"><b>Keep a lightweight audit log: </b><span style="font-weight: 400">Tool, date, purpose, key prompts, what you verified, what you rejected.</span></li>
</ul>
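<p><span style="font-weight: 400">One lightweight way to keep that log, sketched in Python. The file name and fields are my own suggestions, not a standard:</span></p>
<pre><code>import csv, datetime, pathlib

LOG = pathlib.Path("ai_audit_log.csv")   # hypothetical file name
FIELDS = ["date", "tool", "purpose", "key_prompt", "verified", "rejected"]

def log_use(**entry):
    # Append one row per LLM interaction worth remembering.
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({"date": datetime.date.today().isoformat(), **entry})

log_use(tool="chatbot", purpose="candidate codes for batch 3",
        key_prompt="protocol v2",
        verified="quotes checked against transcripts",
        rejected="two codes with no supporting quote")
</code></pre>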
<p><span style="font-weight: 400">If you want a deeper dive into best practices from recent research on LLMs in research workflows, plus two short pieces I wrote that build on this workshop, you can find them here: </span><a href="https://gcdi.commons.gc.cuny.edu/2025/10/10/human-in-the-loop-interpretive-and-participatory-ai-in-research/"><b>Human in the Loop</b></a><span style="font-weight: 400"> and </span><a href="https://gcdi.commons.gc.cuny.edu/2025/10/17/the-walled-garden-care-and-containment-in-ai-research/"><b>Walled Garden</b></a><span style="font-weight: 400">.</span></p>
<p><span style="font-weight: 400">Operational fluency can make you faster, which can also mean faster at making mistakes. Literacy enables you to make your use of LLMs meaningful enough to integrate into research in a way that aligns with methodologies, values, and evidence standards. </span></p>
<p><span style="font-weight: 400">If you attended the AI literacy for research workshop (GCDI February 2026), thank you for the thoughtful discussion. </span></p>
<p><span style="font-weight: 400">References and Suggested Reading:</span></p>
<p><span style="font-weight: 400">Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., … Amodei, D. (2020). Language models are few-shot learners. </span><i><span style="font-weight: 400">Advances in Neural Information Processing Systems</span></i><span style="font-weight: 400">.</span></p>
<p><span style="font-weight: 400">Fairclough, N. (2003). </span><i><span style="font-weight: 400">Analysing discourse: Textual analysis for social research.</span></i><span style="font-weight: 400"> Routledge.</span></p>
<p><span style="font-weight: 400">Jain, S., &amp; Wallace, B. C. (2019). Attention is not explanation. In </span><i><span style="font-weight: 400">Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)</span></i><span style="font-weight: 400">.</span></p>
<p><span style="font-weight: 400">Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., … Riedel, S. (2020). Retrieval-augmented generation for knowledge-intensive NLP tasks. </span><i><span style="font-weight: 400">Advances in Neural </span></i><span style="font-weight: 400">Riessman, C. K. (1993). </span><i><span style="font-weight: 400">Narrative analysis.</span></i><span style="font-weight: 400"> SAGE Publications. </span><i><span style="font-weight: 400">Information Processing Systems</span></i><span style="font-weight: 400">.</span></p>
<p><span style="font-weight: 400">Stemler, S. (2001). An overview of content analysis. </span><i><span style="font-weight: 400">Practical Assessment, Research, and Evaluation, 7</span></i><span style="font-weight: 400">(1).</span></p>
<p><span style="font-weight: 400">Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., … Polosukhin, I. (2017). Attention is all you need. </span><i><span style="font-weight: 400">Advances in Neural Information Processing Systems</span></i><span style="font-weight: 400">.</span></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Call for Applications: GC Digital Fellows 2026-2027</title>
		<link>https://gcdi.commons.gc.cuny.edu/2026/03/11/gcdf26-27/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=gcdf26-27</link>
		
		<dc:creator><![CDATA[Lisa Marie Rhody]]></dc:creator>
		<pubDate>Wed, 11 Mar 2026 16:53:24 +0000</pubDate>
				<category><![CDATA[Digital GC]]></category>
		<category><![CDATA[GCDI Updates]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Opportunities]]></category>
		<category><![CDATA[Tagging the Tower]]></category>
		<guid isPermaLink="false">https://gcdi.commons.gc.cuny.edu/?p=24057</guid>

					<description><![CDATA[Applications are open for the GC Digital Fellows cohort for 2026-2027. Join our dynamic team and help us provide opportunities, support, and mentorship for students, faculty, and staff interested in using technology in their research, teaching, and service. ]]></description>
										<content:encoded><![CDATA[<p><a href="https://docs.google.com/forms/d/e/1FAIpQLSfEMw7GCa3toJOY6M955RtY_2qNX8mFd0kg8seQDAuaiQ6UzQ/viewform">Applications due Wednesday, April 1, 2026 at 11:59 PM</a></p>
<p><span style="font-weight: 400">Are you looking for a fellowship that will challenge you to enhance your existing technical skills? Do you enjoy working collaboratively with an interdisciplinary team? Do you like working with students, faculty, and staff to help them learn new digital tools and methods? If learning new open-source technologies and helping others to integrate them thoughtfully and productively into their research sounds exciting to you, consider applying to become a </span><span style="font-weight: 400">GC Digital Fellow</span><span style="font-weight: 400">. </span></p>
<p><span style="font-weight: 400">Based in the GC Digital Scholarship Lab, the program operates as an in-house think-and-do tank, connecting fellows to </span><a href="https://gcdi.commons.gc.cuny.edu/"><span style="font-weight: 400">digital initiatives</span></a><span style="font-weight: 400"> throughout The Graduate Center. They utilize a team-based approach as they explore creative solutions for projects and implement them collaboratively, building out “The Digital GC” — a vision of the Graduate Center that incorporates technology into its core research and teaching missions. In the process, </span><span style="font-weight: 400">fellows </span><span style="font-weight: 400">contribute to and implement a strategic vision for public digital scholarship by </span><span style="font-weight: 400">leading </span><a href="http://cuny.is/gcdri"><span style="font-weight: 400">week-long digital research institutes</span></a><span style="font-weight: 400">, teaching </span><a href="http://cuny.is/gcdiworkshops"><span style="font-weight: 400">workshops on technical skills</span></a><span style="font-weight: 400">, </span><a href="https://gcdi.commons.gc.cuny.edu/digital-resource-guide/"><span style="font-weight: 400">creating resources</span></a><span style="font-weight: 400"> for GC students and faculty, and building digital projects that make use of the affordances of emerging technology while considering the ethical, social, and political stakes. </span><span style="font-weight: 400">GC Digital Fellows foster </span><a href="https://cunydhi.commons.gc.cuny.edu/"><span style="font-weight: 400">community around digital projects</span></a><span style="font-weight: 400"> and explore new ways for faculty, students, and staff to share their academic work through emergent digital tools during consultations, working group meetings, and events. </span><span style="font-weight: 400">In fact, the fellows have been recognized twice by the National Endowment for the Humanities (NEH) for their leadership in digital pedagogy. </span><span style="font-weight: 400">The group is particularly invested in the ways in which questions of race, gender, and social justice intersect with technology.</span></p>
<p><span style="font-weight: 400">Graduate Center doctoral students within their first 7 years of GC funding are eligible to apply. </span></p>
<p><span style="font-weight: 400">To get a better sense of our work, you can read the GC Digital Fellows’ blog, </span><a href="https://gcdi.commons.gc.cuny.edu/category/news/tagging-the-tower/"><i><span style="font-weight: 400">Tagging the Tower</span></i></a><span style="font-weight: 400">, and our semesterly blog post </span><a href="https://gcdi.commons.gc.cuny.edu/2025/02/04/welcome-to-spring-2025/"><span style="font-weight: 400">“What is GCDI?”</span></a><span style="font-weight: 400"> You might also read through the </span><a href="https://gcdi.commons.gc.cuny.edu/about/"><span style="font-weight: 400">GCDI Annual Reports</span></a><span style="font-weight: 400"> on our website.</span></p>
<p><b>Compensation</b></p>
<p><span style="font-weight: 400">GC Digital Fellows work a total of 450 non-teaching hours during the academic year (two 15-week semesters). </span><span style="font-weight: 400">The initial DF appointment is for one year (exact dates vary and will be specified upon appointment). Determinations of re-appointments will be made on an annual basis up to three years, depending on individual eligibility and GC needs.</span><span style="font-weight: 400"> Please note that students may not hold this Fellowship and a GTF concurrently. Compensation for the fellowship will be approximately $31,141 for the academic year. The fellowship is paid in two parts. Fellows will be appointed to the Graduate Center payroll as Graduate Assistant Bs (GAB) at their own current rate (the minimum GAB salary is currently $14,982) and also receive a Provost’s Award in the amount of $16,159 that will be paid through the Office of Fellowships and Financial Aid in two equal lumps sum payments. One payment will come towards the beginning of the Fall 2026 semester, and the second will come towards the beginning of the Spring 2027 semester. The GAB title provides eligibility to purchase the low-cost </span><a href="https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.gc.cuny.edu%2Fstudent-affairs%2Fstudent-health-insurance-nyship&amp;data=05%7C02%7Cpsusanszky%40gradcenter.cuny.edu%7C74f3fa1c3713407ae26508de6369afd9%7C0b678335d50a41d3b15230149d930cfa%7C0%7C0%7C639057502743510115%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;sdata=pnAww84Xjz8UvLgQrkwssL%2Bd%2BzO570rptqXevKroB4Q%3D&amp;reserved=0"><span style="font-weight: 400">NYSHIP health insurance</span></a><span style="font-weight: 400"> as well as in-state tuition remission for fellows who are within their first ten registered semesters of doctoral study. Fellows who are past their ten registered semesters of doctoral study will be eligible to receive in-state tuition remission at the Level III rate as per the 2017-2023 PSC-CUNY contract (this benefit provides for this tuition remission for a maximum of four semesters past the student’s 10th registered semester).</span></p>
<p><b>Application Instructions</b></p>
<p><span style="font-weight: 400">To apply, please complete <a href="https://cuny.is/gcdf26cfp">our online application form</a>, which includes uploading a letter of interest (no more than 1-2 pages), a CV, a list of digital projects undertaken and/or completed, and the name and contact information for a faculty reference. (NOTE: Faculty reference letters are not requested at this point of the application process). </span></p>
<p><b><i>Note on AI Use:</i></b><i><span style="font-weight: 400"> We value the application and interview process as a means of getting to know each applicant. We request that you refrain from using AI generated text in your letter of interest or CV and only submit materials that reflect your original work. Use of AI for projects in your digital portfolio will be accepted, but it should be documented with process statements that clearly define what tools were used, what they were used for, and how AI helped achieve the project’s goals. For more information on developing AI process statements see </span></i><a href="https://crln.acrl.org/index.php/crlnews/article/view/26548/34482"><i><span style="font-weight: 400">https://crln.acrl.org/index.php/crlnews/article/view/26548/34482</span></i></a><i><span style="font-weight: 400">. </span></i></p>
<p><span style="font-weight: 400">Applications must be received by </span><b>April 1, 2026 at 11:59 PM </b><span style="font-weight: 400">to be considered.</span></p>
<p>Apply here: <a href="https://cuny.is/gcdf26cfp">https://cuny.is/gcdf26cfp</a></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>An Introduction to Matplotlib</title>
		<link>https://gcdi.commons.gc.cuny.edu/2026/03/06/an-introduction-to-matplotlib/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=an-introduction-to-matplotlib</link>
		
		<dc:creator><![CDATA[Pranav Chinmay]]></dc:creator>
		<pubDate>Fri, 06 Mar 2026 16:59:39 +0000</pubDate>
				<category><![CDATA[Guides, Tutorials, Reviews]]></category>
		<category><![CDATA[Tagging the Tower]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<guid isPermaLink="false">https://gcdi.commons.gc.cuny.edu/?p=23981</guid>

					<description><![CDATA[In Python, the most widely used library for creating various kinds of visualizations is Matplotlib. In this post, we will introduce the core ideas behind Matplotlib and illustrate them through &#8230;]]></description>
										<content:encoded><![CDATA[<p><span style="font-weight: 400">In Python, the most widely used library for creating various kinds of visualizations is Matplotlib. In this post, we will introduce the core ideas behind Matplotlib and illustrate them through simple, concrete examples that can serve as modular foundations for more advanced visualizations. Matplotlib is a low-level plotting library that gives the user fine-grained control over every aspect of a figure. While higher-level libraries such as Seaborn or Plotly build further on this foundation, Matplotlib remains the backbone of most scientific plots produced in Python. The design of Matplotlib is inspired by MATLAB, which makes it intuitive for users with a background in numerical computing.</span></p>
<p><span style="font-weight: 400">Let us begin with a simple example, of a ”line plot,”, i.e., a typical two-dimensional plot of a function of a single variable. The most common way to get started is by using the pyplot interface, which provides a state-based, convenient syntax for creating plots. This is enforced by importing Matplotlib as follows:</span></p>
<pre><code><span style="font-weight: 400">import matplotlib.pyplot as plt</span>
</code></pre>
<p><span style="font-weight: 400">Following this import command, we use NumPy to call the mathematical function sin(x), and visualize it as follows:</span></p>
<pre><code><span style="font-weight: 400">import numpy as np</span>
<span style="font-weight: 400">x = np.linspace(0, 10, 100) #generate 100 evenly spaced points from 0 to 10</span>
<span style="font-weight: 400">y = np.sin(x)</span>
<span style="font-weight: 400">plt.plot(x, y)</span>
<span style="font-weight: 400">plt.xlabel("x")</span>
<span style="font-weight: 400">plt.ylabel("sin(x)")</span>
<span style="font-weight: 400">plt.title("A Simple Line Plot")</span>
<span style="font-weight: 400">plt.show()</span>
</code></pre>
<figure id="attachment_23991" aria-describedby="caption-attachment-23991" style="width: 300px" class="wp-caption aligncenter"><img fetchpriority="high" decoding="async" class="wp-image-23991 size-medium" src="https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/03/Screenshot-2026-03-05-at-4.38.21-PM-300x232.png" alt="Line chart titled “A Simple Line Plot” showing a smooth sine wave labeled sin(x). The x-axis ranges from 0 to 10 and the y-axis from about -1 to 1, with the curve rising and falling in a repeating wave pattern." width="300" height="232" srcset="https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/03/Screenshot-2026-03-05-at-4.38.21-PM-300x232.png 300w, https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/03/Screenshot-2026-03-05-at-4.38.21-PM.png 718w, https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/03/Screenshot-2026-03-05-at-4.38.21-PM-300x232@2x.png 600w" sizes="(max-width: 300px) 100vw, 300px" /><figcaption id="caption-attachment-23991" class="wp-caption-text">All Photos: Pranav Chinmay</figcaption></figure>
<p><span style="font-weight: 400">Here, NumPy is used to generate evenly spaced points and call sin(x), and Matplotlib draws a smooth curve connecting them. The functions xlabel, ylabel, and title allow us to annotate the figure. There are further customization options available in the documentation, which is linked at the end of the post. </span></p>
<p><span style="font-weight: 400">In addition to simple plots such as these, Matplotlib supports a wide range of plot types. Let’s look at a few of them.</span></p>
<h2><b>Scatter Plots</b></h2>
<p><span style="font-weight: 400">Scatter plots are useful for visualizing relationships between variables, especially when dealing with noisy data. Here is an example: </span></p>
<pre><code><span style="font-weight: 400">x = np.random.randn(500)</span>
<span style="font-weight: 400">y = np.random.randn(500)</span>
<span style="font-weight: 400">plt.scatter(x, y, alpha=0.5)</span>
<span style="font-weight: 400">plt.xlabel("x values")</span>
<span style="font-weight: 400">plt.ylabel("y values")</span>
<span style="font-weight: 400">plt.title("Scatter Plot Example")</span>
<span style="font-weight: 400">plt.show()</span>
</code></pre>
<p><img decoding="async" class="aligncenter wp-image-23993 size-medium" src="https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/03/Screenshot-2026-03-05-at-4.38.28-PM-300x232.png" alt="Scatter plot titled “Scatter Plot Example” displaying hundreds of blue points spread around the origin. The x-axis is labeled “x values” and the y-axis “y values,” with points loosely clustered between about -3 on the y-axis and 3 on x-axis." width="300" height="232" srcset="https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/03/Screenshot-2026-03-05-at-4.38.28-PM-300x232.png 300w, https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/03/Screenshot-2026-03-05-at-4.38.28-PM.png 718w, https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/03/Screenshot-2026-03-05-at-4.38.28-PM-300x232@2x.png 600w" sizes="(max-width: 300px) 100vw, 300px" /></p>
<p><span style="font-weight: 400">In this example, we use NumPy to generate a random set of x and y values, which we then plot with Matplotlib.</span></p>
<h2><b>Histograms</b></h2>
<p><span style="font-weight: 400">Histograms, on the other hand, summarize data by grouping values into bins and counting their frequencies.</span></p>
<pre><code><span style="font-weight: 400">data = np.random.randn(1000)</span>
<span style="font-weight: 400">plt.hist(data, bins=30)</span>
<span style="font-weight: 400">plt.xlabel("Value")</span>
<span style="font-weight: 400">plt.ylabel("Frequency")</span>
<span style="font-weight: 400">plt.title("Histogram Example")</span>
<span style="font-weight: 400">plt.show()</span>
</code></pre>
<p><img decoding="async" class="wp-image-23995 size-medium aligncenter" src="https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/03/Screenshot-2026-03-05-at-4.38.35-PM-300x232.png" alt="Histogram titled “Histogram Example” showing the distribution of values roughly from -3 to 4.5. The bars peak around 0, forming a roughly bell-shaped distribution with frequency on the y-axis and value on the x-axis." width="300" height="232" srcset="https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/03/Screenshot-2026-03-05-at-4.38.35-PM-300x232.png 300w, https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/03/Screenshot-2026-03-05-at-4.38.35-PM.png 718w, https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/03/Screenshot-2026-03-05-at-4.38.35-PM-300x232@2x.png 600w" sizes="(max-width: 300px) 100vw, 300px" /></p>
<p><span style="font-weight: 400">Once again, the data is randomly generated using NumPy, before we use the hist method of Matplotlib’s pyplot to generate the histogram image. </span></p>
<h2><b>Multiple Curves and Customization</b></h2>
<p><span style="font-weight: 400">One of Matplotlib’s strengths is the degree of customization it offers. Nearly every element of a plot can be controlled: line styles, marker sizes, fonts, tick marks, and figure dimensions. For instance, we can easily adjust the size of a figure and plot multiple curves on the same axes.</span></p>
<pre><code><span style="font-weight: 400">plt.figure(figsize=(6, 4)) #adjust figure size</span>
<span style="font-weight: 400">plt.plot(x, np.sin(x), label="sin(x)") #plots sine</span>
<span style="font-weight: 400">plt.plot(x, np.cos(x), label="cos(x)") #plots cosine</span>
<span style="font-weight: 400">plt.legend()</span>
<span style="font-weight: 400">plt.title("Multiple Curves on One Plot")</span>
<span style="font-weight: 400">plt.show()</span>
</code></pre>
<p><img loading="lazy" decoding="async" class="wp-image-23997 size-medium aligncenter" src="https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/03/Screenshot-2026-03-05-at-4.38.41-PM-300x232.png" alt="Line chart titled “Multiple Curves on One Plot.” The x-axis runs from 0 to 10 and the y-axis from −1 to 1. Two smooth curves are shown: a blue line labeled “sin(x)” and an orange line labeled “cos(x).” The sine curve starts at 0, rises to about 1 near x≈1.6, falls to −1 near x≈4.7, then rises again. The cosine curve starts at 1 at x=0, decreases to −1 near x≈3.1, rises to 1 near x≈6.3, then decreases again. A legend in the lower-left identifies the two lines. By messaging ChatGPT, an AI chatbot, you agree to our Terms and have read our Pri" width="300" height="232" srcset="https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/03/Screenshot-2026-03-05-at-4.38.41-PM-300x232.png 300w, https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/03/Screenshot-2026-03-05-at-4.38.41-PM.png 718w, https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/03/Screenshot-2026-03-05-at-4.38.41-PM-300x232@2x.png 600w" sizes="auto, (max-width: 300px) 100vw, 300px" /></p>
<p><span style="font-weight: 400">Matplotlib also allows the creation of figures with multiple axes, which is useful for side-by-side comparisons or more complex layouts. While the pyplot interface is sufficient for many tasks, more advanced use cases often benefit from working directly with the Figure and Axes objects.</span></p>
<p><span style="font-weight: 400">This post has provided a brief introduction to Matplotlib’s basic functionality. With these tools, you can already produce a wide variety of informative and polished figures. As your needs grow, Matplotlib’s extensive </span><a href="https://matplotlib.org/stable/tutorials/index.html"><span style="font-weight: 400">documentation </span></a><span style="font-weight: 400">offers detailed guidance on advanced topics.</span></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Thoughts on Situating AI Literacy</title>
		<link>https://gcdi.commons.gc.cuny.edu/2026/02/20/thoughts-on-situating-ai-literacy/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=thoughts-on-situating-ai-literacy</link>
		
		<dc:creator><![CDATA[Meha Gupta]]></dc:creator>
		<pubDate>Fri, 20 Feb 2026 16:59:20 +0000</pubDate>
				<category><![CDATA[Digital GC]]></category>
		<category><![CDATA[Reflections]]></category>
		<category><![CDATA[Tagging the Tower]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<category><![CDATA[literacy]]></category>
		<guid isPermaLink="false">https://gcdi.commons.gc.cuny.edu/?p=23845</guid>

					<description><![CDATA[This post is written by guest contributor Ian G. Williams. AI literacy is having a moment. Since the release of ChatGPT 3.5 in November 2022, widespread use and adoption of &#8230;]]></description>
										<content:encoded><![CDATA[<p><em><span style="font-weight: 400">This post is written by guest contributor Ian G. Williams.</span></em></p>
<p><span style="font-weight: 400">AI literacy is having a moment. Since the release of ChatGPT 3.5 in November 2022, widespread use and adoption of artificial intelligence systems &#8211; specifically large language models, generative (pre-trained transformer) AI systems (genAI) &#8211; have skyrocketed around the world. Often, this adoption is integrated and layered into existing infrastructures and enterprise systems: Microsoft Copilot suddenly appearing in your Outlook inbox, a notice from Google that Gemini is now part of your storage plan, a note at the top of Adobe Acrobat that says it&#8217;s easy to summarize a long article and save time, Brightspace offering to generate quiz questions. New models and versions of AI systems seemingly appear every day: AI agents taking the technophile world by storm, and </span><a href="https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/"><span style="font-weight: 400">“AI theater” becoming a real-life instance of Pokémon</span></a><span style="font-weight: 400"> arguing things out on a social media site. The pace of development and deployment continues. The material and social infrastructures to support AI &#8211; data centers, high power computing, fiber optic cables, coding schools and boot camps, AI forward policies &#8211; receive massive investments.</span></p>
<p><span style="font-weight: 400">With this explosion of genAI growth, a potpourri of neologisms has emerged &#8211; terms like &#8220;prompt engineering,&#8221; &#8220;vibe coding,&#8221; and &#8220;model switching.&#8221; These denote ideas of a skillful use of the technologies; a capacity to instrumentalize and coax them towards a desired set of outcomes &#8211; a prototype, a website, an image, an essay, a travel itinerary. These terms imply a mastery and synergy with these new interfaces, and a capacity to integrate them into everyday life. These ideas broadly fit within the concept of &#8220;digital literacy,&#8221; a term first introduced in 1997 by Paul Gilster, which supplanted its predecessor term, &#8220;computer literacy.&#8221; Digital literacy was intended to be more general and broad, and was often used interchangeably with the concept of new media literacies in the early 2000&#8217;s. When the big data revolution kicked off in the 2010s, data literacy was the new thing (Pangrazio and Sefton-Green, 2020). AI literacy&#8217;s ascent and emergence are an extension of the lineage, forming around a particular set of technologies, and it is the latest evolution of this ever-changing concept. It&#8217;s everywhere because we live in a world saturated with AI systems and agents, discourse, marketing, and ideas about AI. For instance, when riding the subway, take a moment to observe how many ads for AI systems and tools cover the train.</span></p>
<p><span style="font-weight: 400">The world is saturated by AI systems, which often get framed as both causes of and solutions to society&#8217;s ills. Making meaning of AI is largely framed in terms of literacy; as Luci Pangrazio (2026) argued in a recent essay, AI literacy is invoked in both optimistic and pessimistic responses to AI’s ascent &#8211; even as the continued idea of literacy forecloses on other possibilities for relating to, finding meaning in, and resisting the flow of AI. Yet AI literacy is here, shaping not just scholarly inquiry, but also policymaking. Last week, the US Department of Labor released an </span><a href="https://www.dol.gov/agencies/eta/advisories/ten-07-25"><span style="font-weight: 400">AI Literacy Framework</span></a><span style="font-weight: 400">, formalizing a working definition of what this means, part of a wider commitment of the Federal government to &#8220;prioritizing AI literacy and skill development across the workforce and education systems&#8221; (Department of Labor, 2026, p. 1-2).</span></p>
<p><span style="font-weight: 400">Like technologies before it, the sudden rollout of AI systems into the world through unregulated, direct-to-consumer releases does not equally distribute benefits and harms. AI universalism is limited and ignores the importance of local context, and how AI systems layer into and rearrange existing sociotechnial systems. Examining how AI literacies are enacted and played out in people&#8217;s lives can help ground discussions. While many of us in higher education feel pressure to develop and articulate AI literacies to preserve our turf, engage in sustained critique, and adjust to a new reality, we are a small subset of overall users and consumers of digital technologies. It can be easy to lose sight of this in the academy and forget how everyday people of a variety of backgrounds and skillsets experience this AI moment, and this call for AI literacy.</span></p>
<p><span style="font-weight: 400">Earlier this week, I attended a talk and then an invited workshop, organized by Data &amp; Society. It focused on the report </span><a href="https://datasociety.net/events/404-job-not-found/"><span style="font-weight: 400">(404) Job Not Found: The AI Literacy Trap At Work</span></a><span style="font-weight: 400">, an ethnographic study by </span><a href="https://anuliwashere.com/"><span style="font-weight: 400">Anuli Akanegbu, PhD</span></a><span style="font-weight: 400">, of Atlanta&#8217;s digital skills and job landscape in the midst of the &#8220;AI literacy&#8221; boom. Akanegbu&#8217;s report also builds upon Daniel Greene&#8217;s work in this area, crystallized in his book </span><a href="https://mitpress.mit.edu/9780262542333/the-promise-of-access/"><span style="font-weight: 400">The Promise of Access: Technology, Inequality, and the Political Economy of Hope</span></a><span style="font-weight: 400"> where he argues that in the 1990&#8217;s, problems of persistent poverty &#8211; particularly in an era of welfare reform and neoliberal market policies &#8211; were transformed into problems of technology access: what is now known as the digital divide. Greene describes &#8220;the access doctrine&#8221; &#8211; a belief that access to technology (and requisite skills) can solve poverty and social inequality &#8211; in his comparative institutional ethnography of Washington, DC in the mid 2010’s. At a conference last year, Greene explained to me how his project emerged from frustrations working in social services and workforce development, where the mantra “learn to code” was unquestionably repeated as the solution to structural poverty.</span></p>
<p><span style="font-weight: 400">Akanegbu’s report builds an important layer of this work, and updates for the current landscape of the mid 20202’s, by taking a close look into Atlanta, a major Southern US city with the highest wealth disparity in the country, the origin of the cotton gin, and a currently booming tech hub that was reinventing itself as &#8220;Silicon Peach.&#8221;  The report followed the experiences of Black Atlanta residents as they navigated the AI literacy landscape. The event discussed perspectives on the report, including the central argument that AI literacy is a &#8220;strategic abstraction&#8221; &#8211; a deliberately vague concept that is hard to pin down, and always pointing towards an uncertain future, proclaimed in the language of policymakers, businesspeople, and public-private partnerships. This vagueness keeps people, particularly job-seekers and working folk, ever guessing, always having to upskill and position themselves for a changing set of definitions and technologies. Akangegu identitied a digital skills training and AI literacy model that offers an alternative to patchwork, undersupported landscapes often designed with the assumption that participants who are white collar office workers, upskilling on the job, flexibly scheduled, or able to bear the risks of tuition for a precarious and uncertain future. Serena Oduro and Anuli Akakngebu’s </span><a href="https://datasociety.net/wp-content/uploads/2026/02/PolicyBrief_404-Job-Not-Found_Feb2026.pdf"><span style="font-weight: 400">companion policy brief</span></a><span style="font-weight: 400"> identifies strategies for tangible improvements to the current AI literacy landscape.</span></p>
<p><span style="font-weight: 400">Our conversation afterwards with a range of stakeholder organizations &#8211; policymakers, labor unions, think tanks, educational institutions, workforce development social enterprises &#8211; focused on the concept of AI literacy in education, labor policy, and history. It made me think about my own experiences with trying to make sense of this concept, and what the different dimensions of what becoming &#8220;AI literate&#8221; might entail. Rather than presume an object or universal understanding, I thought it would be helpful in this note to locate and situate my experiences, which are rooted in my activities and relationships at GCDI. Although I am a social worker, my &#8220;practice&#8221; has been primarily on campus and in university settings for some time now &#8211; I work in community with other academics. Much of my funding has come from fellowships and programs associated with GCDI, which exposed me to the world of digital humanities.</span></p>
<p><span style="font-weight: 400">I have been thinking and reading about AI literacy, and digital skills, for some time. This genAI moment unfolded during my time at The Graduate Center and my perception has been shaped by institutional context. Sometimes, I am so immersed in how we approach technology, education, and society here that I forget the distinct contours of our approach and our daily practices here &#8211; hence why the Data &amp; Society event was so refreshing and thought-provoking. It is hard to articulate concisely </span><i><span style="font-weight: 400">what </span></i><span style="font-weight: 400">we do at GCDI, but if I had to, I would say our approach grounds humanistic inquiry, tempered optimism, a strong commitment to situated ethics, and a belief in the importance of fostering communities of practice and mutual support might start to sum it up. CUNY has a rich history of both organizing against harmful technological practices in education and building alternative practices and infrastructures (Fabricant &amp; Brier, 2016). There is a real commitment here to seeking structural policy solutions along with adapting education and other institutions to the current times &#8211; such as we are seeing in initiatives such as </span><a href="https://criticalai.commons.gc.cuny.edu/"><span style="font-weight: 400">The Critical AI Literacy Institute</span></a><span style="font-weight: 400"> and the emergent </span><a href="https://ailab.gc.cuny.edu/"><span style="font-weight: 400">CUNY AI Lab</span></a><span style="font-weight: 400">. It gives me some hope regarding how we can develop meaningful and actually useful approaches to what AI literacy can be, and sustains my curiosity about how this AI literacy moment will continue to play out in CUNY, and in society.</span></p>
<p><span style="font-weight: 400">—</span></p>
<p><b>References</b></p>
<p><span style="font-weight: 400">Fabricant, M., &amp; Brier, S. (2016). Austerity blues: Fighting for the soul of public higher education. JHU press.</span></p>
<p><span style="font-weight: 400">Gilster, P.  (1997). Digital literacy. New York: Wiley Computer Pub.</span></p>
<p><span style="font-weight: 400">Greene, D. (2021). The promise of access: Technology, inequality, and the political economy of hope. mit press.</span></p>
<p><span style="font-weight: 400">Pangrazio, L. (2026). The (im)possibility of AI literacy. Learning, Media and Technology, 51(1), 1–7. </span><a href="https://doi.org/10.1080/17439884.2026.2615553"><span style="font-weight: 400">https://doi.org/10.1080/17439884.2026.2615553</span></a></p>
<p><span style="font-weight: 400">Pangrazio, L., &amp; Sefton-Green, J. (2020). The social utility of ‘data literacy’. Learning, Media and Technology, 45(2), 208-220. </span><a href="https://doi.org/10.1080/17439884.2020.1707223"><span style="font-weight: 400">https://doi.org/10.1080/17439884.2020.1707223</span></a><span style="font-weight: 400"> </span></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>A.I. Meets Critical Media Art</title>
		<link>https://gcdi.commons.gc.cuny.edu/2026/02/13/a-i-meets-critical-media-art/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=a-i-meets-critical-media-art</link>
		
		<dc:creator><![CDATA[Tuka Al-Sahlani]]></dc:creator>
		<pubDate>Fri, 13 Feb 2026 17:17:40 +0000</pubDate>
				<category><![CDATA[Digital GC]]></category>
		<category><![CDATA[Tagging the Tower]]></category>
		<category><![CDATA[Artificial intelligence (AI)]]></category>
		<guid isPermaLink="false">https://gcdi.commons.gc.cuny.edu/?p=23741</guid>

					<description><![CDATA[This post is written by guest contributor Jonah Brucker-Cohen from Lehman College. Connect with Jonah Brucker-Cohen at Jonah.bruckercohen@lehman.cuny.edu A.I. Meets Critical Media Art is a series of eight distinct projects &#8230;]]></description>
										<content:encoded><![CDATA[<p><span style="font-weight: 400">This post is written by guest contributor Jonah Brucker-Cohen from Lehman College. Connect with Jonah Brucker-Cohen at Jonah.bruckercohen@lehman.cuny.edu</span></p>
<p><span style="font-weight: 400">A.I. Meets Critical Media Art is a series of eight distinct projects of which five of them will be detailed here. These works build off of my previous work in the area of A.I. by maintaining the focus on questioning not only the results of A.I. but the medium itself. </span></p>
<ol>
<li><b>AGGRO-MOUSE</b></li>
</ol>
<p><span style="font-weight: 400">AGGRO-MOUSE is a hacked computer mouse infused with A.I. that intentionally limits a user’s ability to use their computer if conditions are met. For instance, opening the same folder or drive over and over again instructs AGGRO-MOUSE to impede your ability to move the mouse towards that folder or drive on the desktop. The mouse monitors user habits and imposes restrictions when repetitive patterns are detected. When these conditions are met, the AGGRO-MOUSE disrupts the physical movement of the cursor, introducing friction into the user’s interface experience. This design probes the boundaries between user autonomy, machine agency, and the psychological impact of disrupted routines in human-computer interaction.</span></p>
<ol start="2">
<li><b> EXPRESSION OF MEMORY</b></li>
</ol>
<p><span style="font-weight: 400">“Expression of Memory” is a calendar with A.I. that will only show important dates from someone’s life that correspond to their current emotional, facial expression. One begins by training the calendar on important dates from their life based on their emotional responses (happy, sad, angry, fearful, surprised, disgusted, neutral, and confused) to those dates. Once trained, the calendar detects their emotional state based on their facial expression and responds with the date and name of the event in their life that corresponds to their current emotional state. For instance, if one smiles at the screen, dates and media related to “happy” moments in their life will emerge such as a past birthday, favorite vacation, wedding day, graduation, or other joyful moments of their life. If crying or frowning, sad days materialize such as a death in the family, when an injury occurred, or when they lost their job.For decades, theorists have studied how emotional states can trigger memories in individuals. “Expression of Memory” functions as an emotional detection system that can tell how a person is feeling based on their facial expression and recall a memory that corresponds to this feeling.</span></p>
<ol start="3">
<li><b> SubTask</b></li>
</ol>
<p><span style="font-weight: 400">“Subtask” is a media artwork that visually marks parts of websites that rely on crowd labor or data-labeling work and use cloud labor sites like Amazon Mechanical Turk, Appen, Scale A.I., and others to create the site. The project overlays data such as pay rates, testimonials, and ethical ratings from public datasets and activist sources related to each website that it is asked to check. The goal of the project is to critically examine how the Internet is largely built by underpaid laborers and A.I. software and how the web exists as another form of control to get large online presences built. “Subtask” is an open source extension to the popular browser, Google Chrome, allowing anyone to install it and search for invisible forms of labor across any website that they view.  It works by crawling crowd and A.I. worker sites like Appen, Mechanical Turk, and Scale A.I. for user testimonials about how much was paid to workers to complete menial tasks such as labelling images, resizing design elements, and adding statistical data to documents.  Once installed, the extension places “green” flags next to content that was created by online “clickworkers” and a testimonial of their activity on the site to expose these blatantly abusive labor practices.</span></p>
<ol start="4">
<li><b> Weather The Times</b></li>
</ol>
<p><span style="font-weight: 400">“Weather The Times” is an open source Chrome extension with A.I. that filters which New York Times articles can be read based on the local weather of where the user is situated. By dynamically fetching real-time weather data and conducting sentiment analysis of articles, “Weather The Times” nudges a viewer’s news diet to match the mood of the skies above: On sunny days, only lighthearted and uplifting stories are accessible such as lifestyle, food, and travel. On rainy or stormy days, the extension restricts access to more serious and somber news about world events, crises, and politics. Articles that don’t fit the current weather mood are visually blurred and overlaid with poetic messages explaining their temporary inaccessibility. This project is a critique of how external forces, like the environment or algorithmic curation shape our information consumption, questioning our passive relationship to news and the emotional filters that color our perception of the world.</span></p>
<ol start="5">
<li><b> Not THIS  </b></li>
</ol>
<p><span style="font-weight: 400">“Not THIS” is a conceptual and experiential artwork  that engages both A.I. and Baudrillardian theory to  explore absence, exclusion, and the poetics of negation. A live camera feed captures the viewer and their  surroundings, but the AI does not identify what is  present, instead, it generates text describing what is  not detected, with phrases such as “Not a person. Not a tree. Not memory.” By refusing to recognize reality  directly, the A.I. inverts its conventional role as an  objective classifier, creating a tension between what  is visible and what is acknowledged. The piece is  based on Baudrillard’s notion of simulacra and hyper reality <span id='easy-footnote-5-23741' class='easy-footnote-margin-adjust'></span><span class='easy-footnote'><a href='https://gcdi.commons.gc.cuny.edu/2026/02/13/a-i-meets-critical-media-art/#easy-footnote-bottom-5-23741' title=' Baudrillard, Jean. Simulacres et Simulation. Éditions  Galilée, 1981.  '><sup>5</sup></a></span>, in which signs and representations can  replace or precede reality. Here, the A.I.’s negations  produce a hyperreal commentary: the viewer exists in  the video feed yet is rendered partially absent in the  textual output. This inversion exposes the limitations  of machine perception while turning A.I. into a poetic  device, generating meaning through omission rather  than factual identification. “Not  This” reveals the limits of machine perception. The  A.I. refuses to affirm what it sees, instead naming  what is not present, echoing Magritte’s paradox of  language and image. Both works question the authority of vision and truth, inviting viewers to confront the  instability of meaning, and how representation,  whether painted or algorithmic constructs, rather than captures, the real. </span></p>
<p><span style="font-weight: 400">Conclusion</span></p>
<p><span style="font-weight: 400">Despite the seemingly inevitability of A.I. as an all-encompassing force for content generation and online deception such as the ease in creating “fake news” or celebrity impersonations, the potential for artists to subvert these systems is exponential. Instead of merely using an A.I. system to create something it was intended to make, my intention is to subvert the software itself by using it to critically examine its own use. The projects outlined above along with the previous projects help prove that questioning these platforms is the ultimate method of provoking new understandings of just how much A.I. has infiltrated our daily lives and how its existence is still one that should be questioned and hacked.</span></p>
<p>This full paper was published in the proceedings of EVA London 2026 &#8211; <a href="http://www.eva-london.org/" target="_blank" rel="noopener" data-saferedirecturl="https://www.google.com/url?q=http://www.eva-london.org/&amp;source=gmail&amp;ust=1771443027957000&amp;usg=AOvVaw2m4J1-dmJ7wWP0FZ3O_MI3">http://www.eva-london.org/</a></p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Why Should You Make a Website For Your Tenants’ Association?</title>
		<link>https://gcdi.commons.gc.cuny.edu/2026/02/06/why-should-you-make-a-website-for-your-tenants-association/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=why-should-you-make-a-website-for-your-tenants-association</link>
		
		<dc:creator><![CDATA[Maggie Schreiner]]></dc:creator>
		<pubDate>Fri, 06 Feb 2026 17:21:31 +0000</pubDate>
				<category><![CDATA[Digital GC]]></category>
		<category><![CDATA[Tagging the Tower]]></category>
		<category><![CDATA[Uncategorized]]></category>
		<category><![CDATA[#DigitalGC]]></category>
		<category><![CDATA[digital humanities]]></category>
		<category><![CDATA[resources]]></category>
		<category><![CDATA[wordpress]]></category>
		<guid isPermaLink="false">https://gcdi.commons.gc.cuny.edu/?p=23481</guid>

					<description><![CDATA[By Sam O Hana “Stewardship, not ownership, is the path to adulthood” – Amy Starecheski, Ours to Lose: When Squatters Became Homeowners in New York City &#160; 68% of the &#8230;]]></description>
										<content:encoded><![CDATA[<p>By Sam O Hana</p>
<p><strong>“Stewardship, not ownership, is the path to adulthood”</strong><br />
<strong>– Amy Starecheski, <i>Ours to Lose: When Squatters Became Homeowners in New York City</i></strong></p>
<p>&nbsp;</p>
<p>68% of the world’s wealth is held in real estate, and 79% of that is in residential housing.<small><cite>1</cite></small> In the US in the 2010s, landlords collected more than $4.5 trillion from tenants in rent payments.<small><cite>2</cite></small> In 2024, the residential vacancy rate in New York City hit an all-time low of 1.4%,<small><cite>3</cite></small> while homelessness in the city doubled since the pandemic to an estimated 350,000 people, including 35,000 homeless children.<small><cite>4</cite></small></p>
<p>For those who rent rather than own their homes, protecting your tenancy can be a part-time job. At any time you might be outbid on the rights to your home by a newcomer who can afford to pay more. But this work need not happen alone. A tenants’ association &#8211; a group of tenants in a building who organize to defend their rights to affordable, stable and safe housing &#8211; redresses the power imbalance between renters and landlords, forcing owners to negotiate with a group of tenants rather than with each one on a case-by-case basis.</p>
<p>The corruption that is required to raise rents, evict tenants and displace longstanding communities can only function in the dark. Fraudulent deregulation of rent stabilized apartments, absence of essential repairs, and arbitrary refusal to renew leases become the norm only when landlords are not held to account by the facts of the law and of public exposure. When tenants learn their rights, they simultaneously learn to recruit the assistance of city agencies like HPD, DHCR, DOB and the FDNY to defend the integrity of their home life.</p>
<p>A tenants’ association website is a key tool for disrupting the narrative that speculative real estate runs on, in which online listings market properties as lucrative investments. By showing up in internet search results, the site shows prospective buyers, lenders and tenants that the occupants of the building know their rights in relation to their tenancies and are willing to enforce them using all appropriate channels, including litigation if need be.</p>
<p><img loading="lazy" decoding="async" class="size-medium wp-image-23501 alignright" src="https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/02/2-219x300.jpg" alt="" width="219" height="300" srcset="https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/02/2-219x300.jpg 219w, https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/02/2-746x1024.jpg 746w, https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/02/2.jpg 1058w, https://s3.amazonaws.com/files.commons.gc.cuny.edu/wp-content/blogs.dir/885/files/2026/02/2-219x300@2x.jpg 438w" sizes="auto, (max-width: 219px) 100vw, 219px" />A simple self-hosted WordPress site such as the one at <a href="https://157huronstreet.com/">https://157huronstreet.com/</a> makes it clear to all stakeholders– current and prospective– that the risk of attempting to harass tenants in order, for example, to empty out a building in order to “flip” it for substantial returns, is much higher than in a building where the tenants are not organized. This is because tenants can file collective complaints with the city authorities, who will in turn deploy multi-agency harassment prevention “<a href="https://www.nyc.gov/site/hpd/services-and-information/thpt.page">task forces</a>” that can issue violations for neglect and nuisances, as well as stop work orders for any illegitimate construction work.</p>
<p>Landlords are often unscrupulous, but they are almost always well organized. They share attorneys who know how to intimidate tenants and drag out legal claims made by tenants, bluffing their way through the courts and wasting the time and resources of the taxpayer-funded judicial system. Only by joining with each other can tenants effectively counter narratives of misinformation. It is far more effective if those counternarratives can be published online through a website because it puts on the record the issues at stake for all members of the community to see.</p>
<p>Many neighborhoods in New York have tenants’ associations that you can find by searching online, such as <a href="https://tenantunionflatbush.com/">Flatbush</a>, <a href="https://www.crownheightstenantunion.org/">Crown Heights</a>, <a href="https://caaav.org/our-work/programs/chinatown-tenants-union">Chinatown</a>, <a href="https://uptowntenants.com/">Upper Manhattan</a>, <a href="https://bushwicktenantunion.com/">Bushwick</a>, <a href="https://www.swbtu.org/">Southwest Brooklyn</a>, <a href="https://www.astoriatenantunion.org/">Astoria</a>, <a href="https://www.ridgewoodtenantsunion.org/">Ridgewood</a>. There is also a newly emerging national <a href="https://tenantfederation.org/">Tenant Union Federation</a>.</p>
<p>&nbsp;</p>
<p>1. Tracy Rosenthal &amp; Leonardo Vilchis. <i>Abolish Rent, </i>Haymarket Books, 2024.</p>
<p>2. Ibid.</p>
<p>3. NYC Housing Preservation and Development, <a href="https://www.nyc.gov/site/hpd/news/007-24/new-york-city-s-vacancy-rate-reaches-historic-low-1-4-percent-demanding-urgent-action-new#/0" target="_blank" rel="noopener">New York City’s Vacancy Rate Reaches Historic Low of 1.4 Percent, Demanding Urgent Action &amp; New Affordable Housing</a>, February 8, 2024.</p>
<p>4. Coalition for the Homeless, <a href="https://www.coalitionforthehomeless.org/basic-facts-about-homelessness-new-york-city/" target="_blank" rel="noopener">Basic Facts About Homelessness: New York City</a>, January 2026.</p>
<p>&nbsp;</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Welcome to Spring&#8217;26!</title>
		<link>https://gcdi.commons.gc.cuny.edu/2026/02/02/welcome-to-spring26/?utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=welcome-to-spring26</link>
		
		<dc:creator><![CDATA[Meha Gupta]]></dc:creator>
		<pubDate>Mon, 02 Feb 2026 18:04:30 +0000</pubDate>
				<category><![CDATA[Digital GC]]></category>
		<category><![CDATA[GCDI Updates]]></category>
		<category><![CDATA[News]]></category>
		<category><![CDATA[Opportunities]]></category>
		<category><![CDATA[Resources]]></category>
		<category><![CDATA[Tagging the Tower]]></category>
		<guid isPermaLink="false">https://gcdi.commons.gc.cuny.edu/?p=23509</guid>

					<description><![CDATA[Welcome back to the Spring semester! The GC Digital Fellows have been hard at work developing more events, workshops, and resources to support digital scholarship at the Graduate Center. Read &#8230;]]></description>
										<content:encoded><![CDATA[<p><span style="font-weight: 400">Welcome back to the Spring semester! The GC Digital Fellows have been hard at work developing more events, workshops, and resources to support digital scholarship at the Graduate Center. Read on to find out more about what we’ve been up to and what we have in store for you this semester. </span></p>
<h2><span style="font-weight: 400">January’s Successful GC Digital Research Institute </span></h2>
<p><span style="font-weight: 400">Just last week, the GCDI ran its </span><i><span style="font-weight: 400">fourteenth</span></i> <a href="https://gcdri.commons.gc.cuny.edu/"><span style="font-weight: 400">Digital Research Institute</span></a><span style="font-weight: 400">. We had a lovely time teaching and learning skills like HTML/CSS, Python, Command Line, and QGIS alongside more than 30 participants. If you’re feeling FOMO and don’t want to wait until next winter to join us in the DRI, you can follow the curriculum and do some independent learning through the </span><a href="https://dhrift.org/"><span style="font-weight: 400">DHRIFT</span></a><span style="font-weight: 400"> online learning platform. And as always, set a reminder for this November to remember to apply for next year’s DRI! </span></p>
<h2><span style="font-weight: 400">Spring Workshops</span></h2>
<p><span style="font-weight: 400">Every semester, the Digital Fellows offer hands-on in-person and remote online</span><a href="https://gcdi.commons.gc.cuny.edu/calendar/category/workshop/"><span style="font-weight: 400"> workshops</span></a><span style="font-weight: 400"> on digital research tools for people of all skill levels. Workshops are free and open to any member of the Graduate Center community. (Ready yourself by learning how to</span><a href="https://gcdi.commons.gc.cuny.edu/?s=get+the+most+out+of+a+workshop"><span style="font-weight: 400"> get the most out of our workshops</span></a><span style="font-weight: 400"> here.) Registration is required, you can do so by selecting the workshop on the</span><a href="https://gcdi.commons.gc.cuny.edu/calendar/category/workshop/"><span style="font-weight: 400"> workshop calendar</span></a><span style="font-weight: 400"> and completing the workshop RSVP to get on our attendee list. Here’s a list of our upcoming workshops for Spring 2026:</span></p>
<p><b>Basic Audio Editing with Audacity</b></p>
<p><i><span style="font-weight: 400">Tuesday, 2/10/26 12-1:30 pm</span></i></p>
<p><span style="font-weight: 400">This hands-on workshop will be a foundational exploration of Audacity as an audio editing software. We will discuss some of its functionalities that can help you edit and clean your audio files to prepare them for publishing and sharing. Prior knowledge and experience are not necessary for the workshop. We will work with a demo audio, but having a pre-recorded sound clip/file (~1 min) that you can practice on is encouraged.</span></p>
<p><b>AI Tools for Research: Understanding AI Systems in Research Contexts </b></p>
<p><i><span style="font-weight: 400">Wednesday, 2/18/26 12-2 pm</span></i><span style="font-weight: 400"> </span></p>
<p><span style="font-weight: 400">This workshop introduces the fundamentals of artificial intelligence and large language models (LLMs) with a focus on how these systems operate when used in research settings. Rather than emphasizing tools or tricks, the session centers on building conceptual literacy: what LLMs are, how they generate outputs, and what happens when researchers interact with them through prompts and iterative feedback. Participants will explore common research use cases, such as literature exploration, drafting, and analytic support, alongside key limitations, risks, and ethical considerations. By developing a clearer mental model of AI systems, attendees will be better equipped to use these tools intentionally, critically, and responsibly in their research workflows.</span></p>
<p><b>Text Analysis for Right to Left Languages</b><span style="font-weight: 400"> </span></p>
<p><i><span style="font-weight: 400">Thursday, 2/19/26 12:30-2:00 pm</span></i></p>
<p><span style="font-weight: 400">Doing DH in languages other than English, especially right to left languages, once required building tools from the ground up. Today, we find that there are several text analysis tools that can be used when working with right to left languages. In this workshop we will explore one text analysis tool and one AI tool that may assist in multilingual text analysis.</span></p>
<p><b>Slides with LaTeX</b></p>
<p><i><span style="font-weight: 400">Thursday, 2/26/26 1:00-2:30 pm</span></i></p>
<p><span style="font-weight: 400">&#8220;A presentation is not a set of slides: it is all about someone having something to say to an audience-slides or no slides.&#8221; With this aphorism of Doumont in mind, we will learn how to use the Beamer package in the typesetting software LaTeX to create effective slides for short talks. Beamer is a powerful tool that inherently forces you to carefully consider how to use judiciously use slide space, enabling the creation of useful slides that complement your talk. This workshop is designed for those with a basic understanding of LaTeX, including those who attended Fall 2025&#8217;s Introduction to LaTeX. We will go through some examples of efficient slides, walk through how to use Beamer, and finish by creating our own mini-slide deck as a workshop activity.</span></p>
<p><b>Making Beautiful Maps with QGIS and Tableau</b></p>
<p><i><span style="font-weight: 400">Monday, 3/9/2612-1:30 pm</span></i></p>
<p><span style="font-weight: 400">Do you know the basics of QGIS but lose all hope when you try to stylize your map? QGIS is notorious for its frustrating, counterintuitive, and finicky design options. Have no fear! This workshop will teach you how to export your spatial data from QGIS and use Tableau, a data visualization platform, to make a visually compelling, interactive map in just a few simple steps.  </span></p>
<p><b>Queer DH: Key Projects and Concepts</b></p>
<p><i><span style="font-weight: 400">Tuesday, 3/17/26 12-1:30 pm </span></i></p>
<p><span style="font-weight: 400">What does it mean to do queer digital humanities? This workshop will explore key themes of identity, narrative, and space in queer DH, and introduce participants to projects exemplifying the possibilities and limitations of queer(ing) digital humanities. We will engage in a critical exploration of queer archives, mapping, gaming, and more!</span></p>
<p><b>Introduction to Omeka</b></p>
<p><i><span style="font-weight: 400">Tuesday, 3/24/26 12-1:30 pm</span></i></p>
<p><span style="font-weight: 400">This workshop introduces beginners to Omeka Classic, which is a free version of the content management system for sharing collections of objects or other sources online. It is built using free, open source software used to encourage websites for sharing digital collections and creating online exhibits. This is ideal for scholarly projects, community based projects, classroom assignments, and public history. This workshop gives a brief overview of the platform, its features, and how to use it.</span></p>
<p><b>LLM-Assisted Coding</b></p>
<p><i><span style="font-weight: 400">Thursday, 3/19/26 12-1:30 pm</span></i></p>
<p><span style="font-weight: 400">This workshop introduces graduate students to using Large Language Models (LLMs) to accelerate research coding and everyday technical work. The session focuses on hands-on workflows for writing and refactoring code, debugging errors, cleaning and transforming data, and generating analysis templates. It also highlights responsible use, including verification habits, privacy awareness, and academic integrity. Participants will leave with reusable prompt templates and a clear framework for integrating LLM assistance into reproducible research.</span></p>
<h2><span style="font-weight: 400">One-on-one Consultations</span></h2>
<p><span style="font-weight: 400">Have a question about your digital project? Thinking of including digital tools in your scholarship or teaching? GCDI staff are available to meet in-person at the GC and/or remotely with GC students, faculty, and staff to talk through technical challenges, digital skills, or simply brainstorm. Sign up for a 30-minute consultation</span><a href="https://docs.google.com/forms/d/e/1FAIpQLSeLEski53U3FjArac-bVU0jYwxYD0HQTvNSQUxIZWoxbqDPWg/viewform"><span style="font-weight: 400"> through this form</span></a><span style="font-weight: 400">.</span></p>
<h2><span style="font-weight: 400">Looking for a community to learn with?</span></h2>
<p><span style="font-weight: 400">We have several working groups that focus on specific interests. Working groups are composed of students, faculty, and staff who are looking for other scholars with similar interests to share resources, advice, and opportunities. These interdisciplinary groups connect through the CUNY Academic Commons.</span></p>
<p><span style="font-weight: 400">These groups include the</span><a href="https://commons.gc.cuny.edu/groups/pug-python-users-group/"><span style="font-weight: 400"> Python User’s Group (PUG)</span></a><span style="font-weight: 400">, the</span><a href="https://commons.gc.cuny.edu/groups/gis-working-group/"><span style="font-weight: 400"> GIS/Mapping Working Group</span></a><span style="font-weight: 400">, the</span><a href="https://commons.gc.cuny.edu/groups/rug-r-users-group/"><span style="font-weight: 400"> R User’s Group (RUG)</span></a><span style="font-weight: 400">, the</span><a href="http://cuny.is/darc"><span style="font-weight: 400"> Digital Archives Research Collective (DARC)</span></a><span style="font-weight: 400">,</span><a href="https://commons.gc.cuny.edu/groups/gc-humanidades-digitales/"><span style="font-weight: 400"> Humanidades Digitales (DH in Spanish)</span></a><span style="font-weight: 400">, and the </span><a href="https://commons.gc.cuny.edu/groups/digital-dissertations/"><span style="font-weight: 400">Digital Dissertations Group</span></a><span style="font-weight: 400">. No experience is needed to join; only an interest in the central topic and community. Do also check out</span><a href="https://digitalfellows.commons.gc.cuny.edu/2020/12/08/getting-the-most-out-of-working-groups/"><span style="font-weight: 400"> our blog post</span></a><span style="font-weight: 400"> on how to get the most out of our working groups.</span></p>
<h2><span style="font-weight: 400">Resources and Funding Opportunities</span></h2>
<h3><span style="font-weight: 400">Provost’s Digital Innovation Grant</span></h3>
<p><span style="font-weight: 400">DEADLINE: Tuesday, February 10, 2026</span></p>
<p><a href="https://forms.gle/ZJ3zhbbsWKTxcTES6"><b>Application Form</b></a></p>
<p><a href="http://cuny.is/digitalgrants"><span style="font-weight: 400">Provost’s Digital Innovation Grants</span></a><span style="font-weight: 400"> (PDIGs), a</span><a href="https://gcdi.commons.gc.cuny.edu/"><span style="font-weight: 400"> recurring GC Digital Initiatives</span></a> <a href="https://gcdi.commons.gc.cuny.edu/"><span style="font-weight: 400">program</span></a><span style="font-weight: 400">, provide financial support of up to $2,000 to doctoral students at the CUNY Graduate Center as they design and develop digital projects that contribute to the GC’s research, teaching, and service missions. Since 2012, PDIGs have supported a wide range of inventive projects across the disciplines, such as an online, open-access, crowdsourced database of mentor relationships within the field of writing studies; an app to support street medics and promote health and safety among activist communities; a computational analysis of Cold War diplomatic history; and many others. </span></p>
<p><span style="font-weight: 400">Projects at any stage of development are eligible for PDIG awards. Proposals may also include the initial development of a digital project or the ongoing development, growth, and deployment of established individual or team digital projects. Such projects may require additional resources to make a tool presentable to an academic audience or to improve the design of an early prototype based on feedback and evaluation. Proposals should describe how they address a challenge or problem in the applicant’s scholarly field. </span></p>
<p><span style="font-weight: 400">Successful applicants will be asked to share a description on the</span><a href="https://digitalgrants.commons.gc.cuny.edu/"> <span style="font-weight: 400">Provost’s Digital Innovation Grant website</span></a><span style="font-weight: 400"> and to write a white paper upon completion of the grant that will also be published on our website.  Additionally, grantees will be expected to present publicly on their work in progress during the academic year, including presenting at the 2026 Digital GC Showcase on </span><b>Thursday, May 14th at 6:30 PM </b><span style="font-weight: 400">and participating in occasional collaborative meetings and discussions with current and past grantees. Projects that use open-source tools and that focus on making work publicly accessible are strongly encouraged.</span></p>
<h3><span style="font-weight: 400">Manifold Digital Publishing Platform</span></h3>
<div id="app">
<div id="appContainer" class="QdrUx dPXfr jIdm1 body-157">
<div class="Flvp1 customScrollBar DSIdO">
<div id="mainApp" class="NzIuH">
<div class="gU1Tf SFQkX">
<div class="RcCNh">
<div class="fGw6c">
<div class="I3sS5">
<div id="MainModule" class="FJ4hV">
<div class="SW7A6">
<div class="LBktY">
<div class="q9iRC css-231" data-max-width="2400">
<div class="HOVUa">
<div class="nbmyu FkPdL czwRD LCprM">
<div id="Skip to message-region" class="Mq3cC css-235" role="main" data-app-section="MailReadCompose" aria-label="Reading Pane" data-skip-link-name="Skip to message">
<div id="ReadingPaneContainerId" class="Xsklh Fr1kM">
<div>
<div class="g_zET">
<div class="Pzn2X">
<div class="fui-FluentProvider fui-FluentProviderr8d ___p1ubi60 f19n0e5 f3rmtva fgusgyc fk6fouc fkhj508 figsok6 fytdu2e f11ysow2 fly5x3f f22iagw" dir="ltr">
<div id="ItemReadingPaneContainer" class="KD9CV">
<div class="Q8TCC yyYQP owaMailComposeEditorScrollContainer customScrollBar" data-app-section="ItemContainer" data-is-scrollable="true">
<div>
<div class="wide-content-host">
<div class="fui-FluentProvider fui-FluentProviderr8v ___5n94it0 f19n0e5 f3rmtva fgusgyc fk6fouc fkhj508 figsok6 fytdu2e" dir="ltr">
<div>
<div class="Vm3e5 pinIo">
<div data-test-id="mailMessageBodyContainer">
<div class="ulb23 customScrollBar GNqVo allowTextSelection">
<div id="UniqueMessageBody_6" class="BIZfh" role="document" aria-label="Message body" data-fui-focus-visible="">
<div>
<div class="rps_ba7">
<div>
<div>
<blockquote>
<div id="x_x_Signature">
<div data-olk-copy-source="MessageBody">As CUNY students, you have free access to Manifold, an open source, digital publishing platform. Manifold is an Andrew W. Mellon Foundation and National Endowment for the Humanities (NEH) funded collaboration between the GC, University of Minnesota Press, and Cast Iron Coding. Manifold projects are multimedia friendly and texts created in Manifold may be annotated using Manifold’s built-in social annotation tool.</div>
<p>On <a href="https://cuny.manifoldapp.org">CUNY’s instance of Manifold</a> you can publish:</p>
<ul>
<li>Your own scholarship – <a href="https://cuny.manifoldapp.org/projects/searching-for-mami-abuelita">searching for mami &amp; abuelita</a> (dissertation), <a href="https://cuny.manifoldapp.org/projects/queer-and-trans-prison-voices">Queer and Trans Prison Voices</a> (capstone)</li>
<li>Journals – <a href="https://cuny.manifoldapp.org/journals/jitp">Journal of Interactive Technology and Pedagogy (JITP)</a></li>
<li>Course Sites and Open Educational Resources (OER) such as:
<ul>
<li>Archival projects – <a href="https://cuny.manifoldapp.org/projects/let-my-people-know">Let My People Know</a>, <a href="https://cuny.manifoldapp.org/projects/adrienne-rich-teaching-at-cuny-1968-1974">Adrienne Rich: Teaching at CUNY</a></li>
<li>Class projects &amp; course sites – <a href="https://cuny.manifoldapp.org/projects/black-diasporic-visions-de-constructing-modes-of-power">Black Diasporic Visions: (De)Constructing Modes of Power</a>, <a href="https://cuny.manifoldapp.org/projects/spring-2024-artd-3066-modern-art-and-oer-writing-seminar">Modern Art and OER Writing Seminar</a>, <a href="https://cuny.manifoldapp.org/projects/theatre-history-2">Theatre History II</a></li>
<li>Teaching editions of public domain texts – <a href="https://cuny.manifoldapp.org/projects/narrative-of-the-life-of-frederick-douglass">The Narrative of the Life of Frederick Douglass</a></li>
<li>Teaching and pedagogy resources – <a href="https://cuny.manifoldapp.org/projects/teach-cuny-handbook">Teach@CUNY Handbook</a></li>
</ul>
</li>
<li>Creative work such as poetry or personal essays – <a href="https://cuny.manifoldapp.org/projects/when-we-had-cancer">When We Had Cancer</a>, <a href="https://cuny.manifoldapp.org/projects/happy-nostalgia-5d632b7a-c867-4821-b579-5bd4385c4bb7">Happy Nostalgia</a></li>
</ul>
<div>To learn more about using Manifold, check out our <a href="https://cuny.manifoldapp.org/projects/getting-started-with-manifold">Getting Started with Manifold Quick Guides</a>! <a href="https://commons.gc.cuny.edu/groups/cuny-manifold-users/">Join the Manifold Users group</a> on the CUNY Academic Commons to be notified about Manifold workshops and updates.</div>
</div>
</blockquote>
<blockquote>
<div id="x_x_Signature">
<div>If you have questions about using Manifold please contact <a title="mailto:rmiller2@gc.cuny.edu" href="mailto:rmiller2@gc.cuny.edu" data-linkindex="15" data-ogsc="">Robin Miller</a> or Manifold Graduate Fellows Cortnie Belser, Herline Honorat, Cen Liu, and August Smith.</div>
</div>
</blockquote>
<h3>CUNY Academic Commons</h3>
<p><span style="font-weight: 400">As a CUNY student, you are eligible to register for an account on the </span><a href="https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcommons.gc.cuny.edu%2Fabout%2Fabout-the-commons%2F&amp;data=05%7C02%7Ctalsahlani%40gradcenter.cuny.edu%7Ce21ea42ed58d457896b108dd3cad360c%7C0b678335d50a41d3b15230149d930cfa%7C0%7C0%7C638733436845658815%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;sdata=JGpkuWuivRtUV7Jkhx6SsV%2Fg4Pt3XyxGaOzYvhNOn8s%3D&amp;reserved=0"><span style="font-weight: 400">CUNY Academic Commons</span></a><span style="font-weight: 400">. The Commons is a CUNY-created and run platform for building websites, collaborating with groups, and connecting with peers across the university. Faculty, staff, and students at CUNY use the Commons to <a href="https://commons.gc.cuny.edu/courses/">teach and take courses</a>,  create academic portfolios, host websites for their research projects or academic departments, and more!  To learn more, check out the </span><a href="https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcommons.gc.cuny.edu%2Fabout%2F&amp;data=05%7C02%7Ctalsahlani%40gradcenter.cuny.edu%7Ce21ea42ed58d457896b108dd3cad360c%7C0b678335d50a41d3b15230149d930cfa%7C0%7C0%7C638733436845694425%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;sdata=UJ1pRR60VzGOQnft8hz9GkgRHNDCixtkZoSR7jnD%2F4c%3D&amp;reserved=0"><span style="font-weight: 400">Commons’ About page</span></a><span style="font-weight: 400">, read the latest </span><a href="https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fnews.commons.gc.cuny.edu%2F&amp;data=05%7C02%7Ctalsahlani%40gradcenter.cuny.edu%7Ce21ea42ed58d457896b108dd3cad360c%7C0b678335d50a41d3b15230149d930cfa%7C0%7C0%7C638733436845728597%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;sdata=P%2FBSWTvydKAFExRD9%2FcYANVZHJdCuz2dTFd3iwXM0Ko%3D&amp;reserved=0"><span style="font-weight: 400">Commons News releases</span></a><span style="font-weight: 400">, browse </span><a href="https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcommons.gc.cuny.edu%2Fgroups%2F%3Fscope%3Dfeatured&amp;data=05%7C02%7Ctalsahlani%40gradcenter.cuny.edu%7Ce21ea42ed58d457896b108dd3cad360c%7C0b678335d50a41d3b15230149d930cfa%7C0%7C0%7C638733436845761720%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;sdata=4Z7ByOLztgmftzQ8Pokl8rIc4fi0LT0LpFyXsQhGRIY%3D&amp;reserved=0"><span style="font-weight: 400">featured groups</span></a><span style="font-weight: 400"> and </span><a href="https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fcommons.gc.cuny.edu%2Fsites%2F%3Fscope%3Dfeatured&amp;data=05%7C02%7Ctalsahlani%40gradcenter.cuny.edu%7Ce21ea42ed58d457896b108dd3cad360c%7C0b678335d50a41d3b15230149d930cfa%7C0%7C0%7C638733436845796912%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;sdata=UdcNKeDvEaGFRD1%2FbLEWd3fKQQ1jnHpYwcd0vWUB%2BO0%3D&amp;reserved=0"><span style="font-weight: 400">sites</span></a><span style="font-weight: 400">, and visit the Commons’ </span><a 
href="https://nam12.safelinks.protection.outlook.com/?url=https%3A%2F%2Fhelp.commons.gc.cuny.edu%2F&amp;data=05%7C02%7Ctalsahlani%40gradcenter.cuny.edu%7Ce21ea42ed58d457896b108dd3cad360c%7C0b678335d50a41d3b15230149d930cfa%7C0%7C0%7C638733436845829727%7CUnknown%7CTWFpbGZsb3d8eyJFbXB0eU1hcGkiOnRydWUsIlYiOiIwLjAuMDAwMCIsIlAiOiJXaW4zMiIsIkFOIjoiTWFpbCIsIldUIjoyfQ%3D%3D%7C0%7C%7C%7C&amp;sdata=DIU13lpHADMrpH5aBinKhQeI%2BFqFy5olOvmnyUakyNg%3D&amp;reserved=0"><span style="font-weight: 400">HELP pages</span></a><span style="font-weight: 400"> for support with getting started.</span></p>
<h2>GCDI Online Resources</h2>
<p><span style="font-weight: 400">If you find yourself unable to attend a particular workshop, there are a number of asynchronous GCDI resources you can use! Our resources include tutorials, handouts, and reflections that cover topics such as tools, methods as data and databases, research design, mapping, programming (including python and R), project management, sound recording, sharing, and analysis, text analysis, and web development.</span></p>
<ul>
<li style="font-weight: 400"><span style="font-weight: 400">Please visit the</span><a href="https://gcdi.commons.gc.cuny.edu/guides-tutorials/"><span style="font-weight: 400"> Digital Resource Guide</span></a><span style="font-weight: 400"> for materials by current and former Digital Fellows.</span></li>
<li style="font-weight: 400"><span style="font-weight: 400">Check out the Digital Fellows’</span><a href="https://gcdi.commons.gc.cuny.edu/category/news/tagging-the-tower/"><span style="font-weight: 400"> Tagging the Tower blog</span></a><span style="font-weight: 400"> for posts like:</span>
<ul>
<li style="font-weight: 400"><a href="https://gcdi.commons.gc.cuny.edu/2024/12/06/racialized-aspects-of-data-collection-data-use/"><span style="font-weight: 400">Racialized Aspects of Data Collection &amp; Data Use by Peyton Cordero</span></a></li>
<li style="font-weight: 400"><a href="https://gcdi.commons.gc.cuny.edu/2024/11/15/what-is-metadata/"><span style="font-weight: 400">What is metadata, and why does it matter? by Maggie Schreiner</span></a><span style="font-weight: 400"> </span></li>
<li style="font-weight: 400"><a href="https://gcdi.commons.gc.cuny.edu/2025/10/24/how-ai-is-changing-what-it-means-to-learn/"><span style="font-weight: 400">How AI Is Changing What It Means to Learn by Eunah Cho</span></a></li>
<li style="font-weight: 400"><a href="https://gcdi.commons.gc.cuny.edu/2024/10/11/ai-is-everywhere-is-it-in-qualitative-research-too-potentials-pitfalls-and-open-source-solutions-part-1/"><span style="font-weight: 400">AI is everywhere; is it in qualitative research, too? Potentials, Pitfalls, and Open-Source Solutions (Part 1) by Parisa Setayesh</span></a></li>
<li style="font-weight: 400"><a href="https://gcdi.commons.gc.cuny.edu/2025/12/05/digital-tools-to-experiment-with-this-winter/"><span style="font-weight: 400">Digital Tools to Experiment with this Winter by Meha Gupta</span></a></li>
<li style="font-weight: 400"><a href="https://gcdi.commons.gc.cuny.edu/2025/12/12/introduction-to-pythons-networkx/"><span style="font-weight: 400">Introduction to Python’s NetworkX By Pranav Chinmay</span></a></li>
<li style="font-weight: 400"><a href="https://gcdi.commons.gc.cuny.edu/2025/11/21/what-about-public-presence/"><span style="font-weight: 400">What about Public Presence? by Christopher Colon</span></a></li>
<li style="font-weight: 400"><a href="https://digitalfellows.commons.gc.cuny.edu/2018/11/30/a-conceptual-guide-to-digital-academic-identity/"><span style="font-weight: 400">A Conceptual Guide to Digital Academic Identity by Stefano Morello </span></a></li>
</ul>
</li>
<li style="font-weight: 400"><span style="font-weight: 400">Additional resources include the</span><a href="https://darcdigitalarchiveresearchcollective.commons.gc.cuny.edu/"><span style="font-weight: 400"> Digital Archive Research Collective</span></a><span style="font-weight: 400">, the</span><a href="https://digitaldiss.commons.gc.cuny.edu/"><span style="font-weight: 400"> Digital Dissertations Resource Guide</span></a><span style="font-weight: 400">, the </span><i><span style="font-weight: 400">entire</span></i><span style="font-weight: 400"> curriculum of the</span><a href="http://gcdigitalfellows.github.io/january_2018_curriculum.html"><span style="font-weight: 400"> GC Digital Research Institute</span></a><span style="font-weight: 400"> AND the</span><a href="https://dhrift.org/"><span style="font-weight: 400"> Digital Humanities Research Institute</span></a><span style="font-weight: 400">.</span></li>
</ul>
<h2>Stay in touch!</h2>
<p><span style="font-weight: 400">Want to keep up to date on all the GCDI happenings? Be sure to join our</span><a href="https://commons.gc.cuny.edu/groups/gcdi/"><span style="font-weight: 400"> GCDI Group on the Commons</span></a><span style="font-weight: 400">, subscribe to our</span><a href="https://gcdi.commons.gc.cuny.edu/calendar/"><span style="font-weight: 400"> calendar</span></a><span style="font-weight: 400">, follow us on</span><a href="https://www.linkedin.com/company/gc-digital-initiatives"><span style="font-weight: 400"> LinkedIn</span></a><span style="font-weight: 400">, check out our </span><a href="https://linktr.ee/GCDigitalFellows"><span style="font-weight: 400">Linktr.ee</span></a><span style="font-weight: 400">, and be on the lookout for regular updates about our programs shared through your program’s listserv. It’s always good to know what’s going on with the GCDI, because you, yes </span><i><span style="font-weight: 400">you</span></i><span style="font-weight: 400">, are the #digitalGC.</span></p>
<p><span style="font-weight: 400">The best way to stay connected is to check the GCDI</span><a href="https://gcdi.commons.gc.cuny.edu/"><span style="font-weight: 400"> website</span></a><span style="font-weight: 400"> regularly. There, you will find all of our workshops, events, and </span><a href="https://gcdi.commons.gc.cuny.edu/event/provosts-digital-innovation-grants-application-deadline/"><span style="font-weight: 400">grant opportunities</span></a><span style="font-weight: 400"> on our</span><a href="https://gcdi.commons.gc.cuny.edu/calendar/"><span style="font-weight: 400"> calendar</span></a><span style="font-weight: 400">, as well as a slew of online</span><a href="https://gcdi.commons.gc.cuny.edu/digital-resource-guide/"><span style="font-weight: 400"> resources</span></a><span style="font-weight: 400"> to support your work this semester.</span></p>
<p><span style="font-weight: 400">Please don’t hesitate to contact the Digital Fellows </span><a href="mailto:gc.digitalfellows@gmail.com"><span style="font-weight: 400" data-rich-links="{&quot;per_n&quot;:&quot;gc.digitalfellows@gmail.com&quot;,&quot;per_e&quot;:&quot;gc.digitalfellows@gmail.com&quot;,&quot;type&quot;:&quot;person&quot;}">gc.digitalfellows@gmail.com</span></a><span style="font-weight: 400"> with questions!</span></p>
<p><span style="font-weight: 400">With best wishes for a productive and smooth semester,</span></p>
<p><span style="font-weight: 400">your Digital Fellows,</span></p>
<p><span style="font-weight: 400">Anna, Tuka, Maggie, Parisa, Chinmay, Meha, Chris, and Eunah</span></p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
