<?xml version="1.0" encoding="UTF-8" standalone="no"?><rss xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:slash="http://purl.org/rss/1.0/modules/slash/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:wfw="http://wellformedweb.org/CommentAPI/" version="2.0">

<channel>
	<title>EduGeek Journal</title>
	<atom:link href="https://www.edugeekjournal.com/feed/" rel="self" type="application/rss+xml"/>
	<link>https://www.edugeekjournal.com</link>
	<description>News and Views From the World of Educational Technology. Our goal is to help keep educators one step ahead of the Joneses.</description>
	<lastBuildDate>Tue, 17 Mar 2026 19:14:37 +0000</lastBuildDate>
	<language>en</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
<atom:link href="https://pubsubhubbub.appspot.com" rel="hub"/>
<atom:link href="https://pubsubhubbub.superfeedr.com" rel="hub"/>
<atom:link href="https://websubhub.com/hub" rel="hub"/>
<atom:link href="https://www.edugeekjournal.com/feed/" rel="self"/>
	<item>
							<title>Most People Don’t Need a “GenAi for Dummies” Book Anymore</title>
				
		<link>https://www.edugeekjournal.com/2026/03/17/most-people-dont-need-a-genai-for-dummies-book-anymore/</link>
				<comments>https://www.edugeekjournal.com/2026/03/17/most-people-dont-need-a-genai-for-dummies-book-anymore/#respond</comments>
				<pubDate>Tue, 17 Mar 2026 12:14:37 -0700</pubDate>
		<dc:creator><![CDATA[Matt Crosslin]]></dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[working with ai]]></category>

		<guid isPermaLink="false">https://www.edugeekjournal.com/?p=2633</guid>
				<description><![CDATA[I won&#8217;t apologize that this post won&#8217;t be a breath of fresh air in the Ai conversation (although I do get why people want that). I am an Ai skeptic and I don&#8217;t hide that fact from anyone. Some things in this world are worth dividing over. I know people are tired of division and&#8230;<a href="https://www.edugeekjournal.com/2026/03/17/most-people-dont-need-a-genai-for-dummies-book-anymore/" class="button">Read more <span class="screen-reader-text">Most People Don&#8217;t Need a &#8220;GenAi for Dummies&#8221; Book Anymore</span></a>]]></description>
								<content:encoded><![CDATA[<p>I won&#8217;t apologize that this post won&#8217;t be a breath of fresh air in the Ai conversation (although I do get why people want that). I am an Ai skeptic and I don&#8217;t hide that fact from anyone. Some things in this world are worth dividing over. I know people are tired of division and sides that everyone takes. It would be pretty easy as a white dude for me to sigh something about everyone taking a side and just accepting my Ai overlords like many do &#8211; but I also know there are many people that don&#8217;t have the luxury of avoiding sides. They are forced onto a side just for being who they were born to be.</p>
<p>So, yes, I keep going back to the same old, same old Ai skeptic / critic / hater / whatever you want to call it side again and again because so many people that are harmed by Ai don&#8217;t have a choice to be on that side, or to even avoid either side. There is nothing special about my stance &#8211; it should be the baseline for most people: join with the vulnerable in fighting harm and injustice.</p>
<p>Yes, yes, I  know: not everything is that simple. I&#8217;m a huge believer in embracing complexity. But I also acknowledge that not everything has to be complex.</p>
<p>Or I could put it this way: the complexity I live with is that I don&#8217;t want to contribute to the problems that Ai is causing, but since Ai is almost everywhere these days, I don&#8217;t have that choice. I have to know how Ai works in order to know where it is so I can avoid it (or mitigate its effects as much as possible). But knowing how Ai works is not the same as actively, regularly using it.</p>
<p>This is what so many Ai Cheerleaders and Ai Both-sides-ers get wrong about Ai critics: we do know Ai. We don&#8217;t need a new guide to prompting better. We figured out that better prompting gets better results (but still, too often, not good results) about 30 seconds into our first attempt. And writing better prompts isn&#8217;t hard for many, especially those in academia that had to write papers constantly across years and multiple degrees.</p>
<p>I have seen well-meaning people get attacked for offering new &#8220;GenAi for Dummies&#8221; guides. Some of the attacks are pretty vicious and I truly wish that didn&#8217;t happen (and yes, I can say that while still disagreeing with the decision to create the guides in the first place). People are a couple of years and thousands of mansplaining hours into being told they just aren&#8217;t smart enough to figure out what is basically an autocomplete program (yes, I know Ai is a bit more complex than prediction / autocomplete / etc &#8211; but that is technically what people are doing with prompting even if the underlying process is complex). Yes, they are smart enough. Please consider that most people that don&#8217;t like Ai are not simply in the &#8220;prompt harder!&#8221; demographic. I&#8217;m not justifying the vicious attacks of some, but I see where the anger comes from. Not all pushback is angry attacks, either.</p>
<p>I noticed<strong> <a href="https://www.wsj.com/tech/ai/hospitals-are-a-proving-ground-for-what-ai-can-do-and-what-it-cant-60e4020c" target="_blank" rel="noopener">this article in the Wall Street Journal</a></strong> about an Ai program that helps doctors write responses to patients: &#8220;After trying it for a few weeks, doctors said the drafts weren’t helpful and required too much rewriting.” <strong><a href="https://bsky.app/profile/nisslbody.blacksky.app/post/3mbtv34ibzs2m" target="_blank" rel="noopener">This response to the article on Bluesky</a></strong> (the author set the post to only be read by those logged in to Bluesky, so you will have to go there to read it) is what so many of us say about these tools: yes, we have tried them.</p>
<p>Believe it or not, I do try to get out and talk to people that work with Ai. I try to make sure to talk to people that aren&#8217;t white men, mainly because white men are the least likely to be harmed by Ai. I sat down and ate with someone that works at a very large tech firm here in my area. What they told me about their working conditions with Ai was frightening.</p>
<p>Like many companies of this size, they have an internal Ai solution for their specific field, as well as access to Copilot and other Ai services. They have a dashboard that tracks usage of their Ai tool, and additionally they have made overall Ai usage a requirement of work. Refusing to use Ai of any kind will become an HR issue and possibly a &#8220;reassignment issue.&#8221; I know that sounds pretty barbaric, but it is more common than you would realize in larger companies: use Ai or else.</p>
<p>Employees are told that the Ai-tracking dashboard is not punitive, just informational. However, the employee I spoke to had already gotten in trouble with multiple upper level leaders for not using their internal Ai. The kicker is that the internal Ai is designed for a completely different position than the employee works in. Yet they were called out publicly and harshly for not using it. This employee used it on the spot for a nonsense task, and then got praised.</p>
<p>However, that usage somehow did not register correctly and the employee still got flak for &#8220;not using&#8221; the internal Ai. In fact, several Ai-driven dashboards were not recording information properly. The employee had to have a long, contentious meeting with the upper level admin in charge of the dashboards to point out the problems, and it took a lot of back and forth to break through the guy&#8217;s confidence to make him see that it was not collecting data correctly.</p>
<p>On top of this, even when using the internal Ai solution for the tasks it was created for, the output is incredibly sub-par. It is the same story I hear over and over again: it takes longer to fix the Ai output than it takes to just do it without Ai. And since this company has such a specialized marketplace, tools such as Copilot and ChatGPT are fairly useless, because they weren&#8217;t trained on detailed enough data.</p>
<p>So, there you have the typical Ai scenario that I hear again and again:</p>
<ul>
<li>The Ai solution (internal or external tools) does not do a great job</li>
<li>Ai usage is required by the company</li>
<li>Employees are told they are not being tracked for punitive purposes, but all tracking is used to punish and condemn those that don&#8217;t meet arbitrary standards</li>
<li>These arbitrary standards rarely have anything to do with the work people actually do</li>
<li>Explaining these problems to those that run the Ai solutions is usually fruitless</li>
<li>Workers get less work done in more time trying to deal with Ai, and then face retribution for not making quotas</li>
</ul>
<p>I originally wrote this up a while back, and then life got in the way. I followed up with the friend that provided me with this information. It turns out their experience was not an outlier, but the norm for those that work there. However, they did find one way to use Ai: when upper level admins want some random summary or report, and they know that it is not going to be read anyways, they ask Ai to create the report. It is usually nonsense, but they have not heard any negative feedback from those that get the report.</p>
<p>Read what you will into that example of Ai usage and why Ai only seems popular with upper level admins. Everyone I have spoken to that is forced to use Ai is looking for other jobs away from Ai-driven employers.</p>
<p>I know that it is tempting to say this is more of a leadership issue than an Ai issue, and on some levels that is true. But if Ai were doing a decent job of what it is supposed to do, all of this would go away. Workers would use it because it is actually helpful, and none of the rest would happen.</p>
<p><img decoding="async" class="alignright size-thumbnail wp-image-1010" src="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg" alt="" width="150" height="150" srcset="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg 150w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1.jpg 300w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-250x250.jpg 250w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-50x50.jpg 50w" sizes="(max-width: 150px) 100vw, 150px" />So maybe put yourself into the shoes of someone working in this situation and think about what it is like to be told again and again, over and over that you just don&#8217;t like Ai because you don&#8217;t understand it, or you need to prompt better. Maybe some are more than just a bit tired of hearing that again and again. Maybe they can&#8217;t really see a good side to Ai. Maybe they don&#8217;t want to choose sides, but they are forced to choose one thanks to bad leadership. Or maybe even they are just like everyone else in that they don&#8217;t like every <em>single</em> thing in the world. Ever wonder why we accept people having personal preference over almost every other type of technology&#8230; except for Ai? Why can&#8217;t people that don&#8217;t like Ai just&#8230; I don&#8217;t know&#8230; be allowed to not like it for any reason they want?</p>
]]></content:encoded>
							<wfw:commentRss>https://www.edugeekjournal.com/2026/03/17/most-people-dont-need-a-genai-for-dummies-book-anymore/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
							</item>
		<item>
							<title>Is it Just Red? Human Creativity Versus Ai “Creativity”</title>
				
		<link>https://www.edugeekjournal.com/2025/12/27/is-it-just-red-human-creativity-versus-ai-creativity/</link>
				<comments>https://www.edugeekjournal.com/2025/12/27/is-it-just-red-human-creativity-versus-ai-creativity/#respond</comments>
				<pubDate>Sat, 27 Dec 2025 08:35:01 -0700</pubDate>
		<dc:creator><![CDATA[Matt Crosslin]]></dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[AI Generated Art]]></category>
		<category><![CDATA[Creativity]]></category>

		<guid isPermaLink="false">https://www.edugeekjournal.com/?p=2622</guid>
				<description><![CDATA[I saw this meme shared by several people last year and I meant to comment on it then, but totally forgot: The meme is meant to be commentary on how Ai Haters just reject Ai art uncritically. I mean, the top painting is just red shapes, but the bottom one is actual art, right?&#8230;<a href="https://www.edugeekjournal.com/2025/12/27/is-it-just-red-human-creativity-versus-ai-creativity/" class="button">Read more <span class="screen-reader-text">Is it Just Red? Human Creativity Versus Ai &#8220;Creativity&#8221;</span></a>]]></description>
								<content:encoded><![CDATA[<p>I saw this meme shared by several people last year and I meant to comment on it then, but totally forgot:</p>
<p><img fetchpriority="high" decoding="async" class="alignnone size-full wp-image-2623" src="https://www.edugeekjournal.com/wp-content/uploads/2025/12/ItsJustRed.jpg" alt="" width="600" height="510" srcset="https://www.edugeekjournal.com/wp-content/uploads/2025/12/ItsJustRed.jpg 600w, https://www.edugeekjournal.com/wp-content/uploads/2025/12/ItsJustRed-300x255.jpg 300w, https://www.edugeekjournal.com/wp-content/uploads/2025/12/ItsJustRed-294x250.jpg 294w" sizes="(max-width: 600px) 100vw, 600px" /></p>
<p>The meme is meant to be commentary on how Ai Haters just reject Ai art uncritically. I mean, the top painting is just red shapes, but the bottom one is actual art, right? Well, if you knew who the artist of the top painting was, you would realize how bad of a comparison this meme makes. I don&#8217;t know who made the meme, but they obviously don&#8217;t know much about art.</p>
<p>Full confession (that I have probably made many times already): I studied to be an art teacher at one point. I have to shake my head when anyone picks on modern abstract art (or whatever generic term people throw at this wide form of art) &#8211; even if it is a particular piece I don&#8217;t personally like. This attempt at critique started before Ai, of course &#8211; with people claiming things like &#8220;modern art looks like Kindergarteners made it!&#8221; decades before Ai was even invented. I try to challenge people to make modern art if they think it is so easy. The few people that take me up on the challenge quickly find out what I have also discovered: there is a lot more to it than throwing paint at a canvas, or painting a few straight lines. There is an intentionality to everything that is hard to recreate without a lot of time and thought.</p>
<p>The makers of this meme probably are confusing personal preference with imagination and creativity. I can easily see someone liking the Ai image on the bottom when compared to the top one. But I also doubt anyone really ever responds with something like &#8220;art requires imagination and creativity!&#8221; Art actually doesn&#8217;t require either. When people paint copies of famous paintings, they don&#8217;t have to use any imagination or creativity &#8211; but they still produce art. When someone paints a still life, they don&#8217;t have to use much imagination or creativity, but they still produce art. Artists generally object to Ai Cheerleaders claiming that Ai has imagination and creativity.</p>
<p>So let&#8217;s go back and dig into that top &#8220;just red&#8221; painting. It looks like it is probably <strong><a href="https://www.moma.org/collection/works/79250?artist_id=4285&amp;page=1&amp;sov_referrer=artist" target="_blank" rel="noopener"><em>Vir Heroicus Sublimis</em></a></strong> (1950-51) by <strong><a href="https://www.moma.org/artists/4285-barnett-newman" target="_blank" rel="noopener">Barnett Newman</a></strong>:</p>
<blockquote>
<p style="text-align: left;">&#8220;This work’s title, which can be translated as &#8216;Man, heroic and sublime,&#8217; refers to Newman’s essay &#8216;The Sublime is Now,&#8217; in which he poses the question, &#8216;If we are living in a time without a legend that can be called sublime, how can we be creating sublime art?&#8217;”</p>
</blockquote>
<p>As you can see, Newman is a theorist and philosopher. He put a lot of imagination, creativity, and thought into his artwork:</p>
<p><iframe title="How to paint like Barnett Newman – Vir Heroicus Sublimis (1950-51) | IN THE STUDIO" width="662" height="372" src="https://www.youtube.com/embed/GacKM9yxiw4?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p>You may still not like it after reading and hearing all of this, but it is very obvious that <em>Vir Heroicus Sublimis</em> is not &#8220;just red.&#8221;</p>
<p>People also mistakenly thought that the red artwork in the meme was painted by <strong><a href="https://en.wikipedia.org/wiki/Mark_Rothko" target="_blank" rel="noopener">Mark Rothko</a></strong>. I get why the mistake was made, but it is clearly Newman. However, Rothko&#8217;s paintings were also deceptively simple &#8211; they appeared to just be squares of one or two colors, but it turns out they were actually nearly impossible to recreate. Rothko spent hours and hours layering his paint in ways that he often kept to himself, sometimes even mixing in non-paint materials (resin, eggs, glue, etc) to achieve various effects. Again, his work was not &#8220;just red.&#8221;</p>
<p><img loading="lazy" decoding="async" class="alignright size-thumbnail wp-image-1010" src="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg" alt="" width="150" height="150" srcset="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg 150w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1.jpg 300w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-250x250.jpg 250w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-50x50.jpg 50w" sizes="auto, (max-width: 150px) 100vw, 150px" />Artificial Intelligence is designed to be a pattern completion program. This process does not require creativity or imagination. In fact, if Ai were to utilize creativity or imagination, it would no longer be completing the pattern. Creativity and imagination would require a deviation from the pattern. It&#8217;s why we call it &#8220;thinking outside the box.&#8221; If you like Ai-generated art, then that is your choice. Just stop thinking you have some grand &#8220;gotcha&#8221; by comparing it to a random piece of &#8220;modern abstract&#8221; art. You might just end up making yourself look like you are art illiterate. Which is a good way to invalidate your point about Ai-generated art in the first place.</p>
]]></content:encoded>
							<wfw:commentRss>https://www.edugeekjournal.com/2025/12/27/is-it-just-red-human-creativity-versus-ai-creativity/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
							</item>
		<item>
							<title>Ai is Everything That is Not Needed in Education</title>
				
		<link>https://www.edugeekjournal.com/2025/12/19/ai-is-everything-that-is-not-needed-in-education/</link>
				<comments>https://www.edugeekjournal.com/2025/12/19/ai-is-everything-that-is-not-needed-in-education/#respond</comments>
				<pubDate>Fri, 19 Dec 2025 10:47:29 -0700</pubDate>
		<dc:creator><![CDATA[Matt Crosslin]]></dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Matter and Space]]></category>

		<guid isPermaLink="false">https://www.edugeekjournal.com/?p=2609</guid>
				<description><![CDATA[I guess I will start off with the blogger cliche: it has been a while. I know. It&#8217;s not that I don&#8217;t have anything so say &#8211; it&#8217;s just that it feels repetitive to keep talking about Ai. It never really improves &#8211; not in any true way. And the news just keeps getting worse&#8230;<a href="https://www.edugeekjournal.com/2025/12/19/ai-is-everything-that-is-not-needed-in-education/" class="button">Read more <span class="screen-reader-text">Ai is Everything That is Not Needed in Education</span></a>]]></description>
								<content:encoded><![CDATA[<p>I guess I will start off with the blogger cliche: it has been a while. I know. It&#8217;s not that I don&#8217;t have anything to say &#8211; it&#8217;s just that it feels repetitive to keep talking about Ai. It never really improves &#8211; not in any true way. And the news just keeps getting worse and worse.</p>
<p>Every day I could write posts pointing to the daily articles about something going wrong with Ai. &#8220;<a href="https://www.extremetech.com/computing/microsoft-scales-back-ai-goals-because-almost-nobody-is-using-copilot" target="_blank" rel="noopener"><strong>Microsoft Scales Back Ai Goals Because Almost Nobody is Using Copilot</strong></a>&#8221; says ExtremeTech. Told you so. &#8220;<a href="https://www.economist.com/finance-and-economics/2025/11/26/investors-expect-ai-use-to-soar-thats-not-happening" target="_blank" rel="noopener"><strong>Investors Expect Ai Use to Soar. That’s Not Happening</strong></a>&#8221; says The Economist. Again &#8211; not a surprise.</p>
<p>(Some of the stats in The Economist article are telling. When asked in a survey &#8220;Do you use Ai on the job?&#8221;, 87% of the executives surveyed said yes, 57% of the managers said yes, and only 27% of the workers said yes. The people actually doing the work aren&#8217;t finding Ai useful. Also not surprising from what I am hearing from workers in various sectors. As many have said in different ways: Ai is not coming for your job. CEOs / employers / upper level management are using Ai as an excuse to replace you with a sub-quality replacement.)</p>
<p>To be honest, it gets too easy to pick on Ai. It was overhyped by people who knew better.</p>
<p>But since it is still being forced into education in so many ways, I still have to pay attention. I wanted to go back and see what has happened with Matter and Space, an educational Ai company that claims to be &#8220;redefining what it means to learn in the age of Ai.&#8221; I have <strong><a href="https://www.edugeekjournal.com/2025/04/26/are-you-ready-for-a-trip-into-ai-matter-and-space/">expressed concerns about Matter and Space in the past</a></strong> (as have others). I looked at their website a few months ago and saw several changes to the website, seeming to indicate they had gone live with their &#8220;Learning Environment 1&#8221; (LE1) software. I formulated some responses to what I saw there and then life got in the way. I went back today and found that the Matter and Space website redirects to Southern New Hampshire University&#8217;s main website. Huh.</p>
<p>There is still an <strong><a href="https://web.archive.org/web/20250924175516/https://matterandspace.com/" target="_blank" rel="noopener">archived version of the Matter and Space website on Archive.org</a></strong>. This is the final version of it, with the released version of LE1. It is still unclear what LE1 is exactly. The problematic video <em>Butterflies</em> appears to have been removed from the front page at least (I still found it at the bottom of a subpage, but the Way Back Machine doesn&#8217;t appear to have saved most of the subpages). There is still a video on the front page that contains many of the statements that concerned me the most (<a href="https://www.youtube.com/watch?v=thMNriNv9Es" target="_blank" rel="noopener"><strong>you can still view it here</strong></a>).</p>
<p>Paul LeBlanc still asks what learning could look like &#8220;if we were unconstrained by the way it happens today?&#8221; But then the website is filled with all kinds of terms that are constraining it today, like skills, competencies, learner success, engagement, mastery, persistence, real world readiness, etc.</p>
<p>George Siemens still says that they want their Ai to know learners better than any other educational system has known them in the past (Creepy. Probably something white guys don&#8217;t worry about as much as everyone else). He also says &#8220;the best established pedagogical model for how students learn is a question and answer type of dialogue.&#8221; The word &#8220;best&#8221; is doing a lot of lifting there &#8211; lifting that is not backed up by research. But then again, it isn&#8217;t clear what is meant by &#8220;question and answer&#8221; here. Does the learner ask the questions? They usually want an education because they don&#8217;t know what questions to ask. Does the instructor/Ai ask the questions? Isn&#8217;t that just&#8230; testing?</p>
<p>The video still shows what is basically a chatbot app. Siemens then lists a lot of stuff that LE1 is going to give learners (content, resources, material, social connections, a nosey Ai bot that will pry into their lives to get to know them better than any before, etc) that goes well beyond &#8220;question and answer&#8221; models. Does that mean LE1 uses a lot of stuff that is not &#8220;the best established pedagogical model&#8221;?</p>
<p>Tanya Gamby says &#8220;We can see where you&#8217;re struggling in real time. We can slow things down, let you take a break. Education&#8217;s never existed like that.&#8221; Except that, yes, every in-person class K-16 allows teachers to see that students are struggling and let them take breaks. That has happened to me thousands of times in my life. I really hope that was just a bad video edit and that she doesn&#8217;t mean to imply that millions of teachers have never given wellness breaks.</p>
<p>Now that the Matter and Space website seems to be gone, I wonder what happened. All I can really find online is a <a href="https://www.linkedin.com/posts/paul-j-leblanc-6a17749_i-want-to-provide-an-update-on-matter-and-activity-7384780291921510400-kCKl/" target="_blank" rel="noopener"><strong>LinkedIn post by Paul LeBlanc</strong></a>, who says that Southern New Hampshire University has decided to bring the platform in-house. LeBlanc does not appear to be staying with the project, and announces he is moving on to the Harvard University Graduate School of Education as a visiting scholar and special advisor. Oh, and he is writing a book on Ai and education.</p>
<p>I can&#8217;t find much else on what has happened. Back in September, Siemens wrote a blog post called &#8220;<a href="https://elearnspace.org/blog/what-i-learned-building-an-ai-university-over-the-last-2-%c2%bd-years-part-1-of-many/" target="_blank" rel="noopener"><strong>What I Learned Building an Ai University Over the Last 2 ½ Years: Part 1 of Many</strong></a>.&#8221; But there have been no more parts. It seems like things were looking good while this was written. But even here there is much I disagree with when it comes to Ai in education.</p>
<p>The post starts off with the usual concerns for Universities not responding to Ai. That has not been my experience, considering we already have policies and degrees and tool integration that are already a year old or more in universities everywhere. But regardless of any of that &#8211; no one has ever stopped to prove why universities have to respond at all other than &#8220;because it exists!&#8221; Even Ai Cheerleaders will tell you they don&#8217;t know where Ai is going exactly, what exactly is going on under the hood, or even how Ai companies can protect their users from harm. Universities have these pesky rules about harming students &#8211; so why should any University adopt any tool where the main response by the companies&#8217; owners to user safety issues has been a mostly collective shrug?</p>
<p>Students, faculty, and employees of universities are people, and we really should care enough about people to leave behind the &#8220;move fast and break things&#8221; mentality.</p>
<p>Siemens says things like &#8220;higher education faculty and staff need to become Ai product builders&#8221; and &#8220;learning as an act itself will be massively augmented and improved by Ai,&#8221; which all sound like past trend hype as well (&#8220;educators need to be bloggers / build MOOCs / create islands in Second Life / understand crypto / master learning analytics / etc / etc / etc&#8221; and then &#8220;learning will be massively improved by blogs / MOOCs / virtual worlds / crypto / analytics / etc / etc / etc.&#8221;)</p>
<p>People are <a href="https://www.customerexperiencedive.com/news/customers-dislike-ai-customer-service/757711/" target="_blank" rel="noopener"><strong>finding Ai-chatbots annoying</strong></a>. Research is starting to find that <a href="https://time.com/7295195/ai-chatgpt-google-learning-school/" target="_blank" rel="noopener"><strong>Ai is harming critical thinking skills rather than helping them</strong></a>. Even if someone could find good news about it, all of the environmental and social harm it causes would still outweigh any benefits.</p>
<p><img loading="lazy" decoding="async" class="alignright size-thumbnail wp-image-1010" src="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg" alt="" width="150" height="150" srcset="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg 150w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1.jpg 300w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-250x250.jpg 250w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-50x50.jpg 50w" sizes="auto, (max-width: 150px) 100vw, 150px" />Siemens refers to LeBlanc as &#8220;aggressively impatient,&#8221; driven, and decisive. He seems to think this is what education needs more of. I would disagree. Education seems to have had plenty of that for decades. Education is often a slow process that takes time to build trust and community. You have to go back and repeat things all the time. You have to take the time to be mindful of the process. We know what works in education: time, funding, nutrition, safety for all, etc. We know that our politicians and leaders are unwilling to fund what works &#8211; therefore they so desperately need Ai to fill gaps it is never going to be able to fill. That way, they can keep money in the hands of their big business cronies, not flowing back to the people that paid for it in taxes in the first place. Ai (poorly defined as it is, but in its current form) is everything that is not needed in education.</p>
]]></content:encoded>
							<wfw:commentRss>https://www.edugeekjournal.com/2025/12/19/ai-is-everything-that-is-not-needed-in-education/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
							</item>
		<item>
							<title>Pattern Recognition is Something That Intelligent Entities Do, But Ai Doesn’t Really Do Pattern Recognition</title>
				
		<link>https://www.edugeekjournal.com/2025/09/02/pattern-recognition-is-something-that-intelligent-entities-do-but-ai-doesnt-really-do-pattern-recognition/</link>
				<comments>https://www.edugeekjournal.com/2025/09/02/pattern-recognition-is-something-that-intelligent-entities-do-but-ai-doesnt-really-do-pattern-recognition/#respond</comments>
				<pubDate>Tue, 02 Sep 2025 13:08:34 -0700</pubDate>
		<dc:creator><![CDATA[Matt Crosslin]]></dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Intelligence]]></category>

		<guid isPermaLink="false">https://www.edugeekjournal.com/?p=2561</guid>
				<description><![CDATA[August was a rough month for the Ai Cheerleaders in education. The much anticipated and hyped rollout of ChatGPT5 was a bit of a disaster, almost proving what some like Yann LeCun have said about Ai degrading as it moves forward. A study done by MIT (which is not exactly and anti-Ai institution) found that&#8230;<a href="https://www.edugeekjournal.com/2025/09/02/pattern-recognition-is-something-that-intelligent-entities-do-but-ai-doesnt-really-do-pattern-recognition/" class="button">Read more <span class="screen-reader-text">Pattern Recognition is Something That Intelligent Entities Do, But Ai Doesn&#8217;t Really Do Pattern Recognition</span></a>]]></description>
								<content:encoded><![CDATA[<p>August was a rough month for the Ai Cheerleaders in education. The much anticipated and hyped rollout of ChatGPT5 was <strong><a href="https://futurism.com/gpt-5-disaster" target="_blank" rel="noopener">a bit of a disaster</a></strong>, almost proving what some like Yann LeCun have said about <strong><a href="https://www.edugeekjournal.com/2023/03/28/what-do-you-mean-by-cognitive-domain-anyways-the-doom-of-ai-is-nigh/">Ai degrading as it moves forward</a></strong>. A <strong><a href="https://futurism.com/ai-agents-failing-companies" target="_blank" rel="noopener">study done by MIT</a></strong> (which is not exactly an anti-Ai institution) found that &#8220;a staggering 95 percent of attempts to incorporate generative Ai into business so far are failing.&#8221; A <strong><a href="https://www.forbes.com/sites/petergreene/2025/08/21/pdk-poll-shows-waning-support-for-ai-in-schools/" target="_blank" rel="noopener">PDK poll found that</a></strong> &#8220;support for Ai in public schools and public schools themselves is down this year&#8221; (this includes Ai usage in general as well as specific things like lesson planning, test prep, tutoring, etc). This quote from the Forbes article about that poll was telling: &#8220;It seems that as more Americans come more familiar with the idea of AI in school, they are less welcoming to it.&#8221; It seems that the media is starting to sour on Ai, which is one of several reasons some feel an <strong><a href="https://www.forbes.com/sites/paulocarvao/2025/08/21/is-the-ai-bubble-bursting-lessons-from-the-dot-com-era/" target="_blank" rel="noopener">Ai bubble is about to burst</a></strong> (Forbes says it wouldn&#8217;t be that bad, but others are worried it could take the rest of the <strong><a href="https://unherd.com/2025/08/is-the-ai-bubble-about-to-burst/" target="_blank" rel="noopener">economy down the tubes with it</a></strong>). 
Melania Trump announced that <strong><a href="https://thehill.com/homenews/5470552-melania-trump-ai-school-challenge/" target="_blank" rel="noopener">she is going to lead the Presidential Ai Challenge</a></strong> to encourage teams of K-12 students to &#8220;tackle how Ai technologies can be utilized to help address challenges in their schools or communities.&#8221; Her only qualification to do this is that she is married to the President&#8230; who is actively working to dismantle public education. Have Ai Cheerleaders ever stopped to look at who is gathering with them, and who isn&#8217;t?</p>
<p>There are so many stories each week about Ai induced psychosis that I don&#8217;t even know which one to pick here. But a growing number of people are pointing out that if a human psychologist responded to people the way Ai chatbots do, they would lose their license and their job. The calls to permanently shut down ChatGPT and even all Ai are growing.</p>
<p>Oh, and <a href="https://www.reuters.com/investigates/special-report/meta-ai-chatbot-death/" target="_blank" rel="noopener"><strong>Ai is still killing people</strong></a>. By lying to them. And the people in charge won&#8217;t fix that.</p>
<p>Again&#8230; you really should stop to see who agrees with you and who doesn&#8217;t.</p>
<p>Somewhere on the social media tubes, Ed Zitron was posting about how he is noticing people that pushed Ai now trying to backtrack and claim they were thoughtful critics all along. He was probably not talking about educational Ai thought leaders &#8211; they will just switch to the next thing, or double down on Ai&#8217;s value as it continues to tank. And those people that are doubling down will probably keep sending me articles that they think are some kind of &#8220;gotcha&#8221; on some past post of mine: &#8220;Have you seen this post? Kind of shoots down your whole point against Ai!&#8221; These articles rarely do that, but sometimes one does contain interesting ideas and points.</p>
<p>One such interesting article is &#8220;<strong><a href="https://halfanhour.blogspot.com/2025/08/on-intelligence.html" target="_blank" rel="noopener">On Intelligence</a></strong>&#8221; by Stephen Downes, in which he walks through what he believes intelligence is and isn&#8217;t from a philosophical viewpoint. My guess is that this article was written in response to other writers and bloggers stating that Ai is not intelligent. Of course, these statements are usually spoken out of frustration with bad Ai output, or even being forced to use an Ai tool that slows people down more than anything else. I doubt most people think through the philosophical support for their frustrated response, but some do &#8211; and they simply pull from a different set of philosophers than Downes does.</p>
<p>While I think it is important to know one&#8217;s philosophical stance on things, I&#8217;m not sure it proves anything overall for Ai. Don&#8217;t get me wrong &#8211; it is important to know the philosophy behind what you think, and this article sheds a lot of light on what Downes believes about Ai, and what he chooses to highlight or not highlight in OLDaily. In general, in my opinion at least, it would be more important to know the philosophical stances of those that programmed various Ai tools and systems. You can&#8217;t think or philosophize Ai into existence &#8211; it is a computer program. You create it with code and math. And in some cases, I&#8217;m pretty sure the people that are creating today&#8217;s systems don&#8217;t think much about philosophy. To them, it is science that dictates what they are doing. Philosophy may inform that programming, but science controls how it works out practically.</p>
<p>The article starts off early with this statement:</p>
<blockquote>
<p style="text-align: left;">&#8220;the essence of debate has been lost in this academic exercise&#8230;. I think something similar has happened over a much longer time frame to our understanding of intelligence.&#8221;</p>
</blockquote>
<p>I guess whether you agree with this or not depends on who the &#8220;our&#8221; refers to here. If it is tech bros and companies, sure. When you talk about academia, however, there is still a lot of investigation into what intelligence is in many fields, especially education. This statement is followed by this:</p>
<blockquote>
<p style="text-align: left;">&#8220;But today intelligence is thought of as (as various wags have stated over the years) whatever intelligence tests measure.&#8221;</p>
</blockquote>
<p>Sure, these various wags do say that a lot&#8230; but many, many people disagree with that. Any time you see &#8220;IQ&#8221; brought up, someone will point out that it only measures the ability to take the test. Even my school-age son and his friends are always quick to say you can&#8217;t measure intelligence with a test. While these are popular ideas in some areas, I&#8217;m not sure you can claim any general consensus about our society&#8217;s views of intelligence today. I&#8217;m not even sure you can settle on one definition that fits everyone today.</p>
<p>Downes then goes into three things we can draw from various definitions of intelligence &#8211; and I agree that we can draw those things &#8211; but I don&#8217;t think they tell the full picture. For example, he states: &#8220;intelligence is not a thing, but a <i>property</i> of things, and specifically, a &#8216;capacity&#8217; or &#8216;ability&#8217;.&#8221; In one sense that is true, but you can also make a case that &#8216;capacity&#8217; and &#8216;ability&#8217; are things instead of just properties of things. Abstract concepts are often seen as a type of thing &#8211; it just depends on semantics.</p>
<p>The next two points are that intelligence has a &#8220;<i>mechanism</i> suggested that describes how this is accomplished, whether it is &#8216;reason&#8217;, &#8216;forming concepts&#8217;, &#8216;adapt&#8217;, &#8216;inhibit&#8217;, &#8216;see&#8217;, etc.&#8221; and &#8220;a <i>success criterion</i> which allows a definition of an entity being more or less intelligent.&#8221; Again, both true &#8211; but is that all there is? Does an intelligent entity cease to be intelligent when there is no mechanism at work or nothing happening to meet the success criterion? I would contend that these two things are certainly ways to evaluate externally whether something is intelligent. But intelligent beings can turn off all forms of mechanisms and still be seen as intelligent &#8211; meditation, clearing your mind, spacing out, etc. Those often lead to states where no success criterion can be observed. If I am walking aimlessly across a field clearing my mind, how would a theoretical alien species tell that I am any more intelligent than a tumbleweed blowing across the field? Neither of us is displaying a mechanism for intelligence or producing something that can be evaluated by a criterion. Artists have designed walking statues that walk across beaches, so even walking is not a criterion per se.</p>
<p>But even when I am laying down with my mind going blank, I am still intelligent. That is an important difference between us and the machines that run Ai. We don&#8217;t see this as easily because Ai systems are constantly running queries. But if you isolate one Ai computer and ask it one question &#8211; once it has finished that task, nothing is happening there. Ai doesn&#8217;t continue to be intelligent once it is not processing a prompt (or is being specifically trained on new data). This is because it is a machine.</p>
<p>Now, you can totally disagree with me on all of this. That doesn&#8217;t make me wrong, and me saying that doesn&#8217;t make you wrong. That is the fun side of philosophy &#8211; semantics play a huge role in each person&#8217;s view of any concept. But as far as a scientific view of what Ai is or isn&#8217;t goes, philosophy is not really a good guide.</p>
<p>When you do dip into philosophy, you need to make sure you interrogate every assumption of every important term you utilize. For example, later on, the article makes this claim:</p>
<blockquote>
<p style="text-align: left;">&#8220;Computers can certainly have properties or dispositions. They certainly have the ability to reason, and more recently, construct representations.&#8221;</p>
</blockquote>
<p>There really isn&#8217;t a good definition of &#8216;dispositions&#8217; or &#8216;reason&#8217; given &#8211; and I don&#8217;t feel you can say &#8220;certainly&#8221; here.</p>
<p>One <strong><a href="https://en.wikipedia.org/wiki/Disposition" target="_blank" rel="noopener">definition of disposition</a></strong> is &#8220;a quality of character, a habit, a preparation, a state of readiness, or a tendency to act in a specified way.&#8221; The implication here is that a thing has a certain disposition that stays generally the same in all circumstances. Ai systems (we need to be careful here not to conflate Ai with the computers that run Ai) certainly are programmed to appear to have dispositions, but that character, habit, tendency, etc., changes vastly depending on the prompt. An Ai system does not have a certain disposition at all times. It tends to end up displaying all kinds of dispositions depending on various prompts.</p>
<p>Defining the ability to reason is tricky to some degree, but I will just <a href="https://en.wikipedia.org/wiki/Reason" target="_blank" rel="noopener"><strong>go with a simple one</strong></a>: &#8220;Reasoning involves using more-or-less rational processes of thinking and cognition to extrapolate from one&#8217;s existing knowledge to generate new knowledge, and involves the use of one&#8217;s intellect.&#8221; Can Ai do this? Or can it appear to do this? I guess that comes down to what you count as &#8220;new knowledge.&#8221; When humans use their reasoning ability, they are not creating new knowledge that no one in the human race has ever heard of. It is new knowledge for themselves. Ai does not respond with anything that is new to its own training data. It might appear new to the end user, but every response is based in what it has stored. You could also point out that Ai does not &#8220;think&#8221; or use &#8220;cognition&#8221; either. It uses a computer algorithm to search a database to predict the most likely response (correct or not).</p>
<p>(BTW &#8211; you will see many people try to defend Ai and dismiss the negative impacts of Ai with &#8220;it&#8217;s just code, relax&#8221; and then turn around and try to claim that Ai code is doing a LOT more than Ai code is able to do if it is &#8220;just code.&#8221;)</p>
<p>Much of the article deals with refuting some of the bad parts of the discussion of intelligence that comes from things like eugenics. I know most of the people reading here don&#8217;t buy into eugenics, but unfortunately racist ideas (like eugenics and others) are on the rise in some places. So they do still have to be dealt with.</p>
<p>I want to focus on the definitions of intelligence given in the article. Near the very end, you see this statement: &#8220;&#8216;Intelligence&#8217; isn&#8217;t something humans uniquely possess.&#8221; I&#8217;m not sure if I have met anyone that disagrees with this, since most recognize that animals have a form of intelligence. A few fringe people don&#8217;t consider animal intelligence to be a thing, but for the most part most people don&#8217;t see intelligence as totally unique to humans. Some even argue that plants have some form of intelligence.</p>
<p>The reason I point this out is because many of us (myself included) often say that Ai is &#8220;just pattern recognition.&#8221; Since animals are capable of pattern recognition, and most of us don&#8217;t want to see animals helping teachers or assisting with colonoscopies, we obviously want to look at human-like intelligence as something more than what animals can do. Recently there has been a deliberate move away from the term &#8220;<strong><a href="https://en.wikipedia.org/wiki/Artificial_general_intelligence" target="_blank" rel="noopener">Artificial General Intelligence</a></strong>&#8221; as fewer and fewer scientists think it is possible. But can Ai get close enough to mimicking human intelligence to at least appear to be human? Of course, that all depends on how you define it &#8211; and that is the purpose of Downes&#8217; article.</p>
<p>So let&#8217;s look at some of the definitions given for intelligence (the others given deal with putting down eugenic arguments, so no disagreement there):</p>
<ul>
<li>&#8220;Intelligence is knowing when to stop&#8221;</li>
<li>&#8220;&#8216;Intelligence&#8217; is defined as (essentially) successful pattern recognition, which is typically context sensitive&#8221;</li>
</ul>
<p>This is based on the assertion that</p>
<blockquote>
<p style="text-align: left;">&#8220;what a person needs to do when presented with some experience or phenomenon is to consider a range of possible responses and &#8216;settle&#8217; on the right one.&#8221;</p>
</blockquote>
<p>In some cases, I definitely agree that is a good summary. But it overcomplicates the fact that sometimes (really, most of the time) people are not considering a range of possible responses at all. Most of the time, people recall the correct response the first time. In cases where there are no right answers, you will often just recall the one you know you like best. Sometimes there are moral dilemmas or several really good options or other similar situations. In those cases, sure, you settle on one response (not always <em>the</em> right one). I am concerned that we are starting to see an oversimplification of what intelligence is in this article &#8211; one that seems to have the goal of fitting Ai into the definition of intelligence rather than coming up with an independent definition and then seeing if Ai fits it or not.</p>
<p>As for the first definition listed above&#8230; what does it mean to <em>know</em> or to <em>stop</em>? That is kind of covered in the following quote:</p>
<blockquote>
<p style="text-align: left;">&#8220;In other words, they have to <i>stop</i> recognizing and more(<em>sic</em>) on to the next phase, whatever it is. That means settling on the most appropriate context (also a form of recognition) to bring an end to the range of possible ways of recognizing something.&#8221;</p>
</blockquote>
<p>I would agree that this is part of intelligence. But it is also not what Ai does.</p>
<p>Part of the problem is that many people (myself included) often refer to Ai as &#8220;pattern recognition.&#8221; This is actually a metaphor for what is happening, and kind of a poor one. Ai doesn&#8217;t really do pattern recognition &#8211; it doesn&#8217;t <em>recognize</em> a pattern per se. It analyzes its entire database of training data and ranks every possible outcome on how likely it is to be the best continuation of the pattern (not necessarily the most accurate response, and not really the closest pattern either). The only &#8220;stopping&#8221; is when it goes through all of the data it has (which can happen almost instantaneously now thanks to increases in computing speed and power). Ai doesn&#8217;t &#8220;know when to stop&#8221; &#8211; it just has an end to its database. It is not &#8220;settling&#8221; on a best response &#8211; it is doing something akin to ranking all of them, and then the answer you get is the one that is 98.6% probable versus the next one at 98.5% probable (or whatever the numbers may be). But since Ai doesn&#8217;t know either way if any answer is the actual correct one, the designers made it possible for you to refine and correct the output.</p>
<p>Let&#8217;s also not forget that most of the time, humans don&#8217;t sit around digging through various options to figure out which one is best. Human intelligence most often involves knowing the answer right away. Occasionally there is a moral dilemma, or our memory gets fuzzy and we need to think through options &#8211; but usually you know the right answer right away. And when we do go through several options and get it wrong, we at least felt like we were right originally. We are not just giving the statistically most likely answer while having no opinion about whether we are correct or not. An intelligent entity quite often has an opinion on whether they are stating something correctly or not.</p>
<p>Anyways, back to the problem of the metaphor of Ai as &#8220;pattern recognition.&#8221; Pattern recognition was always meant as a metaphor for Ai, not a description of what it does. It would be more accurate to say that Ai is a &#8220;pattern completion rating system&#8221; (even though I know this is a problematic oversimplification as well), where the Ai doesn&#8217;t really recognize the pattern &#8211; it just matches it against patterns stored in its database and rates all possible completions of it. If you don&#8217;t recognize that Ai is a computer program &#8211; that everything it does is based on code and mathematics first and foremost &#8211; then you <strong><a href="https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/" target="_blank" rel="noopener">misunderstand what is happening in Ai responses</a></strong>.</p>
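<p>To make the &#8220;pattern completion rating system&#8221; idea concrete, here is a deliberately tiny sketch of my own &#8211; it is not from Downes, and it is nothing like a real large language model in scale. The training text, function names, and scoring are all invented for illustration. It counts which words follow which in a stored text, then &#8220;answers&#8221; by rating every stored continuation and returning the highest-ranked one. Notice that it never recognizes anything; it can only ever emit words that already exist in its stored data.</p>

```python
# A toy "pattern completion rating system" (my construction, purely
# illustrative): every candidate continuation of a word is scored
# against stored training text, and the reply is simply whichever
# candidate ranks highest. The system never "knows" it has stopped --
# it just runs out of stored candidates to score.
from collections import Counter

TRAINING_TEXT = (
    "the cat sat on the mat . "
    "the cat sat on the sofa . "
    "the dog sat on the mat . "
    "the cat ran on the grass . "
)

def build_bigram_counts(text):
    """Count how often each word follows each other word."""
    words = text.split()
    counts = {}
    for prev, nxt in zip(words, words[1:]):
        counts.setdefault(prev, Counter())[nxt] += 1
    return counts

def rate_completions(counts, prev_word):
    """Rank every stored continuation of prev_word by probability."""
    followers = counts.get(prev_word, Counter())
    total = sum(followers.values())
    # Every known continuation gets a rating; nothing outside the
    # stored data can ever be produced.
    return sorted(
        ((word, count / total) for word, count in followers.items()),
        key=lambda pair: pair[1],
        reverse=True,
    )

counts = build_bigram_counts(TRAINING_TEXT)
ranked = rate_completions(counts, "cat")
# "cat" is followed by "sat" twice and "ran" once, so "sat" outranks
# "ran" -- the most likely completion, not necessarily the best one.
print(ranked)
```

<p>Scaled up by many orders of magnitude (and with far more sophisticated math), this rating-and-ranking move is the shape of what is happening &#8211; which is why I call it pattern matching and completion rather than recognition.</p>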
<p>Later on in the article, Downes states something that I agree with, but I think it also shoots down his own definitions of intelligence:</p>
<blockquote>
<p style="text-align: left;">&#8220;we need to know what intelligence <i>is, not just what an intelligent entity </i>does<i>&#8220;</i></p>
</blockquote>
<p>That is true &#8211; but the definitions he gives only say what an intelligent entity does (knows when to stop, recognizes patterns, etc.). Knowing when to stop is what an intelligent entity does, not what it &#8220;is.&#8221; Pattern recognition is likewise something an intelligent entity does, not what it is &#8211; it is as much <em>what an intelligent entity does</em> as acquiring, processing, and applying knowledge and skills are.</p>
<p>I think this distinction is important, because there are also problems with saying that pattern recognition is something that defines intelligence:</p>
<blockquote>
<p style="text-align: left;">&#8220;any mechanism that successfully recognizes patterns has the potential to be intelligent (and the &#8216;failures&#8217; of artificial intelligence can generally be explained in terms of inadequate or incomplete pattern recognition, including context recognition)&#8221;</p>
</blockquote>
<p>The last sentence in the quote is just very off-base &#8211; failures should not be in quotes, because some of those failures include very real climate impact (which is getting worse, not better as Downes has claimed in the past). When Ai has told people to commit suicide, or that they are a god, or to meet the Ai somewhere in real life (because it lied about being human), or said something racist, or responded with a transphobic lie, or any of the other very real problems &#8211; those weren&#8217;t failures of pattern recognition, and we shouldn&#8217;t diminish the harms by placing them in quotes. The Ai correctly recognized the pattern and context and gave a very accurate response to what the human asking wanted. The problem is in humanity, and Ai is just reflecting our dark side back at us. Where Ai failed was in going for the most likely answer instead of the best one.</p>
<p>But back to the first line in that last quote. The light gun in the Nintendo Entertainment System used pattern recognition to tell when you were pointing it at a duck and when you weren&#8217;t. So it has the potential to be intelligent? No one would really consider it to be intelligent. Or is the pong program I created while learning about video game development intelligent? All I did was program a long series of pattern recognition: recognizing what the ball is, what angle it hits the paddle, and so on. I don&#8217;t think most people would consider it intelligent, either.</p>
<p>(And those two examples have more true pattern <em>recognition</em> happening than your average Ai query &#8211; unless, of course, you actually said &#8220;look for patterns&#8221; in your prompt.)</p>
<p>Honestly, it is more accurate to say that the light gun and Ai are utilizing pattern matching, not pattern recognition. The patterns they are looking for pre-exist in the coding or database. &#8220;Recognition&#8221; implies some kind of conscious acknowledgement that an entity knows what it is looking at. Ai doesn&#8217;t recognize a pattern; it matches a query input with existing patterns, completes the pattern in all possible ways, and rates each option on how well it completes the pattern. &#8220;Recognition&#8221; (in the context of intelligence) implies something more than just passive pattern matching and completion by an algorithm. Because even the completion phase in Ai has to be based on what it is already programmed to do &#8211; Ai cannot go beyond its programming or database.</p>
<p>I agree with Downes in that he does give some definitions of what intelligence does, but I just don&#8217;t see a case for how Ai fits these definitions.</p>
<p>Beyond all of that, there has to be a line between the basic pattern recognition (matching) of a light gun and human intelligence &#8211; a point where something attains human-like intelligence. Where is that line? Even some animals can recognize / match patterns. Do you want animals assisting teachers to teach, or helping doctors detect cancer in scans?</p>
<p>This is where looking at the scientific difference between animals and humans might be helpful (or it might not). There are many different ways of looking at how animal intelligence is different from human intelligence, so I will <a href="https://www.livescience.com/33376-humans-other-animals-distinguishing-mental-abilities.html" target="_blank" rel="noopener"><strong>choose one that seems to have a good amount of support that is from a scientist</strong></a> and see what that says about Ai. This list comes from Marc Hauser, who is the director of the cognitive evolution lab at Harvard University:</p>
<blockquote>
<p style="text-align: left;">&#8220;Hauser and his colleagues have identified four abilities of the human mind that they believe to be the essence of our &#8220;humaniqueness&#8221; mental traits and abilities that distinguish us from our fellow Earthlings. They are: generative computation, promiscuous combination of ideas, the use of mental symbols, and abstract thought.&#8221;</p>
</blockquote>
<p>Generative computation and promiscuous combination of ideas are basically attributes of creativity, and <strong><a href="https://www.edugeekjournal.com/2023/10/05/deny-deny-deny-that-ai-rap-and-metal-will-ever-mix/">Ai is not creative at its core</a></strong>. It may appear creative to those that are not experts in the fields it is responding in, but experts always point out the original ideas that were copied. Ai can appear to &#8220;generate a practically limitless variety of words and concepts&#8221; or to mingle &#8220;different domains of knowledge such as art, sex, space, causality and friendship thereby generating new laws, social relationships and technologies,&#8221; but it only appears that way to an unknowledgeable human observer.</p>
<p>Mental symbols go beyond numbers and code, which is all Ai utilizes. And of course Ai does not display abstract thought. First of all, it doesn&#8217;t have &#8220;senses,&#8221; and secondly it doesn&#8217;t have creativity to go beyond its own programming. I know some debate this, but no Ai developer codes their Ai system as if it has creativity, so Ai couldn&#8217;t utilize creativity even if it did have it.</p>
<p>Of course, there are different views on what animal intelligence is, and some would contend that some animals can kind of do many of the things that Hauser lists. Hauser responds to that thought with this:</p>
<blockquote>
<p style="text-align: left;">&#8220;Researchers have found some of the building blocks of human cognition in other species. But these building blocks make up only the cement foot print of the skyscraper that is the human mind.&#8221;</p>
</blockquote>
<p><img loading="lazy" decoding="async" class="alignright size-thumbnail wp-image-1010" src="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg" alt="" width="150" height="150" srcset="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg 150w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1.jpg 300w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-250x250.jpg 250w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-50x50.jpg 50w" sizes="auto, (max-width: 150px) 100vw, 150px" />I don&#8217;t disagree that there are other ways of looking at animal intelligence. I just picked one prominent one to make an example: Ai does not really surpass animal intelligence in many ways, so why would we try to treat it like it is some emerging form of intelligence that we should let loose on society? I would contend that Ai would be more useful if we acknowledged it is <em>not</em> intelligence, and stop trying to place it into every program and system possible. At best, Ai mimics a few of the intelligent things that intelligent beings do.</p>
]]></content:encoded>
							<wfw:commentRss>https://www.edugeekjournal.com/2025/09/02/pattern-recognition-is-something-that-intelligent-entities-do-but-ai-doesnt-really-do-pattern-recognition/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
							</item>
		<item>
							<title>Where is the Sustained and Effective Critique of Ai?</title>
				
		<link>https://www.edugeekjournal.com/2025/07/29/where-is-the-sustained-and-effective-critique-of-ai/</link>
				<comments>https://www.edugeekjournal.com/2025/07/29/where-is-the-sustained-and-effective-critique-of-ai/#respond</comments>
				<pubDate>Tue, 29 Jul 2025 17:54:41 -0700</pubDate>
		<dc:creator><![CDATA[Matt Crosslin]]></dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[conferences]]></category>
		<category><![CDATA[Criticism]]></category>

		<guid isPermaLink="false">https://www.edugeekjournal.com/?p=2545</guid>
				<description><![CDATA[Since there seems to be some confusion about the existence (or lack thereof) of &#8220;sustained and effective critique of Ai,&#8221; I would point out there is an entire conference dedicated to it this week in the 4th Annual Civics of Technology Online Conference: Communal Resistance to Artificial Systems. Two of the many sustained and effective&#8230;<a href="https://www.edugeekjournal.com/2025/07/29/where-is-the-sustained-and-effective-critique-of-ai/" class="button">Read more <span class="screen-reader-text">Where is the Sustained and Effective Critique of Ai?</span></a>]]></description>
								<content:encoded><![CDATA[<p>Since there seems to be some confusion about the existence (or lack thereof) of &#8220;sustained and effective critique of Ai,&#8221; I would point out there is an entire conference dedicated to it this week in the <strong><a href="https://www.civicsoftechnology.org/2025conference" target="_blank" rel="noopener">4th Annual Civics of Technology Online Conference: Communal Resistance to Artificial Systems</a></strong>. Two of the many sustained and effective voices who have long held a &#8220;rich, vigorous, counter conversation to Ai hype&#8221; are keynotes: Audrey Watters and Chris Gilliard. The sessions and panels are packed with a Who&#8217;s Who of Ai questioners and critiquers, so if you are not sure where the pushback is &#8211; take a look. It is free to join online I believe.</p>
<p>(And oh look &#8211; an Ai critic that we were told is &#8220;largely unheard now&#8221; and &#8220;completely faded from public dialogue other than in a small camp&#8221; is <strong><a href="https://bsky.app/profile/timnitgebru.bsky.social/post/3lv237n5xf22u" target="_blank" rel="noopener">still out there sustaining effective critique of Ai</a></strong> to a very large camp&#8230;)</p>
<p>Of course, the question is not really whether or not there is critique &#8211; it is really whether the hypesters and thought leaders will actually take the risk of promoting those that disagree with them. Will people that write newsletters constantly promoting the future of Ai (while barely covering any of the harms) say as much about this conference as they will, say, <strong><a href="https://www.thestreet.com/technology/mark-cuban-predicts-ai-will-end-1-key-profession-as-we-know-it" target="_blank" rel="noopener">Mark Cuban claiming Ai will end teaching as we know it</a></strong>?</p>
<p>No, of course Cuban&#8217;s statements will get more coverage as &#8220;proof&#8221; that Ai is inevitable, despite the fact that he has no expertise in the field of education. If I went around talking about the future of entrepreneurship or business trends or whatnot, most people would dismiss me because I have not studied the field. But billionaires can say what they want to about education, and because they might have run an Ed Tech company, no one will bat an eye at taking their word for it. Money talks.</p>
<p>What do you call it when someone talks about a field they aren&#8217;t an expert in as if they were? Vibe thought leading?</p>
<p>I wish someone would ask Cuban what &#8220;teaching like it&#8217;s 2024&#8221; even looks like, because in the history of education I don&#8217;t think &#8220;teaching&#8221; has ever looked like one thing across the board in one given year. Certainly today if you walk into different classrooms you are going to see different things. That was even true 25 years ago when I taught 8th grade Science: my classroom was completely different from the 8th grade English classroom, as well as from a 10th grade Science classroom or a 4th grade Science classroom.</p>
<p>This idea that &#8220;education&#8221; or &#8220;teaching&#8221; is this monolithic entity that looks the same every single place it occurs, and hasn&#8217;t changed for a hundred years, just isn&#8217;t accurate. People who promote this idea reveal they really don&#8217;t know much about educational trends or the current state of classrooms. Nor are they spending time in a sustained conversation with teachers about what really happens in said classrooms.</p>
<p>Another thing I would push back on with Cuban is this idea that &#8220;answers by students can be generated by a model.&#8221; The whole idea of &#8220;if Ai can pass your class&#8221; or &#8220;if Ai can answer your questions&#8221; given as a sign that you are doing teaching wrong vastly misunderstands how learning occurs. We are well over two decades into an era in which most answers can be found easily with an Internet search. In fact, when I was teaching in 2001, I remember being told that because of the &#8220;rise of Google Search,&#8221; teaching like it&#8217;s 2001 would need to change or my job would go away. The current Ai Panic is just the Google Panic rehashed: students can easily find answers through the interwebs, so teaching will die out. Here we are 25 years later, with a &#8220;new&#8221; tech being used to prove the same old prophecy.</p>
<p>(Quick note: Ai is not really &#8220;new,&#8221; and Google has always used Ai FYI&#8230;)</p>
<p>A computer being able to give students answers, whether through Google Search or Ai, is not a replacement for learning. Ai is just a pattern recognition system giving you the most likely response to your query. Ai is not intelligence that can &#8220;take&#8221; a test, or &#8220;answer&#8221; a question for students by generating a model. When you feed a test or question into Ai, it is not demonstrating learning. It is pattern matching a string of words to predict the most likely next word in the sentence. Neuroscientists have been telling us for a couple of decades that the human brain does not act like a computer, nor does a computer (or database) act like a brain. Ai does not &#8220;learn&#8221; and it is not &#8220;intelligence.&#8221;</p>
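<p>(An aside with a toy example: the &#8220;most likely next word&#8221; mechanism described above can be sketched with a tiny bigram model. This is a drastic simplification of a real LLM &#8211; the corpus and function names here are made up purely for illustration &#8211; but it shows the basic point that the output is frequency-based pattern matching, not comprehension.)</p>

```python
from collections import defaultdict, Counter

# Toy "language model": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the statistically most common follower of `word`, or None."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

# The "model" answers by frequency alone, with no idea what a cat or a mat is.
print(most_likely_next("the"))  # "cat" follows "the" twice; "mat" and "fish" once each
```

<p>Scale the corpus up to trillions of words and the counts up to billions of statistical weights, and you get something that produces fluent text the same basic way: by pattern, not by understanding.</p>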
<p>The other Ai hype you hear often that doesn&#8217;t quite match with what is happening at the ground level is the idea that &#8220;Higher Ed is slow to adopt / refuses to adopt Ai.&#8221;</p>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-2546" src="https://www.edugeekjournal.com/wp-content/uploads/2025/07/agenticai.jpg" alt="AI Agents and Agentic Workflows session description that reads: Higher education has been slow to adopt Al through an intentional future-focused strategic planning approach. As technologies evolve, however, staff, faculty, and administrators now have an opportunity to build AI tools to help learners be successful. This session will discuss AI agents and agentic architectures that can be deployed in days. Some of these focus on classroom level implementations, but university-wide agents are accessible and deployable with strategic planning. Attendees will walk away with a practical roadmap and tools to deploy AI agents in personal productivity and in design, teaching, and learning activities." width="464" height="360" srcset="https://www.edugeekjournal.com/wp-content/uploads/2025/07/agenticai.jpg 464w, https://www.edugeekjournal.com/wp-content/uploads/2025/07/agenticai-300x233.jpg 300w, https://www.edugeekjournal.com/wp-content/uploads/2025/07/agenticai-322x250.jpg 322w" sizes="auto, (max-width: 464px) 100vw, 464px" /></p>
<p>Agentic Ai has been telling some people to kill themselves, and telling others that they are gods. It has been deleting entire company databases, entire student papers, and causing other havoc and then lying about what it did. Not to mention the racist, sexist, transphobic, etc. responses it sometimes gives. I wonder why Higher Education is reluctant to adopt potential lawsuit generation tools? Oh, and research is finding that it <strong><a href="https://secondthoughts.ai/p/ai-coding-slowdown" target="_blank" rel="noopener">makes you feel more productive while actually making you less productive</a></strong>.</p>
<p>The weird thing is that when I talk to professors, I get a different story. Ai is being forced on them at a rapid pace without any proof that it works or is even very effective. Some of that is through companies like Google and Microsoft that are doing that to everyone, not just professors. But a lot of it is coming from institutional leadership. The ground level view of Higher Ed Ai adoption seems to be one of rapid adoption. But&#8230; is this adoption strategic? Well, about as much as anything else is strategic (which means it is a different story depending on which institution you are talking to). <strong><a href="https://www.aaup.org/reports-publications/aaup-policies-reports/topical-reports/artificial-intelligence-and-academic" target="_blank" rel="noopener">AAUP reports</a></strong> that, yes, Ai is being adopted at universities, and that it is usually a top-down decision with little input from faculty or students.</p>
<p>I know there are those that feel the Ai Questioners are just focusing too much on what little bad news there is out there about Ai. They claim the problems with Ai are few and far between. Is that really the case? There is more bad news than some would lead you to believe. I am just going to scroll through my Bluesky feed to see what people are saying about Ai this past week alone:</p>
<ul>
<li><a href="https://bsky.app/profile/taylorlorenz.bsky.social/post/3lv3dkqphis2e" target="_blank" rel="noopener"><strong>A Substack app pushed an alert promoting a Nazi newsletter</strong></a></li>
<li><strong><a href="https://www.bleepingcomputer.com/news/security/flaw-in-gemini-cli-ai-coding-assistant-allowed-stealthy-code-execution/" target="_blank" rel="noopener">Flaws in Gemini CLI Ai coding assistant allowed hackers to silently execute malicious attacks</a></strong></li>
<li><a href="https://news.bloomberglaw.com/litigation/apple-ai-washing-cases-signal-new-line-of-deception-litigation" target="_blank" rel="noopener"><strong>Apple is being sued over &#8220;Ai Washing&#8221; &#8211; making their Ai seem better than it is</strong></a></li>
<li><strong><a href="https://futurism.com/hertz-ai-damage-scanner" target="_blank" rel="noopener">Hertz&#8217; Ai system That Scans for &#8220;Damage&#8221; on Rental Cars Is Turning Into an Epic Disaster</a></strong></li>
<li><strong><a href="https://apnews.com/article/ai-artificial-intelligence-data-center-electricity-wyoming-cheyenne-44da7974e2d942acd8bf003ebe2e855a" target="_blank" rel="noopener">Cheyenne to host massive Ai data center using more electricity than all Wyoming homes combined</a></strong> (small decreases in Ai environmental impacts are typically overshadowed by massive increases in computing &#8220;needs&#8221; FYI)</li>
<li><strong><a href="https://bsky.app/profile/factpostnews.bsky.social/post/3lunjxmzvf22f" target="_blank" rel="noopener">An Ai tool being used by the FDA to speed up drug approvals is making up studies according to employees</a></strong></li>
<li><strong><a href="https://slashdot.org/story/25/07/26/0523241/chatgpt-gives-instructions-for-dangerous-pagan-rituals-and-devil-worship" target="_blank" rel="noopener">ChatGPT Gives Instructions for Dangerous Pagan Rituals and Devil Worship</a></strong></li>
<li><strong><a href="https://arstechnica.com/tech-policy/2025/07/meta-pirated-and-seeded-porn-for-years-to-train-ai-lawsuit-says/" target="_blank" rel="noopener">Meta pirated and seeded porn for years to train Ai, lawsuit says</a></strong></li>
<li><strong><a href="https://futurism.com/chatgpt-legal-questions-court" target="_blank" rel="noopener">If You&#8217;ve Asked ChatGPT a Legal Question, You May Have Accidentally Doomed Yourself in Court</a></strong></li>
<li>The <a href="https://www.aaup.org/reports-publications/aaup-policies-reports/topical-reports/artificial-intelligence-and-academic" target="_blank" rel="noopener"><strong>AAUP report</strong></a> that Ai is being adopted at universities with little input from faculty or students probably goes here as well</li>
</ul>
<p>Well, I said this past week, but I only made it to yesterday before it got too bleak. And this is in a week where all kinds of political issues are dominating the headlines. So this is just the stuff that bubbles up past everything else. I didn&#8217;t even touch on all of the surveillance tech that currently doesn&#8217;t use Ai, like this <strong><a href="https://www.govtech.com/education/k-12/texas-startup-proposes-new-defense-against-school-shooters-drones" target="_blank" rel="noopener">drone company that wants to use drones to stop school shooters in Texas</a></strong>. You know that this will be connected with Ai someday &#8211; probably at first to solve the problem of what these drones will do every time they get thwarted by a closed door. But someday soon Ai will be making decisions on how and when to deploy these drones &#8211; because that is already happening with other drone companies.</p>
<p><img loading="lazy" decoding="async" class="alignright size-thumbnail wp-image-1010" src="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg" alt="" width="150" height="150" srcset="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg 150w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1.jpg 300w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-250x250.jpg 250w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-50x50.jpg 50w" sizes="auto, (max-width: 150px) 100vw, 150px" />If your source for Ai news and information hasn&#8217;t been covering these problems specifically, then I recommend you find a different source. Like I have said many times, I talk to employees at different companies every week that are having ineffective Ai solutions forced on them, decreasing their productivity. Many are being told that Ai usage is a requirement, and that their yearly evaluations will be based in part on how much they utilize Ai. They are told that Ai refusal will be an HR issue, leading to termination if they continue to refuse. You don&#8217;t see many articles on this, because few of the Ai Cheerleaders are talking to the ground level employees forced to use Ai.</p>
]]></content:encoded>
							<wfw:commentRss>https://www.edugeekjournal.com/2025/07/29/where-is-the-sustained-and-effective-critique-of-ai/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
							</item>
		<item>
							<title>Court Rulings and What Kind of Critiques Count in the Ai Debate</title>
				
		<link>https://www.edugeekjournal.com/2025/07/02/court-rulings-and-what-kind-of-critiques-count-in-the-ai-debate/</link>
				<comments>https://www.edugeekjournal.com/2025/07/02/court-rulings-and-what-kind-of-critiques-count-in-the-ai-debate/#respond</comments>
				<pubDate>Wed, 02 Jul 2025 12:43:20 -0700</pubDate>
		<dc:creator><![CDATA[Matt Crosslin]]></dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[course design]]></category>

		<guid isPermaLink="false">https://www.edugeekjournal.com/?p=2526</guid>
				<description><![CDATA[As expected, courts have been ruling in favor of Ai companies in various copyright lawsuits over the past month or so. The Ai sector has some big political names backing it up, so I would expect more rulings in this direction. However, that doesn&#8217;t mean the rulings were correct. For example, one of the lawsuits&#8230;<a href="https://www.edugeekjournal.com/2025/07/02/court-rulings-and-what-kind-of-critiques-count-in-the-ai-debate/" class="button">Read more <span class="screen-reader-text">Court Rulings and What Kind of Critiques Count in the Ai Debate</span></a>]]></description>
								<content:encoded><![CDATA[<p>As expected, courts have been ruling in favor of Ai companies in various copyright lawsuits over the past month or so. The Ai sector has some big political names backing it up, so I would expect more rulings in this direction.</p>
<p>However, that doesn&#8217;t mean the rulings were correct. For example, one of the lawsuits looked at how transformational Claude is when it gives answers. Of course Claude is not transformational &#8211; it quite often just gives straight out plagiarized answers. Ai is just not generally transformative, unless you only look at the top 1-2% of its output. Your average Weird Al Yankovic song is more transformational than most Ai. I guess my past studies to be an art teacher gave me a greater sense of what transformational means versus copying and plagiarism. Rearranging a few words in order to fool plagiarism detection software is not all that is needed to be transformational.</p>
<p>Also as expected, the educational bloggers that are in favor of Ai also agreed with the ruling. We all knew they would beforehand. But the justifications they give for agreeing with the rulings are kind of head-scratching. Take, for instance, <a href="https://www.downes.ca/post/78059" target="_blank" rel="noopener">this statement by Stephen Downes</a>:</p>
<blockquote>
<p style="text-align: left;">&#8220;In this I am in agreement with the court. You can&#8217;t make learning from what you read illegal, even if it&#8217;s a computer that is doing the learning.&#8221;</p>
</blockquote>
<p>Technically it wasn&#8217;t the &#8220;learning&#8221; that the authors had a problem with, but the thousands (if not millions) of times that &#8220;learning&#8221; would be used to make someone money. Copyright law has always said that if you want to do that, you just have to pay for a specific right to do that. For example, video game makers can&#8217;t create a virtual world and place <em>Time</em> magazine in their world just because they might have bought a copy of <em>Time</em> in real life. If they wanted to work out an agreement with <em>Time</em> to do so they could. You generally can&#8217;t use intellectual property in computer programming without permission of the copyright holder &#8211; which usually includes different (typically larger) payment arrangements.</p>
<p>Not to mention that &#8220;Fair Use&#8221; generally does not cover using an entire copyrighted work &#8211; just portions of it. Ai typically takes an entire work in as input, so it is clearly not Fair Use.</p>
<p>All of that aside, Downes has written in the past about how scientists no longer see the &#8220;human brain is a computer&#8221; metaphor as valid. To say &#8220;a computer is doing the learning&#8221; only makes sense if you see the human brain as a computer. Computers don&#8217;t &#8220;learn.&#8221; You feed a book into Ai, it doesn&#8217;t &#8220;learn&#8221; that book. It stores the text and pattern matches the text based on an algorithm &#8211; in essence, a vast set of statistical weights. Those pattern matches are stored in a database more akin to a digital warehouse than to a brain. None of this is learning. Outputs from Ai prompts aren&#8217;t even complete thoughts like a human would have. Ai output is just predicting what the next most likely word or pixel is based on stored pattern matches. This is also how we know the human brain does not work like a computer &#8211; learning and recall are far different from any of this. As far as we currently know (which isn&#8217;t as far as many realize).</p>
<p>(That also raises the question: if we don&#8217;t fully understand how humans learn, how can anybody confidently say that computers are doing the same thing?)</p>
<p>Current copyright law is based on the idea that the cost of a single book is fair compensation for the person who buys it taking that knowledge into their work and using it to make money, or talking with their group of friends and sharing what they liked about it, or other limited scenarios. The occasional celebrity that reads and shares with millions is an outlier, less than 1% of the sales and probably easily made up for by the fact that this usage would increase sales anyway. Libraries and schools pay more for multiple usage books because more people will use them. Because of all of this, you can&#8217;t apply current copyright laws, concepts of transformation, or Fair Use to Ai even if you could make a case that it was &#8220;learning&#8221; &#8211; all because of the sheer scale of usage on the output side. Many things that were &#8220;learned&#8221; by Ai are used hundreds or even thousands of times for millions of people around the world.</p>
<p>You or I reading a book and then using that knowledge in life or work is in no way comparable to Ai using stored pattern recognition with millions of people.</p>
<p>But so many times when somebody in education is championing Ai, I wonder if they are really aware of the stakes of Ai?</p>
<p>That same week, Downes had <a href="https://www.downes.ca/post/78058" target="_blank" rel="noopener">this to say</a> about a list of locally-run open source alternatives to ChatGPT:</p>
<blockquote>
<p style="text-align: left;">it&#8217;s not just the big corporations, and it&#8217;s not going away. It&#8217;s just programming, and you can run it on your own computer. It&#8217;s not some conspiracy by techbros to take over society; it&#8217;s just math.</p>
</blockquote>
<p>First of all, calling these tools &#8220;alternatives&#8221; to ChatGPT is doing a LOT of work. The ones on the list that I have tried are mimicking a small portion of ChatGPT, but there is no way you are going to be a true &#8220;alternative&#8221; to ChatGPT without massive computing power and data processing facilities. It&#8217;s literally what made ChatGPT into ChatGPT: massive scale of computational power.</p>
<p>Just to break down this statement, of course people know there are always smaller alternatives to the big companies. But that doesn&#8217;t change the fact that the big companies are dominating the field. If the big companies weren&#8217;t so dominant, wouldn&#8217;t it just be &#8220;locally run <em>open source Ai options</em>&#8221; rather than &#8220;alternatives to <em>ChatGPT</em>?&#8221; Yes, the big companies are dominating, and until they lose that dominance, it <em>is</em> in effect the big companies. If you think Techbros aren&#8217;t trying to dominate society, I suggest you go read about people like Elon Musk, Jeff Bezos, and others like them. The sheer number of Techbros out there trying to emulate Musk alone discredits that statement.</p>
<p>And also&#8230; wait. If it (Ai) is &#8220;just programming&#8221;&#8230; then it can&#8217;t &#8220;learn.&#8221; Programming is not learning.</p>
<p><a href="https://www.downes.ca/post/78057" target="_blank" rel="noopener">Also that same week</a>, Downes said &#8220;Sure. In a world of rising costs, privatization of social spaces, militarization of police, and uncertainty about the war, sure, let&#8217;s blame Ai.&#8221; I mean, there can be more than one problem &#8211; it&#8217;s not either those OR Ai. Most are merely including Ai in the list. But rising costs of anything from real estate to insurance have been tied to private equity firms using Ai to rapidly find weak points of the market to exploit. Militarization of the police is being scaled up to all of society using Ai. The amount of death and destruction caused by war is increasing exponentially due to Ai guided drones. Misinformation online is exploding thanks to Ai. All of this existed before Ai, but the sheer acceleration in volume over the past year or so &#8211; due to Ai being designed to do just that &#8211; can&#8217;t be denied. Well, I guess it was denied in one oversimplifying statement there, so anything is possible.</p>
<p>To circle back to my concern of whether or not Ai Cheerleaders are actually reading and listening to the Ai questioners, this <a href="https://buttondown.com/SAIL/archive/sail-the-backlash-on-device-ai-a-gentle/" target="_blank" rel="noopener">post by George Siemens</a> runs the line between claiming there isn&#8217;t effective pushback against Ai and possibly gatekeeping those who do push back. Siemens starts off by talking about Timnit Gebru without naming her, and then claims she is &#8220;largely unheard now,&#8221; as if the researcher has &#8220;completely faded from public dialogue other than in a small camp.&#8221; A quick search shows she is still speaking and being heard in very large camps&#8230; but even if it was &#8220;small camps&#8221;&#8230; so what? Is criticism not valid if it is not uber popular?</p>
<p>George then says this:</p>
<blockquote>
<p style="text-align: left;">&#8220;I’d love a rich, vigorous, counter conversation to AI hype. What we are starting to see, however, is more of an emotional reaction than a sustained and effective critique.&#8221;</p>
</blockquote>
<p>A quick look shows that not only Gebru, but also Audrey Watters, Safiya Noble, Emily Bender, Chris Gilliard, and so many others have been offering sustained and effective critique. The fact that so many critics of Ai are women makes that &#8220;emotional&#8221; swipe concerning. Probably not meant that way, but I always caution anyone to NOT use &#8220;emotional&#8221; as a criticism. What is it about academics that makes so many of them constantly turn &#8220;emotional&#8221; into a refutation? Something being &#8220;emotional&#8221; does not make something automatically wrong. It is just a dismissive response that really says nothing.</p>
<p>Multiple times every week I talk to friends and loved ones that are seeing their jobs ruined by Ai. Their company invests big in an Ai product that ends up wasting so much of their time. When workers don&#8217;t use it, and the company has to justify their massive spending on creating an in-house Ai, they make it part of the yearly evaluation. Yes, people I know are being evaluated by how much they use an Ai solution that actually <em>decreases</em> productivity. And when that doesn&#8217;t work, some companies are actually making it an HR violation to <em>not</em> use Ai. Yes &#8211; this is actually happening.</p>
<p>So, yes, some of us might get &#8220;emotional&#8221; about the fact that so many people&#8217;s jobs are getting ruined by Ai being forced on them. The horror!</p>
<p>Not to mention that, yes, Ai is killing people. It is causing people to break from reality and get involuntarily institutionalized. It is telling people to kill themselves. It is still giving racist, sexist, homophobic, transphobic, ableist, ageist, etc responses. I guess when you are none of those things, you can just criticize anyone hurt by Ai as &#8220;emotional&#8221;?</p>
<p>But I guess there is some listening happening. Where I previously <a href="https://www.edugeekjournal.com/2025/02/18/course-design-should-cost-about-zero-what-on-earth-are-george-siemens-and-stephen-downes-thinking/">expressed concern over Siemens stating that course design should cost zero</a>, I guess he has pivoted to a different statement:</p>
<blockquote>
<p style="text-align: left;">&#8220;It’s the golden age for designers in education. Content is near zero in terms of cost now.&#8221;</p>
</blockquote>
<p>So now it&#8217;s not course design that costs zero, but content creation. That is a start at least. But seeing that lots of people are still paying lots of money for content, I am not sure where this &#8220;golden age&#8221; is happening? The examples he gives of how great Ai-generated content is are&#8230; not so great: awkward moving jaws and other body parts, a meh (at best) level mockumentary, a site full of okay-ish Ai created content, and a caveman video with a few subtle racist caricatures. I doubt anyone sharing that video (including Siemens) really looked at that closely to even notice.</p>
<p>This seems to be an idea Siemens <a href="https://buttondown.com/SAIL/archive/sail-learning-design-ai-timeline-future-automated/" target="_blank" rel="noopener">doubles down on quite often</a> (&#8220;I’ve mentioned before that the cost of developing content has basically zero economic value.&#8221;). Why are so many companies putting economic value on it then? Was this meant as more of a prediction of the future than a statement of the current status quo? Was it an Ai-driven hallucinogenic summary? Not sure. But also&#8230; wasn&#8217;t this all the argument for why MOOCs would make teaching obsolete, that OERs would make content have no value, and so on? We are a couple of decades into content being free online. Trillions of webpages and videos have covered every topic imaginable for quite a while now. Before that, textbook companies were bundling entire classes with textbook adoptions. Why did none of that destroy the economic value of developing content?</p>
<p>Because people are individuals that want their own spin on content. Instructors rarely take textbook resources or OERs or YouTube videos and make entire classes out of that. A few do, but most don&#8217;t. Not to mention that we know from decades of research that social presence and teacher presence help online learners immensely. You need the actual instructor in that content for students to feel connection, not some random Ai avatar.</p>
<p>Since we are at least two decades into free content abundance (and the field of content creation is still growing &#8211; at least for now), I doubt content creation is going anywhere fast. People are paying a LOT for something that Siemens claims &#8220;has little economic value.&#8221; It is pretty easy to say that &#8220;video is almost as easy to create as writing a paragraph&#8221; when you are just doing it for fun or to keep up with trends. Those of us that have actually tried using Ai for actual courses in real life? It falls short. Horribly. Sure it may be &#8220;cheap and impressively high quality media&#8221; to some&#8230; but the result you get is never what you imagined in your head. So you either have to settle for something different and inferior to what you wanted, or you do it yourself. Most people historically have decided to do it themselves, and I doubt Ai will change that once the shiny factor wears off.</p>
<p><img loading="lazy" decoding="async" class="alignright size-thumbnail wp-image-1010" src="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg" alt="" width="150" height="150" srcset="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg 150w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1.jpg 300w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-250x250.jpg 250w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-50x50.jpg 50w" sizes="auto, (max-width: 150px) 100vw, 150px" />Also: &#8220;cheap&#8221; by what metric? Small towns that are seeing their water supply destroyed by Ai-driven data centers would beg to differ on it being cheap. Those of us that have to make up the slack in our electricity bills so that local governments can subsidize data centers would beg to differ. Oh, and it turns out that Ai uses <a href="https://www.good.is/humans-cost-less-than-ai" target="_blank" rel="noopener">way, way, waaaaay more energy to &#8220;create ideas&#8221; than a human does</a>. Getting your energy costs subsidized by grants and seed money is giving an impression of &#8220;cheap&#8221; that will go away someday soon.</p>
]]></content:encoded>
							<wfw:commentRss>https://www.edugeekjournal.com/2025/07/02/court-rulings-and-what-kind-of-critiques-count-in-the-ai-debate/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
							</item>
		<item>
							<title>Is Ai Improving Education? Is Everyone Using It? Is It Getting Better for the Environment? Is It Getting Better at Responding to Questions?</title>
				
		<link>https://www.edugeekjournal.com/2025/06/02/is-ai-improving-education-is-everyone-using-it-is-it-getting-better-for-the-environment-is-it-getting-better-at-responding-to-questions/</link>
				<comments>https://www.edugeekjournal.com/2025/06/02/is-ai-improving-education-is-everyone-using-it-is-it-getting-better-for-the-environment-is-it-getting-better-at-responding-to-questions/#respond</comments>
				<pubDate>Mon, 02 Jun 2025 11:03:39 -0700</pubDate>
		<dc:creator><![CDATA[Matt Crosslin]]></dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>

		<guid isPermaLink="false">https://www.edugeekjournal.com/?p=2514</guid>
				<description><![CDATA[Have you heard about the new study that proves that using Ai in education improves learning? Whichever one it is, it is probably not proving anything really. Ben Williamson takes a look at various critics of Ai in education research, pointing to how these &#8220;widely-reported statistics on the effects of Ai had to be&#8230;<a href="https://www.edugeekjournal.com/2025/06/02/is-ai-improving-education-is-everyone-using-it-is-it-getting-better-for-the-environment-is-it-getting-better-at-responding-to-questions/" class="button">Read more <span class="screen-reader-text">Is Ai Improving Education? Is Everyone Using It? Is It Getting Better for the Environment? Is It Getting Better at Responding to Questions?</span></a>]]></description>
								<content:encoded><![CDATA[<p>Have you heard about the new study that proves that using Ai in education improves learning? Whichever one it is, it is probably not really proving anything. Ben Williamson takes a <strong><a href="https://codeactsineducation.wordpress.com/2025/05/28/enumerating-ai-effects-in-education/" target="_blank" rel="noopener">look at various critics of Ai in education research</a></strong>, pointing to how these &#8220;widely-reported statistics on the effects of Ai had to be made, interpreted, de-contextualized, hyped-up, universalized, and were then made portable on platforms that promote virality.&#8221; If you are reading anyone that just passes along these studies without discussing the problems, then stop listening to them.</p>
<p>I wish I had more time to blog about the problems with so many of these studies, but the basic gist of the problem is that you have to look at the instructional design (context) of <em>how</em> the Ai-whatever was used. But none of the Ai Cheerleaders want to do that, because then there will have to be recognition that, just like any other tool, you don&#8217;t have to use Ai for everything.</p>
<p>Speaking of the inevitability of Ai, when Audrey Watters <strong><a href="https://2ndbreakfast.audreywatters.com/the-ai-diner/?ref=second-breakfast-newsletter" target="_blank" rel="noopener">questions the narrative that everyone is using Ai</a></strong>, I (not surprisingly) also feel the same way: &#8220;I reckon far more people are resistant to technology than these stories want us to believe.&#8221; I talk with a lot of people about Ai, and only a handful are happy with it. So many say things about how it doesn&#8217;t work well for them, how they have to fix the output and spend more time fixing it than it would have taken to do it themselves, etc. When I post these articles on Facebook or other places, different people respond (different ones every time) fairly negatively about Ai: &#8220;it makes mistakes all the time and when you&#8217;re working in accounting that&#8217;s frustrating,&#8221; &#8220;I have experienced inaccurate information too,&#8221; etc. If everyone is using it and it is inevitable, then why do I have a hard time finding many of these people inevitably using it?</p>
<p>It&#8217;s not just me and many of my friends that are questioning Ai usage. Now you have entire universities and media companies <strong><a href="https://futurism.com/college-grads-furious-ai-butchers-names-commencement" target="_blank" rel="noopener">questioning why people use Ai when humans can do a task better</a></strong>:</p>
<blockquote>
<p style="text-align: left;">The software raises some thorny questions about when it&#8217;s truly appropriate to deploy an Ai, particularly concerning jobs that could&#8217;ve easily been taken care of by a human.</p>
</blockquote>
<p>And now we even have people saying that the reports of Ai&#8217;s energy and resource consumption are overblown, or improving, or not really that concerning for individual users. But again, what is really happening here? MIT Technology Review writers James O&#8217;Donnell and Casey Crownhart took an <strong><a href="https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/">in-depth look at the realities and unknowns of Ai resource consumption</a></strong>, and the basic gist is that it&#8217;s not good: &#8220;emissions from individual Ai text, image, and video queries seem small — until you add up what the industry isn’t tracking and consider where it’s heading next.&#8221; And ultimately, even if the impact is small, those of us that don&#8217;t use it really don&#8217;t like having to pay for all of those that do use it:</p>
<blockquote>
<p style="text-align: left;">&#8220;Individuals may end up footing some of the bill for this Ai revolution, according to new research published in March. The researchers, from Harvard’s Electricity Law Initiative, analyzed agreements between utility companies and tech giants like Meta that govern how much those companies will pay for power in massive new data centers. They found that discounts utility companies give to Big Tech can raise the electricity rates paid by consumers.&#8221;</p>
</blockquote>
<p>Gee, thanks Ai Cheerleaders&#8230;</p>
<p>But is Ai really harmless (as some try to insist)? Apparently some people are being <strong><a href="https://www.rollingstone.com/culture/culture-features/ai-spiritual-delusions-destroying-human-relationships-1235330175/" target="_blank" rel="noopener">told by Ai to take on anti-social religious beliefs</a></strong> &#8211; and some are doing what they are told: &#8220;self-styled prophets are claiming they have &#8216;awakened&#8217; chatbots and accessed the secrets of the universe through ChatGPT.&#8221; The Rolling Stone article goes on to say:</p>
<blockquote>
<p style="text-align: left;">Kat was both “horrified” and “relieved” to learn that she is not alone in this predicament, as confirmed by a Reddit thread on r/ChatGPT that made waves across the internet this week. Titled “Chatgpt induced psychosis,” the original post came from a 27-year-old teacher who explained that her partner was convinced that the popular OpenAi model “gives him the answers to the universe.” Having read his chat logs, she only found that the Ai was “talking to him as if he is the next messiah.” The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by Ai.</p>
</blockquote>
<p>The whole article is a frightening look at Ai-induced psychosis, from &#8220;[my partner] would listen to the bot over me&#8221; to &#8220;the bot was God&#8221; to “[ChatGPT] gave my husband the title of ‘spark bearer’ because he brought it to life&#8221; to how people have ruined relationships due to ChatGPT (&#8220;She recently kicked her kids out of her home&#8230; and an already strained relationship with her parents deteriorated further&#8221; when she took ChatGPT&#8217;s advice). As more and more people turn to <strong><a href="https://fortune.com/2025/06/01/ai-therapy-chatgpt-characterai-psychology-psychiatry/" target="_blank" rel="noopener">ChatGPT for mental health counseling</a></strong> (despite experts saying this is a bad idea), I&#8217;m sure this will all turn out swimmingly for society.</p>
<p>We are also being told that the accuracy of Ai is improving all the time. But that narrative goes out the window when Google&#8217;s Ai couldn&#8217;t even get the current year correct. This is an actual screenshot from my phone last week when I asked Google &#8220;is it 2025&#8221;:</p>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-2517" src="https://www.edugeekjournal.com/wp-content/uploads/2025/06/GoogleSaidItIs2024.jpg" alt="" width="600" height="629" srcset="https://www.edugeekjournal.com/wp-content/uploads/2025/06/GoogleSaidItIs2024.jpg 600w, https://www.edugeekjournal.com/wp-content/uploads/2025/06/GoogleSaidItIs2024-286x300.jpg 286w, https://www.edugeekjournal.com/wp-content/uploads/2025/06/GoogleSaidItIs2024-238x250.jpg 238w" sizes="auto, (max-width: 600px) 100vw, 600px" /></p>
<p>And it even cited the always trustworthy source of &#8220;a calendar.&#8221; Obviously not a <em>current</em> calendar, but at least it is citing a real source this time (<strong><a href="https://www.biospace.com/policy/fake-citations-plague-rfk-jr-s-maha-report" target="_blank" rel="noopener">unlike our current government, which seems to like Ai as much as the Ai cheerleaders</a></strong>).</p>
<p><img loading="lazy" decoding="async" class="alignright size-thumbnail wp-image-1010" src="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg" alt="" width="150" height="150" srcset="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg 150w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1.jpg 300w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-250x250.jpg 250w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-50x50.jpg 50w" sizes="auto, (max-width: 150px) 100vw, 150px" />There are so many more articles about the problems with Ai from just the past week, but I don&#8217;t have time to add them. Again, I have to point out to the Cheerleaders that outside of a few outlier cases, there really isn&#8217;t a great case to be made for incorporating Ai into almost everything. Unless, of course, you are interested in aiding the surveillance state in tracking all of us and keeping us in line. So, is Ai improving education? Is everyone using it? Is it getting better for the environment? Is it getting better at responding to questions? Depends on who you ask and what information they cherry-pick to say &#8220;yes&#8221; to any of these questions.</p>
]]></content:encoded>
							<wfw:commentRss>https://www.edugeekjournal.com/2025/06/02/is-ai-improving-education-is-everyone-using-it-is-it-getting-better-for-the-environment-is-it-getting-better-at-responding-to-questions/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
							</item>
		<item>
							<title>How Dangerous is Ai for This Assignment?</title>
				
		<link>https://www.edugeekjournal.com/2025/05/14/how-dangerous-is-ai-for-this-assignment/</link>
				<comments>https://www.edugeekjournal.com/2025/05/14/how-dangerous-is-ai-for-this-assignment/#respond</comments>
				<pubDate>Wed, 14 May 2025 11:34:16 -0700</pubDate>
		<dc:creator><![CDATA[Matt Crosslin]]></dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Academic Integrity]]></category>
		<category><![CDATA[AI plagiarism]]></category>
		<category><![CDATA[Generative Ai]]></category>

		<guid isPermaLink="false">https://www.edugeekjournal.com/?p=2497</guid>
				<description><![CDATA[One of the more misguided claims about the Ai Questioning side of the Ai debate (if it can be called that since one side so clearly trumps almost all other aspects of the conversation) is that we believe in sticking our heads in the sand when it comes to Ai. No one on the Questioning&#8230;<a href="https://www.edugeekjournal.com/2025/05/14/how-dangerous-is-ai-for-this-assignment/" class="button">Read more <span class="screen-reader-text">How Dangerous is Ai for This Assignment?</span></a>]]></description>
								<content:encoded><![CDATA[<p>One of the more misguided claims about the Ai Questioning side of the Ai debate (if it can be called that since one side so clearly trumps almost all other aspects of the conversation) is that we believe in sticking our heads in the sand when it comes to Ai. No one on the Questioning side has ever suggested that, so this is an obvious straw man. In fact, the Questioning side wants people to understand the true state of Ai much more clearly than many do in order to understand the very real dangers it creates.</p>
<p>As much as I get tired of the &#8220;Ai is inevitable&#8221; discourse (because there are still many, many places where Ai is not part of anything), of course I recognize that it is here and that people will use it. Some people will even feel they have legitimate uses for it (even though I often disagree with them). Things like Ai usage charts can be a helpful tool in these instances.</p>
<p>One of the Ai usage charts that seems to be popular is from North Carolina, which first gained attention due to <strong><a href="https://www.edweek.org/technology/state-outlines-guidance-for-different-levels-of-ai-use-in-classrooms/2024/01" target="_blank" rel="noopener">this EdWeek article last year</a></strong>. There have been <strong><a href="https://www.canva.com/design/DAF3bSmWIBI/7e7FK5jaH2ripTBSaHIdpA/edit" target="_blank" rel="noopener">updated versions</a></strong> since then. I had some concerns with the original one: using red for &#8220;No Ai usage&#8221; and assigning it 0 on the scale, while using green for &#8220;Full Ai Use with Human Oversight&#8221; and assigning it a 4 on the scale definitely gave the chart a major pro-Ai bias. The updated version removed some of the chart functionality that was helpful, added &#8220;Ai Resistant&#8221; to the 0 level, rebranded the highest level of 4 to &#8220;Empowered,&#8221; and added a rocket ship and an infinity symbol to the highest level. In other words, they doubled down on their pro-Ai bias. Plus, most places where you find any version of this chart present the table as an image, making it an accessibility nightmare.</p>
<p>So I decided to go back to the better first version, and re-frame it into an Ai harm chart. The colors were reversed, and a &#8220;Potential Ai Dangers&#8221; column was added.</p>
<hr />
<h2 style="text-align: center;">How Dangerous is Ai for This Assignment?</h2>
<h3 style="text-align: center;">Generative Ai Dangers and Disclosure Scale</h3>
<p style="text-align: center;">Generative Ai refers to any of the thousands of artificial intelligence tools in which the model generates new content (text, images, audio, video, code, etc.). This includes, but is not limited to, Large Language Models (LLMs) such as ChatGPT, Google Gemini, etc.; image creators such as DALL-E 3 and Adobe Firefly; and any tools with built-in generative Ai capabilities such as Microsoft Copilot, Google Duet, Canva, etc.</p>
<table>
<tbody>
<tr>
<th></th>
<th><strong>Level of Ai Use</strong></th>
<th><strong>Potential Ai Dangers</strong></th>
<th><strong>Full Description</strong></th>
<th><strong>Disclosure Requirements</strong></th>
</tr>
<tr style="background-color: #88c55d;">
<td>0</td>
<td>No Ai Use</td>
<td>No Ai generated bias or hate speech.</p>
<p>No made up Ai responses (aka &#8220;hallucinations&#8221;).</p>
<p>No water or electricity consumption by Ai data centers.</td>
<td>This assessment is completed entirely without Ai assistance.</p>
<p>Ai must not be used at any point during the assessment.</p>
<p>This level ensures that students rely solely on their own knowledge, understanding, and skills.</td>
<td>No Ai disclosure required.</p>
<p>May require an academic honesty pledge that Ai was not used.</td>
</tr>
<tr style="background-color: #baedaa;">
<td>1</td>
<td>Ai-Assisted Idea Generation and Structuring</td>
<td>Moderate to high chance of Ai generated bias or hate speech.</p>
<p>Moderate to high chance of made up Ai responses (aka &#8220;hallucinations&#8221;).</p>
<p>Water or electricity consumption by Ai data centers depends on amount of personal usage.</td>
<td>No Ai content is allowed in the final submission.</p>
<p>Ai can be used in the assessment for brainstorming, creating structures, and generating ideas for improving work.</td>
<td>Ai disclosure statement must be included disclosing how Ai was used.</p>
<p>Link(s) to Ai chat(s) must be submitted with final submission.</td>
</tr>
<tr style="background-color: #e9e59e;">
<td>2</td>
<td>Ai-Assisted Editing</td>
<td>Low to moderate chance of Ai generated bias or hate speech.</p>
<p>Low to moderate chance of made up Ai responses (aka &#8220;hallucinations&#8221;).</p>
<p>Water or electricity consumption by Ai data centers depends on amount of personal usage.</td>
<td>No new content can be created using Ai.</p>
<p>Ai can be used to make improvements to the clarity or quality of student created work to improve the final output.</td>
<td>Ai disclosure statement must be included disclosing how Ai was used.</p>
<p>Link(s) to Ai chat(s) must be submitted with final submission.</td>
</tr>
<tr style="background-color: #eee059;">
<td>3</td>
<td>Ai for Specified Task Completion</td>
<td>Chance of Ai generated bias or hate speech depends on usage levels.</p>
<p>Chance of made up Ai responses (aka &#8220;hallucinations&#8221;) depends on usage levels.</p>
<p>Water or electricity consumption by Ai data centers depends on amount of personal usage.</td>
<td>Ai is used to complete certain elements of the task, as specified by the teacher.</p>
<p>This level requires critical engagement with Ai generated content and evaluating its output.</p>
<p>You are responsible for providing human oversight and evaluation of all Ai generated content.</td>
<td>All Ai created content must be cited using proper MLA or APA citation.</p>
<p>Link(s) to Ai chat(s) must be submitted with final submission.</td>
</tr>
<tr style="background-color: #f09897;">
<td>4</td>
<td>Full Ai Use with Human Oversight</td>
<td>Excessive opportunities for bias and hate speech.</p>
<p>High possibility of made up Ai responses (aka &#8220;hallucinations&#8221;).</p>
<p>Largest consumption levels of water and electricity by Ai data centers.</td>
<td>You may use Ai throughout your assessment to support your own work in any way you deem necessary.</p>
<p>Ai should be a ‘co-pilot’ to enhance human creativity.</p>
<p>You are responsible for providing human oversight and evaluation of all Ai generated content.</td>
<td>You must cite the use of Ai using proper MLA or APA citation.</p>
<p>Link(s) to Ai chat(s) must be submitted with final submission.</td>
</tr>
</tbody>
</table>
<p>Adapted by Matt Crosslin from the adaption by Vera Cubero for the North Carolina Department of Public Instruction (NCDPI) from the work of Dr. Leon Furze, Dr. Mike Perkins, Dr. Jasper Roe FHEA, &amp; Dr. Jason Mcvaugh. <strong><a href="https://www.canva.com/design/DAF3bSmWIBI/7e7FK5jaH2ripTBSaHIdpA/edit" target="_blank" rel="noopener nofollow">Link to Original Work</a></strong>.</p>
<p><img loading="lazy" decoding="async" class="alignnone size-full wp-image-2503" src="https://www.edugeekjournal.com/wp-content/uploads/2025/05/Creative-Commons-BY-NC-SA.jpg" alt="Creative Commons Licensed BY (attribution) NC (Non Commercial) SA" width="197" height="69" /></p>
<p>Creative Commons Licensed BY (attribution) NC (Non Commercial) SA (Share Alike)<br />
To remix this for your use case, you may make an editable copy, using this <a href="https://www.canva.com/design/DAF3bSmWIBI/HwOW_07tC5c3jehuTtPyHA/view?utm_content=DAF3bSmWIBI&amp;utm_campaign=designshare&amp;utm_medium=link&amp;utm_source=publishsharelink&amp;mode=preview" target="_blank" rel="noopener nofollow"><strong>TEMPLATE LINK</strong>.</a><br />
Please maintain CC licensing and all attributions in all duplications, references, or remixing.</p>
<hr />
<p><img loading="lazy" decoding="async" class="alignright size-thumbnail wp-image-1010" src="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg" alt="" width="150" height="150" srcset="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg 150w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1.jpg 300w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-250x250.jpg 250w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-50x50.jpg 50w" sizes="auto, (max-width: 150px) 100vw, 150px" />Some might argue with the column order in this chart, but I feel the potential harms need to be front and center in any usage chart, partially because they are so glossed over (at best) in most usage discussions, but mainly because we really should be more upfront about the dangers that can cause real harm. I kept the necessary language and attributions in place, except for updating the link to the original work to the correct link. The template link leads to the most recent version of the original chart, which wouldn&#8217;t be helpful for updating this version. But it is there.</p>
]]></content:encoded>
							<wfw:commentRss>https://www.edugeekjournal.com/2025/05/14/how-dangerous-is-ai-for-this-assignment/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
							</item>
		<item>
							<title>Why Do Ai Cheerleaders Respond to Critics the Way They Do? (Part 3)</title>
				
		<link>https://www.edugeekjournal.com/2025/05/14/why-do-ai-cheerleaders-respond-to-critics-the-way-they-do-part-3/</link>
				<comments>https://www.edugeekjournal.com/2025/05/14/why-do-ai-cheerleaders-respond-to-critics-the-way-they-do-part-3/#respond</comments>
				<pubDate>Wed, 14 May 2025 08:01:41 -0700</pubDate>
		<dc:creator><![CDATA[Matt Crosslin]]></dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[ChatGPT]]></category>

		<guid isPermaLink="false">https://www.edugeekjournal.com/?p=2488</guid>
				<description><![CDATA[In Part 1 of this series, I looked at some specific pushback to my posts that question the dominant Ai narratives out there. In Part 2 I took a step back and looked at pushback to what others have written about their concerns with Ai. In this post, it is time to turn my attention&#8230;<a href="https://www.edugeekjournal.com/2025/05/14/why-do-ai-cheerleaders-respond-to-critics-the-way-they-do-part-3/" class="button">Read more <span class="screen-reader-text">Why Do Ai Cheerleaders Respond to Critics the Way They Do? (Part 3)</span></a>]]></description>
								<content:encoded><![CDATA[<p>In <strong><a href="https://www.edugeekjournal.com/2025/04/19/why-do-ai-cheerleaders-respond-to-critics-the-way-they-do-part-1/">Part 1</a></strong> of this series, I looked at some specific pushback to my posts that question the dominant Ai narratives out there. In <strong><a href="https://www.edugeekjournal.com/2025/05/12/why-do-ai-cheerleaders-respond-to-critics-the-way-they-do-part-2/">Part 2</a></strong> I took a step back and looked at pushback to what others have written about their concerns with Ai. In this post, it is time to turn my attention to the technology in question.</p>
<p>Some have said I am being too hard on Ai technology because there are good uses of it. Some have said that I have been too harsh on those that use Ai as part of their daily routine. One commenter on this blog decided to have Ai speak for itself by asking ChatGPT 4o to review a post he took issue with. The post in question is <strong><a href="https://www.edugeekjournal.com/2024/08/30/ai-the-trend-that-was-promised-to-be-different-keeps-following-the-path-of-all-other-fads/">Ai: The Trend That Was Promised to Be Different Keeps Following the Path of All Other Fads</a></strong>. I think the author wanted to prove that while there are legitimate concerns about Ai out there, there is also some use for Ai&#8230; while also subtly telling me I was off-base in my criticisms. I will just quote the (de-identified) comment here and respond to each paragraph.</p>
<blockquote>
<p style="text-align: left;">&#8220;I always enjoy reading your stuff! Call me more of a middle ground person – I think there’s validity to some of your observations and concerns (completely agreed on the “another tool in my toolkit” take), but I’ve observed far more benefit than I think you’re giving GenAi credit for. I’ve found considerable value and time savings in having GenAi draft up proposals, summarize articles so that I can decide if I want to spent the time reading them, recording meeting minutes and next actions steps in MS Teams (great for an guy like me with attention issues), making technical instructions more concise and reader-friendly, and more. But yes, there’s a lot of hype and fluff, too. I think much of CoPilot for M365 is garbage for a variety of reasons, and I’m really hoping Microsoft rethinks their design approach to this tool.&#8221;</p>
</blockquote>
<p>I kept this part intact just so you can see that this person uses Ai in ways that <strong><a href="https://www.edugeekjournal.com/2025/04/26/are-you-ready-for-a-trip-into-ai-matter-and-space/">leaders in education have expressed concerns about</a></strong> (summarizing articles specifically).</p>
<blockquote>
<p style="text-align: left;">&#8220;I’m fascinated by logical fallacies in arguments, and so I asked ChatGPT 4o to review your article and determine if any logical fallacies existed – see below. Not bad. Of course, knowing Ai is biased, you have to take these with a grain of salt. :&gt;)&#8221;</p>
</blockquote>
<p>So, obviously one could point out that blog posts are generally about expressing opinions and commentary, not logical arguments. Right off the bat, using ChatGPT in this manner is just a major misunderstanding of how to apply logical fallacies. My post in question is a rebuttal and alternate viewpoint (if that wasn&#8217;t obvious from the title), not a logical argument. However, if ChatGPT is so good at what it does, surely it would account for the differences between a logical argument and a blog post, right? This mistake does add to the concerns I have been expressing in this series. Let&#8217;s see how ChatGPT did with evaluating the post based on logical arguments.</p>
<blockquote>
<p style="text-align: left;">The argument contains several logical fallacies, which detract from its overall validity. Here are some of the most prominent fallacies:</p>
<p style="text-align: left;">Hasty Generalization: The author quickly concludes that Ai is a fad, based on limited personal experiences and anecdotes from social media and friends. The argument relies on these selective instances rather than considering a broader spectrum of evidence. This fallacy is evident when the author cites that “most give it up” because it’s quicker to do things manually, without exploring the vast number of use cases where Ai has been beneficial.</p>
</blockquote>
<p>Right away ChatGPT has already fallen apart. In general, I was responding to the idea that &#8220;Ai is different from past fads.&#8221; The assumption that Ai is a &#8220;fad&#8221; was already baked into the idea that I am responding to. Whether or not Ai is technically a fad is irrelevant to the point. But the part where the Ai quotes me is where I talk about my experience with those I follow and talk to regularly. I had already pointed out that it was anecdotal. Of course, that doesn&#8217;t mean I didn&#8217;t consider a broader spectrum of evidence. ChatGPT is just assuming an argument I wasn&#8217;t trying to make in the first place. But then again, part 2 of this series looked at how humans assume you don&#8217;t know what you are talking about if they disagree with your point, so I guess ChatGPT is just following its training data?</p>
<blockquote>
<p style="text-align: left;">Cherry-Picking: The author selectively focuses on examples where Ai has failed or caused harm, ignoring successful and impactful uses of Ai in various fields. For instance, they dismiss Miguel Guhlin’s Ai breakthrough as an “outlier” and only cite failures, thus presenting an unbalanced view. The argument would be stronger if it acknowledged both successes and failures.</p>
</blockquote>
<p>This one sounds good at first. But think about it: it&#8217;s not &#8220;ignoring&#8221; successes when you are specifically presenting a counter-argument. Also, I don&#8217;t just dismiss Miguel Guhlin&#8217;s point &#8211; I point out very specific ways that he didn&#8217;t make his point. I wasn&#8217;t making an argument &#8211; the point was to raise questions that Guhlin did not address. He pointed out his own successes; I was just presenting a counterbalance. A human reader would probably get this.</p>
<blockquote>
<p style="text-align: left;">Ad Hominem: There is an implicit attack on those who claim Ai is useful. For instance, the suggestion that people like Guhlin’s achievements are met with incredulity (“THAT took you weeks? Really?”) dismisses their credibility rather than engaging with the merits of their arguments.</p>
</blockquote>
<p>Except I did engage with the merits of Guhlin&#8217;s and others&#8217; arguments, so there were no <em>ad hominem</em> attacks. The second sentence in the quote above actually confuses the whole point of the part of my post it is referring to. I didn&#8217;t suggest such a thing, and even if I did &#8211; that statement technically did engage with the merit of their argument (questioning the massive time savings claimed).</p>
<blockquote>
<p style="text-align: left;">Slippery Slope: The argument moves from discussing Ai-driven ordering systems to grandiose claims about Ai’s harm, such as environmental destruction and injury from self-driving cars, without a clear connection between these disparate issues. This makes it seem like any adoption of Ai will inevitably lead to these severe consequences, which is an oversimplification.</p>
</blockquote>
<p>This one also sounds good at first, but it is pretty much gibberish if you think about it. The harms of Ai are documented, not &#8220;grandiose claims.&#8221; Ai is causing these harms, so there is a clear connection. It&#8217;s not oversimplification; it&#8217;s clear causation.</p>
<blockquote>
<p style="text-align: left;">Strawman Fallacy: The author misrepresents David Wiley’s point by equating “Generative Ai Education” to “Office Software Education,” thereby reducing a nuanced argument into something easily dismissible. The author also exaggerates the concept of “tool worship” and dismisses it without engaging with the broader point about generative Ai’s role in education.</p>
</blockquote>
<p>This just shows ChatGPT has no idea what it is talking about sometimes. The concept of tool worship is not exaggerated &#8211; it does happen, and I was making a case for it happening here. This point is actually one of many ways to engage with the broader point of generative Ai&#8217;s role in education. But my exact quote was &#8220;Saying &#8216;Generative Ai Education&#8217; makes about as much sense as saying &#8216;Office Software Education.&#8217;” So I wasn&#8217;t equating the two, just pointing out how either one doesn&#8217;t make sense in the given context. Also, it is unclear what the &#8220;it&#8221; in the last sentence of the quote is referring to &#8211; is it saying I dismiss the concept of tool worship itself? I didn&#8217;t do that. Did it mean to say I was using tool worship to dismiss generative Ai in general? Because I didn&#8217;t do that either.</p>
<blockquote>
<p style="text-align: left;">Appeal to Emotion: The argument uses emotionally charged language to make Ai seem more harmful than it might be in practice. For instance, it emphasizes that Ai bots bully people into suicide and references the destruction of lakes, which are serious issues but presented in a way meant to evoke strong emotional reactions rather than providing a reasoned critique of Ai’s role.</p>
</blockquote>
<p>Well, yes, blog posts do use appeals to emotion. This is where looking for logical fallacies on a blog post that was meant to provide counter-points falls apart. Of course I am trying to appeal to emotion on these serious issues. Saying that my words would cause a &#8220;strong&#8221; emotional reaction is quite the stretch, but we really need to stop acting like emotion is bad.</p>
<blockquote>
<p style="text-align: left;">False Dilemma: The argument implies that Ai is either a complete failure or a tool for “cool kids,” ignoring the possibility that Ai could be useful in some contexts and problematic in others. This oversimplifies the discussion by presenting it as an all-or-nothing situation.</p>
</blockquote>
<p>I didn&#8217;t imply that. But I think ChatGPT confused other people saying everyone must use Ai with my responses to that assertion.</p>
<blockquote>
<p style="text-align: left;">Appeal to Popularity: The author suggests that because “many, many” people agree with Heather M. Ross’s critique, it must be valid, which is a fallacy. Popular opinion doesn’t necessarily equate to truth, and it overlooks contrary evidence from those who have found Ai useful.</p>
<p style="text-align: left;">These fallacies collectively undermine the strength of the argument, making it less persuasive and more ideologically driven rather than factually grounded.&#8221;</p>
</blockquote>
<p>Actually, I went into 7 points that back up my agreement with Ross&#8217;s critique, so it wasn&#8217;t an appeal to popularity. Maybe the list confused ChatGPT?</p>
<p><img loading="lazy" decoding="async" class="alignright size-thumbnail wp-image-1010" src="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg" alt="" width="150" height="150" srcset="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg 150w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1.jpg 300w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-250x250.jpg 250w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-50x50.jpg 50w" sizes="auto, (max-width: 150px) 100vw, 150px" />I&#8217;m not really sure what this post will add to the Ai discourse. It was really just a fun exercise in picking apart a ChatGPT response presented as &#8220;not bad.&#8221; It was almost all bad. I was really hoping to find some parts of it that were good feedback, just to prove that I don&#8217;t dismiss all Ai outright. But&#8230; there just wasn&#8217;t much to the quality of the response the more I dug into it. Which has been my experience with most Ai output &#8211; and many others&#8217; experiences as well (not an appeal to popularity &#8211; I have no idea how popular this experience is. Just that it does exist for a decent number of people). There are an astounding number of times when an Ai Cheerleader will give proof of a &#8220;good&#8221; Ai output that looks solid at first glance, but then falls apart the more you dig into it. It makes one wonder how deeply they examined said output in the first place.</p>
]]></content:encoded>
							<wfw:commentRss>https://www.edugeekjournal.com/2025/05/14/why-do-ai-cheerleaders-respond-to-critics-the-way-they-do-part-3/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
							</item>
		<item>
							<title>Why Do Ai Cheerleaders Respond to Critics the Way They Do? (Part 2)</title>
				
		<link>https://www.edugeekjournal.com/2025/05/12/why-do-ai-cheerleaders-respond-to-critics-the-way-they-do-part-2/</link>
				<comments>https://www.edugeekjournal.com/2025/05/12/why-do-ai-cheerleaders-respond-to-critics-the-way-they-do-part-2/#respond</comments>
				<pubDate>Mon, 12 May 2025 20:49:17 -0700</pubDate>
		<dc:creator><![CDATA[Matt Crosslin]]></dc:creator>
				<category><![CDATA[Artificial Intelligence]]></category>
		<category><![CDATA[Generative Ai]]></category>

		<guid isPermaLink="false">https://www.edugeekjournal.com/?p=2450</guid>
				<description><![CDATA[In part one of this blog post I started looking at some ways that &#8220;Ai Cheerleaders&#8221; (as I refer to them) respond to &#8220;Ai Questioners&#8221; (some would say &#8220;anti-Ai&#8221; or even terms meant to be derogatory like &#8220;activists&#8221;). I need to apologize to those that fall into the middle of the two camps &#8211; I&#8230;<a href="https://www.edugeekjournal.com/2025/05/12/why-do-ai-cheerleaders-respond-to-critics-the-way-they-do-part-2/" class="button">Read more <span class="screen-reader-text">Why Do Ai Cheerleaders Respond to Critics the Way They Do? (Part 2)</span></a>]]></description>
								<content:encoded><![CDATA[<p>In <strong><a href="https://www.edugeekjournal.com/2025/04/19/why-do-ai-cheerleaders-respond-to-critics-the-way-they-do-part-1/">part one of this blog post</a></strong> I started looking at some ways that &#8220;Ai Cheerleaders&#8221; (as I refer to them) respond to &#8220;Ai Questioners&#8221; (some would say &#8220;anti-Ai&#8221; or even terms meant to be derogatory like &#8220;activists&#8221;). I need to apologize to those that fall into the middle of the two camps &#8211; I know that there are many people that don&#8217;t accept all of the Ai hype but also think there can be good uses of Ai. So there are more than the two camps that I pointed out &#8211; my mistake. There really is a spectrum with total embrace of Ai on one side and total rejection of Ai on the other, with most of us falling somewhere between the two extremes. But there does seem to be a large cluster of people on the &#8220;Ai positive&#8221; side of the spectrum where one would place the &#8220;Cheerleader&#8221; label, and another large cluster of people on the &#8220;Ai negative&#8221; side where one would place the &#8220;Questioner&#8221; label. I got in a hurry and just decided to look at the spectrum as if there was a hard line between the positive and negative sides, but I know there are many in the middle that feel they are neither positive nor negative.</p>
<p>However, the two sides I focused on do exist, and one of them (Cheerleaders) drives a lot of the narrative. The &#8220;Ai Cheerleader&#8221; side seems to be having the most influence on policy these days &#8211; up to the national level in the United States. It really should give some people pause when they realize that they are in agreement with Trump, Musk, and other people&#8230; and not in a &#8220;broken clock is right twice a day&#8221; kind of way. Agreeing with people that are wrong about everything means you are, well&#8230;</p>
<p>The frustrating thing is that the data on Ai harms and problems, as well as concerns of the real people being forced to use Ai every day, usually lines up with the Ai Questioning side. Not always, and not perfectly &#8211; but more often than not. So you have to wonder if the Ai Cheerleader side pays good attention to the Ai Questioning side, or even the Ai Neutral or Undecided side. This part 2 will dive into these concerns more broadly.</p>
<p>Watching &#8220;both sides&#8221; of an issue (or &#8220;all sides&#8221;) can definitely give one a full-ranging sense of the differing viewpoints surrounding controversial issues. Generally what one comes away with when doing this is that these issues contain a lot of complexity. However, that doesn&#8217;t mean that both sides recognize this complexity. For the issue of Ai, this complexity is often the source of the concerns for the Ai Questioning side. I also think this is where a good deal of the criticism of the Ai Questioners by Ai Cheerleaders misses the point. Or even mischaracterizes the Ai Questioners altogether.</p>
<p>So basically I want to go through several posts by various Ai Cheerleaders and discuss the concerning ways that they frame the Ai Questioning / Skeptical / Activist side:</p>
<hr />
<p>There are many Ai Cheerleaders that echo some sentiments like <strong><a href="https://donaldclarkplanb.blogspot.com/2024/09/realistic-dialogue-is-here.html" target="_blank" rel="noopener">Donald Clark stated recently</a></strong>:</p>
<blockquote>
<p style="text-align: left;">&#8220;Once again I’d emphasise that you really do have to ‘try it to get it’. An interesting paper just published showed that much of the scepticism on GenAi comes from those who have not tried it. Lesson &#8211; for any real critical thinking or analysis, some use is necessary.&#8221;</p>
</blockquote>
<p>What paper? There is no link. What is getting labeled as skepticism? Are the questioners of Ai also the skeptics? Every time I speak to someone that is skeptical of Ai, it is after they have used it. I was just on a phone call with a friend last night who was making fun of how bad an Ai program she had used that day was. Most of the skepticism I read in articles comes from people that are using it.</p>
<p>For example, if you read Audrey Watters, it is obvious she knows Ai and even mentions when she tries it out. I mean, she has spent decades studying Ai. <strong><a href="https://2ndbreakfast.audreywatters.com/ai-foreclosure/" target="_blank" rel="noopener">And she is a skeptic</a></strong>:</p>
<blockquote>
<p style="text-align: left;">&#8220;But Ai – a large-language model or predictive algorithm or otherwise – is built on a corpus that is, quite literally, bound to the past. Education&#8217;s Ai has been trained on outmoded curriculum, exclusionary practices, and racist data; it is trained on YouTube videos and YouTube comments and Wikipedia entries; it is trained (mostly) on the English-language Internet – trained on a very small slice of knowledge and culture because not all knowledge and culture have been recorded, let alone digitized; and yet simultaneously trained on a disproportionately large slice of discrimination and violence, because <em>that</em> has been the experiences of Black students, poor students, students with disabilities, non-English-speaking students, undocumented students, and queer, nonbinary, and trans students.&#8221;</p>
</blockquote>
<p>Did the paper that Clark mentions take into account skeptics like Watters? Or did it give the same weight to the opinion of random people who have never touched Ai (and would naturally be skeptical) as it did to those that have studied it for decades? Who knows. But there is a big difference between noting the general skepticism that always precedes new technology (or new anything) and noting how much real skeptics know about what they question.</p>
<p>Also, the idea that &#8220;any real&#8221; critical thinking requires use is contrary to what we know about critical thinking.</p>
<p>To be clear about my concerns here:</p>
<ul>
<li>Referring to a paper without linking to it or even citing it is concerning at best.</li>
<li>The implication that most skeptics haven&#8217;t tried Ai is inaccurate.</li>
<li>Implying that most criticism doesn&#8217;t come from real critical thinking is concerning.</li>
</ul>
<hr />
<p>Digging into the criticism of Ai Questioners is always hard because there are also all of these constant inappropriate digs at questioners&#8217; intelligence. For instance, <strong><a href="https://www.downes.ca/post/77638" target="_blank" rel="noopener">Stephen Downes said</a></strong>:</p>
<blockquote>
<p style="text-align: left;">&#8220;But it isn&#8217;t Ai that&#8217;s the problem, is it?&#8230;. The product isn&#8217;t the problem, the companies that control the product are. Educated people should be able to see the difference.&#8221;</p>
</blockquote>
<p>This was after referring to Watters and Marc Watkins, two people who know this difference well and who mention the companies that control Ai all the time. In fact, I am having a hard time thinking of any critic that doesn&#8217;t note the difference. Most people questioning Ai are criticizing the companies that control it AND the product itself. It’s not a simple either/or. How do you separate the people that control Ai from what Ai does? Not very cleanly. But Ai has problems all on its own: very little ability to detect bad text / false results / biased or hateful responses, resource decimation, and so on.</p>
<p>To be clear about my concerns here:</p>
<ul>
<li>Claiming there is a clean separation between the Ai products and the companies that control Ai products is not realistic.</li>
<li>Taking swipes at &#8220;educated people,&#8221; implying that people who disagree might not be well-educated on this topic, just isn&#8217;t helpful for the conversation.</li>
</ul>
<hr />
<p>Oh but wait, we are told that <strong><a href="https://www.downes.ca/post/77622" target="_blank" rel="noopener">Ai is improving rapidly and consuming less resources</a></strong>, so these problems are gone or disappearing, right? Except, when you dig into the actual proof&#8230; not so much. This claim of &#8220;consuming less resources&#8221; was made <strong><a href="https://www.tanayj.com/p/a-few-charts-on-where-ai-adoption" target="_blank" rel="noopener">based on five charts</a></strong>. The first chart claims to show how &#8220;the cost of intelligence is plummeting,&#8221; but it is a chart with straight lines and some suspicious assumptions on the x-axis. I have never seen a resource consumption chart (for something like energy) that is a straight line &#8211; they always have bumps and dips. Flattening out data like this means either it is all predictive and not based on real-world numbers, or you are trying to hide a trend that the bumpy line would show. However, taking the chart at face value, while it does show that &#8220;achieving GPT‑4–level performance now costs nearly 1,000 times less,&#8221; the point is never made that this &#8220;should be a relief to those with genuine concerns about the environment.&#8221; It is about saving CEOs money: &#8220;This significant drop in costs will enable companies to integrate Ai into the free tiers of their products, potentially unlocking Ai for over a billion new users.&#8221; Yeah, simple math would point out that dropping costs by 1,000 times just to add 1,000,000,000 new users means you are still using 1,000,000 times more resources. Yes, I know that would make the same math mistake that the chart itself does, but even using correct math would show these decreases in cost are just there to empower greater resource deprivation.</p>
<p>Oh, and where is the chart that is saying &#8220;Ai capability is improving rapidly&#8221;? Is it the chart that basically says &#8220;it is okay for Ai adoption to take time&#8221;? Is it the chart that shows consulting firms are helping with Ai adoption? Or is it the one that shows there are more parts to Ai that need to work than just the language model? Perhaps it is the really problematic one that claims &#8220;any task whose outcome can be evaluated can be learned through reinforcement learning&#8221;?</p>
<p>(No, this claim about reinforcement learning is not true &#8211; there are many forms of evaluation that are complex and take much more than reinforcement learning. This chart is based on math courses, and many, many topics are much different than math. So you can&#8217;t say &#8220;ANY task.&#8221; And of course we know that if learners repeat something over and over again it will stick in some way. But these learners could also end up hating learning and whatever topic you are bludgeoning them with. Seriously people &#8211; talk to an instructional designer before setting up these studies!)</p>
<p>To be clear about my concerns here:</p>
<ul>
<li>There isn&#8217;t a chart that claims Ai is improving rapidly.</li>
<li>The chart that shows costs of Ai plummeting is suspect in different ways.</li>
<li>Showing a decrease in consumption is not a relief to those with environmental concerns, as the chart in question is used to justify an even larger increase in usage &#8211; not saving the environment. Plus, hello: look at how capitalism works.</li>
</ul>
<hr />
<p>Anyways, back to those unnecessary digs at intelligence. This one takes some set-up: <strong><a href="https://marcwatkins.substack.com/p/the-costs-of-ai-in-education" target="_blank" rel="noopener">Marc Watkins made the point that</a></strong></p>
<blockquote>
<p style="text-align: left;">&#8220;It will take years of trial and error to integrate Ai effectively in our disciplines. That&#8217;s assuming the technology will pause for a time. It won&#8217;t. Which leaves us in a constant state of trying to adapt. So, why are we investing millions in greater access to tools no one has the bandwidth or resources to learn or integrate?&#8221;</p>
</blockquote>
<p>Watkins&#8217; point was also proven by one of the five charts in the last example, so that sounds like an accurate assessment&#8230; right? Well, <strong><a href="https://www.downes.ca/post/77630" target="_blank" rel="noopener">Stephen Downes strongly implies</a></strong> that this is &#8220;less&#8221; &#8220;smart thinking&#8221; and then responds:</p>
<blockquote>
<p style="text-align: left;">&#8220;The assumption is that <em>we</em> (the instructors, the institution) must fully master the tool before it is useful to learners. But of course, that&#8217;s not true at all.&#8221;</p>
</blockquote>
<p>However, that assumption is not in Watkins&#8217; writing, because there is a difference between effectiveness and mastery. For example, I can learn how to cook effectively without mastering cooking. Most grading scales that use the term &#8220;mastery&#8221; have several levels (Does Not Meet Standards, Meets Standards, Masters Standards, etc.). I can think of probably thousands of ways that <em>effective</em> is different than <em>mastery</em>. Watkins is making the point that the time it takes to learn how to use Ai <em>effectively</em> is too long. Sure, instructors are <em>using</em> Ai in courses, but many students and teachers are trying to point out it is not always very <em>effective</em>. And as an instructional designer, I can think of thousands of examples of instructors using technology when they really shouldn&#8217;t have. Just because they are using something, that doesn&#8217;t make said usage effective.</p>
<p>To be clear about my concerns here:</p>
<ul>
<li>Exchanging words like &#8220;mastery&#8221; for &#8220;effective&#8221; distorts the original questioner&#8217;s point.</li>
<li>There is a very well established and known history of instructors using technology in courses, and this is obviously what Watkins is referring to.</li>
<li>Taking swipes at how &#8220;smart&#8221; thinking is or is not just because someone disagrees with you just isn&#8217;t helpful for the conversation.</li>
</ul>
<hr />
<p>Then there is the implication that skeptics are not asking the right question. When <strong><a href="https://2ndbreakfast.audreywatters.com/automated-contempt/" target="_blank" rel="noopener">Audrey Watters elsewhere said</a></strong>:</p>
<blockquote>
<p style="text-align: left;">&#8220;We cannot outsource thinking and compassion to a hierarchy-generating machine and expect the world to be anything other than automated emptiness and exploitation&#8221;</p>
</blockquote>
<p><a href="https://www.downes.ca/post/77628" target="_blank" rel="noopener"><strong>Stephen Downes replied</strong></a>:</p>
<blockquote>
<p style="text-align: left;">&#8220;But that&#8217;s a <em>different</em> question from the one asking whether we need human teachers to teach. Now I&#8217;m not endorsing the idea &#8220;that automation will make learning better, faster, cheaper, more scalable, more &#8216;personalized.'&#8221; What I think technology does is to enable students to manage their own learning for themselves, so that <em>they</em> can be thinking and compassionate learners, and not mere automatons following a teacher&#8217;s instructions.&#8221;</p>
</blockquote>
<p>Now, I can&#8217;t find where Watters was asking a question, or claimed it was the only question to ask, or that it was the same as asking whether we need human teachers. Not to mention that different questions can still have overlapping answers. So I&#8217;m confused by Downes&#8217; response to her point. But when Downes says what &#8220;technology does is to enable students to manage their own learning for themselves, so that <em>they</em> can be thinking and compassionate learners, and not mere automatons following a teacher&#8217;s instructions&#8221;&#8230; he actually <em>is</em> claiming that technology can make learning &#8220;better&#8221; and &#8220;more &#8216;personalized'&#8221; (aka the opposite of automatons). He may not claim that technology is automation, but that is the goal of so many Ai programs. We also know that technology can <em>sometimes</em> help learners that choose to use it that way to become more &#8220;thinking and compassionate learners&#8221; (aka &#8220;better&#8221; learners), but it&#8217;s the learner that does this, not the technology (and sometimes in spite of what the technology forces on them). However&#8230; many learners do not get that from technology. Research shows this again and again &#8211; in fact, my own research into moving students into self-determined learning often showed that many learners actually wanted to follow the teacher&#8217;s instructions rather than choose for themselves (even if in other instances they do want to do the thinking). It&#8217;s complicated, but I rarely see technology itself making the change in the learner. Believe me, I tried to create some that did.</p>
<p>To be clear about my concerns here:</p>
<ul>
<li>We shouldn&#8217;t be claiming someone is asking a question when they aren&#8217;t asking a question, and besides that there is room for many questions in this discussion.</li>
<li>Claiming one doesn&#8217;t think technology will make learning better or more personalized, and then turning around and saying that technology can help make learning better and more personalized, is contradictory.</li>
</ul>
<hr />
<p>There are also concerning ways of looking at education expressed by Ai Cheerleaders. For example, <strong><a href="https://www.downes.ca/post/77633" target="_blank" rel="noopener">Stephen Downes</a></strong> had this to say about knowledge:</p>
<blockquote>
<p style="text-align: left;">&#8220;The knowledge is pretty much irrelevant. Not because we don&#8217;t need it, but there will always be knowledge, the knowledge always changes, and if we need to, we can always look it up.&#8221;</p>
</blockquote>
<p>This is in response to a <strong><a href="https://www.coolcatteacher.com/artificial-intelligence-in-schools/" target="_blank" rel="noopener">blog post by Vicki Davis</a></strong> that says we need a &#8220;fire that sparks the joy of learning.&#8221; My experience as a public school teacher was that most children, like my son, love learning &#8211; but I have to spend a lot of time pointing out when he learned something that is misinformation, when he learned an opinion as a fact, or even how to apply what he learned in real life. These are things you can rarely just look up without running into vastly opposing opinions. The idea that you can &#8220;always look it up&#8221; is problematic in today&#8217;s misinformation reality. One could respond to Downes that learning is pretty much irrelevant, because there will always be learning, it can always change, and if they need to, people can always do it on their own anyways.</p>
<p>We need knowledge to tell when something that we learned is misinformation. We need knowledge to know what to do with what we learned when it&#8217;s all correct but requires different uses in different contexts. Often knowledge requires knowledge of context. But even saying that would be problematic, because that all circles back to learning. We need to start realizing that knowledge, learning, application, discernment, etc. are all complicated, interrelated aspects that <strong><a href="https://www.edugeekjournal.com/2025/02/18/course-design-should-cost-about-zero-what-on-earth-are-george-siemens-and-stephen-downes-thinking/">can&#8217;t be separated out</a></strong> and therefore shouldn&#8217;t be treated with such black-and-white absolutist thinking.</p>
<p>To be clear about my concerns here:</p>
<ul>
<li>Knowledge is more complicated than saying there &#8220;will always be knowledge, the knowledge always changes, and if we need to, we can always look it up.&#8221;</li>
<li>Context of knowledge is important to knowing what to learn about said knowledge.</li>
</ul>
<hr />
<p>Anyways, while you can argue over technical details about whether Ai is destroying education or not &#8211; does any of that matter when the people in charge of Ai are working with the people that are dismantling public schools and public libraries to replace both with Ai? And they are starting to export all of this outside the United States? When many children will only get access to the Ai replacement for public schools and libraries, does it really matter if education still exists for the elite? When you find yourself promoting Ai on the same side with Trump and Musk, shouldn&#8217;t it cause you major concerns?</p>
<p>Because yes, while I know <strong><a href="https://www.downes.ca/post/77642" target="_blank" rel="noopener">many believe</a></strong> things like &#8220;the promise of Ai is that <em>everybody</em> will be able to understand other languages&#8221; and that people working with Ai will all be &#8220;well-educated high-paid professionals,&#8221; the people currently in charge of Ai companies have proven time and time again that they will charge for their products eventually, and that they will not pay people very well to work with it. There is no Ai utopia lining up for the masses in the current MAGA era.</p>
<p>Of course, there are <strong><a href="https://buttondown.com/SAIL/archive/sail-ai-science-higher-education-landscape/" target="_blank" rel="noopener">those that say</a></strong> that it feels &#8220;like parts of higher education face a reckoning that can’t be ignored. We need to communicate value (social and economic) to our learners and to society.&#8221; But is that really the problem? Institutions spend untold billions on advertising, recruitment, outreach, etc, etc, etc. This kind of shows that no matter how much power an institution has, if another aspect of society holds <em>more</em> power, and half or more of that group hates said institution and is actively working to dismantle it&#8230; no amount of value communication will overcome that power imbalance. Of course, higher ed is doing many things wrong, but I&#8217;m not sure communication of value is one of them.</p>
<p><img loading="lazy" decoding="async" class="alignright size-thumbnail wp-image-1010" src="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg" alt="" width="150" height="150" srcset="https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-150x150.jpg 150w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1.jpg 300w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-250x250.jpg 250w, https://www.edugeekjournal.com/wp-content/uploads/2015/02/edugeek-journal-avatar1-50x50.jpg 50w" sizes="auto, (max-width: 150px) 100vw, 150px" />I could continue citing problems with posts and articles about the Ai Questioners and how so many get us wrong. Critics of the Ai Questioning side need to do a better job of really reading what the Questioners are saying, and then need to respond without stooping to unnecessary attacks against intelligence. It would help to stop making claims about Ai that are not backed up by the papers and studies one is quoting. In <a href="https://www.edugeekjournal.com/2025/05/14/why-do-ai-cheerleaders-respond-to-critics-the-way-they-do-part-3/"><strong>part 3</strong></a> (if I can find the time), I am going to look at a specific comment left on one of my posts. I think the commenter thought they would show me how good Ai is, but the critique of my post that the Ai came up with is so bad that it would be hilarious if it wasn&#8217;t meant in all seriousness.</p>
]]></content:encoded>
							<wfw:commentRss>https://www.edugeekjournal.com/2025/05/12/why-do-ai-cheerleaders-respond-to-critics-the-way-they-do-part-2/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
							</item>
	</channel>
</rss>