WE HAVE A tech-savvy 12yo in our home and he's facing restrictions on his use of a smartphone while on school grounds. Even though he doesn't bring a phone to school, he's watching with interest as the Irish Minister for Education Norma Foley writes a memo to Cabinet. The Minister wants a nationwide policy where parents are discouraged from buying smartphones for children of primary age.
While I agree with the need to educate parents about the addictive effects of smartphone usage, I believe the government should fund a media campaign that shows how young people might be better equipped for the real world by creating content instead of simply swiping through doom scrolls.
I also believe the anti-smartphone posturing is a smokescreen for the lack of resources in our schools. I'd offer €10,000 grants to school principals willing to run smartphone usage information evenings for parents. Those grants could be used to fund educational supports and resources that children need more than they need smartphones. Throughout my newsfeeds I read about schools unable to pay heating and electricity bills this winter without support from Norma Foley's Department of Education.
Any Government initiative that attempts to remove smartphones from school students will fail if the local school community, the parents, fails to support the bans. So everyone is ahead if the Department of Education gives money to schools to educate the community about the addictive nature of phones and the negative impact on personal development when phones are substituted for in-person conversations.
Natasha Singer investigated what happened in Florida when phones were banned on school campuses in Orlando.
For members of an extremely online generation, their activities were decidedly analog. Dozens sat in small groups, animatedly talking with one another. Others played pickleball on makeshift lunchtime courts. There was not a cellphone in sight — and that was no accident.

In May, Florida passed a law requiring public school districts to impose rules barring student cellphone use during class time. This fall, Orange County Public Schools — which includes Timber Creek High — went even further, barring students from using cellphones during the entire school day.
In interviews, a dozen Orange County parents and students all said they supported the no-phone rules during class. But they objected to their district’s stricter, daylong ban.
Ireland doesn't plan to impose day-long bans. In Florida, students have described the all-day ban as unfair and infantilizing.
“They expect us to take responsibility for our own choices,” said Sophia Ferrara, a 12th grader at Timber Creek who needs to use mobile devices during free periods to take online college classes. “But then they are taking away the ability for us to make a choice and to learn responsibility.”

Like many exasperated parents, public schools across the United States are adopting increasingly drastic measures to try to pry young people away from their cellphones. Tougher constraints are needed, lawmakers and district leaders argue, because rampant social media use during school is threatening students’ education, well-being and physical safety.
In some schools, young people have planned and filmed assaults on fellow students and then uploaded the videos to platforms like TikTok and Instagram. Teachers and principals warn that social apps like Snapchat have also become a major distraction, prompting some pupils to keep messaging their friends during class.
As a result, many individual districts — among them, South Portland, Maine, and Charlottesville City, Va. — have banned student cellphone use throughout the day. Now Florida has instituted a more comprehensive, statewide crackdown.
The new Florida law requires public schools to prohibit student cellphone use during instructional time and block students’ access to social media on district Wi-Fi. It also requires schools to teach students about “how social media manipulates behavior.”
Some Irish commentators believe Norma Foley's proposed ban is designed to protect young people from the grips of social media. Others believe bans on smartphones in school yards will lead to better social interactions.
Without a complementary information campaign, a nationwide ban on smartphone usage equates to State control of personal technology habits. And as the experience of public schools in Tampa shows, smartphone bans increase surveillance of students while hindering crucial communications for teenagers with family responsibilities or after-school jobs.
It is unclear how many other schools ban student cellphone use. Statistics from the U.S. Department of Education, published in 2021, reported that about 77 percent of schools prohibited nonacademic cellphone use during school hours.

“It was getting out of hand,” Ms. Rodriguez-Davis said, describing how students texted each other during class to arrange meetings in the bathroom, where they filmed dance videos. “I call them ‘Toilet TikToks.’”
The ban has made the atmosphere at Timber Creek both more pastoral and more carceral.
Mr. Wasko said students now make eye contact and respond when he greets them. Teachers said students seemed more engaged in class.
“Oh, I love it,” said Nikita McCaskill, a government teacher at Timber Creek. “Students are more talkative and more collaborative.”
I'm interested in hearing how our 12yo son and 16yo daughter weigh the cost of restrictions on their smartphone usage. As Natasha Singer has observed in the NY Times, "Such bans are upending the academic and social norms of a generation reared on cellphones." And if the ban extends to secondary schools, students will no longer use their phones to check class schedules during school, take photos of their projects in art class, find their friends at lunch, or add the phone numbers of new classmates to their contact lists. For some young people, losing access to their phone is like being placed into an isolation chamber.
[Bernie Goldbach teaches digital transformation with a five year old phone on the Clonmel Digital Campus of the Technological University of the Shannon.]
by Bernie Goldbach in Clonmel.
Image from The Emotion Machine.
MOST OF MY WORK with Artificial Intelligence has been in the field of Machine Learning. I teach students how to improve their online profiles so they enhance their prospects of gaining good jobs. And last semester, I introduced generative AI into classrooms to speed up the creation of responsive websites.
On several elite university campuses there are societies, such as Stanford's Club for Effective Altruism, that have funding from benefactors who want to examine ways to keep rogue AI at bay. I've gifted content from the Washington Post that explains these initiatives. Some of the deep thoughts and interest groups sound a bit cultish to me.
[Bernie Goldbach teaches digital transformation for the Technological University of the Shannon.]
by Nitasha Tiku
Extracted from The Washington Post, July 5, 2023
A billionaire-backed movement is recruiting college students to fight killer AI, which some see as the next Manhattan Project.
Paul Edwards, a Stanford University fellow who spent decades studying nuclear war and climate change, considers himself “an apocalypse guy.” So Edwards jumped at the chance in 2018 to help develop a freshman class on preventing human extinction.
Working with epidemiologist Steve Luby, a professor of medicine and infectious disease, the pair focused on three familiar threats to the species — global pandemics, extreme climate change and nuclear winter — along with a fourth, newer menace: advanced artificial intelligence.
On that last front, Edwards thought young people would be worried about immediate threats, like AI-powered surveillance, misinformation or autonomous weapons that target and kill without human intervention — problems he calls “ultraserious.” But he soon discovered that some students were more focused on a purely hypothetical risk: That AI could become as smart as humans and destroy mankind.
Science fiction has long contemplated rogue AI, from HAL 9000 to the Terminator’s Skynet. But in recent years, Silicon Valley has become enthralled by a distinct vision of how super-intelligence might go awry, derived from thought experiments at the fringes of tech culture. In these scenarios, AI isn’t necessarily sentient. Instead, it becomes fixated on a goal — even a mundane one, like making paper clips — and triggers human extinction to optimize its task.
To prevent this theoretical but cataclysmic outcome, mission-driven labs like DeepMind, OpenAI and Anthropic are racing to build a good kind of AI programmed not to lie, deceive or kill us. Meanwhile, donors such as Tesla CEO Elon Musk, disgraced FTX founder Sam Bankman-Fried, Skype founder Jaan Tallinn and ethereum co-founder Vitalik Buterin — as well as institutions like Open Philanthropy, a charitable organization started by billionaire Facebook co-founder Dustin Moskovitz — have worked to push doomsayers from the tech industry’s margins into the mainstream.
More recently, wealthy tech philanthropists have begun recruiting an army of elite college students to prioritize the fight against rogue AI over other threats. Open Philanthropy alone has funneled nearly half a billion dollars into developing a pipeline of talent to fight rogue AI, building a scaffolding of think tanks, YouTube channels, prize competitions, grants, research funding and scholarships — as well as a new fellowship that can pay student leaders as much as $80,000 a year, plus tens of thousands of dollars in expenses.
At Stanford, Open Philanthropy awarded Luby and Edwards more than $1.5 million in grants to launch the Stanford Existential Risk Initiative, which supports student research in the growing field known as “AI safety” or “AI alignment.” It also hosts an annual conference and sponsors a student group, one of dozens of AI safety clubs that Open Philanthropy has helped support in the past year at universities around the country.
Critics call the AI safety movement unscientific. They say its claims about existential risk can sound closer to a religion than research. And while the sci-fi narrative resonates with public fears about runaway AI, critics say it obsesses over one kind of catastrophe to the exclusion of many others.
“The conversation is just hijacked,” said Timnit Gebru, former co-lead of Ethical AI at Google.
Gebru and other AI ethicists say the movement has drawn attention away from existing harms — like racist algorithms that determine who gets a mortgage or AI models that scrape artists’ work without compensation — and drowned out calls for remedies. Other skeptics, like venture capitalist Marc Andreessen, are AI boosters who say that hyping such fears will impede the technology’s progress.
Open Philanthropy spokesperson Mike Levine said harms like algorithmic racism deserve a robust response. But he said those problems stem from the same root issue: AI systems not behaving as their programmers intended. The theoretical risks “were not garnering sufficient attention from others — in part because these issues were perceived as speculative,” Levine said in a statement. He compared the nonprofit’s AI focus to its work on pandemics, which also was regarded as theoretical until the coronavirus emerged.
The foundation began prioritizing existential risks around AI in 2016, according to a blog post by co-chief executive Holden Karnofsky, a former hedge funder whose wife and brother-in-law co-founded the AI start-up Anthropic and previously worked at OpenAI. At the time, Karnofsky wrote, there was little status or money to be gained by focusing on risks. So the nonprofit set out to build a pipeline of young people who would filter into top companies and agitate for change from the inside.
Colleges have been key to this growth strategy, serving as both a pathway to prestige and a recruiting ground for idealistic talent. Over the past year and a half, AI safety groups have cropped up on about 20 campuses in the United States and Europe — including Harvard, Georgia Tech, MIT, Columbia and New York University — many led by students financed by university fellowships.
The clubs train students in machine learning and help them find jobs in AI start-ups or one of the many nonprofit groups dedicated to AI safety.
Many of these newly minted student leaders view rogue AI as an urgent and neglected threat, potentially rivaling climate change in its ability to end human life. Many see advanced AI as the Manhattan Project of their generation. Among them is Gabriel Mukobi, 23, who graduated from Stanford in June and is transitioning into a master’s program for computer science. Mukobi helped organize a campus AI safety group last summer and dreams of making Stanford a hub for AI safety work. Despite the school’s ties to Silicon Valley, Mukobi said it lags behind nearby UC Berkeley, where younger faculty members research AI alignment, the term for embedding human ethics into AI systems.
“This just seems like a really, really important thing,” Mukobi said, “and I want to make it happen.”
When Mukobi first heard the theory that AI could eradicate humanity, he found it hard to believe. At the time, Mukobi was a sophomore on a gap year during the pandemic. Back then, he was concerned about animal welfare, promoting meat alternatives and ending animal agriculture.
But then Mukobi joined Stanford’s club for effective altruism, known as EA, a philosophical movement that advocates doing maximum good by calculating the expected value of charitable acts, like protecting the future from runaway AI. By 2022, AI capabilities were advancing all around him — wild developments that made those warnings seem prescient.
Last summer, he announced the Stanford AI Alignment group (SAIA) in a blog post with a diagram of a tree representing his plan. He’d recruit a broad group of students (the soil) and then “funnel” the most promising candidates (the roots) up through the pipeline (the trunk). To guard against the “reputational hazards” of toiling in a field some consider sketchy, Mukobi wrote, “we’ll prioritize students and avoid targeted outreach to unaligned AI professors.”
Among the reputational hazards of the AI safety movement is its association with an array of controversial figures and ideas, like EA, which is also known for recruiting ambitious young people on elite college campuses.
EA’s drive toward maximizing good initially meant convincing top graduates in rich countries to go into high-paying jobs, rather than public service, and donate their wealth to causes like buying mosquito nets to save lives in malaria-racked countries in Africa.
But from the start EA was intertwined with tech subcultures interested in futurism and rationalist thought. Over time, global poverty slid down the cause list, while rogue AI climbed toward the top. Extreme practitioners began to promote an idea called “longtermism,” prioritizing the lives of people potentially millions of years in the future, who might be a digitized version of human beings, over present-day suffering.
In the past year, EA has been beset by scandal, including the fall of Bankman-Fried, one of its largest donors. Another key figure, Oxford philosopher Nick Bostrom, whose 2014 bestseller “Superintelligence” is essential reading in EA circles, met public uproar when a decades-old diatribe about IQ surfaced in January.
“Blacks are more stupid than whites,” Bostrom wrote, calling the statement “logically correct,” then using the n-word in a hypothetical example of how his words could be misinterpreted as racist. Bostrom apologized for the slur but little else.
After reading Bostrom’s diatribe, SAIA stopped giving away copies of “Superintelligence.” Mukobi, who identifies as biracial, called the message “sus” but saw it as Bostrom’s failure — not the movement’s.
Mukobi did not mention EA or longtermism when he sent an email to Stanford’s student listservs in September touting his group’s student-led seminar on AI safety, which counted for course credit. Programming future AI systems to share human values could mean “an amazing world free from diseases, poverty, and suffering,” while failure could unleash “human extinction or our permanent disempowerment,” Mukobi wrote, offering free boba tea to anyone who attended the 30-minute intro.
Students who join the AI safety community sometimes get more than free boba. Just as EA conferences once meant traveling the world and having one-on-one meetings with wealthy, influential donors, Open Philanthropy’s new university fellowship offers a hefty direct deposit: undergraduate leaders receive as much as $80,000 a year, plus $14,500 for health insurance, and up to $100,000 a year to cover group expenses.
The movement has successfully influenced AI culture through social structures built around swapping ideas, said Shazeda Ahmed, a postdoctoral research associate at Princeton University’s Center for Information Technology Policy. Student leaders have access to a glut of resources from donor-sponsored organizations, including an “AI Safety Fundamentals” curriculum developed by an OpenAI employee.
Interested students join reading groups where they get free copies of books like “The Precipice,” and may spend hours reading the latest alignment papers, posting career advice on the Effective Altruism forum, or adjusting their P(doom), a subjective estimate of the probability that advanced AI will end badly. The grants, travel, leadership roles for inexperienced graduates and sponsored co-working spaces build a close-knit community.
Edwards discovered that shared online forums function like a form of peer review, with authors changing their original text in response to the comments.
“It’s really readable writing, which is great,” Edwards said, but it bypasses the precision of vetting ideas through experts. “There’s a kind of alternate universe where the academic world is being cut out.”
Edwards’s first book was on the military origins of AI and he recently served on the United Nations’ chief climate panel, leaving him too rooted in real-world science and politics to entertain the kind of dorm-room musings accepted at face value in the forums.
Could AI take over all the computers necessary to end humanity? “Not happening,” Edwards said. “Too many humans in the loop. And there will be for 20 or 30 years.”
Since the launch of ChatGPT in November, discussion of AI safety has exploded at a dizzying pace. Corporate labs that view advanced artificial intelligence as inevitable and want the social benefits to outweigh the risks are increasingly touting AI safety as the antidote to the worst feared outcomes.
At Stanford, Mukobi has tried to capitalize on the sudden interest.
After Yoshua Bengio, one of the “godfathers” of deep learning, signed an open letter in March urging the AI industry to hit pause, Mukobi sent another email to Stanford student listservs warning that AI safety was being eclipsed by rapid advances in the field. “Everyone” is “starting to notice some of the consequences,” he wrote, linking each word to a recent op-ed, tweet, Substack post, article or YouTube video warning about the perils of unaligned AI.
By then, SAIA had already begun its second set of student discussions on introductory and intermediate AI alignment, which 100 students have completed so far.
“You don’t get safety by default, you have to build it in — and nobody even knows how to do this yet,” he wrote.
In conversation, Mukobi is patient and more measured than in his email solicitations, cracking the occasional self-deprecating joke. When told that some consider the movement cultish, he said he understood the concerns. (Some EA literature also embraces nonbelievers. “You’re right to be skeptical of these claims,” says the homepage for Global Challenges Project, which hosts three-day expenses-paid workshops for students to explore existential risk reduction.)
Mukobi feels energized about the growing consensus that these risks are worth exploring. He heard students talking about AI safety in the halls of Gates, the computer science building, in May after Geoffrey Hinton, another “godfather” of AI, quit Google to warn about AI. By the end of the year, Mukobi thinks the subject could be a dinner-table topic, just like climate change or the war in Ukraine.
Luby, Edwards’s teaching partner for the class on human extinction, also seems to find these arguments persuasive. He had already rearranged the order of his AI lesson plans to help students see the imminent risks from AI. No one needs to “drink the EA Kool-Aid” to have genuine concerns, he said.
Edwards, on the other hand, still sees things like climate change as a bigger threat than rogue AI. But ChatGPT and the rapid release of AI models has convinced him that there should be room to think about AI safety.
Interest in the topic is also growing among Stanford faculty members, Edwards said. He noted that a new postdoctoral fellow will lead a class on alignment next semester in Stanford’s storied computer science department.
The course will not be taught by students or outside experts. Instead, he said, it “will be a regular Stanford class.”
[Portions of this blog post were extracted via a subscription-funded gift link from the technology section of The Washington Post.]
by Bernie Goldbach in Clonmel
I WORK ON a small university campus in County Tipperary and have major concerns about the Clonmel Sports Hub. I walked around the new amenity with Councillor Michael Murphy in order to point out specific areas worth consideration.
My son Dylan (12) feels stressed by the roaming gangs of teens that he sees jumping on the zip line and crawling along the top of the Sports Hub buildings. He met up with Tipperary County Councillor Michael Murphy to share his concerns.

During Councillor Murphy's discussion with university staff on the TUS-Clonmel campus, it emerged that a Memorandum of Understanding might be the best plan to ensure this €5 million amenity is handed over to a team that will be able to ensure its safe-keeping.
You can follow the development and community usage of the Clonmel Sports Hub by listening to the Little Bit of Tipp podcast on Spotify or on YouTube.
[Bernie Goldbach teaches creative media for business on the Clonmel Digital Campus.]
HIGH HEATING COSTS have knocked winter strawberries out of our shopping basket.
Keelings, the Irish fruit company, told the Sunday Times that "energy costs have resulted in a 30 per cent decrease in its raspberry production and are unsustainable for its strawberry crop". This means fewer Irish berries in our local supermarkets, and it feels like there are fewer berries in the standard packages.
The berries seem smaller at Martin's Fruit and Veg in Clonmel too. We've become accustomed to the down-sizing.
Keelings produces 70% of Ireland's raspberries--my daughter's favourite fruit. It costs more than ever before to grow fruit in the winter. Brian Moran, Keelings' Chief Financial Officer, estimates it costs Keelings €4 million annually to heat its strawberry crop. At that rate, Keelings will need to be on Ireland's Horticultural Exceptional Payment Scheme. "Keelings uses about 15 acres of heated glasshouses to produce strawberries between September and November. In 2020 and 2021, Keelings spent €760,000 and €1.64 million respectively on the heating of its strawberry crop," according to facts revealed by Laura Roddy in The Sunday Times.
Between 2019 and 2021, gas made up 5 per cent of all costs associated with fruit production, but today it accounts for 20 per cent of the bill. The absence of a heated glasshouse crop of strawberries will result in a production loss of €6 million for Keelings.
We've been fortunate when growing our own strawberry crop and we have two cats who are well-trained to swat slugs off our plants. So we plan for another summer strawberry bonanza and just reorient our expectations for red berries in Special K cereal.
Laura Roddy -- "Energy crisis puts berries in a squeeze" on the front page of Business & Money in The Sunday Times, February 19, 2023.
[Bernie Goldbach shops local. The berries in the top shot were purchased in England.]
by Bernie Goldbach after walking George's Dock
I'VE WALKED THE LANEWAYS between George's Dock and Custom House Quay and know the area where 49-year-old Mongolian Urantsetseg Tserendorj was murdered by a drunken, thieving young thug roaming the night on a bike. Urantsetseg had been walking home from the cleaning shift that was putting her two children through college. You can bet that the scumbag who killed her would never accept unskilled work for minimum wage during antisocial hours.
In Ireland, many Irish people simply won't accept work. I think the jobseeker's allowance for those aged under 25 is now €220 a week. You can earn €100 more than that by taking jobs I see advertised in several takeaways and shops, but you have to get on your bike before the sun comes up, or you have to plan on walking home late at night, as Urantsetseg did before she died.
During a Laois County Council meeting, Fine Gael councillor Aisling Moran said, "We need to look after the people who are getting up for work." Aisling made the comments while making the case for a working man who had lost his home but was told by the Council Housing Office that he could not qualify for council accommodation because he was earning too much. "It's scandalous," the councillor said, "that we would treat working families like that."
I think it's perverted that people who have never worked for generations should feel entitled to government housing. I've lived in Ireland for more than 20 years and I have encountered young men who have never worked, whose fathers have never worked, and whose grandparents have figured out the system to ensure that nobody in their family line has to work. And why should they worry? They can stay at home, buy new appliances, and enjoy short breaks to relieve their stress while other people are on the path before 8AM, headed to work, where they pay the tax that ensures others have a warm house and a lifestyle protected by generous social welfare supports.
Who can blame people for quitting their jobs and joining the ranks of the unemployed if the most direct way of getting a roof over your head is by remaining actively unemployed?
And who would be bold enough to break free of the feral youth culture where you can top up your drinks budget by attacking people walking home from cleaning pub toilets at night?
I will always think of Urantsetseg Tserendorj whenever I see a hooded teenaged boy on his mountain bike or e-scooter. And I will hope Irish county councillors have the strength and fortitude to ensure the 21st century social vision restores the working class to a place of pride.
[Bernie Goldbach is an American with Irish and German roots who grew up in a working class family. The Laois County Council appears to be constraining public debate about this topic by actively posturing to prevent the live streaming of its council meetings.]
I'M ADDING CONTENT from today's Irish Times opinion column because the commentary by Fintan O'Toole helps explain the demise of Twitter, a social networking site I've used since 2006.
It used to be cocaine that was, as Robin Williams had it, God’s way of telling you that you have too much money. Now it’s buying Twitter for $44 billion and setting fire to it.
But perhaps there is some continuity here. Both drugs heighten the sense of invincibility you feel while you are making a complete fool of yourself.
Elon Musk has made himself the idiot savant of our times, half genius, all man-child. Yet he has, in the process, illuminated two important truths about contemporary culture.
The first is that there really is such a thing as having too much money. Right-wing economics is based on the belief that the super-rich will ultimately use their vast wealth for the common good. Musk seems to have set out to disprove this thesis by a spectacular demonstration of the wanton wastefulness of excessive riches.
We’re living in an age of grotesque inequality in which a tiny number of people have cornered a vast share of the wealth. We’re no longer even talking about the top 1 per cent, or the top 0.1 per cent. Or even the top 0.01 per cent.
Musk belongs, rather, to the top 0.001 per cent in the United States. That’s 2,400 people who had (in 2016, the latest year for which there are such detailed calculations) $1,631,821,000,000 between them.
The theory is that these are the “wealth-creators” whose accumulation of such astronomical riches somehow benefits all of humanity. They are the Medicis or the Carnegies of our time.
Agent Musk has set out, presumably on behalf of the worldwide communist conspiracy, to explode such notions. For he has shown – like a global version of our own Seán Quinn – that there is a level of accumulation beyond which “wealth-creators” become wealth-destroyers.
It may well be true that, up to a point, the profit motive drives innovation. But profit gets boring. Too much of it leads to satiety and saturation.
A much more potent and primitive force takes over: megalomania. Beyond the satisfaction of basic needs, beyond security and comfort, there is the search for status, the need to be number one.
And this drive is unbounded. The manic ego knows no limits. Its hunger for domination is insatiable. It eventually takes the shape of an ouroboros, the ancient symbol of a serpent eating its own tail.
It becomes, even by the very narrow measure of money, destructive. Musk has managed not just to incinerate his own investment in Twitter but also to see the value of his main enterprise, Tesla, fall by half.
The top 10 investors in Tesla have alone lost $133 billion since Twitter’s board accepted Musk’s buyout in April. It must surely be dawning on them by now that, even for lovers of buccaneering capitalism, the madness induced by excessive wealth corrodes the very thing it seeks.
We also have to thank Musk for exposing the myth of libertarian devotion to free speech. His fairytale transformation from self-declared “free speech absolutist” to whiny little snowflake to authoritarian censor is this season’s premature Christmas panto in which the whole Twitter sphere gets to call out: “Oh no he isn’t.”
It has long been obvious that the libertarian commitment to free expression is mostly one-sided: I have absolute freedom to say what I like but if you answer back, you are oppressing me. For the over-privileged (and yes they are still nearly all rich white men) “free speech” really means “Shut up and listen to me.”
Yet no one has managed to make this point so clearly and memorably as Agent Musk. What he got for his $44 billion is a big red card to wave at his enemies and rivals and send them off the pitch.
Presumably he got jealous of Antonio Mateu Lahoz, the referee who issued 15 yellow cards in the Argentina-Netherlands match at the World Cup. Musk needed to prove he could be an even more ridiculous martinet. It does not seem to have occurred to him that if he keeps sending people off there will be no one left to play the game.
Musk banned an account that uses public information to track his private jet and those of Russian oligarchs. Then he banned journalists, some (like Donie O’Sullivan) more or less randomly, some (like Linette Lopez) because they have been reporting critically on his business practices. Then he banned links to the rival social media platform Mastodon.
This is tantrum capitalism. Any notion of making Twitter a profitable business comes a very distant second to the instant self-gratification of banishing the insolent and the impertinent from the perfect realm of Musk’s digital Freedonia.
Rampant egomania is not creation. It is not even, in the jargon of neoliberalism, creative destruction. It is merely destruction.
The (not unreasonable) criticism of Twitter used to be that it is an echo chamber. Yet, in the Greek myth, Echo was ultimately destroyed by Narcissus, who fell in love with his own reflection. At the end of the story, the echo and the narcissist both withered away and died.
Musk has provided 21st-century feral capitalism with its own moral tale of self-destruction. The narcissism that springs from excess wealth kills the thing it loves most: itself.
[Opinion column written by Fintan O'Toole, The Irish Times, December 20, 2022. Links inserted by Bernie Goldbach to aid in classroom discussion with a class of students studying digital transformation. Paper copies of this opinion piece have been archived in the Clonmel library of the Technological University of the Shannon.]
EVEN AFTER THE IRISH government rolled out yet another plan to improve the prospects for getting on the property ladder, Ireland still lacks an adequate supply of family homes and the Central Bank still uses an unrealistic algorithm that stifles mortgage approvals.
I pass by dozens of unused properties in Clonmel on my way to our university campus. I'm sure there are reasons why small properties in the town are vacant. I just don't understand them.
I also know there are tracts of land that builders bought with the intention of building homes once they could command top selling prices for new properties. I think those tracts that are zoned residential should be hit with a site value tax. But I don't hear any politician advocating that measure because it would mean levying a tax on existing property owners today. And elected politicians are also landlords, so they would be paying for more than one property tax.
There need to be more homes constructed in Ireland. It means homes need to be built all the time to serve the poor, the well-off, my young graduates, and older couples who might be enticed to sell their four-bedroom home now that their children have left the nest.
Knowing all this, I can see 2021 is the worst ever time to buy property. Tens of thousands of first-time buyers are caught with reduced options and high payments. For people who are trying to get started as homeowners, new mortgage approvals have little value because there is no supply. What I can see being offered in South Tipperary is of such low quality that many new buyers will have to pay more money than they ever imagined, to live in a place they never really desired, and they may be stuck with it through the middle of the century.
[Bernie Goldbach is an American who has lived in County Tipperary longer than he lived in his home county of Lancaster, Pennsylvania.]
THERE IS A LOT of cross-talk among some of my friends, some in the word cloud above, when they're put into Facebook jail or when friends of theirs are deplatformed. I wonder if you've ever fallen into any of those spaces? This is something I've been considering since 2008. [1]
Facebook, Twitter, Instagram, and Amazon run their own internal processes to adjudicate disputes about speech, accommodation, meals, commerce, elections, and reputation. When Google is warned about defamatory content in top search results, Google weighs the protection of one person’s public image and another’s profits or speech. Amazon routinely weighs in when product reviews flame up between consumers and third-party merchants about defective or counterfeit items. On Trip Advisor or Google Local, some small businesses have to lay off employees or cease trading when reviews get heated.
I teach a Law module at the university level that reviews the processes that the largest social networks use to resolve disputes. Behind the scenes sit credit card companies who can rule on disputed charges between a merchant and a consumer. In the case of consumer spending disputes, European and United States federal law offers frameworks for timely notices, reasonable investigations, and other procedural minimums. But the large social network platforms can set their own discretionary standards. During COVID-19 and in the aftermath of the 2020 American presidential election, Facebook established an independent oversight board that can overrule content moderation decisions. But none of the social audio apps I use have similar procedures--and why would they?
I wonder if you've been caught in the crosshairs and been put in Facebook jail? Have you received a red card for a copyright violation? Have you been blocked by people because they don't agree with your perspective? If we continue to engage on social media platforms we need to trust the methods they use to resolve disputes. Perhaps legal standards, like the ones used in the financial services sector, are now needed to ensure civility in the information age. As Rory Van Loo writes in the University of Chicago Law Review, "The procedures would aim to improve the administration of justice through public accountability and separation of at least one of platforms’ executive, legislative, and judicial powers." I'm interested in following these discussions and hope some sort of international standard applies to public discourse online.
1. Bernie Goldbach -- "Social Media Plumbers" on InsideView, 2008.
2. Rory Van Loo, "Federal Rules of Platform Procedure," 88 University of Chicago Law Review 829 (2021). Available at: https://scholarship.law.bu.edu/faculty_scholarship/905
[Bernie Goldbach teaches creative media for business on the Clonmel Digital Campus of the Technological University of the Shannon.]