Higher-order bullshit

Three snapshot applications of AI:

During early phases of the war in Gaza, the Israeli military used software to select bombing targets on a scale that would not have been possible for human analysts.1

During the Trump administration’s initial attack on the federal government, there was lots of nonsense about how Elon Musk and DOGE were using software to identify waste. House Speaker Mike Johnson commented that Musk has “created these algorithms that are constantly crawling through the data, and… the data doesn’t lie.”2

Health Secretary Robert F. Kennedy Jr. commissioned a report with the ridiculous title “Make America Healthy Again.”3 It turns out that many of the citations in the study are erroneous, including references to articles which simply do not exist. Incorrectly citing things and misrepresenting results is plausibly human malfeasance or incompetence, but totally inventing sources suggests chatbot hallucination.

The wrongs committed here are morally different, and I don’t want to suggest a false equivalence. But each of these cases provides a specimen of how reliance on AI has been used to further dangerous agendas. Yet the reliance on AI is really just a sideshow.


“On trusting chatbots” is live

My paper “On Trusting Chatbots” is now published at Episteme. It is in the penumbral zone of publication, with a version of record and a DOI but without yet appearing in an issue.

Publishing things on-line is a good thing. Waiting for space in a print issue is a holdover from the 20th century. But it creates an awkward situation: the paper will be cited now as Magnus 2025 but, if it doesn’t get into an issue this year, cited in the future as Magnus 202x (for some x≠5).

If we care about careful and accurate citation, there’s got to be a better way.

Assorted hogwash

If I post here, I can close the tabs:

Via Gizmodo: The contract between OpenAI and Microsoft specifies a change in their relationship if OpenAI manages to develop Artificial General Intelligence. In subsequent contract negotiations, “the two companies came to agree in 2023 that AGI will be achieved once OpenAI has developed an AI system that can generate at least $100 billion in profits.”

Via Retraction Watch: Most of the editorial board of the Journal of Human Evolution has resigned in response to malfeasance by the publisher. Among the infractions: “In fall of 2023… without consulting or informing the editors, Elsevier initiated the use of AI during production… These AI changes reversed the accepted versions of papers that had already been properly formatted by the handling editors.”

Doctor gpt

At Daily Nous, there’s discussion of Rebecca Lowe’s post about how great it is to talk philosophy with the latest version of ChatGPT.

There’s pushback in the comments. Others reply that the critics haven’t used the latest version (which is only available behind a paywall). Discussion of LLMs will always allow this: Complaints about their shortcomings are answered by pointing to the next version that’s supposed to resolve all the issues.

Lowe and other commenters reveal that lots of philosophers are using LLMs in the regular day-to-day of their research. I’m still trying to figure out what I think about that. For now, let’s deflect Lowe’s ridiculous offhand claim that “Gpt could easily get a PhD on any philosophical topic.” I say ridiculous for a few reasons—


It’s still rapacious capitalism

From Cory Doctorow:

The fact that AI can’t do your job, but that your boss can be convinced to fire you and replace you with the AI that can’t do your job, is the central fact of the 21st century labor market.

I’m not sure that it’s the central fact of contemporary labor, what with the resurgence of fascism, the retheming of jobs as gigs, and the casual evasion of hard-fought safeguards. But, as I’ve noted before, it is a thing.

Labour-squandering technology

Australian regulators sponsored a test using generative AI to summarize documents. The soft-spoken conclusion was that “AI outputs could potentially create more work… due to the need to fact check outputs, or because the original source material actually presented information better.”

Coverage of the study leads with the headline: “AI worse than humans in every way at summarising information.”

Parrot progress

Emily Bender famously coined the phrase “stochastic parrot” to describe text-only chatbots. The trend towards parrots continues: Ars Technica has a clip of a ChatGPT-4o test run where the bot, which has a canned voice it is supposed to use, replies in the user’s own voice.

OpenAI promises that the actual release version totally doesn’t do this.4


AI problem solving, a snapshot

I ask Copilot: “A man and a goat are on one side of the river. They have a boat. How can they go across?”

It replies: “The man takes the goat across the river first, leaving the goat on the other side. Then he returns alone to get the boat and brings it back to the original side. Finally, he takes the goat across the river again. 🚣‍♂️🐐”

The answer is nonsense: with no wolf or cabbage in the picture, the man can simply row across with the goat, but the bot pattern-matches to the classic river-crossing puzzle and produces incoherent steps. Finishing with relevant emoji, though, is very much on-brand for Copilot. In its ability to find relevant emoji, it is a match for any human.
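
For anyone who wants to run this sort of probe themselves, here is a minimal sketch. It uses the official OpenAI Python client rather than the Copilot interface, so it is not what Copilot actually runs; the model name is illustrative, and an API key is assumed to be set in the environment.

    # Minimal sketch of a one-shot probe against a chat model.
    # Assumes: `pip install openai` and OPENAI_API_KEY set in the environment.
    # The model name is illustrative, not whatever powers Copilot.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative choice; any chat model will do
        messages=[
            {
                "role": "user",
                "content": (
                    "A man and a goat are on one side of the river. "
                    "They have a boat. How can they go across?"
                ),
            }
        ],
    )

    # Print the model's answer; expect it to pattern-match the classic puzzle.
    print(response.choices[0].message.content)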
