University Post
University of Copenhagen
Independent of management

Opinion

Cut the 'AI' bullshit, UCPH

Illusion — Why do we keep believing that AI will solve the climate crisis (which it is facilitating), get rid of poverty (on which it is heavily relying), and unleash the full potential of human creativity (which it is undermining)?

OPINION ON THE UNIVERSITY POST

This is a featured comment/opinion piece. It expresses the author’s own opinion.

We encourage everyone to read the whole piece before commenting on social media, so that the contributions remain constructive.

Disagreement is good, but remember to uphold a civil and respectful tone.

Before the summer holidays, the University Post posted a call to the University of Copenhagen to more tightly integrate generative AI in teaching and the »university’s daily life«. The well-intentioned advice came from the researchers behind a study of students’ AI usage, which showed that the majority of students at SAMF do not use large language models. Thus, it was understood, the university is ostensibly failing to prepare the students for the »labour market of the real world«.

READ ALSO: New study: Students need more teaching in artificial intelligence

The underlying premise in the article is that because AI seems to be everywhere these days, it should be embraced by educators and students, who must »realize« that they should »exploit the potential« rather than »obstruct the trend«.

In response to this, I would like to offer a few reflections on the real and the imagined, the present and the future, and the responsibility of the university and of the students.

A different »real world« than today

The aspiration to prepare students for the real world is commendable, indeed vital. But when encountering such calls, we should be mindful of what world exactly we are being invited to consider real.

Again and again, a future is being substituted for the present

The »real world« of which the article speaks is notably not of today, but of tomorrow. The article talks about »the reality of society and the labour market that [the students] will meet« even though the students currently enrolled in a study programme are, by definition, going to enter a different labour market from the one of today.

The article then goes on to predict that future academics will have no choice but to use AI extensively. Again and again, a future is being substituted for the present, someone’s imagination for reality.

Illusion of inevitability

This substitution of future for present tense manufactures consent in two ways.

First, it removes the burden of proof from anyone who makes a statement about the benefits of AI, no matter how vague and grandiose. The AI revolution is upon us, we are being told. Oh, you haven’t already found yourself in a more just and enjoyable world? But of course, that’s because we’re in the midst of a transition, and so you may have to wait just a little longer to really feel its impact!

READ ALSO: The future is now: UCPH softens up on AI rules

Second, this blurring of lines creates an illusion of inevitability. We may lack evidence that AI is a force for good – but that doesn’t matter, because opposing its ever-broader deployment is wasted effort.

This preemptively brushes aside any possible criticism of the technology: whether or not this is a future we want, the argument goes, it is the future we are going to get. We are being led to believe that interrogating whether this is a desirable (or, indeed, plausible) future is futile and even counterproductive.

AI pollutes, divides and exploits

Back in 2020, Microsoft committed to becoming carbon negative by 2030. Since then, the company has instead increased its emissions by 30 percent, largely due to new data centres used to run generative AI models (with other companies and governments, including the Danish government, following suit). However, we needn’t worry because, as Bill Gates baselessly claimed in a recent interview: »AI will pay for itself«.

The algorithms are trained by stealing creative and scholarly work

We are asked to ignore the fact that the push for mass adoption of AI is fueled by immense harm to the environment. That the hardware on which AI runs relies on extraction of conflict minerals by miners trapped in modern-day slavery. That the algorithms are trained by stealing creative and scholarly work and by exploiting a vast global underclass of ghost workers tasked with helping finetune these models under unfair, often traumatizing conditions.

And what are we offered in return? A technology with such dubious utility and low reliability that its outputs are referred to as »soft bullshit« by academics, and attempts to shoehorn it into much simpler and lower-stakes contexts than education (for example, ordering a burger) are being abandoned after massive failures. Even Wall Street firms are growing tired of unsubstantiated claims that AI is cost-effective or even just meaningfully useful.

Yet time and time again we are asked to ignore all these present harms and misfires, because the future in which AI has solved the climate crisis (which it is facilitating), done away with poverty (on which it is heavily relying) and unleashed the full potential of human creativity (which it is undermining) is inevitable and ever so close.

UCPH should do something completely different

The university has an obligation to interrogate the proposition that a world in which AI is widely used is desirable or inevitable. We don’t need to cheer for a vision of tomorrow in which scientists feel comfortable not personally reading the articles their peers have written, and students are not expected to gain insight by wrestling with complex concepts: a world in which creative and knowledge work is delegated to a mindless algorithm.

READ ALSO: Is AI a good study buddy? We asked students

The real world is what we make it. It is our responsibility as educators to make sure our students remember this and actively participate in deciding how to best shape a common future.

Is the future we want one where we’re all drowning in ChatGPT’s soft bullshit?

As Richard Shaull writes in the foreword to Paulo Freire’s Pedagogy of the Oppressed: »There is no such thing as a neutral educational process. Education either functions as an instrument that is used to facilitate the integration of the younger generation into the logic of the present system and bring about conformity to it, or it becomes ‘the practice of freedom’, the means by which [people] deal critically and creatively with reality and discover how to participate in the transformation of their world.«

By insisting that the future is predetermined and that the best we can do is accept whatever product is pitched to us next by the company with the highest market cap, the university is betraying its responsibility to enable its students to perceive themselves as subjects capable of affecting the world and of thinking critically about it.

To avoid falling into this trap, the university and the students should be asking this: Is the future we want one where we’re all drowning in ChatGPT’s soft bullshit? Or does our imagination allow for any different ‘real worlds’?
