University Post
University of Copenhagen
Independent of management

Academic life

Studying with AI at the University of Copenhagen — what’s allowed?

Uni and AI — Artificial intelligence is increasingly shaping how students learn, take exams and navigate university life. This brings new opportunities — but also dilemmas that call for technical precision and academic reflection. Read on to learn when you're allowed to use AI — and when it is considered cheating.

Language models are here, there, and everywhere. Artificial intelligence is no longer a futuristic vision, but has rapidly become a fixed part of many people’s daily lives — as a kind of digital co-thinker, assistant, and typewriter. Technologies are developing at breakneck speed, and universities are forced to keep up.

At the University of Copenhagen (UCPH), it’s not just about understanding the technology, but about figuring out how to use it. What can AI contribute to teaching? What are the risks? And how do you make sure that artificial intelligence becomes an academic tool and not a shortcut to cheating?

Rules for using AI at UCPH

As a rule, you are permitted to use AI in assignments and exams where all tools are allowed — unless it is explicitly prohibited in the course description's exam regulations.

If AI is allowed, you must declare how you have used it — either in the methodology section or by filling out a template. The format may vary, so it is important that you familiarise yourself with what applies to your specific course and exam.

If a lecturer suspects AI-assisted cheating, it will be handled like any other case of exam misconduct.

It is considered cheating if you use AI where it is prohibited — or if you fail to declare your use of it.

UCPH recommends the use of the AI chatbot Microsoft Copilot Chat, as it has been deemed the most secure in terms of data protection. All students and staff have free access to the paid version via their UCPH login.

You can find all current rules, guidelines and declaration templates on UCPH intranet KUnet. It is your own responsibility as a student to stay up to date with the rules that apply to your subject and your exam regarding the use of AI.


According to Stefan Nordgaard, head of section in Digitalisation, Exams and Room Scheduling at UCPH, and Ruth Horak, senior consultant in Digitalisation, AI has now entered both classrooms and administrative systems. And although there is no one-size-fits-all solution, the university has a key objective: Students should be equipped for a reality in which AI is part of daily life.

»We need to ensure that, regardless of academic background, students are ready to meet the demands of the labour market,« says Stefan Nordgaard, and continues:

»Future employers will certainly expect graduates from UCPH to be able to use the latest digital tools, including AI.«

READ ALSO: AI has exploded among students: University of Copenhagen to now change the rules

It is particularly in teaching that AI is beginning to leave its mark, and lecturers at UCPH are being offered courses in digital literacy within AI.

»In this way, a lot is being done both to prepare the lecturers and to help them prepare students — by developing teaching practices and through rules and policies,« says Ruth Horak.

An academic tool — not a shortcut

According to UCPH, the aim is not to reject AI, but to use it with care. In some degree programmes, it is already taken for granted that students should be able to use AI as part of their academic work.

»It would be strange if future lawyers were unable to use tools that can retrieve precedents or compile legislative histories quickly — allowing them to spend more time with clients and less time ploughing through thousands of pages of documents,« says Stefan Nordgaard.

But precisely because AI can be so efficient, there is also a need to rethink core academic disciplines — especially writing.

»The entire process where academics express themselves in writing and think through writing is affected by this. That’s why it requires us to rethink, to some extent, classical academic outputs and what we want from them,« says Ruth Horak.

READ ALSO: New study: Students need more teaching in artificial intelligence

The response from UCPH has been to adopt a clear principle: AI should be understood and used within the framework of sound academic practice. The university has been inspired, among other things, by the European Commission’s values and guidelines for the responsible use of generative AI in education and research.

»We’ve flipped the discussion on its head by saying that, as a rule, the use of artificial intelligence is permitted in exams with open access to tools, because we believe students should learn how to use them. But of course, this must be done in a sound and transparent academic manner,« says Ruth Horak.

Transparency instead of AI detection tools

UCPH has chosen not to use automated detection tools to identify AI-assisted cheating. This is because, according to Stefan Nordgaard and Ruth Horak, the current tools are not accurate enough.

»Even the most precise tools still have false positives. So we’d risk having several hundred students wrongly accused of cheating every year — and we can’t accept that,« says Ruth Horak.

Tools and technologies may change, but sound academic practice must remain constant

Stefan Nordgaard, head of section, Digitalisation, Exams and Room Scheduling

To promote the ideals of sound academic practice, the central rule is that students must declare how they have used AI — for example, in a methodology section or by using a special declaration template. This could include idea development and structure, language assistance, or clarification or simplification of complex theories. The main point is to be transparent.

»We believe it is far more valuable for assessors to know how AI was used than to try to measure how much of an assignment was written by a robot,« says Stefan Nordgaard.

READ ALSO: Is AI a good study buddy? We asked students

The guiding principles, then, are transparency, responsibility, and academic reflection. But what if a lecturer still suspects cheating?

»We have prepared guidelines for both lecturers and heads of study on how to handle AI-related suspicions. It is based on observations, patterns, and written assessments. And if a case is opened, it will be treated like any other case of misconduct,« says Ruth Horak.

The latest figures show that in 2023 and 2024 there were six confirmed cases of AI-related cheating. In addition, there were three more cases of suspected AI misuse that did not lead to any rulings.

The rules depend on the exam

On paper, the rules are fairly simple: If all aids are allowed in an exam, you may use AI — as long as you declare it.

The use of artificial intelligence is only prohibited in such exams if this is explicitly stated in the course description's exam regulations.

»It may be prohibited in individual courses where it is believed that AI directly undermines the learning objectives,« says Ruth Horak.

»That’s why it’s incredibly important that students familiarise themselves with the rules for each individual exam,« she says.

UCPH is running various campaigns for students — including at the start of the semester — to spread awareness of the correct use of artificial intelligence in an academic context.

The central message is that the use of AI is not prohibited. But neither is it a free-for-all.

»The overarching principle that UCPH has chosen is that the use of artificial intelligence must take place within the framework of sound academic practice. Tools and technologies may change, but sound academic practice must remain constant,« says Stefan Nordgaard.

This article was first written in Danish and published on 19 August 2025. It has been translated into English and post-edited by Mike Young.
