Permitted aids for exams, how to write and cite in academic texts, and the use of AI

It is important that you know what the permitted aids for your exam/test are and that you only use these. Learn how to write academic texts and how to quote correctly from the sources you use.

Learn to use artificial intelligence (AI) to promote learning.

Available aids for exams at campus

It is important that you know what the permitted aids for your exam/test are and only use these.

  • Where do you find information about permitted aids for exams?
    • The course's web-based course description should state the permitted aids (aid code) for the course. See the list of codes below.
    • If the aid code says "specified aids", the aids should be described in Canvas, e.g. which printed notes or other written materials are permitted.
    • Permitted aids should also be stated in the exam questions. 


    During an exam at campus, the open internet is not permitted, but access to certain websites may be available, e.g. Lovdata Pro.

    Permitted aids for examinations at campus:

    • A1: no calculator, no other aids
    • A2: no calculator, other aids as specified
    • B1: calculator handed out, no other aids
    • B2: calculator handed out, other aids as specified
    • C1: all types of calculators, other aids as specified
  • What aids are permitted and what you may not bring

    Permitted aids

    • You must bring permitted aids to the campus-based examination premises yourself. This does not apply to examinations where NMBU provides the aids (calculators, collections of formulas, etc.).
    • You are responsible for ensuring that the aids you bring with you to campus-based examinations do not contain unauthorized notes.
    • For campus-based examinations, the aid 'all types of calculators' means calculators that do not contain files, are not connected to the internet, a power supply or a printer, do not communicate with other units, do not make a noise, and consist of a single object only.

    Non-permitted aids

    • You are not allowed to bring or have access to other aids than those specifically permitted for the examination in question.
    • Students are not permitted to share aids during the examination.
    • Any access to or use of mobile phones during a campus-based examination will be regarded as cheating. The same rules apply to other digital aids with communication capabilities. Exceptions are only made if the examination question paper or the course description specifies that aids containing communication equipment are permitted.
    • Students are not permitted to communicate with each other or other persons during an examination, unless communication has been specified as a permitted aid in the examination question paper or the course description.

    Inspection of aids

    For campus-based examinations where invigilators are used, all aids that the students bring with them will be individually inspected by the invigilators.

  • Dictionaries at exams at campus

    You may always bring a bilingual dictionary if the exam is held in a language other than your first language (mother tongue). That is to say: from your mother tongue to Norwegian or from your mother tongue to English.

    You may not bring an explanatory dictionary, i.e. one that defines words and expressions.

    The dictionary may not contain any of your own notes.

Available aids for home exams, assignments and the like

For home exams, assignments etc., all aids are permitted. An aid code is therefore not given for these assessments/courses.

  • What aids are permitted and what is not permitted?

    Students are not permitted to communicate or cooperate with each other or other persons during an examination, unless such communication has been specified as a permitted aid in the examination question paper or the course description.

    During your exam, you should show what you have learned by answering the exam questions in your own words and formulations. This is also important to keep in mind if you use notes that contain direct transcripts from lecture slides, joint notes shared with other students, and the like.

    NMBU permits the use of AI-based programs when completing assignments that are part of a course's compulsory work, unless it is explicitly stated in the course description that AI-based programs are not allowed. If you use AI, it is crucial that you familiarize yourself with its correct usage. Please refer to NMBU's specific guidelines in the section below.

    Tips!

    • It is important that you learn the correct use of sources and references. See the information on reference styles and literature lists, and book a tutorial in academic writing (green link buttons at the top of the page).
    • Write the answer in your own words. Do not "cut and paste" from others.
    • See NMBU's own guidelines for the use of AI.
    • If you have not been informed that collaboration/communication with others is allowed, then such collaboration will be considered cheating.

How to use artificial intelligence (AI)

Artificial intelligence (AI) is a useful tool that can be used for many purposes, including to promote learning. But before using these tools, you need to read and understand NMBU's own guidelines for the use of AI, and to understand the technology behind the AI tools.

  • NMBU's guidelines for the use of AI

    Guidelines for the use of AI 

    NMBU allows the use of Artificial Intelligence (AI)-based programs in the creation of assignments that are part of compulsory work or assessment in a course, provided it is not stated in the web-based course description that AI-based programs are prohibited.

    All AI use must comply with the following guidelines: 

    Quality Assurance of AI-Generated Information

    Students are responsible for ensuring that information generated using AI is reliable. They must adhere to academic norms of integrity, transparency, and accuracy by clearly indicating how and where AI tools were used. AI-generated texts should not be the sole source of information. Students must critically evaluate the results and ensure they are based on research or other relevant literature represented by actual sources in the text and reference list. 

    Documentation of AI Use in Assignments: 

    If AI is used, the student must provide an explanation of its use in the methodology chapter or section of the assignment. How detailed this explanation needs to be depends on whether AI was used as a writing aid or as a research method (a hypothetical example of such a statement follows the list below):

    • Proofreading falls under writing support, and a brief explanation of the program used (chatbot or other AI-based tools) and what the student asked the program to do with the text will suffice. The student is responsible for the final content after completing the proofreading process. Spell checking in Word and grammar checking programs, like Grammarly, do not fall under AI. However, Grammarly GO and other AI-based systems that allow users to generate (or re-generate) larger amounts of text must be documented. In courses where language skills are part of the assessment criteria, there may be restrictions on using AI for proofreading. Students are responsible for familiarizing themselves with faculty- or course-specific guidelines for AI (if such guidelines are available). These guidelines should be part of the course syllabus.   
    • If AI is used to process data or sources in relation to each other, create analytical frameworks, or develop procedures for an experiment/project and the like, it falls under research methodology. In such cases, a description must be provided detailing the purpose, the programs used, and the interaction process (between the user and the AI program) that forms the basis of the methods on which the project (or text) is based. Faculties can give more detailed guidelines on what is required in the documentation depending on the nature of the course.  
    • For courses where coding/programming is a part of the subject, the course coordinator must decide which AI programs are permitted based on the learning objectives. This must be specified in the course description. 
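
    As an illustration, a brief methodology statement for writing support might look like the following. This is a hypothetical example; the tool and the wording are invented and must be adapted to your own use:

    "ChatGPT (GPT-4o) was used to proofread the final draft of this assignment. The tool was asked to correct grammar and improve sentence flow without changing the content. All suggestions were reviewed manually, and I am responsible for the final text."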

    Sources and References in AI-Based Literature Searches: 

    AI tools can provide fictitious sources and/or refer to non-academic sources. This applies to both conventional chatbots (e.g., ChatGPT) and platforms for literature searches based on generative AI (e.g., Keenious). AI-based literature search programs can be used, but they do not have access to all relevant and up-to-date academic literature. Such programs, therefore, cannot stand alone but can be used alongside conventional forms of literature searches (e.g., database searches at the University Library). The student is responsible for ensuring the quality of literature suggested or generated by AI-driven programs. It is further recommended to use sources from reputable scientific journals and databases (e.g., Web of Science, Scopus, and Econlit).  

    Responsibility for Guidance and Academic Practice: 

    The supervisor or course instructor is responsible for addressing the use of AI with their students. Academic staff should be able to provide advice on constructive and critical use of AI that aligns with the course’s and NMBU’s guidelines and supports students in practicing good academic conduct. NMBU offers training materials and resources to support supervisors and course instructors in implementing these guidelines. 

    Regulations on Cheating: 

    Students are required to familiarize themselves with NMBU’s regulations on cheating. The regulations specify that exam submissions and assignments must be the student’s own independent work. All use of AI to generate text or academic content must be disclosed.  

    Failure to do so is considered cheating and may result in the cancellation of the exam and/or suspension from the university. 

    Terminology: 

    • Artificial Intelligence (AI): Technologies that mimic human intelligence to perform tasks and can improve based on data. 
    • AI Tools: Programs or platforms that use AI to generate text, analyze data, or perform other tasks. 
    • Generative AI: AI systems that can generate new content such as text, images, or audio based on training data. 

    The guidelines for AI must be read in conjunction with NMBU's regulations on plagiarism and cheating, where the following points are particularly relevant: 

    3.1 Cheating is any action aimed at providing the student with an unjustified academic result or an unfair advantage in the evaluation of academic performance. 

    3.2 a. Violations of examination regulations at NMBU can be considered cheating. 

    3.2 g. Plagiarism is considered cheating. Examples of plagiarism include reproducing or quoting from books, articles, websites, one's own or others' assignments, using images, graphs, etc., without proper citation, quotation marks, or any indication in the text/image/diagram showing where the material is sourced from. 

  • What is AI?

    Artificial intelligence (AI) consists of self-learning systems based on neural networks, where artificial neurons communicate with each other to solve various tasks. One purpose of AI tools is to identify patterns in large amounts of data. This type of AI program is well established and has been in use for several years; one example is Rikshospitalet's use of AI to analyze X-ray images for fractures.

    Large language models (LLMs) are a type of AI that has received a lot of attention in recent years. These models are trained on enormous amounts of text data and are essentially statistical tools that predict the next word in a given sequence. In other words, they apply a context-specific probability distribution over words (a toy illustration follows the list below). This enables them to generate human-like text, answer questions, and perform other language-related tasks. There are many such models (programs) on the market, each with its own strengths and weaknesses. The most modern models, referred to as multimodal, are capable of understanding and generating information from several different types of data, or "modalities": text, images, sound, video, and sensor data. Among the most widely used LLMs are:

    • ChatGPT (GPT-4o)
    • OpenAI o1 (preview)
    • Microsoft Copilot 
    • Google Gemini 
    • Claude 3.5 Sonnet 
    • Perplexity 
    • Llama 2 
    • Grok 
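
    To make the phrase "context-specific probability distribution over words" concrete, here is a minimal Python sketch. It is a toy bigram model with invented words and probabilities, not a real LLM, but the core mechanism is the same in principle: the next word is sampled from a probability distribution conditioned on the context.

    # A toy illustration of next-word prediction. The vocabulary and
    # probabilities are invented for illustration only; a real LLM learns
    # such distributions from enormous amounts of text and conditions on
    # far more context than a single previous word.

    import random

    # Probability of each next word, given the previous word (a "bigram" table).
    next_word_probs = {
        "economic": {"history": 0.5, "growth": 0.3, "theory": 0.2},
        "history": {"of": 0.6, "shows": 0.4},
    }

    def predict_next(prev_word: str) -> str:
        """Sample the next word from the distribution conditioned on prev_word."""
        dist = next_word_probs[prev_word]
        words, weights = zip(*dist.items())
        return random.choices(words, weights=weights, k=1)[0]

    print(predict_next("economic"))  # most often prints "history"

    This also hints at why such a model can assert things that are not true: it chooses words because they are probable, not because they are verified.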

    Small language models (SLMs) are also increasing in popularity. These models are trained on less, but carefully curated, data. They often perform as well—or better—than LLMs on certain tasks. They are also more sustainable as they are more resource efficient. There are many SLMs, but the most powerful are: 

    • Claude 3 Haiku
    • Llama 2 7B
    • Phi-2 and Orca
    • Stable Beluga 7B
    • XGen
    • Alpaca 7B
    • Google Gemini Nano

  • Accuracy - probability and bias

    It is in the nature of a language processing tool to generate incorrect information. It cannot distinguish right from wrong on its own and thus composes sentences based on probability, not judgment. 

    Here is an example: ChatGPT was asked what the most cited research article in economics of all time is. The answer was "A Theory of Economic History", written by Douglass North in 1969 and published in the Journal of Economic History. The only problem is that this article does not exist. Drawing on its training data, the model strings together the words that most often appear together given the question, and "A Theory of Economic History" is the result. The choice of author happens the same way: Douglass North is the author who has published the most on the subject, so probability dictates that he is the author.

    Factual errors ("hallucinations" in technical terms) are something that is being actively worked on, and their frequency has decreased significantly since the launch of ChatGPT in the fall of 2022. The fact that the models are now connected to the internet and can search for specific information is of great significance. However, hallucinations and low-quality generated responses still occur. Different providers have different solutions; Google's NotebookLM is a good example. The platform, based on Gemini, requires you to upload the sources you want it to base its responses on, and it can only generate text from what you have uploaded. This significantly reduces the occurrence of hallucinations and gives the user far greater control than a conventional chatbot can offer.

    Even if the generated information from a language model does not contain factual errors, one must also be aware of the possibility of plagiarism. There are several examples of language models "generating" information identical to already published material. Reuse of generated content can therefore result in plagiarism accusations.

    GPT and other language models also have built-in ethical guidelines. They primarily deal with offensive content and privacy, but they are also influenced by dominant political views. Language models can therefore have a built-in bias, or reproduce biased views represented in the training material. This affects their accuracy.

    A language model also has the ability to give you the answer you want, even if it does not represent the consensus. This is especially important to be aware of when using platforms aimed at academia and literature searches. If you ask to have text generated based on a specific issue, you will get an answer that supports your claim (including factual sources), but it is not at all certain that this represents the field's general position. Contradictory research is, in other words, not considered unless you specifically ask about it. The models also do not have access to all peer-reviewed scientific literature. The suggestions you get are based on the "library" that the provider has managed to build up. These libraries are often very incomplete and contain little contemporary scientific literature. Such platforms should therefore be regarded as a supplement to, not a replacement for, more conventional forms of information retrieval.

    A final problem is the use of AI for summarizing academic texts. An AI tool cannot necessarily weigh what is important against what is not. A generated summary can therefore give a misleading representation of the text's content. Several of the tools have added features that make them much more accurate and useful; nevertheless, it is important to quality-assure the content.

    Prompting

    When speaking with language models, it is important to be able to ask specific questions that contain a lot of context. This is called prompting, which can be explained as question formulations, instructions, or cues. Prompting is your way of giving the AI tool instructions on what you want it to do.

    It can be challenging to create prompts that provide the desired response. There is therefore often a need for adjustments, corrections, and follow-up questions in your prompts. The stronger your grasp of the subject matter, the higher the likelihood of producing useful prompts. The most important thing is not to ask the perfect question, but to experiment and test. In some cases, you might get more accurate results if you prompt in English. For such AI tools to be useful to you, it is important to view the interaction as a dialogue.
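
    As a minimal sketch of what a context-rich prompt can look like in practice, here is a hypothetical example using the OpenAI Python SDK. The model name, role description, and question are illustrative assumptions, not NMBU recommendations; the same structure (context, then a specific task, then the desired output) works in any chatbot interface.

    # A hypothetical prompting example (OpenAI Python SDK, v1.x).
    # The model name and prompt text are illustrative only; assumes the
    # OPENAI_API_KEY environment variable is set.

    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            # Context: tell the model who it is helping and with what.
            {"role": "system",
             "content": "You are a study assistant for a master's student in ecology."},
            # Task: a specific, context-rich question beats a vague one.
            {"role": "user",
             "content": ("I am writing a literature review on soil carbon in boreal "
                         "forests. List three counterarguments to the claim that "
                         "afforestation always increases soil carbon storage, and "
                         "suggest what kind of peer-reviewed sources could verify "
                         "each one.")},
        ],
    )

    print(response.choices[0].message.content)

    Follow-up questions and corrections are simply added as further entries in the messages list, which is what makes the interaction a dialogue rather than a single lookup.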

    References

    It is important to point out that GPT and similar tools cannot be referred to as a source. Nor can you make use of the sources the tool states, as these may not be real sources. This is because language processing tools respond by searching and combining elements from their own training data. If you use AI in the process of developing your own text, you should be aware of the following:

    • AI-generated text may contain errors or inaccuracies, or be misleading, so always verify the text against several other sources.
    • AI-generated text is not your own. If you use it in your work, you need to be open about which parts of the text are AI-generated and about how the text was generated and used in your work.
    • AI-generated text usually does not refer to sources, and the sources it does refer to are not necessarily real or relevant. To be an honest writer, you need to find, explore and reference real academic sources.
    • AI-generated text may reflect biases or prejudices from the training data. By building on these, you may contribute to reinforcing such biases and prejudices.
    • Do not use AI to write the text for you, but as a support in the writing process, for example to get ideas and improve your own text.

    Data storage and personal data

    The majority of the companies that own and develop the major language processing tools store user data in the US. They require you to log in by creating a password and registering a phone number and/or email address (alternatively via a Facebook or Microsoft account), but they also store metadata such as your internet history, browser type, and the content you search for and are exposed to when using the program. In addition, some of the language processing tools are trained further through your interaction with them: everything you write and all the answers you get are used to further develop the technology. Because of regulations on data storage outside the EU (GDPR), NMBU and NMBU employees cannot require students to use such tools as part of the teaching. However, this is something the tech companies are aware of and want to accommodate by changing their own data storage practices.