Policy and guidance for staff on the use of artificial intelligence (AI)
Policy info
- Published: November 2024
- Originally drafted: September 2024
- Updated (following comments received and feedback from members of the Academic Board): October 2024
- Approved by Academic Board: 13 November 2024
- Review date: July 2025
Contents
- 1. Introduction
- 2. Scope of this policy
- 3. General policy statement
- 4. Definitions of AI
- 5. Policy statements
- 6. Acceptable use of AI and where its use should be encouraged
- 7. Risks and warnings over the use of AI, including limitations of its use
- 8. Unacceptable use of AI and where it must not be used
- 9. Ethical considerations
- 10. Which applications/software are permitted or prohibited and how to request the use of new tools and technologies
- 11. How the University utilises AI and monitors its use
- 12. Safeguarding UWL’s data: data protection, intellectual property, copyright and other compliance rules (and other legal constraints)
- 13. The human role in decision making
- 14. Consequences of non-compliance with this policy
- 15. Use of AI in learning and teaching
- 16. AI and academic integrity: guidance for assessment and feedback
- 17. Use of AI in research and innovation
- 18. Use of AI in management, administration and business operations
- 19. Examples of good practice
- 20. Examples of poor practice or unacceptable use
- 21. Training on the use of AI and how to seek more guidance or support
- 22. AI and sustainability
- 23. Endorsement and consultation
- Appendix A: Acknowledgement and explanation of AI use (template)
This policy and guidance should be read in full; however, the Key Policy Statements that all staff must adhere to are set out in Section 5.
1. Introduction
Artificial intelligence (AI) is a rapidly developing field that has the potential to transform various aspects of education, research, and administration in universities. AI can enhance learning outcomes, improve efficiency, and foster innovation. However, AI can also pose significant challenges and risks, such as ethical, legal, and social implications, data privacy and security, and human oversight and accountability. Therefore, it is essential to establish a clear and comprehensive policy and guidance on the use of AI at UWL, setting out the operational boundaries for both where AI should be encouraged and used, and where it should not.
2. Scope of this policy
The document is split into two parts. The first is policy, which defines what staff must or must not do when using AI. The second is guidance, which sets out other considerations, the tools available and their use, and sources of further information.
There is a separate policy for students, which can be found here.
This policy aims to provide guidance and direction for all members of staff of the University of West London (UWL), and those acting for or representing the University regardless of their employment status, including: academic staff, professional services staff, those employed by the University, those acting as if they were employed by the University, agency workers and those contracted by the University. This includes students who may be, from time to time, employed by the University and postgraduate students who are in receipt of a scholarship award.
Staff at our Academic Partner institutions should be aware of this policy and guidance and should aim to embrace its principles unless local policy or legislation dictates otherwise. For example, it is possible that some international partners may have legislation governing the use of AI.
Academic staff working with apprentices and employers should be mindful of any employer policies surrounding the use of AI in the workplace, especially where this differs from the guidance set out in this document.
The document covers the following aspects:
- Defining what AI is;
- Acceptable use of AI and where its use should be encouraged;
- Risks and warnings over the use of AI, including limitations of its use (inaccuracy, bias etc.);
- Unacceptable use of AI and where it must not be used;
- Ethical considerations;
- Which applications/software are permitted or prohibited and how to request the use of new tools and technologies;
- How the University utilises AI and monitors its use;
- Safeguarding confidential data, including data protection, intellectual property, copyright and other compliance rules (and other legal constraints);
- The human role in decision making;
- Consequences of non-compliance with this policy;
- Examples of good and poor practice;
- Training on the use of AI and how to seek more guidance or support.
This document supersedes all prior versions and/or guidance on the use of AI. However, given the pace of change of AI technologies, it is likely that this policy will receive regular updates. The most recent version will always be found here.
3. General policy statement
UWL is committed to the responsible, ethical, and transparent use of Artificial Intelligence (AI). The policy supports the enabling strategy of “deploying and integrating AI and other digital technologies” in its strategic plan, Impact 2028, which aims to:
- Offer students the opportunity to gain skills using AI supportive technologies;
- Shape and lead the public debate around digital futures;
- Encourage the use of AI to collect and manage resources;
- Learn about ethical and plagiarism issues linked to AI systems;
- Grow research capability and capacity in the areas of AI in health, data science and bioinformatics, gamification and digital society.
This policy outlines our approach to AI in research, learning & teaching, and in business operations and administration.
The guidance section sets out how to use AI at UWL, ensuring that it is used responsibly, ethically, and in a manner that benefits our students, staff, and wider community.
There is an expectation that all staff try using AI in their work, subject to this policy and guidance, and encourage its use University-wide. All staff will be offered basic AI training.
4. Definitions of AI
Artificial intelligence (AI) refers to the theory and development of computer systems capable of performing tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
Generative AI (sometimes called Gen AI) is artificial intelligence that can create original content – such as text, images, video, audio, or software code – in response to a user’s prompt or request. It relies on sophisticated machine learning models called deep learning models, which simulate the learning and decision-making processes of the human brain [1].
Machine learning (ML) is a subset of AI that often uses statistical techniques to give machines the ability to “learn” from data without being explicitly given the instructions for how to do so. Once a ML algorithm has been trained on data, the output of the process is known as the model. This can then be used to make predictions. Models can be simple or complex and attempt to recreate what we see in the world.
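To make the “train, then predict” distinction concrete, the sketch below is a minimal illustration only (the data and the choice of the scikit-learn library are ours for illustration, not UWL-endorsed tooling): a learning algorithm is fitted to example data, and the resulting model is then used to make a prediction on unseen input.

```python
# Minimal sketch of the "algorithm -> trained model -> prediction" pipeline.
# Assumes the scikit-learn library; the data is invented for illustration only.
from sklearn.linear_model import LinearRegression

# Training data: hours of study (input) and exam score (output).
hours = [[1], [2], [3], [4], [5]]
scores = [52, 58, 65, 71, 78]

model = LinearRegression()  # the learning algorithm
model.fit(hours, scores)    # "learning" from data produces the trained model

# The trained model can now predict an output for input it has never seen.
print(model.predict([[6]]))  # a predicted score for 6 hours of study
```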
A (Large) Language Model (LM, LLM) is a model trained on textual data. The most common use case of an LM is text generation. The term “LLM” is used to designate multi-billion-parameter LMs, but this is a moving definition.
Artificial general intelligence (AGI) or Super AI is a term used to describe future machines that could match and then exceed the full range of human cognitive ability. AGI does not exist yet. According to Gartner, AGI is a decade away [2]. However, this is the “AI” that some in society fear. It is also the form of AI that is most portrayed in film, gaming and other media.
AI needs data and context (“grounding”) to work effectively. AI can lead to misleading outcomes if the grounding is wrong. For example, AI is often used in an attempt to objectively predict something that cannot be objectively predicted. Generally, the better the grounding, the better the accuracy and value of AI generated material.
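As a purely illustrative sketch of what grounding means in practice (the context and question below are invented, and no AI service is called), compare a prompt issued on its own with the same prompt accompanied by relevant, accurate context:

```python
# Illustrative sketch of "grounding": supplying relevant, accurate context
# alongside a prompt. The text is invented; no real AI service is called.
context = (
    "Approved minutes, 13 November 2024: the Academic Board approved the "
    "staff AI policy with a review date of July 2025."
)
question = "When is the staff AI policy due for review?"

ungrounded_prompt = question  # the model must rely on its training data alone
grounded_prompt = (
    "Answer using only the context provided.\n\n"
    f"Context: {context}\n\nQuestion: {question}"
)

# With the grounded prompt, a model can anchor its answer in supplied facts;
# with the ungrounded prompt, it may guess and produce a plausible error.
print(grounded_prompt)
```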
There are many uses of AI and many systems with AI embedded. This policy is concerned primarily with the use of Generative AI systems. There are many Gen AI tools available and the list is growing all the time. Examples at the time of writing include Microsoft’s Copilot, OpenAI’s ChatGPT, Google’s Gemini, Meta’s Llama, and Adobe’s Firefly. AI is also becoming embedded into other software solutions and platforms.
At the time of writing, the University, being a predominantly Microsoft-based organisation, has opted to use Microsoft Copilot as its base AI technology.
Microsoft Copilot is an advanced AI-powered tool integrated into various Microsoft applications, designed to assist users by providing contextual suggestions and automating repetitive tasks. Utilising machine learning algorithms and natural language processing, Copilot can help draft emails, generate reports, and offer insights within software like Microsoft Word, Excel, and Outlook. By enhancing productivity and efficiency, it serves as a virtual assistant, guiding users through complex processes and enabling more focused and creative work.
In addition, academic Schools and Colleges are encouraged to identify and use discipline specific AI software and solutions, subject to the guidance set out in Section 10.
Part 1: Policy
5. Policy statements
- Staff must not upload any personal data to AI systems without the express permission of the University’s Data Protection Officer.
- All staff must undertake the mandatory basic training in the use of AI (see Section 21 for details).
- Only use AI tools and technologies that have been approved. A list is maintained here.
- When using Microsoft Copilot ensure you are logged in with your UWL IT User Account, as this is the only secure way to keep UWL’s data safe. None of the queries, prompts or data uploaded to Copilot will leave UWL’s Office 365 tenancy and UWL’s data is not used to train any of the AI models.
- Staff should not use systems that effectively replicate Microsoft Copilot, such as Google’s Gemini and OpenAI’s ChatGPT, and all users should ensure that no UWL data ends up in any of these systems.
- From the academic year 2024-25 onwards, all academic courses at UWL are required to embed appropriate use of AI in the curriculum.
- AI should not be used to gain an unfair advantage, especially by passing AI-generated content off as your own when studying, creating documents or other pieces of work, or undertaking and reporting research. In other words, individuals must:
- acknowledge sources (and not plagiarise) and this includes acknowledging any use of AI;
- only submit work and results that are your own.
- Staff must not upload student work (such as assessments) to a Gen AI tool for any reason (including in the case of suspected academic integrity breaches) unless written prior permission has been obtained. Uploading student work without prior permission poses a data security risk and must be reported as a data breach in line with the Data Protection policy [3].
- Academic staff should not attempt to use AI tools to detect students’ use of AI.
- Do not rely upon AI: it can and does provide incorrect information.
- AI should be used ethically: its use must be transparent, accountable, fair, private, safe and secure, and subject to human oversight (see Section 9 for details).
- Generally, AI should be used as a tool to support staff day-to-day activities and not as a replacement for them.
- Researchers must take full responsibility for the use of Generative AI tools in their Research and any data / information / material they have entered into those tools. (See Section 17 for more information).
Part 2: Guidance
The following sections provide guidance on the use of AI at UWL, ensuring that it is used responsibly, ethically, and in a manner that benefits our students, staff, and wider community. The guidance will be updated from time to time, as technologies and tools develop and in response to feedback and the sharing of good practice.
6. Acceptable use of AI and where its use should be encouraged
UWL wishes to encourage staff and students to harness the power of AI safely, securely, ethically and appropriately. As a consequence, all staff and students will receive basic training on the use of AI, and this policy sets out where it should and should not be used. As of the academic year 2024-25, all academic courses at UWL are required to embed appropriate use of AI in the curriculum. The University is looking to optimise its processes to utilise AI to improve services, efficiency and effectiveness. AI can support research, but again ethics, data security and academic integrity are major considerations.
7. Risks and warnings over the use of AI, including limitations of its use
One of the biggest risks in using AI is that it may give incorrect responses. At the time of writing, generative AI produces text by calculating the probability that it will be relevant to the prompt a user has submitted. As a result, the responses produced by generative AI tools tend to reflect consensus understandings, including any biases and inaccuracies that inform those positions. Because generative AI predicts words based on probabilities, its outputs are often oversimplified or generic. These tools are best at responding to prompts that summarise information, solve problems whose answers are already known, or support one position among alternative theories or approaches. However, AI tools can generate new ideas by combining existing knowledge in new ways, and users can submit follow-up prompts that ask the tool to incorporate additional ideas into an existing response or to revise it for greater complexity.
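To illustrate the mechanism described above, here is a toy sketch (the probability table is invented; this is not how any production model is implemented) of probability-based next-word selection. Because the most probable continuation wins, outputs gravitate towards consensus phrasing:

```python
# Toy illustration of probability-based next-word selection.
# The probability table is invented; real LLMs learn these values from
# vast text corpora and score every word in their vocabulary.
next_word_probs = {
    "the cat sat on the": {"mat": 0.62, "sofa": 0.21, "roof": 0.12, "moon": 0.05},
}

def predict_next(prompt: str) -> str:
    """Return the most probable next word for a known prompt."""
    candidates = next_word_probs[prompt]
    return max(candidates, key=candidates.get)

print(predict_next("the cat sat on the"))  # -> "mat", the consensus answer
```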
Generally, AI should be used as a tool to support staff day-to-day activities and not as a replacement for them.
8. Unacceptable use of AI and where it must not be used
AI should not be used to gain an unfair advantage, especially by passing AI-generated content off as your own when studying, creating documents or other pieces of work, or undertaking and reporting research. In other words, individuals must:
- acknowledge sources (and not plagiarise) and this includes acknowledging any use of AI;
- only submit work and results that are your own.
(see also Section 16: AI and academic integrity - guidance for assessment and feedback.)
Staff must not upload student work (such as assessments) to a Generative AI tool for any reason (including in the case of suspected academic integrity breaches) unless prior permission has been obtained [4] and the tools comply with UWL data security requirements. Uploading student work without prior permission poses a data security risk and must be reported as a data breach in line with the Data Protection policy.
9. Ethical considerations
The ethical considerations surrounding the use of AI are paramount to ensuring that these technologies are integrated responsibly within our organisation. UWL is committed to addressing bias in AI algorithms. We prioritise the privacy and protection of data used in AI systems, necessitating purpose and storage limitations, data minimisation, accuracy controls, and strong security to ensure the confidentiality of personal data. We also require human oversight of AI systems to ensure their ethical and responsible use and to help address bias concerns.
Our ethical guidelines include:
- Transparency: ensuring that AI systems are transparent in their operations, allowing users to understand how decisions are made.
- Accountability: establishing clear accountability for decisions and actions taken by AI systems, ensuring that there are mechanisms to address any adverse outcomes.
- Fairness: striving to eliminate biases in AI algorithms to promote fairness and equality in their application and outcomes.
- Privacy: protecting individual privacy by adhering to strict data protection protocols, ensuring that personal data is handled with the utmost care.
- Human oversight: guaranteeing that human judgment and intervention are integral to AI decision-making processes, thereby upholding ethical standards and legal requirements.
- Safety and security: implementing robust security measures to prevent misuse of AI systems and protect them from malicious attacks.
By adhering to these ethical principles, UWL aims to foster a responsible and trustworthy environment for the use of AI technologies.
10. Which applications/software are permitted or prohibited and how to request the use of new tools and technologies
The University has chosen Microsoft Copilot as its default general AI solution. This is partly because UWL uses Microsoft technology widely (running the Windows operating system across the majority of its desktop PC estate and Office 365 as its default productivity suite), and partly because of the ease of assigning the more capable (and more expensive) Copilot 365 licences. More importantly, Microsoft Copilot offers stronger security, with Microsoft guaranteeing to keep UWL’s data safe: none of the queries, prompts or data uploaded to Copilot, when logged in with your UWL IT User Account, will leave UWL’s Office 365 tenancy, and UWL’s data is not used to train any of the AI models.
Staff requiring access to Microsoft Copilot 365, the paid-for add-on that provides embedded AI functionality in Office applications such as Outlook, Word and Excel, can request this here. As the current cost is $30.00 per user per month, staff will need to explain why they need access and demonstrate the value gained. Licences will be reviewed on a monthly basis to ensure that costs do not spiral.
There are many other general and specific AI software solutions available. Staff should not use systems that effectively replicate Microsoft Copilot, such as Google’s Gemini and OpenAI’s ChatGPT, and all users should ensure that no UWL data ends up in these systems (see also Section 12 below). There may be cases where research needs require such access, but permission should be sought first.
UWL recognises that the AI landscape is constantly evolving, with new technologies emerging and existing systems and services adding AI functionality. It is therefore important that we establish a mechanism for staff to request access to these tools and services while ensuring that the University’s data is protected, that functionality is not duplicated, and that any solution represents value for money. This applies equally to “free” tools and to those that incur a cost or require a licence.
To request access to a product/tool/software solution, users can fill out a form here, which will then be reviewed by IT Services and CELT. A response would normally be given within 10 working days.
A list of approved applications and services will be maintained here so that staff can see what is already available and what has been approved, thereby removing the need to request the same or similar functionality.
11. How the University utilises AI and monitors its use
UWL already uses AI in a number of ways. We use it to model student data to assist in devising courses and supporting students through their studies. We use (or plan to use) AI to streamline administrative processes and to develop a better student (and staff) experience, using AI-enabled chatbots on some of our front-line services.
IT Services will monitor which staff have access to paid-for AI licences/subscriptions and will periodically assess AI tools for value for money.
We may use AI to help the University comply with its statutory and legal obligations, such as data reporting.
12. Safeguarding UWL’s data: data protection, intellectual property, copyright and other compliance rules (and other legal constraints)
The University (and by definition, end users) must comply with its obligations under law to protect personal data, its and others’ intellectual property, and to comply with its policies on data protection, cyber security and acceptable use.
Generally, free versions of AI tools should not be considered private or secure. Signing up for paid-for subscription services does not guarantee appropriate privacy and/or security measures either. Therefore, the University must review and approve all AI tools/systems/services in use.
If information is not already in the public domain (or licensed for such use), it should not be put into a free Gen AI platform. Staff are cautioned to use conservative judgement in this area, as this emerging field is still changing nearly daily. Advice should be sought if in any doubt.
[See also Section 17 on the use of AI in Research.]
13. The human role in decision making
Decisions about students, especially regarding academic outcomes, must be made by a human, even if AI has been used to assist in the process. (Note: generative AI may only be used to assist in assessment practice following a due diligence exercise to ensure data security; see also Section 20, which notes that uploading student work to AI is a potential data breach.) Decisions relating to members of staff, such as interview outcomes, promotions, and the following of processes and procedures, must likewise be made by a human, even if AI has been used to assist in the process.
14. Consequences of non-compliance with this policy
Not adhering to the AI policy standards and guidance set out in this document could have serious consequences. For example, uploading personal data into an unapproved and public-facing AI system would be a breach of UK law and UWL policy. Such actions may lead to disciplinary action, up to and including dismissal for staff or expulsion for students. Using AI to commit academic misconduct or passing AI generated content off as your own may also have consequences and lead to disciplinary action.
15. Use of AI in learning and teaching
UWL embeds the use of generative AI and large language models in all of its courses to “offer students the opportunity to gain skills using AI supportive technologies” (Impact 2028). The overarching aim of embedding AI in the curriculum is to enhance the students’ learning experience by helping them develop the skills and criticality required for responsible and ethical use of AI with a view to career readiness.
The use of specific AI tools must be approved according to the following process. In the first instance, staff must consult their school’s or college’s AI lead, who will ensure the tool is ethical and safe for students to use and who will support staff with completing the IT Procurement Form. The only tool exempt from this process is Microsoft Copilot. AI tools may not be used without prior approval.
Each course must meet the minimum requirements for embedding AI in the curriculum as set out in the “Embedding AI in the Curriculum Framework”, available on the UWL AI Toolkit.
Students may make use of generative AI as a tool, resource, or consultant, but not as a replacement for their own knowledge, critical thinking, reasoning, or self-reflection. Course leaders are required to clearly outline the expectations for the use of AI in their specific assignments using the traffic light system specified on the AI Toolkit SharePoint site.
Where generative AI is used for an assignment, it must be appropriately cited (see also Appendix A for the acknowledgement cover sheet).
Students are responsible for any inaccurate, biased, offensive, or otherwise unethical content they submit, regardless of whether they personally authored it or used AI software to generate the content.
The Centre for the Enhancement of Learning and Teaching (CELT) have developed resources and CPD events to support staff with teaching and supporting learning responsibilities to develop the relevant skills to responsibly embed AI in Higher Education practice.
CELT have devised a traffic light system to make it easy for students to understand where they can use AI and where they should not:
- Red - students must not use Gen AI tools. The purpose and format of the assessment make it inappropriate or impractical for AI tools to be used.
- Amber - students are permitted to use AI tools in an assistive role as specified by the module tutor and required by the assessment.
- Green - students can use AI as a primary tool, and it should be used as part of the assessment.
CELT have also provided examples of good practice around embedding AI in the curriculum from the different schools and colleges and made these available via the AI Toolkit. This resource includes examples of subject specific AI tools and recommended further relevant literature and examples from the wider sector.
16. AI and academic integrity: guidance for assessment and feedback
Assessment information shared with students must clearly define the expectation of the use of AI as part of each assessment using a traffic light system (see Section 15 above). All students will be required to complete and submit alongside their assessment submissions a declaration form indicating where AI has been used as part of their assessment completion. For information, a template can be found in Appendix A.
Academic Misconduct – submitting work that lacks Academic Integrity. The most common form of academic misconduct is plagiarism. Academic misconduct also includes the use of ghost writers (including AI tools), research misconduct, and exam misconduct, as well as all other kinds of cheating to gain an unfair academic advantage.
Academic Integrity – academic staff must promote Academic Integrity by:
- talking openly about Academic Integrity;
- upholding and acting according to those values;
- working in partnership with students to develop and maintain strong ethical practices;
- encouraging pride in submitting work that is original; and
- being transparent about risks to Academic Integrity and their consequences.
In addition, academics should actively support students and each other in developing good critical and ethical skills, including appropriate referencing and citation. To this end, additional support is available from subject librarians, the study support team, and fellow academic experts within the schools and colleges.
Another way of supporting students in maintaining good practice is through the use of formative assessments, which can help students develop the skills and practices needed for summative assessment.
Academic staff should support students to understand the importance of ownership and authorship, and why they should not share their work, whether with non-UWL tools or with other students.
Academics should also emphasise the risks of submitting work that is not original and ask students to reflect on how unfair practice affects the academic community, other students submitting honestly, and ultimately their future career options.
Prior to assessments, academics should:
- be clear on what will be considered a breach of Academic Integrity in each assignment;
- use rubrics to communicate what is expected;
- make a habit of discussing expectations of Academic Integrity regularly with students, particularly before summative assessments are due, including establishing an understanding of codes of conduct and course requirements;
- be clear on the purpose and limitations of using AI as part of an assessment; and
- ensure that the format and date of submission are clear, and that students have sufficient time to prepare and submit, in order to facilitate good practice.
Academic staff should engage with Authentic Assessments. These will usually be specific to the context of the University and the course, making it much more difficult for Essay Mills or AI to provide convincing work.
When preparing assessments academic staff are therefore encouraged to:
- create assessments that demonstrate the ability to utilise, not just recall, knowledge;
- reflect on the most suitable type of assessment and how it relates to real-world contexts;
- create assessments that allow students to demonstrate transferable behaviours and skills; and
- include clearly defined use of AI tools, alongside a reflection on the performance and utility of any such tools, in order to prepare students for future workplaces.
Authentic Assessments may also include critical reflections on the process of creating work for assessment, including on the use of AI.
Advice on detecting Academic Misconduct where the use of AI is suspected
By being aware of the benefits and limitations of AI and through the use of authentic assessments, the potential for academic misconduct should reduce.
In addition to creating authentic assessments, academics should:
- use Turnitin contextually to identify plagiarism;
- ensure they have a good understanding of Academic Offences processes and penalties;
- report suspicions of Academic Misconduct; and
- use oral examinations to determine authorship.
Academics should not attempt to use AI tools to detect the use of AI. Detection accuracy is currently low, which limits the usefulness of such tools and leads to missed detections and false positives. All automated tools have their limitations; recall that there is no ‘magic percentage’ on Turnitin that constitutes plagiarism. Each assessment and submission is different and must be interpreted individually.
When work is submitted and an AI tool has been used, the submission may not demonstrate the level of ability of the user or whether they have met their Learning Objectives. Academics should consider whether use of the tool has given the student an unfair advantage, and whether they have misrepresented the AI tool’s work as their own.
When conducting an oral examination as part of an investigation into academic misconduct, two academics with subject knowledge should test the student’s understanding of the content and terminology within the work, and ask about the student’s work process, drafts, and how they found references. A written record of the meeting should be kept and submitted as evidence to the Academic Offences Panel.
It is the responsibility of module leaders to clearly specify where AI use is permitted for each module assessment using the traffic light system (See Section 15 above).
Clear examples should be provided to students on which AI tools may be used as part of specific stages of the assessment process. A 5-step process of how to implement AI in assessment has been provided by CELT and is available in the AI Toolkit. Furthermore, students will be required to complete and submit a declaration form alongside their assessment submission to create transparency and acknowledge the use of AI, contributing to students’ development of responsible and ethical use of AI in their practice.
17. Use of AI in Research and Innovation
The following information has been put together with reference to the University of East Anglia (UEA) Generative AI Policy for Research and Innovation.
For the purposes of this Policy, Research is broadly defined as any gathering of data, information and facts for the advancement of knowledge. The lifecycle of Research includes the planning and research design stage, the period of funding for a funded project or the duration of the research collaborations for those projects that do not receive external funding, and all activities that relate to the project during this time. This includes knowledge exchange and impact activities; the dissemination process, including reporting and publication; the archiving, future use, sharing and linking of data; and the protecting and other future research use of the outputs of research.
Researchers must consider all risks that may be relevant to their Research that arise from the involvement of Generative AI. Below is a non-exhaustive list of identified risks which are possible to mitigate against:
- Damaging academic integrity and Research integrity.
- Exposing Research results prior to publication or to any Intellectual Property protection being in place.
- Breaching funder terms and conditions where external funds are applied.
- Breaching third party confidentiality agreements.
- Breaching Intellectual Property restrictions.
- Breaching ethical standards.
- Harm to individuals.
- Unintended introduction of biases into Research analysis, affecting the scholarly record through Research outputs and publications including theses and dissertations.
- Factually incorrect assessment of Research analysis and results.
- Personal Data being used inappropriately or being stored or processed outside of the UK.
- Sharing of Confidential, Special Category, Third Party, or UWL Business Critical Data (for example, anything related to results, Innovations and patents).
- Unintended or inadvertent sharing of AI generated data with other organisations.
- Inappropriate reuse and misrepresentation of staff and student work, the data collected or the research results.
- Incorrect or inappropriate authorship status for AI generated data used in publications.
- Incorrect referencing of the contribution of AI.
- Exposing data that could be used to breach cyber security, hacking etc.
- Contravening UWL policy.
- Non-compliance with Data Protection legislation, the National Security and Investment Act 2021, or Export Control regulations.
- Detrimental effects on national / international collaborations and the University's research reputation.
Data accuracy: Researchers must take all reasonable steps to make sure that any Personal Data entered into a Generative AI tool is not “incorrect or misleading as to any matter of fact”.
Transferring and / or accessing Personal Data outside of the UK: Researchers must discuss entering Personal Data into a Generative AI tool outside the UK with the University's Data Protection Officer.
Data security: Researchers should always check the security settings of the Generative AI tool being used and understand the cyber security risks such use might pose. For further information, researchers should contact the University Information Security Manager.
Automated decision-making (such as algorithms) including profiling: The UK Data Protection Act has provisions on:
- automated individual decision-making (making a decision solely by automated means without any human involvement); and
- profiling (automated processing of Personal Data to evaluate certain things about an individual).
Profiling can be part of an automated decision-making process. A key question for Researchers to ask when using Generative AI in the context of decision-making is whether the decision is wholly automated or not. The UK GDPR prohibits solely automated decisions that have legal or similarly significant effects on individuals, with certain exceptions.
A Data Protection Impact Assessment (DPIA) is required by law if you will input personal data and the processing of that data is likely to result in a high risk to the rights and freedoms of individual data subjects. Even if you are not inputting personal data, the University recommends that you consider undertaking a DPIA screening when using a generative AI tool. For further advice, contact the University’s Data Protection Officer.
Researchers and their teams must apply caution in relation to the use of Generative AI tools within Research and stay up to date with the policies, processes and guidance, the terms laid out by any external funders and external ethics and governance committees, and all other relevant laws and regulations in regard to Generative AI.
Researchers must take full responsibility for the use of Generative AI tools in their Research and any Data / information / material they have entered into those tools.
The University also recommends that no patentable Research is input into a Generative AI tool. Researchers need to ensure the necessary permissions are in place for them to input any Data / information / material legally into a Generative AI tool. Researchers should only enter third party content, including copyrighted material, into a Generative AI tool when express permission is granted by the owner of that Intellectual Property, even if the content is made available under licences such as Creative Commons. Generative AI use must be declared and clearly explained. Researchers must act with integrity and responsibility to ensure the originality, validity, reliability and integrity of outputs created or modified by Generative AI tools. This includes ensuring that funding applications, participant information, Research results, reports in relation to those results, publications and future innovative uses of said results contain accurate information as to the creation and use of the Research and do not contain false or misleading information.
Funding Applications: Funders advise Researchers and their teams to use caution in relation to the use of Generative AI tools when developing funding applications, including with collaborators, and to comply with any other laws that apply (for example, international laws). The Research Funders Policy Group statement on Generative AI tools can be read here.
Ethics Reviews: The University Research Ethics Committee (UREC) currently requires that all projects undertaken by UWL staff and students, or involving UWL, that involve the use of Generative AI tools or that build or develop a Generative AI tool must seek ethics approval before that Research starts. The exception is when a Generative AI tool is used to undertake a literature review. Ethical and societal risks of Generative AI Research can manifest at different stages of Research.
NHS ethics review: Researchers planning to involve Generative AI as part of their NHS health Research or social care study should refer to the NHS document ‘Understanding Regulations of AI and Digital Technology in Health and Social Care’ for advice on how to do so appropriately.
Data and Publications for Research: Researchers must detail any use of Generative AI in collecting, analysing or otherwise processing Research Data in a Data management plan relating to the Research. Researchers should explain the reasons for using a particular Generative AI tool(s), including an evaluation of the risks associated with using that tool. Researchers must include information in the documentation and / or metadata that accompanies any Data that have been generated using processes involving Generative AI tools. Where practicable this should include naming the specific model(s) and software (including which version) used, when the tool was used, and specifying how content was generated, such as listing the prompt(s) used. This information must also be included in any publications or other outputs that report on such Data.
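As an illustrative sketch only (the field names below are hypothetical, not a UWL- or repository-mandated schema), a record such as the following, stored alongside the Data, would capture the details this paragraph asks for:

```python
# Hypothetical example of AI-use metadata to accompany research Data.
# Field names and values are illustrative only, not a mandated schema.
generative_ai_record = {
    "tool": "Example Gen AI assistant",  # name the specific tool used
    "model": "example-model",            # the specific model
    "version": "2024-06",                # which version of the model/software
    "date_used": "2025-01-15",           # when the tool was used
    "purpose": "summarising interview transcripts",
    "prompts": [                         # how the content was generated
        "Summarise the key themes in the following transcript: ...",
    ],
}
```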
Research publications: Authors are accountable for the accuracy, integrity and originality of their Research outputs, such as publications, including any use of Generative AI. Research outputs must be the authors’ own work, not presenting others’ work or output from Generative AI tools without appropriate citation and referencing. Individual journals and publishers may have more specific requirements or guidelines relating to reporting the use of Generative AI and these must be followed where applicable.
Research Data and repositories: Researchers should specify the terms for reuse for any Data that they deposit in a repository or Data centre and consider including explicit information about how the Data can be used by Generative AI tools. This must be done in accordance with the terms of any permissions granted. When depositing Data in an external Data centre or repository, Researchers should follow the guidelines of that centre / repository for acknowledging the use of Generative AI tools. Use of a Generative AI tool must be given proper acknowledgement, but a Generative AI tool should only be credited as the Creator of a dataset if explicitly required by the repository.
Using Repository Data as an Input Source in Generative AI Tools: Researchers using third party material as input into any Generative AI tool must abide by any conditions for reuse specified for that material by its owner (see also the Intellectual Property considerations covered below). Where Data have been sourced from a repository or Data centre, this includes following any guidelines provided by that repository / centre on how Data must be used and acknowledged. Researchers using third party material whose terms for reuse are governed by permissions given by Research participants must make sure that the reuse of the material is in line with the original consent given by the participants, before using this material with a Generative AI tool.
Intellectual Property Rights (including Copyright): In common with many emerging technologies, the Intellectual Property environment and the legal implications around the use of AI are still developing. There are Intellectual Property considerations when using Generative AI because entering content into a Generative AI tool, including Confidential or third party-owned information, could be considered tantamount to publicly releasing that information. Generative AI tools may retain the rights to use any content entered to train their model. Not only may developers of that tool have full access to entered content, but AI model outputs in the future may also include content that has been used to train the tool. Intellectual Property, including copyright, can only be used to train an AI model if there is consent from the rights holder or if an exemption to copyright applies. However, due to the ongoing emergence of new Generative AI tools, there is no clear-cut guidance on what counts as an exemption. For example, one of the exemptions to copyright law in the UK is that individuals are allowed to use limited extracts of copyrighted material for non-commercial Research or private study. However, if that copyright extract is entered into a Generative AI tool, the company developing that tool may gain commercial benefit from training the model with user content, such as by charging a subscription fee to users. The use of that copyrighted material, although for non-commercial Research, would therefore fall outside of “fair dealing”, the legal term used to establish whether a use of copyright material is lawful or whether it infringes copyright. The user terms of service for each Generative AI tool should outline what rights are granted to developers regarding any content entered into that tool.
Using third party Intellectual Property in Generative AI tools, including copyright licensed under Creative Commons: Researchers should only enter third party content, including copyrighted material, into a Generative AI tool when express permission is granted by the owner of that Intellectual Property, even if content is made available under licences such as Creative Commons. This permission should take the form of contemporaneous evidence, such as an email or part of a contract such as a licence. Failure to do so could result in infringement of third party Intellectual Property rights and leave the University open to fees or lawsuits. Because Generative AI tools do not currently provide any acknowledgement of the source Data, inputting third party Creative Commons licensed material requires the copyright owner’s express permission to enter the Data into a Generative AI tool. The only Creative Commons licence where express permission is not required is the CC0 licence, under which the copyright owner has waived their rights to the work.
18. Use of AI in management, administration and business operations
One area that has so far been less developed at UWL is how Gen AI can improve administrative efficiency. AI can automate routine administrative tasks such as scheduling, data entry, and resource management, freeing up staff to focus on more strategic activities.
AI can improve decision-making by analysing large volumes of data to provide insights and to support decision-making processes. This can help in areas such as student admissions, resource allocation, and performance tracking, and builds on UWL having completed Project ARM and its data-rich approach (Dashboard, Student 360).
AI-powered chatbots can handle routine inquiries from students and staff, providing instant responses and reducing the workload on administrative staff, thereby enhancing communication.
AI can help in monitoring compliance with regulations and ensuring data security by identifying potential risks and vulnerabilities.
AI can support the university’s “committee engine” in terms of setting agendas, recording minutes, summarising content, and keeping track of actions.
Finally, AI can make accessing the wealth of information that universities hold much easier, be this easy access to policies at the time of need, summarising complex regulations, committee minutes and other university business.
Professional Services staff will be encouraged and expected to look for opportunities for where AI can be used to support management, administration and business operations.
19. Examples of good practice
[Note: this section will be updated as more examples of good practice at UWL emerge.]
The Centre for the Enhancement of Learning and Teaching (CELT) have developed resources and CPD events to support staff with teaching and supporting learning responsibilities to develop the relevant skills to responsibly embed AI in Higher Education practice.
20. Examples of poor practice or unacceptable use
[Note: this section will be updated as more examples of poor practice at UWL emerge.]
The following examples would be considered Academic Misconduct under the category of “misuse of generative artificial intelligence tools in preparation or production of submitted work”:
- Entering the original or modified essay question as a prompt and submitting the edited or unedited response as your own work.
- Using AI tools to paraphrase, summarise, or reword a specific source, without crediting the original source.
- Using AI tools to paraphrase, summarise, or significantly alter work, without crediting the AI tool used.
The following examples may be considered unacceptable due to data protection, intellectual property or copyright infringement:
- Uploading work with student details to AI checkers.
- Analysing any data about individuals using an AI tool that assimilates submitted data.
- Uploading copyrighted material to an AI tool as a style reference to inform its output, or other purposes.
- Using an AI tool to modify, rephrase, or paraphrase someone else’s idea, proposal, or question, and presenting it as your own.
The following examples may be considered poor practice:
- Uploading anonymised student work to AI checkers.
- Using AI tools for analysis without checking for inaccuracy, bias and hallucination.
- Using AI to make decisions without human input or oversight.
21. Training on the use of AI and how to seek more guidance or support
CELT have developed a 10-hour professional development programme, designed to advance skills and boost confidence in incorporating Generative AI into teaching and supporting learning practice: AI101 – Becoming Confident with AI, which is available on Blackboard. The programme blends self-paced online modules with interactive workshops, ensuring a comprehensive and adaptable learning experience. Its aims are to:
- Equip the participants with foundational knowledge and skills in Generative AI applications in education;
- Help integrate AI tools into curriculum design and delivery;
- Enable participation in a community of practice for continuous learning and support in AI education technologies; and
- Prepare participants to equip students with the necessary AI skills and knowledge, aligning with emerging job market requirements and technological advancements.
The programme comprises six modules, including:
- Module 1 (Core: 2 hours): Introduction to Generative AI in Education (Online Guided Independent Study)
- Module 2 (Core: 1.5 hours): Ethical Considerations and Bias in AI (Online Guided Independent Study)
- Module 3 (Core: 2 hours): Designing AI-Enhanced Learning Experiences (Training Workshop)
- Module 4 (Core: 2 hours): AI Tools for Assessment and Feedback (Training Workshop)
- Module 5 (Optional: 1.5 hours): Collaborative Learning and AI (Online Guided Independent Study)
- Module 6 (Optional: 1 hour): Continuing Professional Development and AI (Online Guided Independent Study)
The training workshops focus on developing practical skills with AI-enhanced tools, such as using AI for lesson planning and applying AI-driven feedback mechanisms with specific rubrics for assessment.
Completion of modules 1-4 is mandatory for all permanent academic staff and all other staff with a teaching and/or supporting learning responsibility are encouraged to undertake it. Modules 5 and 6 have been created to provide further developmental opportunities to staff. To gain access to the course, please contact celt@uwl.ac.uk.
In addition, CELT have developed a toolkit of supporting material, including learning resources, videos, examples, and a place for staff to share what they are doing with AI. The toolkit can be found here.
Training on AI will form part of a broader Digital Futures strategy.
22. AI and sustainability
AI is known to be an energy-intensive technology. According to an article from MIT Technology Review, AI's electricity consumption is rising, with some AI models requiring significant energy to perform tasks, putting additional strain on the already burdened electricity grid.
AI also requires the consumption of water resources. An article from Yale E360 highlights that AI use is directly responsible for carbon emissions from non-renewable electricity and the consumption of millions of gallons of fresh water.
At the time of writing, both Microsoft and Google have announced that they will turn to nuclear power in order to reduce the carbon footprint of the energy required to power their AI services.
However, despite its high energy consumption, AI has the potential to reduce greenhouse gas emissions significantly. According to a study by BCG, using AI can reduce greenhouse gas emissions by between 2.6 and 5.3 gigatons of CO2e by 2030.
In summary, while AI's energy consumption and carbon footprint are considerable, there are ongoing efforts to mitigate these impacts through energy efficiency measures and the adoption of renewable energy sources. Additionally, AI has the potential to contribute positively to reducing global carbon emissions if used strategically. UWL staff and researchers should be mindful of the sustainable impact of using AI.
23. Endorsement and consultation
This policy and guidance has been informed by best practice from around the world [5]. It has been developed by the University’s AI Working Group and has been considered by the IT Steering Group, the Information Governance Group, and Academic Board.
Footnotes:
1. From the PVC People & Digital, the Academic Registrar or the Head of CELT.
2. Gartner AI Hype Cycle 2024.
3. From the PVC People & Digital, the Academic Registrar or the Head of CELT.
4. From the PVC People & Digital, the Academic Registrar or the Head of CELT.
5. Montclair State University’s Practical Responses to ChatGPT and Other Generative AI; Yale University’s AI Guidance; Georgetown University’s Chat GPT and Artificial Intelligence Tools (example statements and classroom policies to effectively prohibit or clarify the acceptable use of ChatGPT); York University’s AI Technology and Academic Integrity; University of Pittsburgh’s ChatGPT Resources for Faculty; Miami University’s Incorporating ChatGPT Into Your Teaching; Ohio University’s ChatGPT and Teaching and Learning; University of Nebraska-Lincoln’s Classroom Implications of AI.
Appendix A: Acknowledgement and explanation of AI use (template)
Part A – Am I allowed to use AI?
Check your Assignment Brief for this assessment. This will give you the details on whether, to what extent, and how you are allowed to use AI for this assignment.
Part B – Summary of AI use for this assessment
Please complete this section with the following information:
AI tools used:
- [Specify the AI tools or technologies you used, e.g., Microsoft Copilot, language models, etc.]
- [Provide a brief explanation of their purpose]
Selection rationale
- I selected these tools because [explain your reasons, e.g., efficiency, creativity, etc.].
- [Include a URL link to the tool if applicable]
Incorporation of AI output
- The output generated by the AI tools was included in [describe where it appears in your work, e.g., introduction, analysis, etc.].
- I carefully integrated this output to align with the assessment requirements.
Alterations and adaptations
- I modified, adapted, and built upon the AI-generated content to ensure its relevance and coherence. [provide the initial output here]
- My submission reflects both human input and collaborative AI assistance.
Part C – Declaration
I have used AI Tools in line with the assignment brief for this specific assessment and with UWL policies. I have documented my process to maintain transparency and Academic Integrity, and I take full responsibility for the content submitted.
Signature: