SIT Principles for the use of AI

The Southern Institute of Technology (SIT) is committed to ensuring our ākonga | students and kaimahi | staff are equipped to thrive in a world increasingly influenced by Artificial Intelligence (AI). There are many ethical and societal concerns about AI, including the impact of bias, its effect on the environment, and its contribution to misinformation. We follow a similar approach to the one adopted by Jisc and the Russell Group in the UK, but with a focus on the needs of the vocational education sector in New Zealand. Our primary concern is the prevention and mitigation of harms from AI for our ākonga and kaimahi.

The principles below aim to help navigate the challenges and maximise the opportunities of AI. They are centred around fair and responsible use of AI, giving a framework to provide ākonga with the AI skills they need to thrive, and to allow kaimahi to take full advantage of AI in their daily activities. A shared set of principles across the organisation will benefit all, providing equality of opportunity to ākonga regardless of campus or programme. This will free programmes, schools and faculties to concentrate on providing advice and guidance that is more specific to their ākonga, kaimahi, and curriculum. Whilst the principles are universal, the best advice and guidance for ākonga and kaimahi should be tailored to the needs of each course.

Our Principles

1.1 Safety, security and robustness

SIT will place the safety of ākonga and kaimahi at the forefront of the use of AI.  This includes ensuring all systems are fully evaluated before being used, and that they are appropriate for the age group of the ākonga, including obtaining informed parental consent when needed.

Ākonga will understand how AI will be used and will be supported to make informed decisions about their own use of generative AI, including considerations about how their data might be used for model training, and how any personal data might be used.

SIT will also consider intellectual property rights, including those in ākonga work, which should not be used to train generative AI models without appropriate informed consent or a copyright exemption.

1.2 Transparency and explainability

SIT will be transparent about its use of AI, and provide information on how, when, and for which purposes an AI system is being used. As well as aligning with the principles here, this also reflects the wishes and concerns of ākonga.

SIT will be open and transparent, ensuring that ākonga understand when AI is used to create learning resources, support learning delivery or within the assessment and monitoring process. Ākonga will also be informed how their personal data is being processed using AI tools.

Explainability in this context refers to explaining how an AI system makes its decisions. SIT will ensure that all AI systems it deploys come with some explanation of how they work, obtained, for example, as part of the procurement process.

1.3 Fairness

SIT will ensure the AI systems it uses are fair for all, including considering issues around bias, data protection, privacy, and accessibility. This will be built into the procurement and selection process for AI tools used within SIT, ensuring no learner is disadvantaged through the use of inappropriate or ineffective AI tools.

1.4 Accountability and governance

As with any IT system, AI systems should have a clear governance structure, with a clear line of accountability for their use. Because the performance of AI systems may change over time, for example when the underlying AI models change or encounter new types of data, extra measures need to be put in place to periodically review the performance of any AI system, and this will be built into any AI project.

1.5 Contestability

AI systems at SIT are likely to be increasingly used in ways that directly impact outcomes for ākonga, for example to assist with marking, exam proctoring, or AI detection in assessment processes. SIT will ensure ākonga and kaimahi have clear guidance on how to contest the output of any AI system if they feel they have been unfairly disadvantaged.

2.1 AI skills and literacy

Because AI is evolving rapidly, ākonga must be taught broader AI skills and literacies so they can critically evaluate the tools of the future as well as those of today. AI literacy includes an understanding of the limitations, reliability, and potential bias of generative AI, as well as the wider impact of technology, including disruptive and enabling technologies and the safe and responsible creation and use of digital content.

2.2 AI Workplace literacy

Whilst many AI skills used in education translate directly to the workplace, a broader understanding of where AI fits into the workplace will also be needed, for example an understanding of data privacy and cyber security issues. SIT will work with employers and other key stakeholders to ensure its ākonga acquire the AI skills they need.

2.3 AI Citizens and the Wider World

As well as preparing our ākonga for studies and work, we will help them become equipped to navigate the use of AI in their everyday lives. AI is becoming embedded into services we all use daily, and is affecting broader societal issues, such as our democratic processes, climate and environment, and the way we consume and share information.  We will ensure that ākonga have the critical AI skills to navigate this world safely and confidently.

2.4 Assessment for an AI enabled world

Authentic and relevant assessment, both formative and summative, needs to be aligned to this aim. SIT, in collaboration with the broader education sector, will move towards a consistent approach to the use of AI in assessments, with the aim of making assessments authentic and relevant to an AI enhanced workplace and society, for all ākonga.

3.1 Saving time

AI has the potential to save time and reduce workloads for kaimahi when applied appropriately, enhancing both effectiveness and wellbeing. Alongside making tasks quicker, AI makes possible activities that were previously impractical because of time constraints. Examples include improved differentiation for ākonga, using AI to create resources in multiple ways, and using AI to create formative assessment resources and materials. We aim to ensure this benefit is felt by all kaimahi by providing appropriate access to AI tools and the training they need to take advantage of them.

3.2 New learning and teaching opportunities

Kaimahi will be supported to explore how AI can present new learning and teaching opportunities. Examples might include providing step by step explanations, providing guidance on coding, helping ākonga optimise designs, creating interactive simulations, generating ideas or material to critique, and creating interactive conversations to support learning.

4.1 Equality of access to AI tools

AI tools have the potential to improve equality, for example by providing proofreading and feedback expertise to all, and by enabling ākonga to obtain resources in a format and at a time that suits them. However, this will only be possible if access is available to all. Whilst there is a perception that generative AI is free to access, those who have the means to pay often have access to a much wider range of tools and will be at a significant advantage. Similarly, we will work to ensure access isn't restricted for ākonga with learning difficulties and/or disabilities.

4.2 Equal access to data and devices

We acknowledge that access to devices and data is a foundational issue that limits access to AI. We will work towards levelling access as much as possible.

5.1 Preventing malpractice

SIT will take reasonable steps to prevent malpractice involving the use of generative AI. A mixed approach is needed, combining clear guidance, well designed assessments, and appropriate use of AI detection tools.

5.2 Clear guidance to students

SIT will provide clear guidance to ākonga on appropriate use of AI in their assignments. This includes general principles and guidance, along with more specific guidance at assessment level.

5.3 Appropriate use of technology such as AI detection

Whilst AI detection tools may have a role in maintaining academic integrity, they are by no means a full solution. As co-creation of content increases and assessments start incorporating AI usage, the use of AI detection needs clear guidelines. There is a risk that AI detection can unfairly discriminate and compound existing bias; therefore, users of AI detection must understand that such systems cannot conclusively prove text was written by AI, can generate false positives, and are easy to defeat. Where they are used, kaimahi will be given training and guidance to help them understand these limitations.

SIT supports collaboration across the education sector, industry, and government on best practice around AI. The size and speed of change mean we will be stronger if we work together. Best practice is still emerging, and we will work together to share what works and what doesn't.

Acknowledgements & Further Information