1.1 Safety, security and robustness
SIT will place the safety of ākonga and kaimahi at the forefront of its use of AI. This includes ensuring all systems are fully evaluated before use and are appropriate for the age group of the ākonga, including obtaining informed parental consent where needed.
Ākonga will understand how AI will be used and will be supported to make informed decisions about their own use of generative AI, including how their data might be used for model training and how any personal data might be processed.
SIT will also consider intellectual property rights, including ākonga work, which should not be used to train generative AI models without appropriate informed consent or exemption to copyright.
1.2 Transparency and explainability
SIT will be transparent about its use of AI, providing information on how, when, and for which purposes an AI system is being used. As well as aligning with the principles here, this also reflects the wishes and concerns of ākonga.
SIT will be open and transparent, ensuring that ākonga understand when AI is used to create learning resources, support learning delivery, or assist in assessment and monitoring processes. Ākonga will also be informed how their personal data is being processed using AI tools.
Explainability in this context refers to explaining how an AI system makes its decisions. SIT will ensure that all AI systems it deploys have some explanation of how they work, obtained, for example, as part of the procurement process.
1.3 Fairness
SIT will ensure that the AI systems it uses are fair for all, including considering issues around bias, data protection, privacy, and accessibility. This will be built into the procurement and selection process for AI tools used within SIT, ensuring no ākonga are disadvantaged through the use of inappropriate or ineffective AI tools.
1.4 Accountability and governance
As with any IT system, AI systems should have a clear governance structure, with a clear line of accountability for their use. Because an AI system's performance may change over time, for example when the underlying AI models change or encounter new types of data, extra measures are needed to periodically review the performance of any AI system, and this review will be built into any AI project.
1.5 Contestability
AI systems at SIT are likely to be used increasingly in ways that directly affect outcomes for ākonga, for example to assist in marking, exam proctoring, or AI detection within assessment processes. SIT will ensure ākonga and kaimahi have clear guidance on how to contest the output of any AI system if they feel they have been unfairly disadvantaged.