AI Procurement: key questions
In the AI Pioneers project we are frequently asked by teachers and trainers in Vocational Education and Training and Adult Education what they should look for when licensing or buying AI-based applications. The UK's Jisc has developed and published an AI Maturity model. "As institutions move to the 'embedded' stage," they say, "we expect appropriate processes to be in place for the entire lifecycle of AI products, including procurement."
They continue to explain that: "This detailed scrutiny aims to facilitate a better understanding and mitigation of potential risks associated with AI deployment. Additionally, it is crucial to ensure that the use of AI in educational and research settings does not infringe on IP rights and that the data used in AI models is appropriately managed to maintain research integrity and protect proprietary information."
The model includes comprehensive due diligence processes for areas such as supplier information, financial stability, insurance coverage, modern slavery compliance, information security, and data protection. By thoroughly vetting these aspects, Jisc says, "we aim to ensure that any solutions are not only innovative and effective but also ethical and compliant with all relevant regulations and standards." The questions are intended to be dynamic and will be reviewed to reflect advances in technology or legislation.
| # | Question | Notes |
| --- | --- | --- |
| 1 | Outline which AI features of your system use third-party AI models and which use your own proprietary or in-house AI models. Please provide details of any third-party technologies used, including the name of the provider and an outline of the features used. | For major suppliers in the LLM supply chain, such as OpenAI, Google DeepMind and Anthropic, due diligence should be conducted separately; there is no need to request information about them from every third-party provider built on these large language models. |
| 2 | Where you are either creating your own model or fine-tuning a third-party model, how is performance defined and measured? Include details of initial training and of monitoring over time. | (UK AI Principle: Safety, security and robustness) |
| 3 | What data do your AI models require for initial training or fine-tuning? If you are using third-party models, describe only the data that is unique to your application. | (UK AI Principle: Safety, security and robustness) |
| 4a/4b | a) Is data from user interactions with the system used to improve model performance? If so, describe the mechanisms involved. b) Is institutional data incorporated into external models? | (UK AI Principle: Safety, security and robustness) |
| 5 | What features does your solution have to make it clear when the user is interacting with an AI tool or AI features? | (UK AI Principle: Safety, security and robustness) |
| 6 | What safety features and protections are built into your solution to ensure safe and accessible use by all users, including those with accessibility needs and special educational requirements? | (UK AI Principle: Safety, security and robustness) |
| 7 | What special considerations or features are provided for users under the age of legal majority? | (UK AI Principle: Safety, security and robustness) |
| 8 | What explainability features does your AI system provide for its decisions or recommendations? | (UK AI Principle: Safety, security and robustness) |
| 9 | What steps are taken to minimize bias within models you either create or fine-tune? | (UK AI Principle: Fairness) |
| 10 | Does your company have a public statement on Trustworthy AI or Responsible AI? Please link to it here. | (UK AI Principle: Accountability and governance) |
| 11a/11b/11c | Does your solution promote research, organizational or educational use by: a) not restricting the use of parts of your solution within AI tools and services; b) not preventing institutions from making licensed solutions fully accessible to all authorized users in any legal manner; c) not introducing new liability on institutions, or requiring an institution to indemnify you, especially in relation to the actions of authorized users? | (Gartner, Inc., ICOLC statement and legal advice obtained by Jisc) |
| 12 | Does your solution adequately protect institutional intellectual property (IP) against infringement, including scenarios where third parties are given access to and may harvest institutional IP? | (Gartner, Inc. and ICOLC statement) |