🟢 AI Policy Essentials 🟢

(For the public sector, health and education)

ISO/IEC 42001 and EU AI Act compliance

Purpose and Scope

  • ☐ The policy explains why the organisation is using AI (e.g. efficiency, service quality, insight).

  • ☐ The policy is clear about where it applies (staff, contractors, students, partners).

  • ☐ The policy covers all AI tools and services, not just one product.

Definitions

  • ☐ The policy gives a simple definition of “AI” and “AI tools” in plain language.

  • ☐ It explains the difference between low‑risk uses (e.g. drafting) and higher‑risk uses (e.g. decisions about people).

Roles and Responsibilities

  • ☐ There is a named senior owner for AI (e.g. SIRO, CIO, Director, SMT lead).

  • ☐ It explains what managers are responsible for (approvals, oversight, raising risks).

  • ☐ It explains what staff (and where relevant, students) are responsible for when using AI.

Permitted and Prohibited Uses

  • ☐ The policy gives clear examples of acceptable use (e.g. drafting, summarising, idea generation).

  • ☐ It gives clear examples of prohibited use (e.g. fully automated hiring, grading, clinical or safeguarding decisions).

  • ☐ It states that AI outputs must be checked by a human before important decisions are made.

Data Protection and Confidentiality

  • ☐ The policy states what data must never be put into AI tools (e.g. special category data, identifiable patient/student/client data, confidential information).

  • ☐ It reminds staff to follow existing data protection and confidentiality rules when using AI.

  • ☐ It explains when a DPIA or similar assessment is needed for higher‑risk AI uses.

Fairness, Bias and Equality

  • ☐ The policy states that AI must not be used in a discriminatory way.

  • ☐ It explains that staff should watch for bias in AI outputs and challenge it.

  • ☐ It links AI use to existing equality and inclusion duties.

Transparency and Record‑Keeping

  • ☐ The policy says when people should be told that AI is being used (where relevant).

  • ☐ It encourages keeping basic records of important AI‑assisted decisions (e.g. how AI was used and who reviewed the output).

  • ☐ It explains how AI systems and use cases will be logged or registered inside the organisation.

Risk, Incidents and Escalation

  • ☐ The policy explains how to report problems with AI (e.g. harmful content, clear errors, suspected data breaches).

  • ☐ It links AI risks and incidents into existing risk and incident processes.

  • ☐ It explains how serious issues will be escalated and reviewed.

Training and Support

  • ☐ The policy commits to basic training or guidance for staff (and where relevant, students).

  • ☐ It tells people where to go for help and advice on AI use (e.g. ICT, IG, digital learning, AI champion).

  • ☐ It encourages staff to ask before they deploy AI in a new, higher‑risk area.

Review and Updates

  • ☐ The policy states how often it will be reviewed and updated.

  • ☐ It explains how changes will be communicated to staff (and, where relevant, students).

  • ☐ It acknowledges that AI is fast‑moving and that the policy may be updated as laws and guidance change.
