AI Ethics Framework

1. Purpose and Scope 

Cura AI is committed to the responsible, ethical, and transparent use of generative artificial intelligence in education. This policy sets out the principles, governance structures, and operational practices that guide the design, development, deployment, and ongoing use of AI within the Cura platform. 

The policy applies to all Cura AI systems, processes, and personnel involved in the AI lifecycle, including product design, data management, model training, deployment, monitoring, and user engagement. 

2. Governance 

2.1 Principles and Values 

Cura AI’s approach to artificial intelligence is guided by the following principles: 

  • Educational integrity: AI outputs must support sound, research-based pedagogy and effective teaching and learning practices, and must be reliable and accurate. 

  • Fairness and inclusion: AI systems should minimise bias and work equitably across diverse learners, contexts, and communities. 

  • Transparency: Users should have a clear understanding of how AI is used within the platform and the nature of AI-generated outputs. 

  • Human agency: AI is designed to support, not replace, professional teacher judgement. 

  • Accountability: Cura AI accepts responsibility for the impacts of its AI systems and acts to address risks and harms. 

  • Safety and trust: Systems must be secure, reliable, and appropriate for use in school and classroom environments. 

2.2 Roles and Responsibilities 

Cura AI defines clear responsibilities across the AI lifecycle: 

AI Governance & Data Protection Lead (CTO) 

  • Overall accountability for ethical AI implementation  

  • Ensures data protection, privacy, and security safeguards are applied across all AI systems 

  • Selects and manages AI base models and vendors in line with ethical and legal requirements 

  • Ensures AI systems do not train on customer or teacher-created content unless explicitly permitted 

  • Oversees AI risk identification and mitigation (bias, misuse, data leakage, system failure) 

  • Ensures secure deployment, monitoring, and model updates 

  • Ensures that the development team, including contractors, comply with these guidelines 

AI Quality, Accuracy & Human Oversight Lead (Product Lead) 

  • Responsible for accuracy testing, validation, and quality assurance of AI outputs 

  • Designs and maintains human-in-the-loop review processes for AI-generated content 

  • Defines acceptable-use boundaries and content limitations for AI outputs 

  • Reviews edge cases, failures, and reported inaccuracies 

  • Fields customer queries and concerns relating to AI behaviour, accuracy, and appropriateness 

  • Feeds real-world feedback into model configuration and product improvements 

Ethical Use & Customer Interaction Steward (CEO) 

  • Communicates AI capabilities and limitations accurately to customers 

  • Ensures AI is not misrepresented in sales, marketing, or customer communications 

  • Escalates customer concerns, misuse reports, or ethical questions to the Product Lead or CTO  

  • Handles customer data in accordance with privacy and ethical AI commitments

All employees:

  • use AI systems responsibly and in line with documented ethical principles 

  • avoid over-reliance on AI outputs without appropriate review 

  • report suspected AI misuse, bias, or harmful outputs 

  • participate in ongoing review and improvement of ethical AI practices 

2.3 Risk Management 

Cura AI maintains a proactive approach to identifying and mitigating AI-related risks, including: 

  • bias or unfair representation in outputs 

  • factual inaccuracies or misleading content 

  • data privacy or security vulnerabilities 

  • unintended educational or societal impacts 

Risks are identified through internal reviews, user feedback, testing, and monitoring. Mitigation strategies may include data refinement, model adjustments, user guidance, safeguards, or withdrawal of functionality where necessary.  

Cura AI maintains a set of policies covering how data is collected, processed, and transmitted, and how the platform is protected. These are available at curaeducation.com/legal. 

3. Design and Development 

3.1 Human-Centred Design 

Cura AI systems are designed with educators and students at the centre. This includes: 

  • prioritising teacher workload reduction without compromising educational quality 

  • supporting online and offline versions of resources 

  • supporting inclusive and accessible learning experiences  

  • respecting the professional expertise of teachers 

  • designing interfaces and outputs that are clear, usable, and adaptable 

AI is positioned as a planning and resource generation tool, not an autonomous decision-maker. 

3.2 Data Management 

Cura AI is committed to responsible data practices, including: 

  • using lawfully obtained, appropriate data, specifically our own in-house created educational resources, for training and system improvement 

  • prioritising original, education-specific learning objects and resources 

  • not training AI systems on teacher-created output or user-generated lesson content unless the user gives permission 

  • minimising the use of personal data 

  • complying with applicable privacy and data protection regulations 

  • implementing safeguards to reduce bias and inappropriate generalisation 

Teacher-created content remains under the control of educators and is not used to retrain or fine-tune Cura AI models. Teachers have the right to download, copy, share or edit any of the AI-generated resources they create. 

3.3 Algorithmic Transparency and Explainability 

Cura AI aims to make the role of AI within the platform understandable to users by: 

  • clearly indicating when content is AI-generated by visually and technically separating the generative AI and non-generative AI (static) halves of the platform 

  • explaining, at a high level, how AI supports lesson and resource generation (see curaeducation.com) 

  • providing guidance on appropriate use and limitations of AI outputs (e.g. Cura AI newsletter, curaeducation.com, customer success function) 

While some technical details may remain proprietary, Cura AI is committed to meaningful transparency rather than opaque automation. 

4. Testing and Deployment 

4.1 Testing and Validation 

Before deployment, Cura AI systems undergo structured testing and review processes designed to ensure safety, reliability, and educational integrity. 

Requirements for AI functionality are informed directly by teacher consultation and classroom use cases. These requirements are refined collaboratively by Cura’s education specialists and engineering professionals to ensure both pedagogical soundness and technical robustness. 

Testing processes include: 

  • validation of accuracy and educational appropriateness 

  • review against research-based pedagogical frameworks 

  • technical testing for reliability, security, and performance 

  • safety checks aligned with current industry best practices 

Only systems that meet defined educational, technical, and safety thresholds are deployed. 

4.2 Monitoring and Auditing 

Cura AI systems are continuously monitored following deployment to: 

  • identify performance issues or degradation 

  • detect emerging biases or inaccuracies 

  • ensure continued alignment with curriculum, pedagogy, and user expectations 

Cura AI recognises that underlying AI base models may be updated or changed over time as technology evolves. When such updates occur, Cura conducts additional review and testing to assess impacts on accuracy, safety, and educational outcomes before changes are fully adopted. 

The Cura AI platform does not train or fine-tune proprietary models using customer data. Our AI privacy assessments therefore focus on inference-time privacy controls and the supporting platform controls (data minimisation, de-identification/anonymisation where used, user controls, logging/monitoring, retention, and third-party processing). 

Assessment frequency (risk-based baseline + change triggers): 

  • Data minimisation testing: Quarterly, and on any change to data inputs (new fields, new upload types, new prompts/templates, new integrations, or new model/provider routing). 

  • Data anonymisation testing: Quarterly where anonymisation is used, and on any pipeline or rules change (e.g. redaction updates, new entity types, new data classifications). If anonymisation is not used, we instead verify data classification, minimisation, and redaction policies on the same cadence. 

  • User control testing (access, deletion, retention preferences): Quarterly, and on each major release touching identity/roles, permissions, export/delete workflows, or retention settings. 

  • Compliance testing (privacy/security control verification): Every six months, as well as annually as part of internal governance, and after major architectural changes (e.g. new sub-processors, new regions, new storage patterns, new endpoints). 
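As an illustration of what a quarterly data anonymisation check might involve, the sketch below runs a redaction pass over sample text and verifies that known entity types are removed. The rules, tokens, and names (`REDACTION_RULES`, `redact`) are hypothetical, for illustration only, and do not describe Cura's actual pipeline.

```typescript
// Hypothetical redaction pass: replace known entity types with placeholder
// tokens before text leaves the platform. Patterns are illustrative only.
const REDACTION_RULES: Array<[RegExp, string]> = [
  [/[\w.+-]+@[\w-]+\.[\w.]+/g, "[EMAIL]"], // email addresses
  [/\b\d{8,10}\b/g, "[ID]"],               // 8-10 digit numeric identifiers
];

function redact(text: string): string {
  // Apply each rule in order; later rules see earlier substitutions.
  return REDACTION_RULES.reduce((t, [pattern, token]) => t.replace(pattern, token), text);
}

// A minimal regression check of the kind a quarterly test suite might run.
const sample = "Contact jane.doe@example.com about student 12345678.";
const redacted = redact(sample);
```

A real anonymisation test suite would cover many more entity types (names, addresses, school identifiers) and would re-run after every rules change, matching the change triggers listed above.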

Findings from monitoring and audits inform ongoing system improvements. 

4.3 Human Oversight 

Human oversight is maintained throughout the AI lifecycle. This includes: 

  • teacher review and approval of AI-generated materials before classroom use 

  • internal review processes for system changes 

  • mechanisms for intervention, correction, or withdrawal of AI outputs where required 

AI systems do not make high-stakes educational decisions independently. 

4.4 Best-in-class sub-processing systems 

Cura AI takes various measures to ensure its platform is safe and secure, including: 

  • Hosting on AWS EC2 instances, backed by AWS’s independently audited physical and network security controls 

  • Data encryption for both user-entered data and generated metadata 

  • Only using base large language models with adequate cybersecurity protections 

Cura AI uses the following sub-processing systems: 

  • AWS 

  • AWS Bedrock 

  • Claude Sonnet 4.5 

  • Google Analytics 

  • Hotjar 

Data types processed 

Hotjar 

  • Interaction telemetry such as clicks, scroll depth, navigation patterns, and page-level usage signals (heatmaps/recordings tooling). 

  • User input protection: we keep Hotjar configured to suppress user input by default and use Hotjar’s suppression controls to prevent capture of sensitive page content and inputs. 

  • Exclusions for sensitive areas: Hotjar is disabled (not collecting) on any screens that may contain student/customer personal information or sensitive content (e.g. document upload/preview areas, free-text entry, AI prompt/response areas). Hotjar supports suppressing page content or entire pages from collection. 

  • Retention: Hotjar’s default retention periods are long (e.g. for recordings and heatmaps), so we set retention to the minimum necessary for operational use and delete data when no longer needed. 

Google Analytics 

  • Aggregated usage/event data and online identifiers (e.g. cookie/device identifiers used for analytics measurement), plus basic device/browser metadata typical of web analytics. 

  • No PII: we do not send personally identifiable information (PII) to Google Analytics (e.g. no names, emails, phone numbers, or student IDs in event parameters, page URLs, or custom dimensions). Google’s terms and policies prohibit sending PII to Google Analytics. 

  • Data minimisation: events are designed to avoid content capture (no document text, no prompts, no user-entered free text), focusing on feature usage only. 

We do not intentionally send uploaded document contents or other sensitive payloads to Hotjar; collection is restricted and masked/suppressed by configuration. 
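The analytics data-minimisation approach described above (feature usage only, no content or identifiers) can be sketched as an allowlist filter applied before any event is sent. The names `ALLOWED_PARAMS` and `sanitizeEvent` are hypothetical, for illustration only, and do not describe Cura's production code.

```typescript
// Hypothetical allowlist filter: only pre-approved, content-free parameters
// survive; anything else (emails, document text, prompts) is dropped.
const ALLOWED_PARAMS = new Set(["feature_name", "page_section", "duration_ms"]);

type EventParams = Record<string, string | number>;

function sanitizeEvent(params: EventParams): EventParams {
  const safe: EventParams = {};
  for (const [key, value] of Object.entries(params)) {
    if (ALLOWED_PARAMS.has(key)) safe[key] = value; // drop non-allowlisted keys
  }
  return safe;
}

// Example: a stray email parameter never reaches the analytics call.
const event = sanitizeEvent({
  feature_name: "lesson_generator",
  email: "teacher@example.com", // silently dropped by the filter
});
```

An allowlist (rather than a blocklist) fails safe: a newly added parameter is excluded until it has been explicitly reviewed and approved.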

5. Communication and Stakeholder Engagement 

5.1 Transparency with Users 

Cura AI is committed to clear communication with users about: 

  • how AI is used within the platform 

  • the benefits and limitations of AI-generated content 

  • appropriate and responsible use expectations 

  • how user data is handled 

Documentation, in-product guidance, and public policies support informed use. This document is publicly available at curaeducation.com/legal. Our privacy policy, at curaeducation.com/legal/privacy, details our use of personal data. 

5.2 Stakeholder Engagement 

Cura AI engages with a broad range of stakeholders, including: 

  • educators and school leaders 

  • education authorities and regulators 

  • researchers and policy experts 

  • the wider education community 

Feedback and dialogue inform product design, governance decisions, and policy updates. 

6. Accountability and Continual Improvement 

6.1 Accountability Mechanisms 

Cura AI maintains accountability by: 

  • assigning clear internal responsibility for AI systems 

  • providing channels for users to report concerns or issues 

  • investigating and addressing reported harms or failures 

  • adhering to an incident response plan  

  • taking corrective action where AI systems do not meet ethical or educational standards 

 Please email support@curaeducation.com for a copy of our Incident Response Plan. 

6.2 Continual Learning and Improvement 

Cura AI recognises that responsible AI is an ongoing commitment. The organisation: 

  • learns from real-world use, feedback, and outcomes 

  • updates systems and practices as standards evolve 

  • reviews this policy regularly to reflect emerging risks, regulations, and best practice 

7. Alignment with Australian Regulatory and Policy Frameworks 

Cura AI’s approach to ethical and responsible AI is aligned with relevant Australian regulatory, policy, and best‑practice guidance, including: 

7.1 Australian Privacy Act 1988 and Australian Privacy Principles (APPs) 

Cura AI’s data management practices align with the Australian Privacy Act 1988 and the Australian Privacy Principles by: 

  • minimising the collection and use of personal information 

  • clearly communicating how data is handled and protected 

  • ensuring teacher‑created content is not reused for model training 

  • implementing safeguards to protect data from misuse, loss, or unauthorised access 

7.2 Australian Government AI Ethics Principles 

Cura AI’s governance and development practices reflect the Australian Government’s AI Ethics Principles, including: 

  • Human‑centred values: AI is designed to support teachers and learners, not replace professional judgement. 

  • Fairness: Cura actively works to identify and mitigate bias in AI outputs. 

  • Transparency and explainability: Cura communicates how AI is used, its limitations, and when content is AI‑generated. 

  • Reliability and safety: AI systems are tested, monitored, and reviewed before and after deployment. 

  • Accountability: Cura maintains clear responsibility for AI behaviour and remediation of issues. 

7.3 Online Safety and Child‑Safe Design Expectations 

As an education platform, Cura AI recognises its responsibility to support safe and appropriate digital environments for young people. Cura AI: 

  • designs AI systems for school‑appropriate use 

  • avoids autonomous decision‑making that could negatively impact students 

  • collects the minimum amount of personal information required to create an account – name and email address only 

  • maintains human oversight and educator control 

  • monitors outputs to reduce the risk of harmful, inappropriate, or misleading content 

7.4 Australian Consumer Law 

Cura AI aligns with Australian Consumer Law by ensuring that representations about AI capabilities are accurate, not misleading, and supported by reasonable grounds. Claims regarding accuracy, pedagogy, and safety are reflected in internal testing, monitoring, and review processes. 

7.5 Education Sector Expectations 

Cura AI’s practices align with expectations commonly applied across Australian education systems, including: 

  • respect for teacher intellectual property 

  • alignment with curriculum and pedagogical standards 

  • transparency for school leaders and governing bodies 

  • accountability mechanisms appropriate for school procurement and risk management 

  • best-in-class cybersecurity and data protection 

8. Review and Updates 

This policy is reviewed periodically and updated as necessary to reflect changes in technology, regulation, and educational practice. 

Last reviewed: 11/2/26 

Contact: For questions or concerns regarding this policy, please contact Cura AI at support@curaeducation.com  
