Safety & Ethics

Our commitment to Responsible AI is not an afterthought—it's the foundation of Mentora.

Curriculum Grounding

Unlike generic chatbots, Mentora's AI is strictly "grounded" in verified educational texts (such as NCERT, Oxford, Cambridge, and other approved textbooks). It will refuse to answer questions outside this educational scope, preventing off-topic or harmful discussions.

Consented Personas

We do not use deepfakes of unauthorized individuals. Every AI persona on our platform is created with the explicit, written consent of the human educator it represents. Educators retain full rights to their likeness and receive compensation for its use.

Transparency & Auditing

Complete transparency is key. Students and parents can access "Audit Logs" of all AI interactions, and we clearly label AI-generated content so that no user is ever misled about who (or what) they are interacting with.

Data Privacy

Student data is never used to train our foundation models. Your learning patterns remain private and are used solely to personalize your current session. We adhere to strict data minimization principles, collecting only what is needed to deliver the service.

Have specific concerns?

Our safety team reviews all flagged interactions within 24 hours. You can also revoke consent for data usage at any time.