Study Hall AI Constitution

In today’s world, every company needs to live by an AI Constitution that is transparent and public. We have adapted our principles from the EdSAFE AI Alliance.

1. We use proprietary AI/NLU models to generate personalised curriculum and content.

Research shows that dull, dated content contributes to learner disengagement. Learners must feel that content and curriculum are relevant to their world and of the highest quality. We use specialised datasets to produce content that is statistically relevant: for example, when we teach irregular verb conjugation, we teach the verbs most likely to appear in human discourse and reading materials. We also use proprietary data to produce content that is modern, knowledge-rich, and packed with high-utility vocabulary, accelerating reading comprehension skills. Our expert humans-in-the-loop assess our training data and frameworks, mitigating the risk of the low-quality or random content that often comes from outsourced question-bank farms.
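
To make “statistically relevant” concrete, here is a minimal sketch of frequency-based selection; the verbs and counts are invented placeholders, not our actual corpus data:

```python
# A minimal sketch of frequency-based selection: rank irregular verbs by
# how often they appear in a reference corpus and teach the most common
# ones first. The verbs and counts below are invented placeholders.

corpus_frequency = {
    "be": 42_000, "have": 25_000, "do": 18_000, "say": 11_000,
    "go": 9_500, "get": 9_000, "make": 7_800, "know": 6_900,
    "swim": 310, "forsake": 12,  # rarer verbs rank last
}

def top_irregular_verbs(frequencies: dict[str, int], n: int) -> list[str]:
    """Return the n verbs most likely to appear in real discourse."""
    return sorted(frequencies, key=frequencies.get, reverse=True)[:n]

print(top_irregular_verbs(corpus_frequency, 5))
# ['be', 'have', 'do', 'say', 'go']
```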

2. Our AI Study Guide is connected to our own datasets, not ChatGPT.

We recognize the need for safety and accuracy. We are constantly adding new data sources to our AI Study Guide and hope it will get smarter and more useful over time.


3. We invest in AI Literacy curriculum for K-12.

Study Hall has developed AI literacy resources for students on our platform, comprising our AI Reading Comprehension and Vocabulary Curricula. We will help students build AI knowledge and understand that AI is not a replacement for hard work, human thought, and creativity.

4. We are committed to human-in-the-loop design.

Our content generation pipelines include expert humans at critical points to develop and approve our frameworks and outputs. We have created, and continue to create, innovative features on our platform that enable parents, teachers, and tutors to interact safely with students, and that promote ethical, human-centered use within our learning environments.
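
As a simplified illustration of a human-in-the-loop gate (all names here are hypothetical, not our production code), generated content waits in a review queue and only expert-approved items are published:

```python
# Simplified sketch of a human-in-the-loop gate: model output is queued
# for expert review, and only reviewer-approved items can be published.
# All names are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    text: str
    approved: bool = False

@dataclass
class ReviewQueue:
    pending: list = field(default_factory=list)
    published: list = field(default_factory=list)

    def submit(self, item: ContentItem) -> None:
        self.pending.append(item)        # nothing ships straight from the model

    def approve(self, item: ContentItem) -> None:
        item.approved = True             # an expert signs off...
        self.pending.remove(item)
        self.published.append(item)      # ...and only then does it go live

queue = ReviewQueue()
draft = ContentItem("Generated comprehension question ...")
queue.submit(draft)
queue.approve(draft)
```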

5. Our AI is safe from end to end.

We do not connect students directly to LLMs (large language models). Our AI Tutor has access only to a corpus of learning content. It cannot interact in a social way like Siri or ChatGPT; it only knows how to provide hints and answer explanations to support learning.
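
In outline, the design is a hard allow-list: the tutor can only look up hints and answer explanations keyed to known questions, so there is no open-ended chat to go wrong. A minimal sketch (the corpus entries and question IDs are hypothetical):

```python
# Minimal sketch of a closed-corpus tutor: it serves only hints and
# answer explanations from a fixed corpus of learning content. The
# corpus entries and question IDs are hypothetical.

CORPUS = {
    ("q_4217", "hint"): "Re-read the second paragraph: what does 'reluctant' suggest?",
    ("q_4217", "explanation"): "Option B is correct because the narrator hesitates ...",
}

def tutor_reply(question_id: str, request: str) -> str:
    if request not in ("hint", "explanation"):
        return "I can only offer hints and answer explanations."
    return CORPUS.get((question_id, request), "No material for that question yet.")

print(tutor_reply("q_4217", "hint"))        # served from the corpus
print(tutor_reply("q_4217", "let's chat"))  # refused: outside the allow-list
```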

6. We are transparent regarding how we train our AI.

Wherever we use AI, we note it on our platform. We define and explain our AI systems to our users, including how we use their data and how we generate AI curriculum content. Our users must be able to trust that the content and interactions on our platform are safe, evidence-based, and of higher quality than they would be without AI.

7. Our proprietary AI/NLU models check for bias.

We check for bias at every stage, from the creation of frameworks and training data through content generation. We are not a “ChatGPT wrapper”: our pipelines leverage multiple models, from our own proprietary fine-tuned and edge-deployed small language models (SLMs) to a variety of LLMs, to ensure that our content is of the highest quality. Occasionally we will intentionally produce biased content for the explicit purpose of teaching students how to recognize bias and develop values around equity and fairness.
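
One way to picture the distinction between unwanted bias and pedagogical bias (a hedged sketch: the checker is a crude stand-in for human-plus-model review, and every name is hypothetical):

```python
# Sketch of a bias-review gate: every generated item is checked, and
# biased content passes only when it is explicitly labeled as a lesson
# about recognizing bias. The checker is a crude stand-in for the real
# human-plus-model review; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class GeneratedItem:
    text: str
    teaches_bias_recognition: bool = False  # intentional, labeled bias

def flagged_as_biased(text: str) -> bool:
    """Stand-in for expert and model bias review (hypothetical)."""
    return "stereotype" in text.lower()

def passes_bias_gate(item: GeneratedItem) -> bool:
    if not flagged_as_biased(item.text):
        return True                          # clean content passes
    return item.teaches_bias_recognition     # biased content only as a labeled lesson
```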

8. We provide curriculum from diverse authors.

We place a high value on premium, copyright-protected content that reflects a diversity of viewpoints. We have developed proprietary models that enable us to enrich this content with curriculum materials cleverly aligned to exam frameworks. This allows us to offer a wider range of content to support diverse learner needs and perspectives, and to continually build meaningful human engagement.

9. Our platform is evidence-based from end to end.

If you encounter content or questions on our platform that appear to have a weaker evidence base, it is because we must align with an exam board. We do not agree with all of the content and curriculum decisions made by the exam boards and providers. In our Evidence-Based Guide to the 11+ and 13+, we have identified areas that we think should be reconsidered to align with the research.

10. We comply with GDPR and COPPA.

We ask for informed consent regarding how we use a learner’s information. We only use learner data to support the learning process and optimize the Study Plan, and we will not share learner data with third parties without your express consent. Today, we do not share any data. In the future, we may share specific learner data with teachers or tutors, but only if you have given us permission, or we may share anonymized data with content publishers such as Penguin Random House regarding the popularity of their content. We will always disclose how data is being used on our platform.
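
In practice this means data sharing is gated on a per-purpose consent record, defaulting to “do not share” (a minimal sketch; the purpose names are hypothetical):

```python
# Minimal sketch of consent-gated sharing: data leaves the platform only
# when express consent exists for that exact purpose; unknown purposes
# default to "no". Purpose names are hypothetical.

def can_share(consents: dict[str, bool], purpose: str) -> bool:
    return consents.get(purpose, False)   # absent consent means no sharing

learner_consents = {"share_with_tutor": False, "share_anonymized_stats": True}

assert not can_share(learner_consents, "share_with_tutor")
assert can_share(learner_consents, "share_anonymized_stats")
assert not can_share(learner_consents, "share_with_advertisers")  # never consented
```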

Our systems are safe and secure so that we can support the requirements of students, educators, and the world’s most valued publishers, and safeguard the integrity and confidentiality of personal data.

11. We work with educators to deliver ethical AI.

We have internal AI metrics in place that let humans assess our systems with confidence and trigger a review and audit process, giving schools, teachers, and parents comfort and clarity about our AI systems.

Through our AI Literacy curriculum, articles, webinars, and in-person meetings, we help the broader education community develop a deeper understanding of the opportunities and challenges of AI in learning environments, so they can be part of shaping the future. We maintain a significant and growing research and resource library on our platforms for guardians and educators. We are committed to being user-centered, collaborative, and consultative in identifying education problems and formulating solutions.

EdSAFE is a not-for-profit that provides global leadership for the development of safe, equitable, and trusted AI innovation.

EdSAFE AI Alliance