Strand 2: Socio-technical Foundations of Trustworthy AI

Team members set up the Jigsaw Interactive Agent (JIA) Wizard-of-Oz studies and teacher interviews in the NSF iSAT Lab.
Our goal is to improve student-AI teaming, and ultimately student learning and engagement, by designing trustworthy AI. Trustworthy AI refers to AI partners and learning environments built around reliability, security, transparency, safety, and privacy. We will use mixed-methods research to refine key theories and measures for trustworthy AI in K-12 settings, create user-centered designs and design guidelines, and design, deploy, and study trustworthy AI partners. We will also lead work on novel "under-the-hood" environments that help students understand and experiment with core AI concepts, including those related to trustworthiness.
Our guiding research question is: "What socio-technical approaches are needed to appropriately calibrate trust in AI during small-group collaborative learning in K-12 classrooms?" Building appropriate trust to support uptake and effective use of AI tools is particularly critical in collaborative learning, which involves complex knowledge sharing and negotiation among learners and teachers. It also requires psychologically safe learning conditions at multiple levels (individual, small group, whole class) to promote engaged participation by all students. Our work is organized along two themes: Theme 1, Novel Frameworks and Measures to Study Trustworthy Student-AI Teaming, and Theme 2, Under-the-Hood Designs to Calibrate Trustworthy Student-AI Teaming.
The connections between privacy, fairness, safety, and trustworthy AI are underexplored in classrooms. We therefore build on extensive research on "trust in AI" in adult populations to re-envision what it takes to design trustworthy AI in K-12 schools.
We also study novel "under-the-hood" AI learning environments that improve students' understanding of the inner workings of iSAT's AI Partners. We run participatory studies with students and teachers to investigate the cognitive, social, and technical factors that shape trust in interactions among students, teachers, and AI. This work can ultimately give students the agency to accept or contest AI inferences and to adapt AI models to their context, with appropriate safeguards in place.