With artificial intelligence (AI) becoming more present in education globally, it is essential to consider how cultural contexts shape teachers’ perspectives, an understanding that supports more inclusive and sustainable learning systems. This study draws on the African philosophy of Ubuntu to frame our cross-cultural investigation of how children conceptualize AI through the lens of their teachers. We conducted semi-structured interviews with twelve middle school teachers in Nigeria and the United States, asking them to interpret AI-themed essays written by students. These teacher reflections revealed differing educational priorities, cultural values, and infrastructural realities: U.S. educators’ interpretations centered on personal development and future careers, while Nigerian teachers highlighted students’ focus on family, community well-being, and practical societal challenges. Nigerian participants also pointed to the need for improved infrastructure (e.g., electricity, internet), broader AI literacy, and education policies that reflect local needs. Our findings illustrate how culturally grounded worldviews, such as Ubuntu, shape interpretations of AI and its role in society, and suggest that AI education is never culturally neutral. We argue that AI literacy initiatives must be designed not only to teach technical skills but also to support educational sustainability, defined here as inclusive, resilient, and culturally responsive learning systems capable of evolving within diverse contexts. We offer actionable recommendations for the HCI community to co-design AI education tools that foreground collective well-being, foster global digital citizenship, and reduce epistemic exclusion in the development of future technologies.
Artificial intelligence offers powerful methods for audio processing and analysis. Still, complex workflows and the required programming skills often limit access for students and domain experts, such as marine bioacousticians and soundscape ecologists. We present "AI EcoSound Tutor", a code-free and interactive tool that lowers these barriers by allowing users to construct and explore a complete AI pipeline for audio data analysis. Starting from raw recordings, users can choose from various feature extraction techniques (MFCC, OpenL3), apply dimensionality reduction methods (PCA, t-SNE, UMAP), and optionally perform unsupervised clustering (K-Means, GMM, HDBSCAN). The results are displayed in an interactive 2D visualisation where the user can compare multiple plots produced by different techniques, including PCA and t-SNE. Interactive plots enable the selection of points or clusters of interest, allowing exploration of spectrograms within the desired frequency range and playback of the audio clips corresponding to the selected points. An integrated "Help" feature provides explanations of each method (i.e., what it is, how it works, and its practical use in different domains, such as bioacoustics), fostering both conceptual understanding and useful skill acquisition as learning outcomes. For precomputed features or embeddings, the tool also supports training and evaluating a variety of machine learning models, providing visual feedback on the results. By merging accessibility, interactivity, pedagogy, and domain relevance, our application demystifies AI methods for interdisciplinary education and supports research in audio analysis.
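The pipeline pattern this abstract describes (feature extraction, then dimensionality reduction, then optional clustering) can be sketched in a few lines. This is a minimal illustration on synthetic feature vectors, not the tool's implementation: PCA is done via SVD and clustering via a hand-rolled k-means, whereas the actual tool offers MFCC/OpenL3 features and methods such as t-SNE, UMAP, GMM, and HDBSCAN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for per-clip audio features (e.g., averaged MFCC
# vectors); the real tool extracts these from recordings with MFCC or OpenL3.
features = np.vstack([
    rng.normal(loc=0.0, scale=1.0, size=(50, 13)),
    rng.normal(loc=5.0, scale=1.0, size=(50, 13)),
])

# Dimensionality reduction to 2D via PCA (SVD on the centered data);
# these coordinates would feed the interactive 2D scatter plot.
centered = features - features.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords_2d = centered @ vt[:2].T

# Optional unsupervised clustering: a minimal k-means with k=2,
# initialized from randomly chosen data points.
k = 2
centers = coords_2d[rng.choice(len(coords_2d), size=k, replace=False)]
for _ in range(20):
    labels = np.argmin(
        np.linalg.norm(coords_2d[:, None] - centers[None], axis=2), axis=1)
    centers = np.array([coords_2d[labels == j].mean(axis=0) for j in range(k)])

print(coords_2d.shape)
```

In the tool itself, each stage is a user-selected component, so swapping PCA for UMAP or k-means for HDBSCAN changes only one step of the chain.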
Theory of mind refers to the attribution of mental states that humans ascribe to other humans or to objects (such as computer-based systems). Recently, mental state attribution toward Artificial Intelligence (AI) has been investigated as a basic means of capturing people's engagement with AI and their perceptions of its social skills and capabilities. In line with this idea, mental state attribution can be used as an indirect measure of students' understanding of how AI functions, and in particular of the kinds of interactions students may have with AI systems. Too often, people use generative AI systems in ways that exceed how those systems actually function. In our study, children aged 9-12 took part in one-shot unplugged activities concerning data and models. The unplugged activities were not aimed at teaching the theory of Machine Learning; rather, they were designed to raise awareness of some basic mechanisms and to help develop a correct use of tools that are becoming ever more present in everyday life. This paper introduces the activities and reports the results that were achieved.
Research on how the popularization of generative Artificial Intelligence (AI) tools impacts learning environments has led to hesitancy among educators to teach these tools in classrooms, creating two observed disconnects. Generative AI competency is increasingly valued in industry but not in higher education, and students are experimenting with generative AI without formal guidance. The authors argue students across fields must be taught to responsibly and expertly harness the potential of AI tools to ensure job market readiness and positive outcomes. Computer Science trajectories are particularly impacted, and while many consistently top-ranked Computer Science departments in the United States teach the mechanisms and frameworks underlying AI, few have started offering courses on applications for existing generative AI tools. A course was developed at a private research university to teach undergraduate and graduate Computer Science students applications for generative AI tools in software development. Two mixed-method surveys indicated students overwhelmingly found the course valuable and effective. Co-authored by the instructor and one of the graduate students, this paper explores the context, implementation, and impact of the course through data analysis and reflections from both perspectives. It additionally offers recommendations for replication in and beyond Computer Science departments.
Undergraduate research experiences are often limited to small-scale apprenticeship models, leaving many students without accessible entry points into research practice. This paper presents the design and evaluation of a semester-long course for undergraduates to gain research experience in Machine Learning. The course, led by one faculty instructor, enables nearly a hundred students to engage in structured research through a scaffolded replication-and-extension project, where students first replicate a published research project and then implement novel additions. The course integrates instructional modules (e.g., guided paper reading, proposal writing, public presentation) with project milestones (e.g., replication, extension, poster) to support research learning for students with diverse backgrounds. Each component of research is revisited several times, with progressively greater autonomy and correspondingly reduced scaffolding at each iteration. We find that the scaffolding modules help students develop foundational conceptual and procedural understanding of doing research, and the project milestones on replication and extension help them gain execution skills gradually. Students also report developing a researcher mindset and feeling like they understand the research process better. We discuss the principles used to design a scalable research-based course: balancing scaffolding to provide foundational understanding with autonomy for students to “feel like real researchers”.
The rapid advancement and integration of artificial intelligence (AI) into our everyday lives, work, and classrooms have added demands for PK-12 education to ensure that students are given opportunities to obtain AI competencies essential for responsible participation in the AI-driven future. AI literacy encompasses technical knowledge, ethical awareness, and critical evaluation of AI tools, as well as the ability to collaborate with AI systems in creative and productive ways. Research highlights the importance of age-appropriate approaches that address foundational AI knowledge, data literacy, ethics, problem-solving, and creativity, ensuring students can use, analyze, critically evaluate, and design AI solutions for real-world challenges. This requires not only that schools prepare learners with AI competencies, but also that colleges and universities prepare preservice teachers to be AI-ready, able to integrate technological, pedagogical, and content knowledge in classroom practice. This paper highlights the importance of purposefully integrating AI competencies into teacher education programs and offers practical examples of how such integration can be achieved.
Edge artificial intelligence (AI) redistributes AI computation from distant cloud servers to local processors for real-time processing and enhanced privacy. This fundamental shift underscores a critical gap in current university curricula, which predominantly focus on AI fundamentals and algorithms while often neglecting essential AI hardware topics. To address this deficiency, this paper presents Edge AI, a postgraduate curriculum co-designed by two universities in Germany. Guided by the Dagstuhl triangle, the curriculum is designed to comprehensively cover technical, sociocultural, and application perspectives. Courses are developed using the Four-Component Instructional Design model to encourage action-oriented skill development, with a Learning Management System template available to assist in the design of individual courses. Selected practical courses are formulated as self-managed projects and inverted classrooms, enabling students to learn at their own pace with just-in-time guidance. All curriculum materials are accessible online and maintained on the Open Science Framework to enhance collaboration across institutions and promote applicability in diverse domains. Evaluation results from 176 students over two years (2023-2025) demonstrate broad satisfaction across various curriculum components.
Automated Essay Scoring (AES) and Automatic Essay Feedback (AEF) systems aim to reduce the workload of human raters in educational assessment. However, most existing systems prioritize numeric scoring accuracy over feedback quality and are primarily evaluated on pre-secondary school level writing. This paper presents Multi-Agent Argumentation and Grammar Integrated Critiquer (MAGIC), a framework using five specialized agents to evaluate prompt adherence, persuasiveness, organization, vocabulary, and grammar for both holistic scoring and detailed feedback generation. To support evaluation at the college level, we collated a dataset of Graduate Record Examination (GRE) practice essays with expert-evaluated scores and feedback. MAGIC achieves substantial to near-perfect scoring agreement with humans on the GRE data, outperforming baseline LLMs while providing enhanced interpretability through its multi-agent approach. We also compare MAGIC's feedback generation capabilities against ground truth human feedback and baseline models, finding that MAGIC achieves strong feedback quality and naturalness.
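The multi-agent structure described here, with specialized critics whose sub-scores roll up into one holistic score plus per-dimension feedback, can be sketched as follows. This is an illustrative skeleton only: the real MAGIC agents are LLM-based, and the two heuristic "agents", their weights, and the sample essay below are invented for the sketch (the other three dimensions, prompt adherence, persuasiveness, and organization, are omitted).

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class AgentReport:
    dimension: str
    score: float     # sub-score on a 1-6 GRE-style scale (assumed here)
    feedback: str

def grammar_agent(essay: str) -> AgentReport:
    # Stand-in heuristic: a real agent would run an LLM critique of
    # grammar and mechanics rather than counting sentence lengths.
    sentences = [s for s in essay.split(".") if s.strip()]
    avg_len = sum(len(s.split()) for s in sentences) / max(len(sentences), 1)
    score = 6.0 if avg_len >= 8 else 3.0
    return AgentReport("grammar", score,
                       f"Average sentence length: {avg_len:.1f} words.")

def vocabulary_agent(essay: str) -> AgentReport:
    # Stand-in heuristic: lexical diversity as a crude vocabulary signal.
    words = essay.lower().split()
    diversity = len(set(words)) / max(len(words), 1)
    return AgentReport("vocabulary", 6.0 * diversity,
                       f"Type-token ratio: {diversity:.2f}.")

AGENTS: List[Callable[[str], AgentReport]] = [grammar_agent, vocabulary_agent]

def holistic_score(essay: str) -> Tuple[float, List[AgentReport]]:
    # Each agent critiques independently; sub-scores are then aggregated
    # (a plain mean here; MAGIC's actual aggregation is not reproduced).
    reports = [agent(essay) for agent in AGENTS]
    return sum(r.score for r in reports) / len(reports), reports

score, reports = holistic_score(
    "Automated scoring systems should explain their judgments. "
    "Interpretable sub-scores make feedback actionable for students."
)
print(round(score, 2), [r.dimension for r in reports])
```

The interpretability claim in the abstract falls out of this shape: each dimension's score and feedback can be surfaced separately rather than as a single opaque number.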
AI has great potential to transform education and daily life. However, before integrating AI into classrooms, it is crucial to first educate children on what AI is and how to use it responsibly. Effective AI education should build on children's existing perceptions, address misconceptions, and establish a solid foundation for AI literacy. This study explores primary school students’ perception of AI and its relationship to their demographic characteristics and digital skills. A survey was conducted in seven local schools, with 233 students participating. The results indicate that most of them were unfamiliar with AI, and those who attempted to define or depict it often associated it with robots or digital devices. The study also found significant differences in students’ AI perceptions based on factors like gender, grade, and prior digital skills training. These variables were also linked to students’ awareness and understanding of AI. These findings underscore the need for targeted AI educational interventions for primary school students, leveraging their existing perceptions.
As AI technologies grow more influential in shaping modern life, there is an urgent need to make AI literacy accessible beyond academic and technical communities. This paper presents the design, delivery, and evaluation of an online AI course targeting the general public. The course combined asynchronous lectures, interactive live sessions, and reflective assignments. Of the 343 people who registered, 169 completed the program. Using validated instruments administered before and after the course, we measured changes in participants’ attitudes toward AI and their AI literacy. Our findings revealed statistically significant changes in AI literacy, specifically in awareness, usage, and evaluation constructs, as well as a rise in positive attitudes toward AI. High satisfaction scores and qualitative feedback further support the course’s effectiveness. These findings reinforce the importance of inclusive, scalable educational interventions for empowering the public to navigate AI technologies.
Multiple-choice questions (MCQs) are central to instruction and assessment, with distractors revealing student understanding and misconceptions. However, creating high-quality distractors is time-consuming, especially for emerging domains like K–12 AI education. This study explores using generative AI to support distractor creation in a self-paced online module integrating AI and Algebra 1. Five MCQs were selected to compare distractors written by human developers and ChatGPT, using expert reviews and log data from 80 students. Experts rated human distractors higher overall, though AI ones consistently ranked second. Log analysis showed human distractors drew more initial selections, while students who chose AI distractors spent more time engaging without differences in hint use or revisits. Transition patterns across attempts suggest AI-generated distractors can effectively guide students toward correct answers, highlighting their potential for scalable MCQ design.
Recent work has explored the interests that draw learners to Machine Learning (ML), aiming to support their success and broaden participation in the field. However, whether strategies used in textbooks align with these interests is unexplored. We perform a thematic analysis of the introductions from ten openly available ML textbooks to identify their motivational strategies and compare them with student interests documented in prior research. We find that textbooks frequently motivate learners in their introductions by setting learning goals, previewing core ML topics to be covered, showcasing applications and current successes, and, less often, by using learner-centered strategies such as reassurance or curiosity prompts. We group these motivations into three overarching themes: theoretical, practical, and learner-centered. These motivations largely align with student interests, particularly in theory and applications, even in textbooks published before the recent surge of ML and Artificial Intelligence. These findings reveal how textbooks frame ML’s value and offer evidence-based guidance for developing future materials that better engage and support diverse learners.
This study aimed to investigate the impact of a data-driven teaching approach on students’ conceptual understanding of machine learning (ML). To this end, an exemplary intervention was designed and evaluated using a pre- and post-test design and a German-language Concept Inventory on Machine Learning. A total of 83 German ninth-grade students participated in the study. The results revealed significant learning gains related to data handling and the ML workflow. In contrast, conceptions about the inner workings of ML models largely persisted. The effectiveness of the intervention varied depending on context, with greater gains observed in the text generation domain than in facial recognition, highlighting challenges in cross-contextual transfer of understanding. A regression analysis showed no significant influence of students’ pre-instructional conceptions on learning outcomes. These findings demonstrate both the potential and the limitations of data-driven teaching approaches and emphasize the need for more explicit engagement with learners' misconceptions to foster deeper conceptual change.
With the growing use of Large Language Model (LLM)-based Question-Answering (QA) systems in education, it is critical to evaluate their performance across individual pipeline components. In this work, we introduce EduMod-LLM, a modular function-calling LLM pipeline, and present a comprehensive evaluation along three key axes: function calling strategies, retrieval methods, and generative language models. Our framework enables fine-grained analysis by isolating and assessing each component. We benchmark function-calling performance across LLMs, compare our novel structure-aware retrieval method to vector-based and LLM-scoring baselines, and evaluate various LLMs for response synthesis. This modular approach reveals specific failure modes and performance patterns, supporting the development of interpretable and effective educational QA systems. Our findings demonstrate the value of modular function calling in improving system transparency and pedagogical alignment.
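The modular function-calling design this abstract evaluates can be illustrated with a toy pipeline: the LLM emits a structured tool call, a dispatcher routes it to a retrieval component, and the retrieved context is handed to a response synthesizer. Everything below is a hedged sketch, not EduMod-LLM's implementation: the corpus, the keyword retriever, and the template-based "synthesizer" are stand-ins (the paper's actual pipeline compares a structure-aware retriever against vector-based and LLM-scoring baselines, and uses real LLMs at both ends).

```python
import json

# Toy course corpus standing in for real educational documents.
COURSE_DOCS = {
    "syllabus": "Homework is due Fridays. The midterm covers weeks 1-6.",
    "grading": "Grades: 50% homework, 30% midterm, 20% participation.",
}

def retrieve(query: str) -> str:
    # Stand-in keyword retriever; swappable for a vector-based or
    # structure-aware method without touching the rest of the pipeline.
    hits = [doc for doc in COURSE_DOCS.values()
            if any(w in doc.lower() for w in query.lower().split())]
    return " ".join(hits) if hits else "No relevant document found."

def synthesize(question: str, context: str) -> str:
    # Stand-in for the generative LLM that writes the final answer.
    return f"Based on course materials ({context}), here is an answer to: {question}"

TOOLS = {"retrieve": retrieve}

def run_pipeline(llm_tool_call: str, question: str) -> str:
    # The LLM emits a JSON tool call; the dispatcher executes it and
    # passes the retrieved context on to response synthesis. Isolating
    # each stage like this is what enables per-component evaluation.
    call = json.loads(llm_tool_call)
    context = TOOLS[call["name"]](**call["arguments"])
    return synthesize(question, context)

answer = run_pipeline(
    '{"name": "retrieve", "arguments": {"query": "midterm"}}',
    "What does the midterm cover?",
)
print(answer)
```

Because each stage sits behind a narrow interface, a failure can be attributed to the tool-call step, the retriever, or the synthesizer individually, which is the fine-grained analysis the framework is built for.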
As machine learning (ML) becomes integral in more disciplines, introductory courses in the field are attracting increasingly diverse audiences. Design of these introductory ML courses needs to be theoretically sound, but also intuitive, engaging, and accessible to a range of students. Effective teaching of ML must go beyond teaching the theoretical or practical mechanics of algorithms. In this paper, we synthesize effective teaching strategies from 6 experienced ML instructors across 5 institutions to help students define appropriate ML problems, build intuition, develop reasoning skills, and apply models responsibly. We organize these strategies into eight thematic areas: preparing students for success, motivating learners through real-world relevance, integrating ethics and societal impact, avoiding common methodological pitfalls in model evaluation, guiding students on design decisions, adapting effective classroom practices, assessing student learning, and preparing for the future. Each section offers practical examples of classroom-tested activities (or references to existing resources), and in many cases, reflections on our experiences with the strategies. Our aim is for this paper to be a starting point for instructors aiming to improve learning in introductory ML courses. We hope this is a resource-rich guide for teaching ML to diverse learners, grounded in both pedagogy and practice.
This paper considers the development of an AI-based provably-correct mathematical proof tutor. While Large Language Models (LLMs) allow seamless communication in natural language, they are error prone. Theorem provers such as Lean allow for provable-correctness, but these are hard for students to learn. We present a proof-of-concept system (LeanTutor) by combining the complementary strengths of LLMs and theorem provers. LeanTutor is composed of three modules: (i) an autoformalizer/proof-checker, (ii) a next-step generator, and (iii) a natural language feedback generator. To evaluate the system, we introduce PeanoBench, a dataset of 371 Peano Arithmetic proofs in human-written natural language and formal language, derived from the Natural Numbers Game.
As artificial intelligence (AI) becomes increasingly integrated into daily life, higher education must move beyond code-centric instruction to foster holistic AI literacy. We present a novel pedagogical approach that integrates embodied, unplugged activities into a university-level Introduction to AI course. Inspired by the effectiveness of CS Unplugged in K-12 education, our physical, collaborative activities gave students a first-person perspective on AI decision-making. Through interactive games modeling Search Algorithms, Markov Decision Processes, Q-learning, and Hidden Markov Models, students built an intuition for complex AI concepts and more easily transitioned to mathematical formalizations and code implementations. We present four unplugged AI activities, describe how to bridge from unplugged activities to plugged coding tasks, reflect on implementation challenges, and propose refinements. We suggest that unplugged activities can effectively bridge conceptual reasoning and technical skill-building in university-level AI education.
Teaching machine learning (ML) workflows to non-programmers remains a challenge in introductory AI courses. Traditionally, educators have turned to no-code tools such as KNIME to lower barriers. With the rise of generative AI (GenAI), students can now construct ML pipelines through natural language prompts, potentially offering a new “no-code” pathway. In a polytechnic-wide elective in Singapore, students were given the choice of using either KNIME or a GenAI chatbot for practical exercises and their semester project. Survey responses, informal interviews, and classroom observations revealed that both tools supported conceptual learning, but students’ experiences diverged: KNIME provided predictability and structured guidance, while GenAI offered speed and flexibility yet posed setup challenges and required coding familiarity. Students valued having a choice, though this complicated teaching logistics. Our experience suggests that GenAI can complement—but not yet replace—traditional no-code platforms, and that the design of introductory activities is critical for adoption. We share lessons learned for educators considering GenAI as an alternative in workflow-based ML education.
Generative AI has moved from pilots to everyday practice, delivering gains in productivity and accessibility while surfacing present-day risks—hallucinations and reliability failures, bias and unfairness, prompt-injection attacks, and so on. These trends make AI safety education a core competency. In this paper, we survey global AI safety curricula and, in the Japanese context, observe strong policy momentum but relatively few courses that explicitly combine capability instruction with systematic safety evaluation. In response, we developed a 7-week, graduate-level intensive course at a private science and engineering university in Japan, with enrollment open to international exchange students at both the undergraduate and graduate levels. The curriculum progresses from machine-learning foundations to generative models and alignment, with introductory agent topics included to support risk reasoning. Delivery combines weekly lectures, invited talks from academia and industry, structured group discussions, and a final presentation plus a paper-style final project focused on risk evaluation and mitigation planning. An end-of-course survey indicates high perceived learning and a positive experience, and one student project later resulted in a peer-reviewed workshop paper at ICLR 2025.
As artificial intelligence (AI) becomes increasingly integrated into daily life, there is a critical need for developing AI literacy across all educational levels. However, current AI education remains largely confined to college-level computer science classrooms with limited access for K-12 learners. We present the AI Scholars Program, a novel approach that addresses the AI education gap by preparing college computing students to serve as AI education ambassadors in their communities and empowering K-12 teachers to adopt AI education practices in their classrooms. This experience report presents the curriculum and its outcomes after one round of refinement. The program offers structured AI learning through bi-weekly webinars, resources, and collaborative opportunities to form teams and conduct community outreach projects. Our program invited 63 scholars from 30 institutions across the U.S., including 51 college students and 12 K-12 teachers. Their outreach impacted over 230 K-12 learners. We examine program outcomes for participants and projects through pre/post surveys measuring computing attitudes and self-efficacy for teaching AI, scholar interviews, and outreach project reports. We share lessons learned and challenges for designing similar programs, highlighting the importance of involving educators for effective community-engaged AI education. The program creates a sustainable pipeline for college students to develop technical skills and leadership while addressing K-12 AI education shortages. We contribute insights for scaling AI literacy and broadening participation in computing.
Multiagent systems is a key area within artificial intelligence (AI) that explores the behavior of interacting rational agents whose decisions impact one another. Rooted in economic game theory, the field applies the idea of individual incentives to distributed computation and decentralized mechanisms. It examines not only how certain overall economic or computational goals can be accomplished, but also why individual participants will choose to cooperate in reaching those goals. While multiagent systems is grounded in rigorous mathematical theory, current pedagogical approaches often lack opportunities for students to connect abstract theory with real-world human dynamics. This disconnect is particularly pressing as AI increasingly operates in sociotechnical environments, where understanding human behavior and interaction is critical. This paper presents the first exploration of using large-participation activities to facilitate experiential learning that bridges this gap. We report on a day-long resource allocation scenario involving up to 43 participants, designed to simulate multiagent interactions under pressure and with meaningful stakes, where learners can apply their theoretical knowledge to analyze and solve emerging problems. We propose ``megagames'' as a powerful pedagogical tool not only for multiagent systems but for other domains as well.
As Artificial Intelligence (AI) becomes increasingly integrated into daily life, there is a growing need to equip the next generation with the ability to apply, interact with, evaluate, and collaborate with AI systems responsibly. Prior research highlights the urgent demand from K-12 educators to teach students the ethical and effective use of AI for learning. To address this need, we designed a Large Language Model (LLM)-based module to teach prompting literacy. This includes scenario-based deliberate practice activities with direct interaction with intelligent LLM agents, aiming to foster secondary school students' responsible engagement with AI chatbots. We conducted two iterations of classroom deployment in 11 authentic secondary education classrooms, and evaluated 1) the AI-based auto-grader's capability; 2) changes in students' prompting performance and confidence in using AI for learning; and 3) the quality of the learning and assessment materials. Results indicated that the AI-based auto-grader could grade student-written prompts with satisfactory quality. In addition, the instructional materials supported students in improving their prompting skills through practice and led to positive shifts in their perceptions of using AI for learning. Furthermore, data from Study 1 informed assessment revisions in Study 2. Analyses of item difficulty and discrimination in Study 2 showed that True/False and open-ended questions could measure prompting literacy more effectively than multiple-choice questions for our target learners. These promising outcomes highlight the potential for broader deployment and point to the need for larger studies assessing learning effectiveness and assessment design.
In Fall 2023, we introduced a new AI Literacy class called The Essentials of AI for Life and Society (CS 109), a one-credit, seminar course consisting mainly of guest lectures, which was open to the entire university, including students, staff, and faculty. Building on its success and popularity, this paper describes our significant expansion of the course into a full-scale three-credit undergraduate course (CS 309), with an expanded emphasis on student engagement, interactivity, and ethics-related components. To knit together content from the guest lecturers, we implemented a flipped classroom. This model used weekly asynchronous learning modules---integrating pre-recorded expert lectures, collaborative readings, and ethical reflections---which were then unified by the course instructor during a live, interactive discussion session. To maintain the broad accessibility of the material (no prerequisites), the course introduced substantive, non-programming homework assignments in which students applied AI concepts to grounded, real-world problems. This work culminated in a final project analyzing the ethical and societal implications of a chosen AI tool. The redesigned course received overwhelmingly positive student feedback, highlighting its interactivity, coherence, and accessible and engaging assignments. This paper details the course's evolution, its pedagogical structure, and the lessons learned in developing a core AI literacy course. All course materials are freely available for others to use and build upon.
AI technologies have long-term societal implications that impact youth, prompting a need for critical AI literacy for students. While current K-12 AI curricula have increasingly integrated societal impact and ethics concepts, there is a need to center youth’s agency in decision-making around the AI systems that impact them. In this work, we engaged 94 middle and high school art students in a Policy Design learning activity as part of an Art and AI learning workshop. Students worked in groups to create policies around AI's use in art, considering stakeholders like artists, AI companies, and consumers. Findings revealed that students developed nuanced, actionable policies that reflected a deep understanding of AI's impact on the art ecosystem, including issues of copyright, artist compensation, and transparency. The activity empowered students to think critically about AI’s ethical implications for various systems in the AI and art ecosystem and fostered a sense of agency in shaping its future. This work demonstrates the value of integrating policy design into K-12 AI curricula, providing youth with the skills and perspectives to become informed, ethical citizens in an AI-driven world.
Artificial intelligence (AI) education has garnered growing attention from both educational researchers and practitioners in recent years. Among the various emerging approaches, integrating AI education across the curriculum—particularly within core disciplines—offers distinct advantages. This strategy foregrounds the inherently interdisciplinary nature of AI and enables students to investigate its connections with subjects such as mathematics and English language arts (ELA). Furthermore, it holds promise for broadening participation by engaging all students, including those historically underrepresented and underserved in the field of AI. To date, most efforts to integrate AI education have been situated within individual classrooms, often led by a single teacher. While such initiatives provide valuable entry points, they overlook the reality that students’ learning experiences span multiple classrooms and disciplines. As students transition between subjects, they inevitably synthesize ideas—both consciously and unconsciously—from diverse instructional contexts. Recognizing this, we take a whole-school perspective that considers the cumulative and interconnected nature of students’ learning experiences. With this perspective, we explore a coordinated, cross-disciplinary approach in which students engage with AI through a set of curriculum modules spanning mathematics, ELA, and social studies. Each module is discipline-specific yet designed to contribute to a cohesive, cross-disciplinary exploration of AI. These modules are further framed by a self-paced introductory unit, which establishes foundational concepts, and a culminating application-and-reflection unit, which supports integration and transfer of learning. This paper describes the design of the AI Education Across the Curriculum module set and reports preliminary findings from a pilot implementation conducted in Spring 2025. By examining both the pedagogical design and initial findings, we aim to contribute to the growing body of research on scalable, equitable, and interdisciplinary models for AI education.