Tuesday, October 07, 2025

Factors influencing undergraduates’ ethical use of ChatGPT: a reasoned goal pursuit approach - Radu Bogdan Toma & Iraya Yánez-Pérez, Interactive Learning Environments

The widespread use of large language models, such as ChatGPT, has changed the learning behaviours of undergraduate students, raising issues of academic dishonesty. This study investigates the factors that influence undergraduates’ ethical use of ChatGPT, drawing on the recently proposed Theory of Reasoned Goal Pursuit. Through a qualitative elicitation procedure, 26 salient beliefs were identified, representing procurement and approval goals, advantages and disadvantages, sources of social influence, and factors facilitating and hindering ethical use of ChatGPT. A subsequent two-wave quantitative survey provided promising evidence for the theory, revealing positive attitudes and subjective norms as key antecedents of motivation to use ChatGPT ethically.

Linking digital competence, self-efficacy, and digital stress to perceived interactivity in AI-supported learning contexts - Jiaxin Ren, Juncheng Guo & Huanxi Li, Nature

As artificial intelligence technologies become more integrated into educational contexts, understanding how learners perceive and interact with such systems remains an important area of inquiry. This study investigated associations between digital competence and learners’ perceived interactivity with artificial intelligence, considering the potential mediating roles of information retrieval self-efficacy and self-efficacy for human–robot interaction, as well as the potential moderating role of digital stress. Drawing on constructivist learning theory, the technology acceptance model, cognitive load theory, the identical elements theory, and the control–value theory of achievement emotions, a moderated serial mediation model was tested using data from 921 Chinese university students. The results indicated that digital competence was positively associated with perceived interactivity, both directly and indirectly through a sequential pathway involving the two forms of self-efficacy.
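
A moderated serial mediation model of this kind is typically specified as a chain of linked regressions. As a rough sketch in my own notation (not the authors’), with digital competence X, information retrieval self-efficacy M1, self-efficacy for human–robot interaction M2, perceived interactivity Y, and digital stress W as the moderator (the abstract does not say which path W moderates; the last line shows one common placement):

    \begin{aligned}
    M_1 &= a_1 X + e_1 \\
    M_2 &= a_2 X + d_{21} M_1 + e_2 \\
    Y   &= c' X + b_1 M_1 + b_2 M_2 + b_3 W + b_4 (M_2 \times W) + e_3
    \end{aligned}

The serial indirect effect of X on Y through M1 and then M2 is the product a_1 d_{21} b_2, estimated alongside the direct effect c'.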

Monday, October 06, 2025

Sans Safeguards, AI in Education Risks Deepening Inequality - Government Technology

A new UNESCO report cautions that artificial intelligence has the potential to threaten students’ access to quality education, and the organization calls for a focus on people to ensure digital tools enhance learning. While AI and other digital technologies hold enormous potential to improve education, the report warns they also risk eroding human rights and worsening inequality if deployed without deliberate, robust safeguards. Digitalization and AI in education must be anchored in human rights, UNESCO argued in the report, AI and Education: Protecting the Rights of Learners, urging governments and international organizations to focus on people, not technology, so that digital tools enhance rather than endanger the right to education.

https://www.govtech.com/education/k-12/sans-safeguards-ai-in-education-risks-deepening-inequality

What's your college's AI policy? Find out here. - Chase DiBenedetto, Mashable

As part of ChatGPT for Education, OpenAI has announced educational partnerships with Harvard Business School, the University of Pennsylvania's Wharton School, Duke, the University of California, Los Angeles (UCLA), UC San Diego, UC Davis, Indiana University, Arizona State University, Mount Sinai's Icahn School of Medicine, and the entire California State University (CSU) system; the collaboration with CSU schools is OpenAI's largest ChatGPT deployment yet. Dozens more institutions have not yet made their ChatGPT partnerships public, an OpenAI spokesperson told Mashable. Ed Clark, chief information officer for CSU, told Mashable that the decision to partner with OpenAI came from a survey showing that many students, as well as faculty and staff, were already signing up for AI accounts using their university emails.

Sunday, October 05, 2025

What your students are thinking about artificial intelligence - Florencia Moore & Agostina Arbia, Times Higher Education

Students have been quick to adopt and integrate GenAI into their study practices, using it as a virtual assistant to enhance and enrich their learning. At the same time, they sometimes rely on it as a substitute for their own ideas and thinking, since GenAI can complete academic tasks in a matter of seconds. While the first or even second iteration may yield a hallucinated or biased response, with prompt refinement and guidance, it can produce results very close to our expectations almost instantly.

https://www.timeshighereducation.com/campus/what-your-students-are-thinking-about-artificial-intelligence

Saturday, October 04, 2025

Syracuse University adopts Claude for Education - EdScoop

Syracuse University, the private research institution in New York, this week announced that it has formed a partnership with Anthropic, the company behind the popular Claude chatbot, to provide students, faculty and staff with a version of the software designed for use in higher education. “Expanding access to Claude for all members of our community is another step in making Syracuse University the most digitally connected campus in America,” Jeff Rubin, senior vice president and chief digital officer, said in a press release. “By equipping every student, faculty member and staff member with Claude, we’re not only fueling innovation, but also preparing our community to navigate, critique and co-create with AI in real-world contexts.”

Colleges are giving students ChatGPT. Is it safe? - Rebecca Ruiz & Chase DiBenedetto, Mashable

This fall, hundreds of thousands of students will get free access to ChatGPT, thanks to a licensing agreement between their school or university and the chatbot's maker, OpenAI. When the partnerships in higher education became public earlier this year, they were lauded as a way for universities to help their students familiarize themselves with an AI tool that experts say will define their future careers. At California State University (CSU), a system of 23 campuses with 460,000 students, administrators were eager to team up with OpenAI for the 2025-2026 school year. Their deal provides students and faculty access to a variety of OpenAI tools and models, making it the largest deployment of ChatGPT for Education, or ChatGPT Edu, in the country. 

Friday, October 03, 2025

We’re introducing GDPval, a new evaluation that measures model performance on economically valuable, real-world tasks across 44 occupations. - OpenAI

We found that today’s best frontier models are already approaching the quality of work produced by industry experts. To test this, we ran blind evaluations where industry experts compared deliverables from several leading models—GPT‑4o, o4-mini, OpenAI o3, GPT‑5, Claude Opus 4.1, Gemini 2.5 Pro, and Grok 4—against human-produced work. Across 220 tasks in the GDPval gold set, we recorded when model outputs were rated as better than (“wins”) or on par with (“ties”) the deliverables from industry experts, as shown in the bar chart below.... We also see clear progress over time on these tasks. Performance has more than doubled from GPT‑4o (released spring 2024) to GPT‑5 (released summer 2025), following a clear linear trend. In addition, we found that frontier models can complete GDPval tasks roughly 100x faster and 100x cheaper than industry experts.
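
As a rough illustration of the headline metric (a hypothetical sketch, not OpenAI's evaluation code), the win-or-tie rate can be tallied from blinded pairwise judgments like this:

    from collections import Counter

    # Each blinded comparison records which deliverable the expert grader preferred:
    # "model" (a win for the model), "tie", or "human" (a loss). This list is hypothetical.
    judgments = ["model", "tie", "human", "model", "tie"]

    counts = Counter(judgments)
    total = len(judgments)

    # GDPval's headline figure is the share of tasks where the model output was rated
    # better than ("win") or on par with ("tie") the industry expert's deliverable.
    win_or_tie_rate = (counts["model"] + counts["tie"]) / total
    print(f"wins: {counts['model'] / total:.0%}, ties: {counts['tie'] / total:.0%}, "
          f"win-or-tie: {win_or_tie_rate:.0%}")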

The AI Institute for Adult Learning and Online Education - Georgia Tech

The AI Institute for Adult Learning and Online Education (AI-ALOE), led by Georgia Tech and funded by the National Science Foundation, is a multi-institutional research initiative advancing the use of artificial intelligence (AI) to transform adult learning and online education. Through collaborative research and innovation, AI-ALOE develops AI technologies and strategies to enhance teaching, personalize learning, and expand educational opportunities at scale. Since its launch, AI-ALOE has developed seven innovative AI technologies, deployed across more than 360 classes at multiple institutions, reaching over 30,000 students. Recent research news indicated that Jill Watson, our virtual teaching assistant, outperforms ChatGPT in real classrooms. In addition, our collaborative teams have produced about 160 peer-reviewed publications, advancing both research and practice in AI-augmented learning. We invite you to join us for our upcoming virtual research showcase and discover the latest innovations and breakthroughs in AI for education.

Thursday, October 02, 2025

Operationalize AI Accountability: A Leadership Playbook - Kevin Werbach, Knowledge at Wharton

Goal: Deploy AI systems with confidence by ensuring they are fair, transparent, and accountable, minimizing risk and maximizing long-term value.

Nano Tool: As organizations accelerate their use of AI, the pressure is on leaders to ensure these systems are not only effective but also responsible. A misstep can result in regulatory penalties, reputational damage, and loss of trust. Accountability must be designed in from the start, not bolted on after deployment.

Strengthening our Frontier Safety Framework - Four Flynn, Helen King & Anca Dragan, Google DeepMind

AI breakthroughs are transforming our everyday lives, from advancing mathematics, biology and astronomy to realizing the potential of personalized education. As we build increasingly powerful AI models, we’re committed to responsibly developing our technologies and taking an evidence-based approach to staying ahead of emerging risks. Today, we’re publishing the third iteration of our Frontier Safety Framework (FSF) — our most comprehensive approach yet to identifying and mitigating severe risks from advanced AI models. This update builds upon our ongoing collaborations with experts across industry, academia and government. We’ve also incorporated lessons learned from implementing previous versions and evolving best practices in frontier AI safety.

We urgently call for international red lines to prevent unacceptable AI risks. - AI Red Lines

Some advanced AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to take actions and make decisions in the world. Many experts, including those at the forefront of development, warn that if this trend is left unchecked, it will become increasingly difficult to exert meaningful human control in the coming years. Governments must act decisively before the window for meaningful intervention closes. An international agreement on clear and verifiable red lines is necessary for preventing universally unacceptable risks. These red lines should build upon and enforce existing global frameworks and voluntary corporate commitments, ensuring that all advanced AI providers are accountable to shared thresholds. We urge governments to reach an international agreement on red lines for AI that are operational and backed by robust enforcement mechanisms by the end of 2026.

Wednesday, October 01, 2025

AI Hallucinations May Soon Be History - Ray Schroeder, Inside Higher Ed

On Sept. 14, OpenAI researchers published a not-yet-peer-reviewed paper, “Why Language Models Hallucinate,” on arXiv. Gemini 2.5 Flash summarized the findings of the paper: "Systemic Problem: Hallucinations are not simply bugs but a systemic consequence of how AI models are trained and evaluated. Evaluation Incentives: Standard evaluation methods, particularly binary grading systems, reward models for generating an answer, even if it’s incorrect, and punish them for admitting uncertainty. Pressure to Guess: This creates a statistical pressure for large language models (LLMs) to guess rather than say “I don’t know,” as guessing can improve test scores even with the risk of being wrong."
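
The incentive the paper describes can be made concrete with a little expected-value arithmetic (a simplified illustration in my own notation, not the paper's): under binary grading, a model that is only p confident in an answer still maximizes its expected score by guessing, since

    \mathbb{E}[\text{score} \mid \text{guess}] = p \cdot 1 + (1 - p) \cdot 0 = p \;>\; 0 = \mathbb{E}[\text{score} \mid \text{abstain}]

so any nonzero confidence makes guessing the better strategy. Only a rubric that penalizes errors changes the incentive: scoring 1 for a correct answer, 0 for abstaining, and -c for a wrong one gives

    \mathbb{E}[\text{score} \mid \text{guess}] = p(1 + c) - c > 0 \iff p > \frac{c}{1 + c}

so guessing beats abstaining only when the model's confidence clears that threshold.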

AI is changing how Harvard students learn: Professors balance technology with academic integrity - MSN

AI has quickly become ubiquitous at Harvard. According to The Crimson’s 2025 Faculty of Arts and Sciences survey, nearly 80% of instructors reported encountering student work they suspected was AI-generated—a dramatic jump from just two years ago. Despite this, faculty confidence in identifying AI output remains low. Only 14% of respondents felt “very confident” in their ability to distinguish human from AI work. Research from Pennsylvania State University underscores this challenge: humans can correctly detect AI-generated text roughly 53% of the time, only slightly better than flipping a coin.