Tuesday, October 28, 2025

EDUCAUSE Action Plan Looks 10 Years Ahead at GenAI for Education - Abby Sourwine, GovTech

In a new action plan, EDUCAUSE outlines skills, ethics and collaboration strategies to guide effective use and implementation of generative artificial intelligence on college campuses for the next decade. With artificial intelligence advancing at breakneck speed, a new document from the nonprofit EDUCAUSE aims to give colleges and universities proactive steps to meet an uncertain future of teaching and learning with generative AI, rather than simply reacting to technological change. The report, 2025 EDUCAUSE Horizon Action Plan: Building Skills and Literacy for Teaching with Generative AI, is the latest in EDUCAUSE’s long-running Horizon series and outlines how educators and administrators can build skills and literacy to teach with generative AI now and in the future.

‘Urgent need’ for more AI literacy in higher education, report says - Anna McKie, Research Professional News

There is an “urgent need” to improve AI literacy among both staff and students at British universities, according to a report from the Higher Education Policy Institute. The report takes a broad view on how AI is reshaping higher education, including institutional strategy, teaching and assessment, research, and professional services. Wendy Hall, an internet pioneer and director of the Web Science Institute at the University of Southampton, and Giles Carden, chief strategy officer at Southampton, state in the report’s foreword that “simply acknowledging AI’s presence is insufficient”. “Active, informed engagement and a structured approach to skill development are paramount to ensure universities remain relevant and effective,” they say.

Monday, October 27, 2025

Realizing the full potential of AI agents - McKinsey

The story of agentic AI is still unfolding. The majority of CEOs have yet to see bottom-line value from AI agents. But there’s no question that the pace and potential scope of change are breathtaking. While we’re waiting for the technology to fully mature, CEOs can take advantage of this “trough of disillusionment” to understand the implications for how their companies operate, make some essential decisions, and get a jump on their competitors. A year into the agentic AI revolution, one lesson is clear: It takes hard work to do it well. We recently dug into more than 50 agentic AI builds we’ve supported, as well as dozens of others in the marketplace. Six lessons have emerged. Here’s one that may surprise you: Agents aren’t always the answer. 

https://www.mckinsey.com/~/media/mckinsey/email/shortlist/272/2025-10-17b.html

Concern and excitement about AI - Jacob Poushter, Moira Fagan and Manolo Corichi, Pew Research Center

A median of 34% of adults across 25 countries are more concerned than excited about the increased use of artificial intelligence in daily life. A median of 42% are equally concerned and excited, and 16% are more excited than concerned. Older adults, women, people with less education and those who use the internet less often are particularly likely to be more concerned than excited. Roughly half of adults in the U.S., Italy, Australia, Brazil and Greece say they are more concerned than excited about the increased use of AI in daily life. But in 15 of the 25 countries polled, the largest share of people are equally concerned and excited. In no country surveyed is the largest share more excited than concerned about the increasing use of AI in daily life.


Sunday, October 26, 2025

Sharing Resources, Best Practices in AI - Ashley Mowreader, Inside Higher Ed

While generative artificial intelligence tools have proliferated in education and workplace settings, not all tools are free or accessible to students and staff, which can create equity gaps regarding who is able to participate and learn new skills. To address this gap, San Diego State University leaders created an equitable AI alliance in partnership with the University of California, San Diego, and the San Diego Community College District. Together, the institutions work to address affordability and accessibility concerns for AI solutions, as well as share best practices, resources and expertise. In the latest episode of Voices of Student Success, host Ashley Mowreader speaks with James Frazee, San Diego State University’s chief information officer, about the alliance and SDSU’s approach to teaching AI skills to students.

3 Leadership Micro-Credentials Are Redefining The Modern Career Path - Cheryl Robinson, Forbes

Traditional degrees are yielding to skills-based hiring, making micro-credentials crucial for professionals. These short, focused programs, offered by universities and tech platforms, efficiently equip leaders with vital skills like digital fluency and strategic agility. They address the urgent need for reskilling by 2030, enabling continuous learning and proving capabilities without lengthy academic commitments, though standardization is still evolving.

Saturday, October 25, 2025

About 1 in 5 U.S. workers now use AI in their job, up since last year - Luona Lin, Pew Research

As the abilities of artificial intelligence (AI) tools advance rapidly, a growing share of Americans say they are using the technology in their jobs. Today, 21% of U.S. workers say at least some of their work is done with AI, according to a Pew Research Center survey conducted in September. That share is up from 16% roughly a year ago. Most American workers (65%) still say they don’t use AI much or at all in their job.

Friday, October 24, 2025

Quantum record smashed as scientists build mammoth 6,000-qubit system — and it works at room temperature - Tristan Greene, Live Science

Scientists at Caltech have conducted a record-breaking experiment in which they synchronized 6,100 atoms in a quantum array. This research could lead to more robust, fault-tolerant quantum computers. In the experiment, they used paired neutral atoms as the quantum bits (qubits) in a system and held them in a state of “superposition” to conduct quantum computations. To achieve this, the scientists split a laser beam into 12,000 "laser tweezers" which together held the 6,100 qubits. As described in a new study published Sept. 24 in the journal Nature, the scientists not only set a new record for the number of atomic qubits placed in a single array — they also extended the length of "superposition" coherency.  

https://share.google/IWiWTlEMbwROMJUqk 


Google shares a massive list of 1,000+ generative AI use cases - Aditya Tiwari, Neowin

Generative AI will continue to expand its reach whether you want it or not. While the race was unofficially started by OpenAI's ChatGPT, Google has become one of the leaders in the space with its Gemini-branded products and services. For some, generative AI exposure might still be limited to a chatbot or a tool that converts their image into a video. But the tech has many possibilities, ranging from AI pharmacists, vacation planners, coding agents, and autonomous driving, to AI that can browse the web for you. Google has published a lengthy list of 1001 use cases of generative AI across different sectors like Automotive, Financial Services, Manufacturing, Healthcare, Business, Hospitality, Travel, and Media.

Thursday, October 23, 2025

A systematic review on AI-enhanced pedagogies in higher education in the Global South (provisionally accepted) - Nomfundo Gloria Khoza and Freda van der Walt, Frontiers in Education

Artificial intelligence is gaining traction in higher education for its ability to simulate human intelligence and support learning processes. This systematic review investigates how artificial intelligence-enhanced teaching approaches are being applied in higher education institutions across the Global South. The study draws on peer-reviewed literature identified through a structured search of SCOPUS and Web of Science databases, using clearly defined inclusion and exclusion criteria. The findings reveal that most applications focus on improving technical efficiency and administrative functions, while pedagogical integration remains limited. Key barriers include inadequate infrastructure, unequal access to digital tools, limited faculty preparedness, and ethical considerations. 

https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2025.1667884/abstract

Universities need AI sovereignty to protect free thought - Peter Salden, University World News

The question of digital sovereignty is becoming more urgent for universities in the age of artificial intelligence. AI-based applications are not only critical from the perspective of data protection and functional transparency, but also pose a threat to independent thinking. AI literacy, an independent AI infrastructure and a clearly defined strategic framework are fundamental for defending academic freedom. Since the beginning of digitalisation, universities worldwide have been preoccupied with the question of digital sovereignty. This involves issues such as ensuring that IT applications comply with data protection regulations and reducing technical and financial dependencies. However, artificial intelligence is challenging digital sovereignty in new ways that go beyond these classic aspects.

https://www.universityworldnews.com/post.php?story=20251007143251841

Wednesday, October 22, 2025

How to Teach Critical Thinking When AI Does the Thinking - Timothy Cook, Psychology Today

Students who've learned dialogic engagement with AI behave completely differently. They ask follow-up questions during class discussions. They can explain their reasoning when challenged. They challenge each other's arguments using evidence they personally evaluated. They identify limitations in their own conclusions. They want to keep investigating beyond the assignment requirements. The difference is how they used it. This means approaching every AI interaction as a sustained interrogation. Instead of "write an analysis of symbolism in The Great Gatsby," students must generate an AI analysis first, then critique what it missed with their own interpretations of the symbolism: "What assumptions does the AI make in its interpretation, and how could it be wrong?" "What would a 20th-century historian say about this approach?" "Can you see these themes present in The Great Gatsby in your own life?"

https://www.psychologytoday.com/us/blog/the-algorithmic-mind/202510/how-to-teach-critical-thinking-when-ai-does-the-thinking

The end of AI and the future of higher education - James Yoonil Auh, University World News

We now live in what I call the atmosphere of cognition: not the disappearance of AI, but its absorption into the invisible architectures of institutional life. Like Wi-Fi, AI is no longer a tool at the margins but the infrastructure of thought. Algorithms now shape admissions. Predictive models determine financial aid. Recommendation engines curate research. Plagiarism detectors and manuscript filters run silently in the background. To speak of ‘using AI’ in 2025 is like debating whether universities should install electricity. This is not simply a technical evolution. It is civilisational. The printing press multiplied texts, but students still thought alone. The internet digitised knowledge, but students still wrote in their own words. MOOCs disrupted delivery but not learning itself. AI is different. It entwines itself with cognition.


Tuesday, October 21, 2025

‘The Future of Teaching in the AI Age’ Draws Hundreds of Educators to Iona University - Iona University

“At Iona, we are choosing to engage with AI not because it is fashionable, but because it is necessary. Our students, and of course us as educators as well, are entering a world where AI will shape – or has begun to shape already – nearly every profession, from health care to business to education itself,” said Tricia Mulligan, Ph.D., provost and senior vice president for Academic Affairs at Iona. “AI can provide outputs in seconds, but it cannot help us discern whether those outputs are true, fair, ethical, or good. That work remains profoundly human. Our role as educators is to guide students in cultivating this habit of discernment, so that they graduate not just knowing how to use the latest tool, but how to direct it toward the service of humanity.”

AI in higher education: Experts discuss changes to be seen - Stephen Kenney, Phys.org

While students are flocking to AI tools, many universities have failed to keep up with the dramatic shift. A January 2025 report from the Digital Education Council revealed that though 61% of college faculty have used AI in teaching, 88% of them have only incorporated it in minimal to moderate amounts. The University of Cincinnati is at the forefront of shaping AI policy and equipping faculty and students with AI literacy skills. MidwestCon, one of the region's premier tech conferences, recently hosted its 2025 event at UC's 1819 Innovation Hub and discussed the role of AI in higher education. Below are some takeaways from the session titled "Brains, Bots and the Battle for Creativity."

Monday, October 20, 2025

‘It would almost be stupid not to use ChatGPT’ - Hoger Onderwijs Persbureau, Resource Online Netherlands

Amid widespread concern among lecturers about students’ use of AI tools, public philosopher Bas Haring mostly sees opportunities: ‘Outsourcing part of the thinking process to AI shouldn’t be prohibited.’ Bas Haring annoyed a lot of people with a provocative recent experiment. For one of his students last year, the philosopher and professor of public understanding of science delegated his responsibilities as a thesis supervisor to AI. The student discussed her progress not with Haring, but with ChatGPT – and the results were surprisingly positive. While Haring may be excited about the outcome of his experiment, not everyone shares his enthusiasm. Some have called it unethical, irresponsible, unimaginative and even disgusting. It has also been suggested that this could provide populists with an excuse to further slash education budgets.

https://www.resource-online.nl/index.php/2025/10/07/it-would-almost-be-stupid-not-to-use-chatgpt/?lang=en

C-RAC Releases Statement on the Use of Artificial Intelligence (AI) - MSCHE

On October 6, 2025, the Council of Regional Accrediting Commissions (C-RAC) released a Statement on the Use of Artificial Intelligence (AI) to Advance Learning Evaluation and Recognition. C-RAC stated: 
Put simply, the use of AI in learning evaluation does not conflict with accreditation standards, policies, or practices.  Accreditation is never a reason to not implement technology solutions that leverage AI for learning evaluation. Since innovating to advance student success is a central tenet of accreditation expectations, C-RAC supports the exploration and application of transparent, accountable, and unbiased AI solutions within the practice of learning evaluation and credit transfer. 

Sunday, October 19, 2025

OpenAI Wants ChatGPT to Be Your Future Operating System - Lauren Goode and Will Knight, Wired

OpenAI unveiled a new way to embed third-party apps directly into ChatGPT. At the company’s annual developer conference in San Francisco, CEO Sam Altman said the move would “enable a new generation of apps that are adaptive, interactive, and personalized, that you can chat with.” Starting today, some developers will be able to use a preview version of a new apps software development kit (SDK) to build apps within ChatGPT using open standards. The ability to distribute these apps is currently limited to a handful of big partners. Altman showed off several examples of how these apps would ultimately work within ChatGPT. The demo featured Spotify, Canva, and Zillow apps appearing inside a chat and responding to typed commands.

The future of the CLO: Leading in a world of merged work and learning - Bryan Hancock and Heather Stefanski with Lisa Christensen, McKinsey

Rapid technological advancements, shifting market dynamics, and a complex regulatory landscape are reshaping industries at a breakneck pace. To respond, employees will need new skills, mindsets, and ways of operating. Yet, at a time when development is critical to organizational success, some companies are moving in the opposite direction. They are dissolving senior learning roles or pushing learning deeper into human resources, further separating learning leaders from where strategic decisions are being made. It’s a time of both tremendous opportunity and risk. For chief learning officers (CLOs), the stakes have never been higher. Learning leaders have dreamed about delivering personalized development at scale for decades. The best learning functions have made the most of the tools available to become partners in strategy, experts in development, and fluent in technology. With the advent of AI, CLOs and their organizations are ready for the next evolution: a fundamental transformation in the way organizational learning is delivered.

Saturday, October 18, 2025

Beyond learning design: supporting pedagogical innovation in response to AI - Charlotte von Essen, Times Higher Education

To avoid an unwinnable game of catch-up with technology, universities must rethink pedagogical improvement that goes beyond scaling online learning. Now artificial intelligence is accelerating these tensions. Used well, AI could give faculty and designers more space to experiment, reflect and take risks. But if it is simply used to produce more content, faster, universities risk deepening a content conveyor belt mentality. If higher education wants genuine teaching innovation, it must recognise that pedagogy is the real driver of educational quality. This means making pedagogical improvement a strategic goal. Institutions can anchor innovation in values, build authentic communities of practice and invest explicitly in pedagogical leadership.

As we celebrate teachers, AI is redefining the classroom - Hani Shehada, CGTN

Hani Shehada, a special commentator for CGTN, is a regional manager at Education Above All Foundation's Al Fakhoora Program. He works on global interventions that provide access to higher education for young people whose futures have been disrupted by war and injustice. The article reflects the author's opinions and not necessarily the views of CGTN. 
Artificial intelligence is not knocking politely on the door of higher education; it is kicking it wide open. What we thought would take decades is happening in just a few years. In some cases, in months. Universities that stood for centuries as the ultimate gatekeepers of knowledge are now watching that monopoly evaporate in real time. Here lies the irony: the very institutions that claim to prepare us for the future are themselves unprepared for the speed at which the future is arriving. And that forces us to ask a deeper question: What were universities for in the first place?

Friday, October 17, 2025

Universities can turn AI from a threat to an opportunity by teaching critical thinking - Anitia Lubbe, the Conversation

But focusing only on policing misses a bigger issue: whether students are really learning. As an education researcher, I’m interested in the topic of how students learn. My colleagues and I recently explored the role AI could play in learning – if universities tried a new way of assessing students. We found that many traditional forms of assessment in universities remain focused on memorisation and rote learning. These are exactly the tasks that AI performs best. We argue that it’s time to reconsider what students should be learning. This should include the ability to evaluate and analyse AI-created text. That’s a skill which is essential for critical thinking.

Emerging and established readers’ cognitive and metacognitive strategies during online evaluation - Julie A. Corrigan, Elena Forzani - Computers in Human Behavior

•This study describes the range of cognitive and metacognitive strategies used to critically evaluate online information, including qualitatively more complex and varied strategies.

•Cognitive and metacognitive evaluation strategies were compared and contrasted within and between groups of emerging and established evaluators, demonstrating that experts use metacognition more often and in more varied ways, and rely more heavily on corroboration strategies (with self, experts, and others).

•Overall, this study represents a holistic synthesis of the cognitive and metacognitive strategies necessary to evaluate online information credibility.

Thursday, October 16, 2025

AI Boom Drives Surge in Demand for Tech Skills in 2025 - Victor Dey, GovTech

Artificial intelligence is doing more than just automating workflows in 2025: It's dismantling the very idea of education. Once seen as one-time achievements, a bachelor's degree, a professional certificate, or an annual corporate training session are no longer guarantees of relevance in a world where knowledge ages almost as quickly as technology itself. Nearly half of talent development leaders surveyed in LinkedIn's 2025 Workplace Learning Report say they see a skills crisis, with organizations under pressure to equip employees for both present and future roles through dynamic skill-building, particularly in AI and generative AI.

https://www.govtech.com/education/ai-boom-drives-surge-in-demand-for-tech-skills-in-2025

New data show no AI jobs apocalypse—for now - Molly Kinder, et al; Brookings

Every day brings new breakthroughs in artificial intelligence—and new fears about the technology’s potential to trigger mass unemployment. CEOs predict white collar “bloodbaths.” Headlines warn of widespread job losses. With public anxiety growing, it can feel like the economy is already hemorrhaging jobs to AI. But what if, at least for now, the data are telling a different story? To find out, we measured how the labor market has changed since ChatGPT’s launch in November 2022. Specifically, we analyzed the change in the occupational mix across the labor market over the past 33 months. If generative AI technologies such as ChatGPT were automating jobs at scale, we would expect to see fewer workers employed in jobs at greatest risk of automation.  Our data found the opposite.

Wednesday, October 15, 2025

Higher Education AI Transformation 2030 - Ray Schroeder, Inside Higher Ed

We have begun a transformation in higher education that will make us more responsive, efficient and effective at achieving our multiple missions. This will not be easy or without trauma, but it is necessary. To build this new future, we must first rethink the very foundations of our institutions. This is not about adding a few new apps to the learning management system. Rather, it’s about a fundamental re-architecture of how we operate, how we teach and how we define the work of our faculty and students. Key factors include institutional strategy, pedagogy and the future of work. 

From Detection to Development: How Universities Are Ethically Embedding AI for Learning - Isabelle Bambury, Higher Education Policy Institute

The Universities UK Annual Conference always serves as a vital barometer for the higher education sector, and this year, few topics were as prominent as the role of Generative Artificial Intelligence (GenAI). A packed session, Ethical AI in Higher Education for improving learning outcomes: A policy and leadership discussion, provided a refreshing and pragmatic perspective, moving the conversation beyond academic integrity fears and towards genuine educational innovation. Based on early findings from new independent research commissioned by Studiosity, the session’s panellists offered crucial insights and a clear path forward. 

Tuesday, October 14, 2025

Research, curriculum and grading: new data sheds light on how professors are using AI - Lee V. Gaines, NPR

New research from Anthropic — the company behind the AI chatbot Claude — suggests professors around the world are using AI for curriculum development, designing lessons, conducting research, writing grant proposals, managing budgets, grading student work and designing their own interactive learning tools, among other uses. "When we looked into the data late last year, we saw that of all the ways people were using Claude, education made up two out of the top four use cases," says Drew Bent, education lead at Anthropic and one of the researchers who led the study.

William & Mary launches ChatGPT Edu pilot - Laren Weber, William and Mary

The initiative is a collaboration between the School of Computing, Data Sciences & Physics (CDSP), Information Technology, W&M Libraries and the Mason School of Business and is part of a broader push to embed advanced AI into everyday academic life.  The pilot will explore how AI can enhance teaching, research and university operations, while also gathering feedback to guide the responsible and effective use of AI across campus. The results will help shape how W&M leverages AI to advance our world-class academics and research. Additionally, faculty and staff outside of the pilot who are interested in purchasing an Edu license can visit the W&M ChatGPT Edu site for more information.  

https://news.wm.edu/2025/10/01/william-mary-launches-chatgpt-edu-pilot/

Monday, October 13, 2025

UMass Students Showcase AI Tools Built for State Agencies - Government Technology

Massachusetts Gov. Maura Healey invited University of Massachusetts, Amherst students to create AI tools to assist public agencies. The students traveled to Boston last week to share their work.  Government leaders in Massachusetts are looking to university students as partners in delivering AI services to their constituents, and a recent showcase highlighted how these collaborations have simplified user experiences with state technology.


AI Grading: Revolutionizing Feedback in Higher Education - Bioengineer

In the era of educational innovation, artificial intelligence has emerged as a transformative force, redefining the dynamics of assessment in universities. Artificial Intelligence (AI) is not merely a tool used for automating tasks, but an advanced system that can analyze vast amounts of data, predict outcomes, and personalize experiences. The application of AI in grading and providing tailored feedback holds the promise of revolutionizing traditional educational practices, enhancing the learning experience for diverse student populations. The concept of AI-powered grading has evolved significantly, transitioning from rudimentary algorithms that merely evaluate student outputs to sophisticated systems capable of understanding context, nuance, and individual learning trajectories.

Sunday, October 12, 2025

AI Isn't a Curse. It's a Gift for College Learning. - Samuel J. Abrams, Real Clear Education

The Chronicle of Higher Education recently ran a piece that offers a beautiful and evocative snapshot of intellectual life at its best. Its authors, Khafiz Kerimov and Nicholas Bellinson of St. John’s College, describe students gathered around a blackboard in a campus coffee shop, each wielding a different color of chalk as they work through Euclid and Lobachevsky together. This is admirable, and more institutions could learn from St. John’s commitment to dialogue. But from this unique experience, the authors make a sweeping claim: that artificial intelligence - specifically tools like ChatGPT’s “study mode” - will steal our ability to think and work together. They worry that students will abandon collaborative learning for solitary interactions with machines, and that the vibrant hum of campus life will fade into silence. It’s a poetic warning. It’s also profoundly mistaken.

https://www.realcleareducation.com/articles/2025/09/30/ai_isnt_a_curse_its_a_gift_for_college_learning_1138190.html

Is Artificial Intelligence Reshaping Higher Education? - Amy Dittmar, et al; Baker Institute

What does the acceleration of artificial intelligence mean for higher education, from the admissions process to students’ academic and intellectual development? How can students learn to engage responsibly with AI, and what does it mean for the early graduate labor market? Baker Institute fellow and guest host Michael O. Emerson sat down with Rice University Provost Amy Dittmar, University of Houston Associate Provost Jeff Morgan, and Burke Nixon, a senior lecturer in Rice’s writing and communication program, to discuss the advent of AI and its implications for colleges and universities.


Saturday, October 11, 2025

Sora 2 is here - OpenAI

Our latest video generation model is more physically accurate, realistic, and more controllable than prior systems. It also features synchronized dialogue and sound effects. Create with it in the new Sora app. The original Sora model from February 2024 was in many ways the GPT‑1 moment for video - the first time video generation started to seem like it was working, and simple behaviors like object permanence emerged from scaling up pre-training compute. Since then, the Sora team has been focused on training models with more advanced world simulation capabilities. We believe such systems will be critical for training AI models that deeply understand the physical world. A major milestone for this is mastering pre-training and post-training on large-scale video data, which are in their infancy compared to language.

The agentic organization: Contours of the next paradigm for the AI era - Alexander Sukharevsky, et al; McKinsey

AI is bringing the largest organizational paradigm shift since the industrial and digital revolutions (see sidebar, “The evolution of operating models”). This new paradigm unites humans and AI agents—both virtual and physical—to work side by side at scale at near-zero marginal cost. We call it the agentic organization. McKinsey’s experience working with early adopters indicates that AI agents can unlock significant value. Organizations are beginning to deploy virtual AI agents along a spectrum of increasing complexity: from simple tools that augment existing activities to end-to-end workflow automation to entire “AI-first” agentic systems. In parallel, physical AI agents are emerging. Companies are making strides in developing “bodies” for AI, such as smart devices, drones, self-driving vehicles, and early attempts at humanoid robots. These machines allow AI to interface with the physical world.

Friday, October 10, 2025

OpEd: Adapting Higher Ed To New AI World - Alfonzo Berumen, LA Business Journal

Job prospects, even with a degree in hand, aren’t the only aspect of higher education being affected. GenAI has demonstrated the ability to solve mathematical problems and respond to case study questions which are designed to develop critical thinking in students. In a sense, it has replaced the idea of there being only one expert in the room, the professor. This has come at a time when universities are under fire with questions around value for the cost from the student side and diminishing contributions to innovation on the industry side.

ChatGPT Study Mode - Explained By A Learning Coach - Justin Sung, YouTube

The main issue is that the interaction remains very user-led, as Study Mode struggles to dynamically adjust its teaching to a beginner's exact level or pinpoint the root cause of confusion without specific, targeted input from the student [10:10]. The coach found that a passive learner could be stuck in confusion for 30 minutes, whereas an active, metacognitive learner was able to break through the same confusion in just two minutes by asking the right questions [16:15]. Ultimately, the host recommends using Study Mode for targeted study with specific questions, advising that users must embrace active, effortful thinking because effective learning cannot be made easy [19:18]. [summary provided in part by Gemini 2.5 Flash]

Thursday, October 09, 2025

Governor Newsom signs SB 53, advancing California’s world-leading artificial intelligence industry - Governor Gavin Newsom

Governor Newsom today signed into law Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), authored by Senator Scott Wiener (D-San Francisco) – legislation carefully designed to enhance online safety by installing commonsense guardrails on the development of frontier artificial intelligence models, helping build public trust while also continuing to spur innovation in these new technologies. The new law builds on recommendations from California’s first-in-the-nation report, called for by Governor Newsom and published earlier this year — and helps advance California’s position as a national leader in responsible and ethical AI, the world’s fourth-largest economy, the birthplace of new technology, and the top pipeline for tech talent.

Udemy Banks on Artificial Intelligence to Power Online Learning - Bloomberg Businessweek

Udemy is in the midst of pivoting from a leading online learning platform to an AI-powered skills acceleration platform built for individuals and organizations. The company says it has expanded its focus from learning to "reskilling," which includes assessment, role plays, and new learning experiences. The company runs what it calls a "two-sided" business: a marketplace for consumers as well as Udemy Business for enterprises, which is designed to help businesses become more competitive.

Wednesday, October 08, 2025

50 AI agents get their first annual performance review - 6 lessons learned - Joe McKendrick, ZDnet

"Agentic AI efforts that focus on fundamentally reimagining entire workflows -- that is, the steps that involve people, processes, and technology -- are more likely to deliver a positive outcome," according to the review. Start with addressing key user pain points, the co-authors suggest. Organizations with document-intensive workflows, such as insurance companies or legal firms, for example, benefit from having agents handle tedious steps. There will always be a need for human workers to "oversee model accuracy, ensure compliance, use judgment, and handle edge cases," the co-authors emphasized. Redesign work "so that people and agents can collaborate well together. Without that focus, even the most advanced agentic programs risk silent failures, compounding errors, and user rejection." 

The future of work is agentic - McKinsey

Think about your org chart. Now imagine it features both your current colleagues—humans, if you’re like most of us—and AI agents. That’s not science fiction; it’s happening—and it’s happening relatively quickly, according to McKinsey Senior Partner Jorge Amar. In this episode of McKinsey Talks Talent, Jorge joins McKinsey talent leaders Brooke Weddle and Bryan Hancock and Global Editorial Director Lucia Rahilly to talk about what these AI agents are, how they’re being used, and how leaders can prepare now for the workforce of the not-too-distant future.

Tuesday, October 07, 2025

Factors influencing undergraduates’ ethical use of ChatGPT: a reasoned goal pursuit approach - Radu Bogdan Toma & Iraya Yánez-Pérez, Interactive Learning Environments

The widespread use of large language models, such as ChatGPT, has changed the learning behaviours of undergraduate students, raising issues of academic dishonesty. This study investigates the factors that influence the ethical use of ChatGPT among undergraduates using the recently proposed Theory of Reasoned Goal Pursuit. Through a qualitative elicitation procedure, 26 salient beliefs were identified, representing procurement and approval goals, advantages and disadvantages, social influences, and factors facilitating and hindering the ethical use of ChatGPT. A subsequent two-wave quantitative survey provided promising evidence for the theory, revealing that positive attitudes and subjective norms emerged as key antecedents of motivation to use ChatGPT ethically.

Linking digital competence, self-efficacy, and digital stress to perceived interactivity in AI-supported learning contexts - Jiaxin Ren, Juncheng Guo & Huanxi Li, Nature

As artificial intelligence technologies become more integrated into educational contexts, understanding how learners perceive and interact with such systems remains an important area of inquiry. This study investigated associations between digital competence and learners’ perceived interactivity with artificial intelligence, considering the potential mediating roles of information retrieval self-efficacy and self-efficacy for human–robot interaction, as well as the potential moderating role of digital stress. Drawing on constructivist learning theory, the technology acceptance model, cognitive load theory, the identical elements theory, and the control–value theory of achievement emotions, a moderated serial mediation model was tested using data from 921 Chinese university students. The results indicated that digital competence was positively associated with perceived interactivity, both directly and indirectly through a sequential pathway involving the two forms of self-efficacy.

Monday, October 06, 2025

Sans Safeguards, AI in Education Risks Deepening Inequality - Government Technology

A new UNESCO report cautions that artificial intelligence has the potential to threaten students’ access to quality education. The organization calls for a focus on people, to ensure digital tools enhance education. While AI and other digital technology hold enormous potential to improve education, a new UNESCO report warns they also risk eroding human rights and worsening inequality if deployed without deliberate, robust safeguards. Digitalization and AI in education must be anchored in human rights, UNESCO argued in the report, AI and Education: Protecting the Rights of Learners, and the organization urged governments and international organizations to focus on people, not technology, to ensure digital tools enhance rather than endanger the right to education.

https://www.govtech.com/education/k-12/sans-safeguards-ai-in-education-risks-deepening-inequality

What's your college's AI policy? Find out here. - Chase DiBenedetto, Mashable

As part of ChatGPT for Education, OpenAI has announced educational partnerships with Harvard Business School, the University of Pennsylvania's Wharton School, Duke, University of California, Los Angeles (UCLA), UC San Diego, UC Davis, Indiana University, Arizona State University, Mount Sinai's Icahn School of Medicine, and the entire California State University (CSU) System — OpenAI's collaboration with CSU schools is the largest ChatGPT deployment yet. But there are dozens more, an OpenAI spokesperson told Mashable, that haven't made their ChatGPT partnerships public. Ed Clark, chief information officer for CSU, told Mashable that the decision to partner with OpenAI came from a survey of students that showed many were already signing up for AI accounts using their student emails — faculty and staff were too.

Sunday, October 05, 2025

What your students are thinking about artificial intelligence - Florencia Moore & Agostina Arbia, Times Higher Education

Students have been quick to adopt and integrate GenAI into their study practices, using it as a virtual assistant to enhance and enrich their learning. At the same time, they sometimes rely on it as a substitute for their own ideas and thinking, since GenAI can complete academic tasks in a matter of seconds. While the first or even second iteration may yield a hallucinated or biased response, with prompt refinement and guidance, it can produce results very close to our expectations almost instantly.

https://www.timeshighereducation.com/campus/what-your-students-are-thinking-about-artificial-intelligence

Saturday, October 04, 2025

Syracuse University adopts Claude for Education - EdScoop

Syracuse University, the private research institution in New York, this week announced that it’s formed a partnership with Anthropic, the company behind the popular Claude chatbot, to provide students, faculty and staff with a version of the software designed for use in higher education. “Expanding access to Claude for all members of our community is another step in making Syracuse University the most digitally connected campus in America,” Jeff Rubin, senior vice president and chief digital officer, said in a press release. “By equipping every student, faculty member and staff member with Claude, we’re not only fueling innovation, but also preparing our community to navigate, critique and co-create with AI in real-world contexts.”

Colleges are giving students ChatGPT. Is it safe? - Rebecca Ruiz and Chase DiBenedetto - Mashable

This fall, hundreds of thousands of students will get free access to ChatGPT, thanks to a licensing agreement between their school or university and the chatbot's maker, OpenAI. When the partnerships in higher education became public earlier this year, they were lauded as a way for universities to help their students familiarize themselves with an AI tool that experts say will define their future careers. At California State University (CSU), a system of 23 campuses with 460,000 students, administrators were eager to team up with OpenAI for the 2025-2026 school year. Their deal provides students and faculty access to a variety of OpenAI tools and models, making it the largest deployment of ChatGPT for Education, or ChatGPT Edu, in the country. 

Friday, October 03, 2025

We’re introducing GDPval, a new evaluation that measures model performance on economically valuable, real-world tasks across 44 occupations. - OpenAI

We found that today’s best frontier models are already approaching the quality of work produced by industry experts. To test this, we ran blind evaluations where industry experts compared deliverables from several leading models—GPT‑4o, o4-mini, OpenAI o3, GPT‑5, Claude Opus 4.1, Gemini 2.5 Pro, and Grok 4—against human-produced work. Across 220 tasks in the GDPval gold set, we recorded when model outputs were rated as better than (“wins”) or on par with (“ties”) the deliverables from industry experts, as shown in the bar chart below.... We also see clear progress over time on these tasks. Performance has more than doubled from GPT‑4o (released spring 2024) to GPT‑5 (released summer 2025), following a clear linear trend. In addition, we found that frontier models can complete GDPval tasks roughly 100x faster and 100x cheaper than industry experts.

The AI Institute for Adult Learning and Online Education - Georgia Tech

The AI Institute for Adult Learning and Online Education (AI-ALOE), led by Georgia Tech and funded by the National Science Foundation, is a multi-institutional research initiative advancing the use of artificial intelligence (AI) to transform adult learning and online education. Through collaborative research and innovation, AI-ALOE develops AI technologies and strategies to enhance teaching, personalize learning, and expand educational opportunities at scale. Since its launch, AI-ALOE has developed seven innovative AI technologies, deployed across more than 360 classes at multiple institutions, reaching over 30,000 students. Recent research news indicated that Jill Watson, our virtual teaching assistant, outperforms ChatGPT in real classrooms. In addition, our collaborative teams have produced about 160 peer-reviewed publications, advancing both research and practice in AI-augmented learning. We invite you to join us for our upcoming virtual research showcase and discover the latest innovations and breakthroughs in AI for education.

Thursday, October 02, 2025

Operationalize AI Accountability: A Leadership Playbook - Kevin Werbach, Knowledge at Wharton

Goal
Deploy AI systems with confidence by ensuring they are fair, transparent, and accountable — minimizing risk and maximizing long-term value.
Nano Tool
As organizations accelerate their use of AI, the pressure is on leaders to ensure these systems are not only effective but also responsible. A misstep can result in regulatory penalties, reputational damage, and loss of trust. Accountability must be designed in from the start — not bolted on after deployment.

Strengthening our Frontier Safety Framework - Four Flynn, Helen King, Anca Dragan, Google DeepMind

AI breakthroughs are transforming our everyday lives, from advancing mathematics, biology and astronomy to realizing the potential of personalized education. As we build increasingly powerful AI models, we’re committed to responsibly developing our technologies and taking an evidence-based approach to staying ahead of emerging risks. Today, we’re publishing the third iteration of our Frontier Safety Framework (FSF) — our most comprehensive approach yet to identifying and mitigating severe risks from advanced AI models. This update builds upon our ongoing collaborations with experts across industry, academia and government. We’ve also incorporated lessons learned from implementing previous versions and evolving best practices in frontier AI safety.

We urgently call for international red lines to prevent unacceptable AI risks. - AI Red Lines

Some advanced AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to take actions and make decisions in the world. Left unchecked, many experts, including those at the forefront of development, warn that it will become increasingly difficult to exert meaningful human control in the coming years.  Governments must act decisively before the window for meaningful intervention closes. An international agreement on clear and verifiable red lines is necessary for preventing universally unacceptable risks. These red lines should build upon and enforce existing global frameworks and voluntary corporate commitments, ensuring that all advanced AI providers are accountable to shared thresholds. We urge governments to reach an international agreement on red lines for AI — ensuring they are operational, with robust enforcement mechanisms — by the end of 2026. 

Wednesday, October 01, 2025

AI Hallucinations May Soon Be History - Ray Schroeder, Inside Higher Ed

On Sept. 14, OpenAI researchers published a not-yet-peer-reviewed paper, “Why Language Models Hallucinate,” on arXiv. Gemini 2.5 Flash summarized the findings of the paper: "Systemic Problem: Hallucinations are not simply bugs but a systemic consequence of how AI models are trained and evaluated. Evaluation Incentives: Standard evaluation methods, particularly binary grading systems, reward models for generating an answer, even if it’s incorrect, and punish them for admitting uncertainty. Pressure to Guess: This creates a statistical pressure for large language models (LLMs) to guess rather than say “I don’t know,” as guessing can improve test scores even with the risk of being wrong."
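The incentive the paper describes can be illustrated with a toy expected-value calculation (a hypothetical sketch, not code from the paper): under binary grading, a wrong answer and an abstention both score zero, so any guess with a nonzero chance of being correct has a higher expected score than saying “I don’t know.”

```python
def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected score under a binary (right/wrong) grading scheme:
    1 point if correct, 0 if wrong, 0 if the model abstains."""
    return 0.0 if abstain else p_correct

# Even a long-shot guess (10% chance of being right) strictly beats
# admitting uncertainty, which is the statistical pressure to guess
# that the paper identifies.
print(expected_score(0.1, abstain=False))  # 0.1
print(expected_score(0.1, abstain=True))   # 0.0
```

A grading scheme that penalized wrong answers more than abstentions would remove this pressure, which is one direction the paper's framing suggests for evaluation design.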

AI is changing how Harvard students learn: Professors balance technology with academic integrity - MSN

AI has quickly become ubiquitous at Harvard. According to The Crimson’s 2025 Faculty of Arts and Sciences survey, nearly 80% of instructors reported encountering student work they suspected was AI-generated—a dramatic jump from just two years ago. Despite this, faculty confidence in identifying AI output remains low. Only 14% of respondents felt “very confident” in their ability to distinguish human from AI work. Research from Pennsylvania State University underscores this challenge: humans can correctly detect AI-generated text roughly 53% of the time, only slightly better than flipping a coin.