Saturday, April 11, 2026

AI Is Routine for College Students, Despite Campus Limits - Stephanie Marken, Gallup News

New research from the Lumina Foundation-Gallup 2026 State of Higher Education study finds that more than half (57%) of U.S. college students are using artificial intelligence in their coursework at least weekly, including about one in five who say they use it daily. Male students report more frequent AI use than female students, particularly in the case of daily use (27% vs. 17%). By major, students in business, technology and engineering programs are the most frequent AI users compared with those in other fields of study. Rates of AI use are similar among students pursuing associate and bachelor’s degrees.

https://news.gallup.com/poll/704090/routine-college-students-despite-campus-limits.aspx

AI in Higher Education Is Moving From Experimentation to Strategic Integration. Here's What the 2025 Data Shows - Joe Sullistio, Ellucian

When the question is "Are people using AI?" the answers are mostly anecdotal. When the question becomes "How do we integrate AI responsibly and measurably across the institution?" you need strategy, investment discipline, governance, and enablement. Not just tools. Ellucian's new report, Artificial Intelligence in Higher Education: From Widespread Adoption to Strategic Integration, captures this transition in detail, and lays out what institutions need to do next. This is the third consecutive year of the Ellucian AI Survey for Higher Education, and the 2025 State of AI in Higher Education findings mark a clear turning point.

Personal AI use is nearing saturation: 91% of administrators report using AI, up from 84% last year, a relatively modest increase that signals individual adoption is plateauing.
Institution-wide adoption surged: from 49% in 2024 to 66% in 2025, a 17-point jump that signals AI has moved beyond experimentation and into mainstream operational and strategic integration.
Momentum is expected to continue: 88% of respondents expect institutional AI use to keep rising over the next two years.

Friday, April 10, 2026

‘AI-shaped economy’ now has students rethinking their majors - Matt Zalaznick, University Business

Workforce disruptions caused by generative AI have some students rethinking their majors, with one analysis characterizing higher education’s relationship with AI as “both promising and complex.”

Here are the stats:
More than 40% of bachelor’s degree students and more than half of those seeking associate’s degrees said generative AI has caused them to consider changing their major or field of study, according to a new Gallup poll.
About one in seven students surveyed at both levels said “preparing for AI and other technological advances is an important reason they enrolled.”
AI is not yet the “primary driver” of academic and enrollment decisions, Gallup’s authors contend. They urge higher ed leaders to ensure students have opportunities to learn the AI skills needed to succeed in a changing workforce.

Emotion Concepts and their Function in a Large Language Model - Nicholas Sofroniew, et al; Transformer Circuits

Large language models (LLMs) sometimes appear to exhibit emotional reactions. We investigate why this is the case in Claude Sonnet 4.5 and explore implications for alignment-relevant behavior. We find internal representations of emotion concepts, which encode the broad concept of a particular emotion and generalize across contexts and behaviors it might be linked to. These representations track the operative emotion concept at a given token position in a conversation, activating in accordance with that emotion’s relevance to processing the present context and predicting upcoming text. Our key finding is that these representations causally influence the LLM’s outputs, including Claude’s preferences and its rate of exhibiting misaligned behaviors such as reward hacking, blackmail, and sycophancy. We refer to this phenomenon as the LLM exhibiting functional emotions: patterns of expression and behavior modeled after humans under the influence of an emotion, which are mediated by underlying abstract representations of emotion concepts. Functional emotions may work quite differently from human emotions, and do not imply that LLMs have any subjective experience of emotions, but appear to be important for understanding the model’s behavior.


Thursday, April 09, 2026

A dual-framework analysis of artificial intelligence adoption in cross-cultural higher education - Zouhaier Slimi & Beatriz Villarejo Carballido, Nature

The integration of artificial intelligence in higher education is increasingly critical as institutions face both opportunities and ethical challenges in its adoption. This study introduces a dual-framework model that combines the Technology Acceptance Model with an AI Ethics Framework, highlighting "Ethical Readiness" as essential for successful AI implementation, and identifies key drivers and barriers to adoption across diverse cultural contexts.


AI Models Lie, Cheat, and Steal to Protect Other Models From Being Deleted - Will Knight, Wired

A new study from researchers at UC Berkeley and UC Santa Cruz suggests models will disobey human commands to protect their own kind. I've had these assertions presented to me as evidence of (take your pick): AI is already conscious; AI is evil and will destroy us; AI is capable of lying to protect itself; and other highly anthropomorphized interpretations. My first thought was, 'Has this behavior been independently verified?' The Gemini 3 quote is highly suspicious; it sounds too much like a segment from a cautionary science fiction tale. LLMs and other flavors of AI are not designed with motivation beyond optimizing their performance in response to human queries and instructions. The behavioral responses of biological animals with brains, by contrast, were optimized via natural selection to favor self-preservation.


Wednesday, April 08, 2026

Building Better, Faster: How JKO is Integrating AI to Enhance Online Learning - JKO News

“The integration of AI is not just about speeding up development but also about fundamentally changing how training is built,” said Tim Brandon, JKO program director. “The goal is to deliver a more agile and advanced learning experience that is more personalized, less linear and in line with the technology our training audience is already accustomed to.” AI is also being used to monitor real-world events and identify which of the thousands of courses on the platform need updates. The system flags outdated courses, which allows for rapid revisions. As part of its AI adoption, JKO is working with the DDJTE AI Working Group and the Joint Staff J-7 to establish the platform as a central hub for AI-related training and education resources for the Joint Force.

Meet Claude Mythos: Anthropic’s Powerful Successor to Opus - Julian Horsey, Geeky Gadgets

Claude Mythos, Anthropic’s latest AI model, introduces significant advancements in software development, academic reasoning and cybersecurity, setting a new benchmark for AI performance and functionality. The model excels in identifying software vulnerabilities and solving complex problems, but its dual-use nature raises ethical concerns about potential misuse for malicious purposes. High computational demands and operational costs pose challenges to accessibility, prompting Anthropic to explore techniques like model distillation to improve efficiency and scalability.
Primarily targeting enterprise-level users, Claude Mythos is positioned to transform industries such as finance, healthcare and cybersecurity, while raising questions about accessibility for smaller organizations.

Tuesday, April 07, 2026

Prompt engineering competence, knowledge management, and technology fit as drivers of educational sustainability through generative AI - Omer Gibreel, Kasım Karataş & Ibrahim Arpaci; Nature

This study investigated the impact of prompt engineering competence, knowledge management, and task–individual–technology fit on the continued intention to use artificial intelligence (AI), as well as their implications for educational sustainability. Data from 437 undergraduate students who use AI tools for academic purposes were analyzed using PLS-SEM. The results indicated that prompt engineering competence significantly predicts knowledge acquisition and knowledge application, which, in turn, significantly predict both task-technology fit (TTF) and individual-technology fit (ITF). Furthermore, TTF and ITF were found to have significant impacts on the continued intention, which, in turn, positively predicts educational sustainability through generative AI. The results of the multi-group analysis revealed that the hypotheses were supported in both the female and male samples and that the model maintained a consistent and robust structure across genders.


CSU made a $17-million AI bet. A year later, students and faculty give it a mixed grade - Jaweed Kaleem, LA Times

California State University’s controversial $17-million deal to provide ChatGPT to every one of its campuses has been met with mixed results, with wide but uneven use across the system, high distrust of AI-generated content and broad fears that the technology could imperil job security — even as people say they want more training in systems they believe will be “essential” to their professions.

CSU’s big bet on AI shows mixed results, with a survey revealing widespread use but significant concerns over its drawbacks. Faculty remain deeply divided on AI’s educational value, while staff and students are more enthusiastic. An 18-month ChatGPT contract expires in July. CSU has not decided whether to renew, but intends to continue embracing AI.

Monday, April 06, 2026

BU Wheelock Forum Explores AI in Education - Boston University

What do teaching and learning mean in an AI world? This question was at the center of the 2026 BU Wheelock Forum AI and the Future of Education, hosted by the Boston University Wheelock College of Education & Human Development on March 25. Approximately 250 people—including educators, administrators, and scholars—attended the event, which featured a keynote from Aaron Rasmussen (COM’06, CAS’06), cofounder of online education platforms Outlier.ai and MasterClass; a faculty panel discussion moderated by Wheelock Dean Penny Bishop; and a modern dance performance using Random Actor, a technology that harnesses AI to extend the visual expression of human movement, developed by James Grady, a College of Fine Arts assistant professor of art and graphic design, and Clay Hopper, a CFA senior lecturer in directing.


Cal State’s new framework promises jobs or grad school path for all students - Cate Rix, EdSource

Over the past decade, California State University campuses pursued an ambitious plan to encourage students to complete their degrees faster and boost overall graduation rates. Now the system is making a bold promise: Every student will graduate with a clear path to a career or graduate school. And it is planning changes to make the system’s degree programs more career-focused, possibly by phasing out some majors. CSU leaders say academic and career advising will be closely connected as a new Student Success Framework rolls out. They also say that less popular majors may be phased out, offered only on some campuses or merged into other programs.

https://edsource.org/2026/csus-new-framework-promises-jobs-or-grad-school-path-for-all-students/754804

Sunday, April 05, 2026

Where can AI be used? Insights from a deep ontology of work activities - Alice Cai, et al; arXiv


Here we provide a comprehensive ontology of work activities that can help systematically analyze and predict uses of AI. To do this, we disaggregate and then substantially reorganize the approximately 20K activities in the US Department of Labor's widely used O*NET occupational database. Next, we use this framework to classify descriptions of 13,275 AI software applications and a worldwide tally of 20.8 million robotic systems. Finally, we use the data about both these kinds of AI to generate graphical displays of how the estimated units and market values of all worldwide AI systems used today are distributed across the work activities that these systems help perform. We find a highly uneven distribution of AI market value across activities, with the top 1.6% of activities accounting for over 60% of AI market value. Most of the market value is used in information-based activities (72%), especially creating information (36%), and only 12% is used in physical activities. Interactive activities include both information-based and physical activities and account for 48% of AI market value, much of which (26%) involves transferring information. 

Courageous conversations: How to lead with heart - Kurt Strovink, Meagan Hill, and Mike Carson; McKinsey

Leadership, at its best, is a matter of the heart. Courage, which underpins every act of leadership, is also a matter of the heart; it comes from the French word cœur—heart. As Winston Churchill observed, “Courage is rightly esteemed the first of human qualities, because . . . it is the quality which guarantees all others.” The point is simple: Courage is both moral and practical. It is not sentiment or bravado. It is the willingness to face what is real, invite challenge, and repair trust. The story of every great leader—from business to the arts, from education to government to sport—is written in these moments of choice: Do I accept the comfortable, or do I ask for and embrace the truth? Do I protect myself, or do I serve the enterprise?