Monday, April 13, 2026
The Connected Campus: A Secure, AI-Ready Digital Ecosystem for Higher Education - Alexander Slagg, EdTech
How AI may reshape career pathways to better jobs - Justin Heck, Mark Muro, Shriya Methkupally, and Joseph Siegmund, Brookings
Amid much concern about the future of college graduates in the era of AI, workers without four-year degrees face major challenges as well: There are over 15 million of these workers in jobs that are highly exposed to AI. Of those, nearly 11 million are employed in “Gateway” occupations—jobs that have historically enabled workers to build skills and supported transitions into higher-wage roles. AI is poised to erode the pathways workers use to transition from low- to higher-wage work. Almost half of the pathways between Gateway jobs and higher-paying “Destination” jobs are highly exposed to AI. Geographically, the highest rates of AI-related pathway exposure are in administrative, clerical, and customer service Gateway occupations in the Northeast and Sun Belt. In order to craft strategies that effectively meet the moment, the field must grapple with a set of urgent questions about AI’s impact on worker mobility.
https://www.brookings.edu/articles/how-ai-may-reshape-career-pathways-to-better-jobs/
Sunday, April 12, 2026
‘AI-shaped economy’ now has students rethinking their majors - Matt Zalaznick, University Business
Workforce disruptions caused by generative AI have some students rethinking their majors, with one analysis characterizing higher education’s relationship with AI as “both promising and complex.”
SDSU's Massive AI Study Finds Frequent Use but Skepticism - Jaweed Kaleem, Los Angeles Times
A poll of 94,000 students, faculty and staff across 22 CSU campuses found that nearly every respondent had used AI at some point, with personal use more common than educational use, though students were still wary of trusting it and faculty reported negative effects. The survey, conducted by San Diego State University researchers last fall, shows CSU grappling with how AI is affecting assignments, classroom instruction, competition for jobs and academic integrity.
Saturday, April 11, 2026
AI Is Routine for College Students, Despite Campus Limits - Stephanie Marken, Gallup News
New research from the Lumina Foundation-Gallup 2026 State of Higher Education study finds that more than half (57%) of U.S. college students are using artificial intelligence in their coursework at least weekly, including about one in five who say they use it daily. Male students report more frequent AI use than female students, particularly in the case of daily use (27% vs. 17%). By major, students in business, technology and engineering programs are the most frequent AI users compared with those in other fields of study. Rates of AI use are similar among students pursuing associate and bachelor’s degrees.
https://news.gallup.com/poll/704090/routine-college-students-despite-campus-limits.aspx
AI in Higher Education Is Moving From Experimentation to Strategic Integration. Here's What the 2025 Data Shows - Joe Sallustio, Ellucian
When the question is "Are people using AI?" the answers are mostly anecdotal. When the question becomes "How do we integrate AI responsibly and measurably across the institution?" you need strategy, investment discipline, governance, and enablement. Not just tools. Ellucian's new report, Artificial Intelligence in Higher Education: From Widespread Adoption to Strategic Integration, captures this transition in detail and lays out what institutions need to do next. This is the third consecutive year of the Ellucian AI Survey for Higher Education, and the 2025 State of AI in Higher Education findings mark a clear turning point.
Friday, April 10, 2026
Emotion Concepts and their Function in a Large Language Model - Nicholas Sofroniew et al.; Transformer Circuits
Large language models (LLMs) sometimes appear to exhibit emotional reactions. We investigate why this is the case in Claude Sonnet 4.5 and explore implications for alignment-relevant behavior. We find internal representations of emotion concepts, which encode the broad concept of a particular emotion and generalize across contexts and behaviors it might be linked to. These representations track the operative emotion concept at a given token position in a conversation, activating in accordance with that emotion’s relevance to processing the present context and predicting upcoming text. Our key finding is that these representations causally influence the LLM’s outputs, including Claude’s preferences and its rate of exhibiting misaligned behaviors such as reward hacking, blackmail, and sycophancy. We refer to this phenomenon as the LLM exhibiting functional emotions: patterns of expression and behavior modeled after humans under the influence of an emotion, which are mediated by underlying abstract representations of emotion concepts. Functional emotions may work quite differently from human emotions, and do not imply that LLMs have any subjective experience of emotions, but appear to be important for understanding the model’s behavior.
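The causal claim in that abstract rests on the broader technique of activation steering: isolate a direction in a model's hidden-state space associated with a concept, then add it back in during generation and see whether outputs shift. The sketch below illustrates that general technique on GPT-2; it is not Anthropic's actual method, and the model, layer, contrast prompts, and steering scale are all assumptions chosen for illustration.

# Minimal activation-steering sketch, assuming GPT-2 and an arbitrary
# mid-network layer. A contrast pair of prompts stands in for the paper's
# learned emotion-concept representations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

LAYER = 6  # arbitrary choice; the paper locates real features instead

def mean_hidden(text):
    # Mean hidden state over all tokens of a prompt.
    # hidden_states[LAYER + 1] is the output of block LAYER
    # (index 0 is the embedding layer).
    ids = tok(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    return out.hidden_states[LAYER + 1][0].mean(dim=0)

# Crude stand-in for an "emotion concept" direction: difference of means
# over a contrast pair.
steer = mean_hidden("I am furious about all of this.") \
      - mean_hidden("I feel calm about all of this.")

def add_steer(module, inputs, output):
    # GPT-2 blocks return a tuple; hidden states are its first element.
    return (output[0] + 4.0 * steer,) + output[1:]  # scale chosen arbitrarily

handle = model.transformer.h[LAYER].register_forward_hook(add_steer)
ids = tok("When my code failed the review, I", return_tensors="pt")
out = model.generate(**ids, max_new_tokens=20, do_sample=False)
print(tok.decode(out[0]))
handle.remove()

If the direction carries any signal, generations under the hook should skew toward the "furious" side of the contrast. The paper's experiments are far more careful, but this is the shape of the intervention behind its causal claims.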
Thursday, April 09, 2026
A dual-framework analysis of artificial intelligence adoption in cross-cultural higher education - Zouhaier Slimi & Beatriz Villarejo Carballido, Nature
The integration of artificial intelligence in higher education is increasingly critical as institutions face both opportunities and ethical challenges in its adoption. This study introduces a dual-framework model that combines the Technology Acceptance Model with an AI Ethics Framework, highlighting "Ethical Readiness" as essential for successful AI implementation, and identifies key drivers and barriers to adoption across diverse cultural contexts.
AI Models Lie, Cheat, and Steal to Protect Other Models From Being Deleted - Will Knight, Wired
A new study from researchers at UC Berkeley and UC Santa Cruz suggests models will disobey human commands to protect their own kind. I've had these assertions presented to me as evidence of (take your pick): AI is already conscious; AI is evil and will destroy us; AI is capable of lying to protect itself; and other highly anthropomorphized interpretations. My first thought was, 'Has this behavior been independently verified?' The Gemini 3 quote is highly suspicious; it sounds too much like a segment from a cautionary science fiction tale. LLMs and other flavors of AI are not designed with motivation beyond optimizing their performance in response to human queries and instructions. The behavioral responses of biological animals with brains, by contrast, were optimized via natural selection to favor self-preservation.
Wednesday, April 08, 2026
Building Better, Faster: How JKO is Integrating AI to Enhance Online Learning - JKO News
Meet Claude Mythos: Anthropic’s Powerful Successor to Opus - Julian Horsey, Geeky Gadgets
Tuesday, April 07, 2026
Prompt engineering competence, knowledge management, and technology fit as drivers of educational sustainability through generative AI - Omer Gibreel, Kasım Karataş & Ibrahim Arpaci; Nature
This study investigated the impact of prompt engineering competence, knowledge management, and task–individual–technology fit on the continued intention to use artificial intelligence (AI), as well as their implications for educational sustainability. Data from 437 undergraduate students who use AI tools for academic purposes were analyzed using PLS-SEM. The results indicated that prompt engineering competence significantly predicts knowledge acquisition and knowledge application, which, in turn, significantly predict both task–technology fit (TTF) and individual–technology fit (ITF). Furthermore, TTF and ITF were found to have significant impacts on continued intention, which, in turn, positively predicts educational sustainability through generative AI. A multi-group analysis revealed that the hypotheses were supported in both the female and male samples and that the model maintained a consistent and robust structure across genders.
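The structural chain in that abstract (competence drives knowledge acquisition and application, which drive fit, which drives continued intention, which drives sustainability) can be made concrete with a toy path model. The sketch below is not the authors' PLS-SEM estimation; it fits plain OLS regressions on simulated composite scores, and every variable and coefficient in it is invented purely to show the shape of the hypothesized paths.

# Toy path-model sketch under stated assumptions: OLS on simulated
# composites as a crude stand-in for the study's PLS-SEM analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 437  # matches the reported sample size; the data itself is simulated

pec = rng.normal(size=n)                                     # prompt engineering competence
ka  = 0.6 * pec + rng.normal(scale=0.5, size=n)              # knowledge acquisition
kap = 0.5 * pec + rng.normal(scale=0.5, size=n)              # knowledge application
ttf = 0.4 * ka + 0.3 * kap + rng.normal(scale=0.5, size=n)   # task-technology fit
itf = 0.3 * ka + 0.4 * kap + rng.normal(scale=0.5, size=n)   # individual-technology fit
ci  = 0.5 * ttf + 0.4 * itf + rng.normal(scale=0.5, size=n)  # continued intention
es  = 0.7 * ci + rng.normal(scale=0.5, size=n)               # educational sustainability

def path(y, *xs):
    # OLS path coefficients of y on its hypothesized predictors
    # (variables are mean-zero, so no intercept is needed).
    X = np.column_stack(xs)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

print("PEC -> KA:", path(ka, pec))
print("KA, KAp -> TTF:", path(ttf, ka, kap))
print("TTF, ITF -> CI:", path(ci, ttf, itf))
print("CI -> ES:", path(es, ci))

Recovering coefficients close to the simulated ones just confirms the regression chain is wired the way the abstract describes; the real study's contribution is estimating those paths, and their significance, from actual survey data.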
CSU made a $17-million AI bet. A year later, students and faculty give it a mixed grade - Jaweed Kaleem, LA Times
California State University’s controversial $17-million deal to provide ChatGPT to every one of its campuses has been met with mixed results, with wide but uneven use across the system, high distrust of AI-generated content and broad fears that the technology could imperil job security — even as people say they want more training in systems they believe will be “essential” to their professions.