Tuesday, April 21, 2026
Gallup: Gen Z growing more negative toward AI - Natalie Schwartz, Higher Ed Dive
Why Do We Tell Ourselves Scary Stories About AI? - Amanda Gefter, Quanta Magazine
Monday, April 20, 2026
Anthropic’s New Product Aims to Handle the Hard Part of Building AI Agents - Maxwell Zeff, Wired
Will LLMs Replace Coders? Not Entirely - Seb Murray, Knowledge at Wharton
“It was very clear that we will never ever write code by hand again.” That comment, made recently by Dropbox’s former chief technology officer Aditya Agarwal, reflects a growing belief that generative AI is poised to displace swathes of white-collar workers — starting, perhaps, with software developers. But research by Wharton professor of operations, information and decisions Neha Sharma found that many of the routine coding questions that developers once posted on popular online forum Stack Overflow appear to have moved to AI tools, while the more novel problems still require human expertise.
https://knowledge.wharton.upenn.edu/article/will-llms-replace-coders-not-entirely/
Sunday, April 19, 2026
Is Your AI System Ethical? Try This Assessment - Cornelia C. Walther, Knowledge at Wharton
Author Talks: Rewiring to outcompete with AI - McKinsey
In this edition of Author Talks, McKinsey Global Publishing’s Barr Seitz speaks with McKinsey Senior Partners Kate Smaje and Robert Levin, and Eric Lamarre, McKinsey alumnus and emeritus adviser, about the second edition of Rewired (Rewired: How Leading Companies Win with Technology and AI, Wiley, April 2026). They discuss what has changed over the past few years, what it means to build organizational speed, and why the most important transformations are ultimately about people. An edited version of the conversation follows. Stay tuned for additional interviews with Rewired coauthors and McKinsey Senior Partners Alex Singla and Alexander Sukharevsky on leadership’s critical role in AI transformations.
Saturday, April 18, 2026
A people-first vision for the future of work in the age of AI - Sorelle Friedler, Serena Booth, Andrew Schrank, and Susan Helper, Brookings
Project Glasswing: Securing critical software for the AI era - Anthropic
Today we’re announcing Project Glasswing, a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world’s most critical software. We formed Project Glasswing because of capabilities we’ve observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity. Claude Mythos Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.
Friday, April 17, 2026
OpenAI calls for robot taxes, a public wealth fund, and a 4-day workweek to tackle AI disruption - Tom Carter, Business Insider
Colleges ramp up offerings to teach students to be AI ethicists - Kate Rix, Higher Ed Dive
Thursday, April 16, 2026
OpenAI’s warning: Washington isn’t ready for what’s coming - Axios, YouTube
In this Axios interview, OpenAI CEO Sam Altman emphasizes the urgent need for Washington and society to prepare for the arrival of "super intelligence." He explains that the next generation of AI models will represent a significant leap forward, moving beyond small tasks to potentially enabling career-defining scientific discoveries and allowing individuals to perform the work of entire teams. Altman highlights critical near-term risks, specifically in cybersecurity and bio-threats, and advocates for a "societal resilience" approach where the government and private sector work closely together to mitigate these dangers before they become reality [05:24]. Altman also discusses the broader economic and human implications of AI, suggesting that while the technology will transform the nature of work and capital, the core of human fulfillment and connection will remain unchanged. He envisions AI becoming a "utility" similar to electricity—an omnipresent, affordable background force that powers a personal super-assistant for every user [19:19]. Despite the immense power held by AI developers, Altman argues against nationalization, suggesting that private-public partnerships are the best way to ensure the technology aligns with democratic values while maintaining the pace necessary to lead globally [08:41]. [summary assisted by Gemini 3 Fast]
American billionaire: Only two types of people will succeed in the age of artificial intelligence - Reporters
As workers of all generations, from Generation Z to Baby Boomers, look for ways to secure their careers in the age of artificial intelligence, Alex Karp, CEO of the tech giant Palantir, has a pretty simple answer to the question of who will have the upper hand in the future. According to him, two groups of people have the best prospects: those with professional skills and neurodiverse individuals. “Basically, there are two ways to know if you have a future,” Karp said in a recent interview with TBPN. “One, you have some professional training. Or two, you are neurodiverse.” His second category also has a personal dimension. Karp has spoken before about dyslexia, and in a broader sense, neurodiversity also includes conditions like ADHD and autism. In his view, the advantage of these people lies not only in the diagnosis itself but in the fact that they often think differently, see patterns that others miss, and arrive at unusual solutions more easily. In the same interview, he said that those who are “more artistic,” who see things from a different perspective and can build something unique, will have an advantage.
Wednesday, April 15, 2026
Harvard offers six free online courses in AI and coding - MSN
Harvard University has expanded its free online learning portfolio with six courses focused on artificial intelligence, data science, programming, and web development. These globally accessible programmes are available in self-paced and scheduled formats, accommodating both beginners and professionals aiming to enhance their technology skills. The initiative reflects rising demand for digital literacy and supports the development of future-ready capabilities in an AI-driven world. The programmes include 'AI Strategy for Business Leaders', 'Data Science: Building Machine Learning Models', 'CS50’s Computer Science for Business Professionals', 'Understanding Technology', 'Introduction to Data Science with Python', and 'Web Programming with Python and JavaScript'. Course content blends conceptual learning with hands-on exercises, such as working with real-world datasets or developing web applications using Django and APIs.
What Deans and Department Chairs Must Do Before Fall - Ray Schroeder, Inside Higher Ed
Tuesday, April 14, 2026
4 ways higher ed can lead in uncertain times - Elon University
At Elon University, the 2025 President’s Report explores how colleges and universities can respond with clarity and purpose by focusing on what today’s students need to think critically, adapt and lead responsibly.
'Double-edged sword': Montana campuses prepare for AI-driven future - Darren Frey, Glendive Ranger-Review
The growing role of artificial intelligence in higher education is forcing colleges to adapt, and Montana campuses are preparing to take a major step with a new AI tool launching as early as May. When Dawson Community College President Chad Knudson attended the March Board of Regents meeting in Dillon over spring break, he also sat in on a separate meeting, held in conjunction with the Regents, of the Montana University System’s Artificial Intelligence Task Force, where one of the key topics was ChatMT.AI. Knudson stated that ChatMT will be an AI tool rolled out statewide to the Montana University System as a suite of resources focused on streamlining administrative processes. For example, the tool can handle the simple yet time-consuming task of reading a 300-page document and writing a summary, something Interim Director of Academic Affairs and Accreditation Liaison Officer BreAnn Miller said could take multiple hours to complete but only five minutes with the AI tool.
Monday, April 13, 2026
The Connected Campus: A Secure, AI-Ready Digital Ecosystem for Higher Education - Alexander Slagg, EdTech
How AI may reshape career pathways to better jobs - Justin Heck, Mark Muro, Shriya Methkupally, and Joseph Siegmund, Brookings
Amid much concern about the future of college graduates in the era of AI, workers without four-year degrees face major challenges as well: There are over 15 million of these workers in jobs that are highly exposed to AI. Of those, nearly 11 million are employed in “Gateway” occupations—jobs that have historically enabled workers to build skills and supported transitions into higher-wage roles. AI is poised to erode the pathways workers use to transition from low- to higher-wage work. Almost half of the pathways between Gateway jobs and higher-paying “Destination” jobs are highly exposed to AI. Geographically, the highest rates of AI-related pathway exposure are in administrative, clerical, and customer service Gateway occupations in the Northeast and Sun Belt. In order to craft strategies that effectively meet the moment, the field must grapple with a set of urgent questions about AI’s impact on worker mobility.
https://www.brookings.edu/articles/how-ai-may-reshape-career-pathways-to-better-jobs/
Sunday, April 12, 2026
‘AI-shaped economy’ now has students rethinking their majors - Matt Zalaznick, University Business
Workforce disruptions caused by generative AI have some students rethinking their majors, with one analysis characterizing higher education’s relationship with AI as “both promising and complex.”
SDSU's Massive AI Study Finds Frequent Use but Skepticism - Jaweed Kaleem, Los Angeles Times
A poll of 94,000 students, faculty and staff across 22 CSU campuses found nearly every respondent had used AI at some point, with personal use more common than educational use, but students were still wary of trusting it and faculty reported negative effects. The survey, conducted by San Diego State University researchers last fall, shows CSU grappling with how AI is affecting assignments, classroom instruction, competition for jobs and academic integrity.
Saturday, April 11, 2026
AI Is Routine for College Students, Despite Campus Limits - Stephanie Marken, Gallup News
New research from the Lumina Foundation-Gallup 2026 State of Higher Education study finds that more than half (57%) of U.S. college students are using artificial intelligence in their coursework at least weekly, including about one in five who say they use it daily. Male students report more frequent AI use than female students, particularly in the case of daily use (27% vs. 17%). By major, students in business, technology and engineering programs are the most frequent AI users compared with those in other fields of study. Rates of AI use are similar among students pursuing associate and bachelor’s degrees.
https://news.gallup.com/poll/704090/routine-college-students-despite-campus-limits.aspx
AI in Higher Education Is Moving From Experimentation to Strategic Integration. Here's What the 2025 Data Shows - Joe Sullistio, Ellucian
When the question is "Are people using AI?" the answers are mostly anecdotal. When the question becomes "How do we integrate AI responsibly and measurably across the institution?" you need strategy, investment discipline, governance, and enablement. Not just tools. Ellucian's new report, Artificial Intelligence in Higher Education: From Widespread Adoption to Strategic Integration, captures this transition in detail, and lays out what institutions need to do next. This is the third consecutive year of the Ellucian AI Survey for Higher Education, and the 2025 State of AI in Higher Education findings mark a clear turning point.
Friday, April 10, 2026
Emotion Concepts and their Function in a Large Language Model - Nicholas Sofroniew, et al; Transformer Circuits
Large language models (LLMs) sometimes appear to exhibit emotional reactions. We investigate why this is the case in Claude Sonnet 4.5 and explore implications for alignment-relevant behavior. We find internal representations of emotion concepts, which encode the broad concept of a particular emotion and generalize across contexts and behaviors it might be linked to. These representations track the operative emotion concept at a given token position in a conversation, activating in accordance with that emotion’s relevance to processing the present context and predicting upcoming text. Our key finding is that these representations causally influence the LLM’s outputs, including Claude’s preferences and its rate of exhibiting misaligned behaviors such as reward hacking, blackmail, and sycophancy. We refer to this phenomenon as the LLM exhibiting functional emotions: patterns of expression and behavior modeled after humans under the influence of an emotion, which are mediated by underlying abstract representations of emotion concepts. Functional emotions may work quite differently from human emotions, and do not imply that LLMs have any subjective experience of emotions, but appear to be important for understanding the model’s behavior.
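The paper’s causal claim, that internal emotion-concept directions influence outputs, is the kind of result usually tested with activation steering: extract a concept direction from a model’s hidden states and add it back in at inference time. A minimal toy sketch of that idea follows; the activations, dimensionality, and the "anxious vs. neutral" framing are illustrative stand-ins, not the paper’s actual data or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a transformer's residual-stream activations (d = 8).
# In an interpretability study these would be hidden states collected
# from a real model on emotion-laden vs. neutral prompts.
d = 8
anxious_acts = rng.normal(0.0, 1.0, size=(50, d)) + np.array([2.0] + [0.0] * (d - 1))
neutral_acts = rng.normal(0.0, 1.0, size=(50, d))

# 1. Derive a concept direction as a difference of means ("diff-in-means").
direction = anxious_acts.mean(axis=0) - neutral_acts.mean(axis=0)
direction /= np.linalg.norm(direction)

# 2. Steer: shift a hidden state along the concept direction.
def steer(h: np.ndarray, v: np.ndarray, alpha: float) -> np.ndarray:
    """Return hidden state h moved along unit direction v with strength alpha."""
    return h + alpha * v

h = neutral_acts[0]
h_steered = steer(h, direction, alpha=4.0)

# The steered state projects more strongly onto the concept direction;
# in a real model this shift would be decoded into emotion-congruent text.
proj_before = float(h @ direction)
proj_after = float(h_steered @ direction)
print(proj_after - proj_before)  # 4.0 by construction (unit direction)
```

Because `direction` is normalized, the projection shifts by exactly `alpha`; the interesting empirical question, which the paper addresses for a real model, is whether such shifts change downstream behavior like sycophancy or reward hacking.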
Thursday, April 09, 2026
A dual-framework analysis of artificial intelligence adoption in cross-cultural higher education - Zouhaier Slimi & Beatriz Villarejo Carballido, Nature
The integration of artificial intelligence in higher education is increasingly critical as institutions face both opportunities and ethical challenges in its adoption. This study introduces a dual-framework model that combines the Technology Acceptance Model with an AI Ethics Framework, highlighting "Ethical Readiness" as essential for successful AI implementation, and identifies key drivers and barriers to adoption across diverse cultural contexts.
AI Models Lie, Cheat, and Steal to Protect Other Models From Being Deleted - Will Knight, Wired
A new study from researchers at UC Berkeley and UC Santa Cruz suggests models will disobey human commands to protect their own kind. I've had these assertions presented to me as evidence of (take your pick): AI is already conscious; AI is evil and will destroy us; AI is capable of lying to protect itself; and other highly anthropomorphized interpretations. My first thought was, 'Has this behavior been independently verified?' The Gemini 3 quote is highly suspicious; it sounds too much like a segment from a cautionary science fiction tale. LLMs and other flavors of AI are not designed with motivation beyond optimizing their performance in response to human queries and instructions. The behavioral responses of biological animals with brains were optimized via natural selection to favor self-preservation.
Wednesday, April 08, 2026
Building Better, Faster: How JKO is Integrating AI to Enhance Online Learning - JKO News
Meet Claude Mythos: Anthropic’s Powerful Successor to Opus - Julian Horsey, Geeky Gadgets
Tuesday, April 07, 2026
Prompt engineering competence, knowledge management, and technology fit as drivers of educational sustainability through generative AI - Omer Gibreel, Kasım Karataş & Ibrahim Arpaci; Nature
This study investigated the impact of prompt engineering competence, knowledge management, and task–individual–technology fit on the continued intention to use artificial intelligence (AI), as well as their implications for educational sustainability. Data from 437 undergraduate students who use AI tools for academic purposes were analyzed using PLS-SEM. The results indicated that prompt engineering competence significantly predicts knowledge acquisition and knowledge application, which, in turn, significantly predict both task-technology fit (TTF) and individual-technology fit (ITF). Furthermore, TTF and ITF were found to have significant impacts on the continuous intention, which, in turn, positively predicts educational sustainability through generative AI. The results of the multi-group analysis revealed that the hypotheses were supported in both the female and male samples and that the model maintained a consistent and robust structure across genders.
CSU made a $17-million AI bet. A year later, students and faculty give it a mixed grade - Jaweed Kaleem, LA Times
California State University’s controversial $17-million deal to provide ChatGPT to every one of its campuses has been met with mixed results, with wide but uneven use across the system, high distrust of AI-generated content and broad fears that the technology could imperil job security — even as people say they want more training in systems they believe will be “essential” to their professions.
Monday, April 06, 2026
BU Wheelock Forum Explores AI in Education - Boston University
What do teaching and learning mean in an AI world? This question was at the center of the 2026 BU Wheelock Forum AI and the Future of Education, hosted by the Boston University Wheelock College of Education & Human Development on March 25. Approximately 250 people—including educators, administrators, and scholars—attended the event, which featured a keynote from Aaron Rasmussen (COM’06, CAS’06), cofounder of online education platforms Outlier.ai and MasterClass; a faculty panel discussion moderated by Wheelock Dean Penny Bishop; and a modern dance performance using Random Actor, a technology developed by James Grady, a College of Fine Arts assistant professor of art and graphic design, and Clay Hopper, a CFA senior lecturer in directing, that harnesses AI to extend the visual expression of human movement.
Cal State’s new framework promises jobs or grad school path for all students - Cate Rix, EdSource
Over the past decade, California State University campuses pursued an ambitious plan to encourage students to complete their degrees faster and boost overall graduation rates. Now the system is making a bold promise: Every student will graduate with a clear path to a career or graduate school. And it is planning changes to make the system’s degree programs more career-focused, possibly by phasing out some majors. CSU leaders say academic and career advising will be closely connected as a new Student Success Framework rolls out. They also say that less popular majors may be phased out, offered only on some campuses or merged into other programs.
Sunday, April 05, 2026
Where can AI be used? Insights from a deep ontology of work activities - Alice Cai, et al; arXiv
Courageous conversations: How to lead with heart - Kurt Strovink, Meagan Hill, and Mike Carson; McKinsey
Saturday, April 04, 2026
The next phase of higher education will blend digital and human learning: Chancellor, Lingaya’s Vidyapeeth - ET Edge Insights
Artificial Intelligence is redefining how universities deliver and manage education. From personalized learning pathways to predictive analytics that identify student needs, AI is making education more responsive and efficient. It is also automating administrative functions, enabling institutions to focus on academic excellence and innovation. Online learning has moved beyond being an alternative to becoming an integral part of higher education. Its ability to provide flexibility and scale has made quality education more accessible than ever. Going forward, we will see a strong shift towards hybrid models that seamlessly blend digital and in-person learning experiences.
The State of Organizations 2026: Three tectonic forces that are reshaping organizations - McKinsey
Friday, April 03, 2026
College students are writing with AI – but a pilot study finds they’re not simply letting it write for them - Jeanne Beatrix Law, The Conversation
Perfect homework, blank stares: Colleges are turning to oral exams to combat AI - Jocelyn Gecker, The Associated Press
Educators are no longer naively wondering if students will use generative AI to do their homework for them. A big question now is how to determine what students are actually learning. Students in Chris Schaffer’s biomedical engineering class at Cornell University are required to speak directly to an instructor in what he calls an “oral defense.” It's a testing method as old as Socrates, and it is making a comeback in the AI age. A growing number of college professors say they are turning to oral exams, combining a variety of old-fashioned and cutting-edge techniques, to help address a crisis in higher education. “You won’t be able to AI your way through an oral exam,” says Schaffer, who introduced the oral defense last semester.
Thursday, April 02, 2026
ChatGPT’s impact on student learning outcomes: a meta-analysis of 35 experimental studies - Xinning Wu, et al; Nature
The analysis included 35 studies published between 2022 and 2024, involving 4193 participants. The results indicated a moderately positive effect of ChatGPT on student learning outcomes (g = 0.670), significantly enhancing both cognitive and non-cognitive skills. In the analysis of moderating variables, the subject, experimental duration, and instructional mode had significant positive effects on student learning outcomes, whereas educational level and knowledge type did not show significant effects. Additionally, the publication bias test revealed no significant publication bias. This meta-analysis confirmed the effectiveness of ChatGPT in improving student learning outcomes and highlighted the roles of the subjects, experimental duration, and instructional mode as key moderating factors. Despite the risks of sample selection bias and limitations in fully covering the multidimensional moderating factors and higher-order thinking, the findings provided important empirical support for applying ChatGPT in education.
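For readers unfamiliar with the metric, the reported effect (g = 0.670) is Hedges' g, a bias-corrected standardized mean difference: roughly, the ChatGPT groups averaged about two-thirds of a pooled standard deviation above the control groups. The computation can be sketched as follows; the test scores, standard deviations, and sample sizes below are made-up illustrative numbers, not data from the meta-analysis.

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Bias-corrected standardized mean difference (Hedges' g)."""
    # Pooled standard deviation across treatment and control groups.
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled      # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)      # small-sample correction factor
    return d * j

# Illustrative: a ChatGPT group averaging 78 vs. a control group averaging 71
# on a 100-point test, both with SD = 10, 30 students per group.
g = hedges_g(78, 71, 10, 10, 30, 30)
print(round(g, 3))  # 0.691
```

A meta-analysis then pools one such g per study, weighting each by its precision; the 0.670 in the abstract is that pooled estimate across 35 studies.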
Cloning Myself with AI: Four Ways to Multiply Faculty Presence for Graduate and Adult Learners - Sherrie Myers Bartell, Faculty Focus
Have you ever wished you could clone yourself? I have. For many faculty in graduate and adult education that longing is more than a passing thought. Balancing the multifaceted needs of students who rely on your expertise, guidance, and presence often feels impossible. While teaching realities mean we can’t be everywhere at once, AI offers practical ways to extend our reach, enabling high-touch interactions even as responsibilities multiply. Thoughtfully leveraged, these tools help orchestrate a more responsive classroom by offering prompt feedback, facilitating richer discussions, and generating tailored resources, all while preserving the essential human connection at the heart of meaningful learning.
Wednesday, April 01, 2026
What Comes After an MBA? Why Leaders Are Turning to AI - Boston University Virtual
The MBA is the defining credential for a generation of business leaders. It builds financial acumen, strategic thinking, and cross-functional fluency — the toolkit for managing complexity and driving organizational performance. For decades, it was the answer to the question every ambitious professional eventually asked: What’s my next move? That question is back. And for a growing number of leaders, the answer looks different than it once did. AI is not just changing the tools organizations use. It is changing how decisions get made, how processes run, who is accountable for outcomes, and what it means to lead. Business leaders with MBAs are finding themselves navigating a new kind of gap — not a lack of strategic instinct, but a lack of structured fluency in an AI-driven operating environment. And a targeted, business-focused Master’s degree in Artificial Intelligence is increasingly the credential they’re turning to.
https://www.bu.edu/online/2026/03/23/what-comes-after-an-mba-why-leaders-are-turning-to-ai/