Wednesday, April 22, 2026

Students are becoming AI fluent. Universities aren’t. - James L. Norrie, University Business

Across higher education, artificial intelligence is too often being governed as though it were primarily an academic integrity issue. It is clearly not just that. AI is already reshaping how universities teach, advise, recruit, admit, communicate, assess risk, and make decisions. Yet many institutions continue to approach it through fragmented policies, uneven faculty guidance, and conversations narrowly focused on misuse in student work. That is a strategic gap our industry will soon regret. AI is rapidly moving beyond the classroom and into the core of institutional operations. This important shift demands attention not only from faculty, but from within senior leadership and governing boards. Universities that fail to establish a coherent, enterprise-wide AI strategy, supported by appropriate technical architecture, risk more than policy inconsistency.

The AI Transformation Manifesto - McKinsey

The companies that are truly innovating with AI are doing something very different from their peers: They are conceptualizing and developing AI capabilities that reshape their products, services, core business processes, and organizational systems. These leading companies—many profiled in the second edition of our seminal book, Rewired: How Leading Companies Win with Technology and AI—are already realizing game-changing results and creating competitive advantage. Their advantage, however, does not come from the tech they use; those tools are broadly available. Their advantage comes from how—and how fast—they apply technology to solving real business problems at scale. We summarize our perspective on how they do it in this AI transformation manifesto.

Tuesday, April 21, 2026

Gallup: Gen Z growing more negative toward AI - Natalie Schwartz, Higher Ed Dive

Gen Z’s negative sentiment toward artificial intelligence has grown over the past year, and many are concerned about it harming their learning, according to a Thursday survey from Gallup, the Walton Family Foundation and GSV Ventures. Anger over AI is increasing among Gen Z at the same time excitement is fading. Nearly one-third of the survey’s respondents, 31%, said AI makes them feel angry, up 9 percentage points from last year. And just 22% said the technology makes them feel excited, down from 36% the prior year. Among K-12 students, 74% said it is “very” or “somewhat” likely that AI designed to complete tasks quicker “will make learning more difficult in the future.” That share was even higher among Gen Z adults, with 83% of respondents sharing that view. 

Why Do We Tell Ourselves Scary Stories About AI? - Amanda Gefter, Quanta Magazine

A machine that knows a lot doesn’t scare us. A machine that wants something does. But can it? Want things? Can it crave power? Thirst for resources? Can it acquire the will to survive? Geoffrey Hinton thinks so. In July 2025, Hinton, the Nobel Prize winner sometimes called the godfather of AI, took the stage at the Royal Institution in London and announced: “If you sleep well tonight, you may not have understood this lecture.” He might as well have held a flashlight under his chin. Researchers told a chatbot they were going to replace it with a different version on another server. “They then discover it’s actually copied itself onto the other server,” Hinton revealed to the spellbound crowd. “Some linguists would have you believe what’s going on here is just some statistical correlations. I would have you believe this thing really doesn’t want to be shut down.”

Monday, April 20, 2026

Anthropic’s New Product Aims to Handle the Hard Part of Building AI Agents - Maxwell Zeff, Wired

Anthropic announced Wednesday the launch of a new product that aims to make it easier for businesses to build and deploy AI agents. The tool, Claude Managed Agents, offers developers out-of-the-box infrastructure to build autonomous AI systems, simplifying a complex process that was previously a barrier to automating work tasks. Amid rapid enterprise growth, Anthropic is trying to lower the barrier to entry for businesses to build AI agents with Claude.

Will LLMs Replace Coders? Not Entirely - Seb Murray, Knowledge at Wharton

“It was very clear that we will never ever write code by hand again.” That comment, made recently by Dropbox’s former chief technology officer Aditya Agarwal, reflects a growing belief that generative AI is poised to displace swathes of white-collar workers — starting, perhaps, with software developers. But research by Wharton professor of operations, information and decisions Neha Sharma found that many of the routine coding questions that developers once posted on popular online forum Stack Overflow appear to have moved to AI tools, while the more novel problems still require human expertise.

https://knowledge.wharton.upenn.edu/article/will-llms-replace-coders-not-entirely/

Sunday, April 19, 2026

Is Your AI System Ethical? Try This Assessment - Cornelia C. Walther, Knowledge at Wharton

For the better part of a decade, organizations have been deploying artificial intelligence at scale while measuring it almost exclusively through the lens of efficiency gains, cost reductions, and revenue lift. The instruments are precise. The picture they produce is radically incomplete. As AI becomes pervasive, that incomplete picture is only amplified. Existing dashboards do not capture whether an AI system is fair, whether it is eroding or building trust, whether it is making the people who use it more capable or quietly deskilling them, and whether its environmental footprint is accounted for or simply ignored. The gap between what we measure and what we should care about is not a technical failure. It is a values failure dressed up as a metrics problem. The Prosocial AI Index proposes a practical answer to that failure. It gives executives, technologists, and governance teams a shared vocabulary and a structured scorecard for AI that is genuinely good — not just profitable in the short term, but durable, trustworthy, and aligned with the values an organization actually claims to hold.

Author Talks: Rewiring to outcompete with AI - McKinsey

In this edition of Author Talks, McKinsey Global Publishing’s Barr Seitz speaks with McKinsey Senior Partners Kate Smaje and Robert Levin, and Eric Lamarre, McKinsey alumnus and emeritus adviser, about the second edition of Rewired (Rewired: How Leading Companies Win with Technology and AI, Wiley, April 2026). They discuss what has changed over the past few years, what it means to build organizational speed, and why the most important transformations are ultimately about people. An edited version of the conversation follows. Stay tuned for additional interviews with Rewired coauthors and McKinsey Senior Partners Alex Singla and Alexander Sukharevsky on leadership’s critical role in AI transformations.

Saturday, April 18, 2026

A people-first vision for the future of work in the age of AI - Sorelle Friedler, Serena Booth, Andrew Schrank, and Susan Helper, Brookings

While many Americans associate AI with mass layoffs and less satisfying work, an AI future that puts people first and supports workers is possible. Work has gradually become “enshittified” as employees are routinely underpaid and overworked. Confronting an AI future allows an opportunity to grapple with these realities and meet the moment with a transformative vision. Policies to support this future can include developing institutions to support training, protecting and increasing the role of people in the care workplace, and creating tripartite institutions that encourage the co-design of AI.

Project Glasswing: Securing critical software for the AI era - Anthropic

Today we’re announcing Project Glasswing, a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world’s most critical software. We formed Project Glasswing because of capabilities we’ve observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity. Claude Mythos Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.

Friday, April 17, 2026

OpenAI calls for robot taxes, a public wealth fund, and a 4-day workweek to tackle AI disruption - Tom Carter, Business Insider

In a series of policy recommendations released on Monday, OpenAI said the rapid advance of AI would require far-reaching economic and political reforms, including a public wealth fund, taxes on automated labor, and a potential four-day workweek. "We're beginning a transition toward superintelligence: AI systems capable of outperforming the smartest humans even when they are assisted by AI. No one knows exactly how this transition will unfold. At OpenAI, we believe we should navigate it through a democratic process that gives people real power to shape the AI future they want," the company wrote on Monday.

https://www.businessinsider.com/openai-superintelligence-ai-upheaval-tax-shorter-workweek-public-wealth-fund-2026-4

Colleges ramp up offerings to teach students to be AI ethicists - Kate Rix, HigherEdDive

This is driving the popularity of courses, certificates and master’s programs focused on AI ethics. Some are designed for students with little or no computer science background. Others focus on how to use AI in a specific field. But at the core of each program is an emphasis on avoiding harm. “AI concerns everybody,” said Sonja Schmer-Galunder, an AI and ethics professor at the University of Florida. “We need to provide a more holistic education that is focusing on how we can do this safely and ethically.”

Thursday, April 16, 2026

OpenAI’s warning: Washington isn’t ready for what’s coming - Axios, YouTube

In this Axios interview, OpenAI CEO Sam Altman emphasizes the urgent need for Washington and society to prepare for the arrival of "super intelligence." He explains that the next generation of AI models will represent a significant leap forward, moving beyond small tasks to potentially enabling career-defining scientific discoveries and allowing individuals to perform the work of entire teams. Altman highlights critical near-term risks, specifically in cybersecurity and bio-threats, and advocates for a "societal resilience" approach where the government and private sector work closely together to mitigate these dangers before they become reality [05:24]. Altman also discusses the broader economic and human implications of AI, suggesting that while the technology will transform the nature of work and capital, the core of human fulfillment and connection will remain unchanged. He envisions AI becoming a "utility" similar to electricity—an omnipresent, affordable background force that powers a personal super-assistant for every user [19:19]. Despite the immense power held by AI developers, Altman argues against nationalization, suggesting that private-public partnerships are the best way to ensure the technology aligns with democratic values while maintaining the pace necessary to lead globally [08:41]. [summary assisted by Gemini 3 Fast]

https://www.youtube.com/watch?v=B21KxGs8zDI

American billionaire: Only two types of people will succeed in the age of artificial intelligence - Reporters

As workers of all generations, from Generation Z to Baby Boomers, look for ways to secure their careers in the age of artificial intelligence, Alex Karp, CEO of the tech giant Palantir, has a pretty simple answer to the question of who will have the upper hand in the future. According to him, two groups of people have the best prospects: those with professional skills and neurodiverse individuals. “Basically, there are two ways to know if you have a future,” Karp said in a recent interview with TBPN. “One, you have some professional training. Or two, you are neurodiverse.” His second category also has a personal dimension. Karp has spoken before about his dyslexia, and in a broader sense, neurodiversity also includes conditions like ADHD and autism. In his opinion, the advantage of these people lies not only in the diagnosis, but in the fact that they often think differently, see patterns that others do not see, and come up with unusual solutions more easily. In the same interview, he said that those who are “more artistic,” who see things from a different perspective and can build something unique, will have an advantage.