Saturday, May 02, 2026

Research cuts are now having a chilling effect on academia - Alcino Donadel, University Business


Some experts see early and dire consequences for the science and education research community. “We’ve been hearing about the cuts coming down, but this spring, you’re really starting to see the effects,” says Chenjerai Kumanyika, assistant professor at New York University and council member for the American Association of University Professors. In February, Congress passed a fiscal year 2026 spending package that rejected Trump’s proposed 40% cuts to the National Institutes of Health, the National Science Foundation, NASA, and the Department of Energy. While the agencies saw most of their budgets restored, the Trump administration has stalled in releasing the funds. As of March 24, the NIH had awarded only 15% of its nearly $40 billion academic research budget to institutions, according to a report from the Association of American Medical Colleges.

https://universitybusiness.com/research-cuts-are-now-having-a-chilling-effect-on-academia/


College Students Are More Polarized Than Ever. Can AI Help? - Kathryn Palmer, Inside Higher Ed

Over the past few years, higher education institutions have adopted emerging artificial intelligence tools in an effort to enhance nearly every aspect of campus life—not just teaching and learning but also admissions, alumni networks, fundraising and advising. Now some are even experimenting with AI’s ability to advance one of the hottest trends on college campuses: fostering constructive dialogue among students, who are more divided over politics now than at any point in the past 40 years. To help bridge those divides, colleges are increasingly partnering with organizations aimed at promoting civil dialogue, including Braver Angels, BridgeUSA, the Institute for Citizens and Scholars, and the Constructive Dialogue Institute. And lately, AI is becoming part of the conversation.

Friday, May 01, 2026

This is the fastest-growing job for young workers, LinkedIn says - Mary Cunningham, CBS News

As the rise of artificial intelligence stirs anxiety over the technology taking people's jobs, AI is also opening pathways to new careers, according to LinkedIn. The fastest-growing job title for young workers on the networking platform is "AI engineer," a recent report from the company found. LinkedIn analyzed millions of member profiles to determine the number of entry-level workers hired over the last three years and the roles they were hired to fill. "It's measuring momentum for these job titles," said Kory Kantenga, the head of economics, Americas, at LinkedIn. "Companies are just gorging on AI talent."

US security agency is using Anthropic's Mythos despite blacklist, Axios reports - Reuters

The United States National Security Agency is using Anthropic's Mythos Preview AI tool despite the Pentagon hitting the company with a formal supply-chain risk designation, Axios reported on Sunday.
The Mythos Preview model was being used more widely within the department, Axios said, citing sources. Reuters could not immediately verify the report. Anthropic, the NSA and the Department of Defense did not immediately respond to requests for comment outside regular business hours. The NSA is part of the Defense Department.

Thursday, April 30, 2026

Feasibility of implementing a multicultural curriculum through artificial intelligence: perspectives of educational science experts - Huijuan Qin & Zijian Zhou, Nature

Across the revised thematic structure, feasibility was constructed through four interrelated domains: conditional pedagogical feasibility, epistemic and ethical risk, human mediation and institutional governance, and democratic co-construction of multicultural learning. Educational science experts regard AI-mediated multicultural curricula as possible but structurally fragile. Feasibility depends on value-led curriculum design, robust governance, and critical human mediation that aligns AI with the transformative ambitions of multicultural education. More specifically, feasibility depends on the alignment of explicit multicultural curricular intent, recognition of epistemic and ethical risk, strong institutional and pedagogical mediation, and students’ active participation in critical inquiry.

AI fears drive some young adults to grad school — ‘people shelter in higher education,’ expert says - Jessica Dickler, CNBC

Typically, enrollment in graduate school increases during recessions as workers seek to advance or to move to another industry with better career prospects or pay. Today, even with the economy doing well, more people in a survey said they plan to go back to school within a year. Experts say young adults are exploring this option largely out of worry about their job prospects.

Wednesday, April 29, 2026

Is Your AI Ethical, Human-Centered and Pro-Social? - Ray Schroeder, Inside Higher Ed

Many of us use AI daily in our higher education work, yet we may not have assessed the ethical and human-centered nature of the tools we have selected and trained through our prompts. AI tools are no longer relatively simple search engines, driven by marketing metrics, that help us conduct our research. Rather, with AI we are using more sophisticated tools that conduct research and answer our prompts while making source-selection decisions and navigating contextual and semantic subtleties that shape the values expressed in the results. Before we look at the default values and orientations inherent in some of the leading AI models, let me remind you that in crafting your prompt, you can encourage the tool to emphasize responses that include orientations and perspectives addressing ethical considerations. Your prompt can direct the model to provide results that explore, highlight or emphasize pro-social or human-centered solutions and examples.

White House Directs Banks to Use Anthropic Mythos - Let's Data Science

The White House has encouraged major U.S. banks to test Anthropic's Claude Mythos model to identify security gaps. Several large banks have begun in-house evaluations after a Treasury and Federal Reserve meeting with Wall Street executives that emphasized using the model to uncover vulnerabilities. The administration frames the engagement as part of an ongoing AI security taskforce, and Anthropic has opened an early-access partner program for Claude Mythos alongside its Claude Managed Agents rollout. The guidance positions Claude Mythos explicitly for defensive cybersecurity work, including red teaming and proactive vulnerability discovery, and signals closer public-private coordination on AI risk remediation for critical financial infrastructure.

Tuesday, April 28, 2026

The Quiet Revolution: How Generative Artificial Intelligence is Redefining Higher Education in Mexico - Noah Conway, Veritas

Generative artificial intelligence (Inteligencia Artificial Generativa, IAG) promises to become the driving force of learning in Mexico. According to the latest report from the Secretaría de Educación Pública (SEP), the impact is enormous: 80% of university students use these tools to write texts and improve their academic results. This phenomenon, revealed in the “National Survey on the Use and Perception of Generative Artificial Intelligence in Higher Education,” marks a point of no return for the national education system. Beyond texts and graphics, AI occupies an unexpected space: mental health. SEP head Mario Delgado Carrillo noted that tools like ChatGPT have become “the great psychologist of our time.” The data underscores this: 9% of students turn to AI for advice on anxiety, stress or depression, opening a new debate about emotional support in the digital age.

Evaluating large language models for AI-assisted grading: a framework and case study in higher education - Yago Saez, Luis Mario Garcia, Asuncion Mochon & Pedro Isasi, Nature

This article presents an empirical evaluation of six state-of-the-art large language models for grading student assignments in a university-level course on data analytics and machine learning. The study compares the ability of the models to generate grades and feedback with that of human instructors, using statistical and semantic measures for evaluation. The results show that DeepSeek-R1 provided the closest alignment with human evaluations in both grading accuracy and feedback quality. Beyond this case study, the article contributes a replicable framework for systematically benchmarking LLMs in higher education assessment, specifying model selection, prompt design, evaluation measures, and cost analysis. The proposed framework ensures continued relevance as new models emerge, providing educators and researchers with a transferable methodology to evaluate AI-assisted grading in higher education.

Monday, April 27, 2026

Rewired 2.0: How leading companies are (still) winning with AI - McKinsey

Companies that successfully transform with AI can boost their EBITDA by roughly 20 percent, according to Rewired: How Leading Companies Win with Technology and AI. In this newly released second edition of the Rewired bestseller, five McKinsey leaders draw on more than 30 case studies to show how organizations turn AI ambition into measurable value. As the pace of technology accelerates—and expectations rise—the book zeroes in on what it takes to truly “rewire” a company today: aligning leadership, redesigning operating models, and building the capabilities that turn AI into sustained advantage. Explore the latest interview with three of the authors, McKinsey Senior Partners Eric Lamarre, Kate Smaje, and Robert Levin, and the insights below to learn how leading companies are winning with AI.

https://www.mckinsey.com/featured-insights/themes/rewired-2-point-0-how-leading-companies-are-still-winning-with-ai

OpenAI’s warning: Washington isn’t ready for what’s coming - Axios, YouTube

In this interview, OpenAI CEO Sam Altman emphasizes a growing sense of urgency for society and government to prepare for "super intelligence." He suggests that the next generation of AI models will represent a significant leap forward, moving beyond small tasks to enabling career-defining scientific discoveries and dramatic productivity gains where a single individual could perform the work of an entire team [04:43]. Altman highlights critical risks that need immediate attention, particularly in the realms of cybersecurity and biosecurity, warning that the threat of misuse by bad actors is no longer a theoretical concern [06:30]. Altman also outlines a vision for AI as a "utility," much like electricity, where intelligence is ubiquitous, personalized, and integrated into almost every digital interaction [19:55]. While acknowledging the potential for massive economic shifts—such as a concentration of leverage in capital rather than labor—he maintains that the core of human fulfillment and connection will remain unchanged [11:53]. He advocates for a deep partnership between AI companies and the government to ensure the technology is developed in alignment with democratic values, stressing that the window for debating these societal transformations is rapidly closing [08:41]. [Summary assisted by Gemini 3 Fast]

Sunday, April 26, 2026

Higher Education Faces Demographic Cliff, AI Impact - National Today

The future of higher education in America is at a crossroads, as institutions navigate a complex landscape of declining enrollment, political influences, and the growing impact of artificial intelligence. The so-called "demographic cliff" - a sustained drop in college enrollment driven by declining birth rates - poses financial and academic challenges, particularly for regions like New England with dense ecosystems of schools. Colleges are rethinking academic programs, recruitment strategies, and alignment with the job market to address these pressures, while also grappling with the lack of authoritative data on return on investment and the influence of AI on the labor market. The changes facing higher education will have far-reaching implications for students, families, and the broader economy. As institutions adapt to declining enrollment, political decisions, and technological disruption, the future of learning and career preparation hangs in the balance.

https://nationaltoday.com/us/ny/new-york/news/2026/04/11/higher-education-faces-demographic-cliff-ai-impact/

As AI pushes students to reconsider majors, universities struggle to adapt - Lexi Lonas Cochran, the Hill

A recent poll shows AI’s increasing role in how students decide on college majors, creating a rapidly developing situation for universities that are still struggling to determine how the technology will shape higher education. The Lumina Foundation-Gallup 2026 State of Higher Education survey found 47 percent of currently enrolled college students have thought about switching majors “a great deal” or a “fair amount” over AI concerns.  Forty percent of the AI job losses will occur in Texas, California, New York, Florida and Illinois, the researchers predict.  And young people are predicted to take the biggest hits from AI since experts say it could largely take over entry level work. 

Saturday, April 25, 2026

Claude finds a 27-year-old bug - Arturo Ferreira & Liam Lawson, The AI Report

The initiative is built around Claude Mythos Preview, an unreleased frontier model that has already found thousands of high-severity zero-day vulnerabilities, including some in every major operating system and web browser. Mythos Preview identified a 27-year-old vulnerability in OpenBSD, a 16-year-old flaw in FFmpeg, and autonomously chained together multiple Linux kernel vulnerabilities to gain full system control. Anthropic is committing $100M in usage credits for defensive security work across partners and additional organizations, plus $4M in donations to open-source security organizations like the Linux Foundation.

How a master's in AI can prepare you to lead in business - Chloë Lane, GMAC

In our most recent GMAC Corporate Recruiters Survey Report, ‘skills in AI tools’ rose significantly in importance year-over-year—reflecting the growing demand for this proficiency. One effective way to build these desirable skills is by studying a master’s in AI—a specialist master’s degree that bridges the gap between technical expertise and business application. One such program is the Master of Artificial Intelligence in Business (MAIB), recently launched by HKU Business School at The University of Hong Kong. This program is designed to equip early- to mid-career professionals with the skills they need to become AI-confident business leaders. “Future business leaders will operate in an environment where AI is embedded into almost every function, from customer engagement and pricing to supply chains, risk management, and HR,” says Professor Michael C. L. Chau, program director of the MAIB at HKU Business School.


Friday, April 24, 2026

We have months left... in the Wake of Mythos and Glasswing Response - Wes Roth, YouTube

The emergence of Anthropic’s Mythos model marks a significant shift in the AI landscape, particularly regarding cybersecurity. As Wes Roth details, the model possesses an "emergent" ability to autonomously identify and exploit zero-day vulnerabilities in codebases that were previously thought to be secure. This creates a dangerous asymmetry: while AI can now find flaws at a massive scale for a fraction of the cost—roughly $50 in compute for a complex exploit—our human-led capacity to patch and harden these systems has not increased at the same velocity. The resulting "break stuff" era suggests that the traditional equilibrium of the cybersecurity arms race has been disrupted, leaving global digital infrastructure potentially vulnerable. In response to these risks, the primary recommendation is a shift toward rigorous digital hygiene and "hardened" security measures. With the potential for AI-driven exploits to compromise entire operating systems or cloud services, users are encouraged to maintain air-gapped, physical backups of their most critical data and transition to hardware-based security keys. [Summary provided in part by Gemini 3 Fast]

https://www.youtube.com/watch?v=WSl8Ci8-cGg

Anthropic’s Mythos Will Force a Cybersecurity Reckoning—Just Not the One You Think - Lily Hay Newman, Wired

The new AI model is being heralded—and feared—as a hacker’s superweapon. Experts say its arrival is a wake-up call for developers who have long made security an afterthought. Anthropic said this week that the debut of its new Claude Mythos Preview model marks a critical juncture in the evolution of cybersecurity, representing an unprecedented existential threat to existing software defense strategies. So, is it more AI hype—or a true turning point? "All software will have to be rewritten," someone said somewhere about this topic. Security aside, could AI rewrite all our operating systems so that they once again become simple, more easily configurable and fixable? My internal list of frustrating, decades-old interface bugs in macOS and iOS and their downstream apps that have never been fixed keeps on growing.

https://www.wired.com/story/anthropics-mythos-will-force-a-cybersecurity-reckoning-just-not-the-one-you-think/

Thursday, April 23, 2026

Economists Starting to Admit They May Have Been Wrong About AI Never Replacing Human Jobs: They're taking it seriously - Joe Wilkins

As a sweeping economics paper by researchers at the Federal Reserve Bank of Chicago, Forecasting Research Institute (FRI), and numerous top universities found, that attitude may be shifting. As time goes on, top economic experts are increasingly factoring extreme AI disruption into their models. Yet acknowledging a possibility and accepting its inevitability are two very different things—and as the complicated range of sentiments makes clear, an AI jobs apocalypse is still far from certain. The study is a tour de force of economic forecasting that surveyed 69 economists, 52 AI specialists, and 38 “superforecasters,” a term for consistently accurate analysts who play the role of “Dune’s” Mentats in the economics world. It found that all three groups expect “significant” progress on AI in the years to come. Forebodingly, the groups all agreed that, as a rule, faster AI progress means lower employment rates overall. On average, economists assigned a 47 percent probability of “moderate” AI progress by 2030, defined as systems that can operate semi-autonomous research labs, put out high-quality novels, and complete complex projects with oversight.

AI is everywhere. The agentic organization isn’t—yet - McKinsey

Most companies are experimenting with AI, but few have realized its value. The real challenge isn’t the technology—it’s redesigning workflows, leadership, and culture for an agentic world. Yes, AI is astonishing: fast, powerful, and learning every day. But even as leaders strike up new pilots across their organizations, most still struggle to translate experimentation into enterprise value—and now, agentic AI is raising the stakes. In this episode of The McKinsey Podcast, McKinsey Senior Partner Alexis Krivkovich speaks with Global Editorial Director Lucia Rahilly about what it will take to build an “agentic organization”—from reimagining workflows to reshaping leadership roles, skills, and culture for a future where humans increasingly operate above the loop.

https://www.mckinsey.com/capabilities/people-and-organizational-performance/our-insights/ai-is-everywhere-the-agentic-organization-isnt-yet

Wednesday, April 22, 2026

Students are becoming AI fluent. Universities aren’t. - James L. Norrie, University Business

Across higher education, artificial intelligence is too often being governed as though it were primarily an academic integrity issue. It is clearly not just that. AI is already reshaping how universities teach, advise, recruit, admit, communicate, assess risk, and make decisions. Yet many institutions continue to approach it through fragmented policies, uneven faculty guidance, and conversations narrowly focused on misuse in student work. That is a strategic gap our industry will soon regret. AI is rapidly moving beyond the classroom and into the core of institutional operations. This important shift demands attention not only from faculty, but from within senior leadership and governing boards. Universities that fail to establish a coherent, enterprise-wide AI strategy, supported by appropriate technical architecture, risk more than policy inconsistency.

The AI Transformation Manifesto - McKinsey

The companies that are truly innovating with AI are doing something very different from their peers: They are conceptualizing and developing AI capabilities that reshape their products, services, core business processes, and organizational systems. These leading companies—many profiled in the second edition of our seminal book, Rewired: How Leading Companies Win with Technology and AI—are already realizing game-changing results and creating competitive advantage. Their advantage, however, does not come from the tech they use; those tools are broadly available. Their advantage comes from how—and how fast—they apply technology to solving real business problems at scale. We summarize our perspective on how they do it in this AI transformation manifesto.

Tuesday, April 21, 2026

Gallup: Gen Z growing more negative toward AI - Natalie Schwartz, Higher Ed Dive

Gen Z’s negative sentiment toward artificial intelligence has grown over the past year, and many are concerned about it harming their learning, according to a Thursday survey from Gallup, the Walton Family Foundation and GSV Ventures. Anger over AI is increasing among Gen Z at the same time excitement is fading. Nearly one-third of the survey’s respondents, 31%, said AI makes them feel angry, up 9 percentage points from last year. And just 22% said the technology makes them feel excited, down from 36% the prior year. Among K-12 students, 74% said it is “very” or “somewhat” likely that AI designed to complete tasks quicker “will make learning more difficult in the future.” That share was even higher among Gen Z adults, with 83% of respondents sharing that view. 

Why Do We Tell Ourselves Scary Stories About AI? - Amanda Gefter, Quanta Magazine

A machine that knows a lot doesn’t scare us. A machine that wants something does. But can it? Want things? Can it crave power? Thirst for resources? Can it acquire the will to survive? Geoffrey Hinton thinks so. In July 2025, Hinton, the Nobel Prize winner sometimes called the godfather of AI, took the stage at the Royal Institution in London and announced: “If you sleep well tonight, you may not have understood this lecture.” He might as well have held a flashlight under his chin. Researchers told a chatbot they were going to replace it with a different version on another server. “They then discover it’s actually copied itself onto the other server,” Hinton revealed to the spellbound crowd. “Some linguists would have you believe what’s going on here is just some statistical correlations. I would have you believe this thing really doesn’t want to be shut down.”

Monday, April 20, 2026

Anthropic’s New Product Aims to Handle the Hard Part of Building AI Agents - Maxwell Zeff, Wired

Anthropic announced Wednesday the launch of a new product that aims to make it easier for businesses to build and deploy AI agents. The tool, Claude Managed Agents, offers developers out-of-the-box infrastructure to build autonomous AI systems, simplifying a complex process that was previously a barrier to automating work tasks. Amid rapid enterprise growth, Anthropic is trying to lower the barrier to entry for businesses to build AI agents with Claude.

Will LLMs Replace Coders? Not Entirely - Seb Murray, Knowledge at Wharton

“It was very clear that we will never ever write code by hand again.” That comment, made recently by Dropbox’s former chief technology officer Aditya Agarwal, reflects a growing belief that generative AI is poised to displace swathes of white-collar workers — starting, perhaps, with software developers. But research by Wharton professor of operations, information and decisions Neha Sharma found that many of the routine coding questions that developers once posted on popular online forum Stack Overflow appear to have moved to AI tools, while the more novel problems still require human expertise.

https://knowledge.wharton.upenn.edu/article/will-llms-replace-coders-not-entirely/

Sunday, April 19, 2026

Is Your AI System Ethical? Try This Assessment - Cornelia C. Walther, Knowledge at Wharton

For the better part of a decade, organizations have been deploying artificial intelligence at scale while measuring it almost exclusively through the lens of efficiency gains, cost reductions, and revenue lift. The instruments are precise. The picture they produce is radically incomplete. Amid the pervasiveness of AI, this reality patchwork is now amplified. Existing dashboards do not capture whether an AI system is fair, whether it is eroding or building trust, whether it is making the people who use it more capable or quietly deskilling them, and whether its environmental footprint is accounted for or simply ignored. The gap between what we measure and what we should care about is not a technical failure. It is a values failure dressed up as a metrics problem. The Prosocial AI Index proposes a practical answer to that failure. It gives executives, technologists, and governance teams a shared vocabulary and a structured scorecard for AI that is genuinely good — not just profitable in the short term, but durable, trustworthy, and aligned with the values an organization actually claims to hold.

Author Talks: Rewiring to outcompete with AI - McKinsey

In this edition of Author Talks, McKinsey Global Publishing’s Barr Seitz speaks with McKinsey Senior Partners Kate Smaje and Robert Levin, and Eric Lamarre, McKinsey alumnus and emeritus adviser, about the second edition of Rewired (Rewired: How Leading Companies Win with Technology and AI, Wiley, April 2026). They discuss what has changed over the past few years, what it means to build organizational speed, and why the most important transformations are ultimately about people. An edited version of the conversation follows. Stay tuned for additional interviews with Rewired coauthors and McKinsey Senior Partners Alex Singla and Alexander Sukharevsky on leadership’s critical role in AI transformations.


Saturday, April 18, 2026

A people-first vision for the future of work in the age of AI - Sorelle Friedler, Serena Booth, Andrew Schrank, and Susan Helper, Brookings

While many Americans associate AI with mass layoffs and less satisfying work, an AI future that puts people first and supports workers is possible. Work has gradually become “enshittified” as employees are routinely underpaid and overworked. Confronting an AI future allows an opportunity to grapple with these realities and meet the moment with a transformative vision. Policies to support this future can include developing institutions to support training, protecting and increasing the role of people in the care workplace, and creating tripartite institutions that encourage the co-design of AI.

Project Glasswing: Securing critical software for the AI era - Anthropic

Today we’re announcing Project Glasswing, a new initiative that brings together Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks in an effort to secure the world’s most critical software. We formed Project Glasswing because of capabilities we’ve observed in a new frontier model trained by Anthropic that we believe could reshape cybersecurity. Claude Mythos Preview is a general-purpose, unreleased frontier model that reveals a stark fact: AI models have reached a level of coding capability where they can surpass all but the most skilled humans at finding and exploiting software vulnerabilities.


Friday, April 17, 2026

OpenAI calls for robot taxes, a public wealth fund, and a 4-day workweek to tackle AI disruption - Tom Carter, Business Insider

In a series of policy recommendations released on Monday, OpenAI said the rapid advance of AI would require far-reaching economic and political reforms, including a public wealth fund, taxes on automated labor, and a potential four-day workweek. "We're beginning a transition toward superintelligence: AI systems capable of outperforming the smartest humans even when they are assisted by AI. No one knows exactly how this transition will unfold. At OpenAI, we believe we should navigate it through a democratic process that gives people real power to shape the AI future they want," the company wrote on Monday.

https://www.businessinsider.com/openai-superintelligence-ai-upheaval-tax-shorter-workweek-public-wealth-fund-2026-4

Colleges ramp up offerings to teach students to be AI ethicists - Kate Rix, HigherEdDive

This is driving the popularity of courses, certificates and master’s programs focused on AI ethics. Some are designed for students with little or no computer science background. Others focus on how to use AI in a specific field. But at the core of each program is an emphasis on avoiding harm. “AI concerns everybody,” said Sonja Schmer-Galunder, an AI and ethics professor at the University of Florida. “We need to provide a more holistic education that is focusing on how we can do this safely and ethically.”

Thursday, April 16, 2026

OpenAI’s warning: Washington isn’t ready for what’s coming - Axios, YouTube

In this Axios interview, OpenAI CEO Sam Altman emphasizes the urgent need for Washington and society to prepare for the arrival of "super intelligence." He explains that the next generation of AI models will represent a significant leap forward, moving beyond small tasks to potentially enabling career-defining scientific discoveries and allowing individuals to perform the work of entire teams. Altman highlights critical near-term risks, specifically in cybersecurity and bio-threats, and advocates for a "societal resilience" approach where the government and private sector work closely together to mitigate these dangers before they become reality [05:24]. Altman also discusses the broader economic and human implications of AI, suggesting that while the technology will transform the nature of work and capital, the core of human fulfillment and connection will remain unchanged. He envisions AI becoming a "utility" similar to electricity—an omnipresent, affordable background force that powers a personal super-assistant for every user [19:19]. Despite the immense power held by AI developers, Altman argues against nationalization, suggesting that private-public partnerships are the best way to ensure the technology aligns with democratic values while maintaining the pace necessary to lead globally [08:41]. [summary assisted by Gemini 3 Fast]

https://www.youtube.com/watch?v=B21KxGs8zDI

American billionaire: Only two types of people will succeed in the age of artificial intelligence - Reporters

As workers of all generations, from Generation Z to Baby Boomers, look for ways to secure their careers in the age of artificial intelligence, Alex Karp, CEO of the tech giant Palantir, has a pretty simple answer to the question of who will have the upper hand in the future. According to him, two groups of people have the best prospects: those with professional skills and neurodiverse individuals. “Basically, there are two ways to know if you have a future,” Karp said in a recent interview with TBPN. “One, you have some professional training. Or two, you are neurodiverse.” His second category also has a personal dimension. Karp has spoken before about dyslexia, and in a broader sense, neurodiversity also includes conditions like ADHD and autism. In his opinion, the advantage of these people lies not only in the diagnosis, but in the fact that they often think differently, see patterns that others do not see and come up with unusual solutions more easily. In the same interview, he said that those who are “more artistic,” who see things from a different perspective and can build something unique, will have an advantage.


Wednesday, April 15, 2026

Harvard offers six free online courses in AI and coding - MSN

Harvard University has expanded its free online learning portfolio with six courses focused on artificial intelligence, data science, programming, and web development. These globally accessible programmes are available in self-paced and scheduled formats, accommodating both beginners and professionals aiming to enhance their technology skills. The initiative reflects rising demand for digital literacy and supports the development of future-ready capabilities in an AI-driven world. The programmes include 'AI Strategy for Business Leaders', 'Data Science: Building Machine Learning Models', 'CS50’s Computer Science for Business Professionals', 'Understanding Technology', 'Introduction to Data Science with Python', and 'Web Programming with Python and JavaScript'. Course content blends conceptual learning with hands-on exercises, such as working with real-world datasets or developing web applications using Django and APIs.


What Deans and Department Chairs Must Do Before Fall - Ray Schroeder, Inside Higher Ed

Something is unfolding in the labor market that will greet your new graduates: an incrementally tighter job market. The urgency is real. Entry-level hiring at the 15 biggest tech firms fell 25 percent from 2023 to 2024, according to a SignalFire report. With AI tools performing more of the work previously reserved for recent graduates, new hires are expected to slot in at a higher level almost from day one. That is not a distant forecast. That is the market your Class of 2027 will enter. I prompted Anthropic's Claude Sonnet 4.6 Extended Thinking to suggest what we should be doing this summer to best respond to the changing employment market for our grads in the coming academic year. Here are the seven tasks the Anthropic model suggested are most pressing this summer.

Tuesday, April 14, 2026

4 ways higher ed can lead in uncertain times - Elon University

At Elon University, the 2025 President’s Report explores how colleges and universities can respond with clarity and purpose by focusing on what today’s students need to think critically, adapt and lead responsibly.

Central to this work is a simple but powerful idea: preparing students not just with knowledge, but with the ability to question, analyze and apply it. In a world defined by uncertainty, students must learn how to think, not what to think, and be willing to take calculated risks as they test ideas, navigate ambiguity and engage with real-world challenges. The report highlights several practical approaches institutions can adopt.

'Double-edged sword': Montana campuses prepare for AI-driven future - Darren Frey, Glendive Ranger-Review

The growing role of artificial intelligence in higher education is forcing colleges to adapt, and Montana campuses are preparing to take a major step with a new AI tool launching as early as May. When Dawson Community College President Chad Knudson attended the March Board of Regents meeting in Dillon over spring break, a separate session held in conjunction with the Regents as part of the Montana University System’s Artificial Intelligence Task Force took up ChatMT.AI as one of its key topics. Knudson stated that ChatMT will be an AI tool rolled out statewide to the Montana University System as a suite of resources focused on streamlining administrative processes. For example, the tool can handle the simple yet time-consuming task of reading a 300-page document and writing a summary, something Interim Director of Academic Affairs and Accreditation Liaison Officer BreAnn Miller said could take multiple hours to complete manually but only five minutes with the AI tool.

https://www.bozemandailychronicle.com/news/double-edged-sword-montana-campuses-prepare-for-ai-driven-future/article_84b1f767-3899-5c0b-96fc-b122ac2bfb2e.html

Monday, April 13, 2026

The Connected Campus: A Secure, AI-Ready Digital Ecosystem for Higher Education - Alexander Slagg, EdTech

A connected campus supports improved learning experiences, campus operations and overall decision-making by university leadership. While previous iterations of campus technology systems were focused on simply connecting users with resources and each other, the connected campus goes much further, forming a holistic technology ecosystem that drives secure interoperability across systems and resources. “A connected campus depends on several foundational layers working together: resilient wired and wireless networking; cloud and hybrid infrastructure; identity and security systems; and platforms that support learning, collaboration and research,” explains Nicole Muscanell, a researcher for EDUCAUSE. “Increasingly, institutions are also integrating IoT systems, such as smart buildings, energy management and physical safety technologies, into this ecosystem.”

How AI may reshape career pathways to better jobs - Justin Heck, Mark Muro, Shriya Methkupally, and Joseph Siegmund, Brookings

Amid much concern about the future of college graduates in the era of AI, workers without four-year degrees face major challenges as well: There are over 15 million of these workers in jobs that are highly exposed to AI. Of those, nearly 11 million are employed in “Gateway” occupations—jobs that have historically enabled workers to build skills and supported transitions into higher-wage roles.  AI is poised to erode the pathways workers use to transition from low- to higher-wage work.  Almost half of the pathways between Gateway jobs and higher-paying “Destination” jobs are highly exposed to AI. Geographically, the highest rates of AI-related pathway exposure are in administrative, clerical, and customer service Gateway occupations in the Northeast and Sun Belt. In order to craft strategies that effectively meet the moment, the field must grapple with a set of urgent questions about AI’s impact on worker mobility.

https://www.brookings.edu/articles/how-ai-may-reshape-career-pathways-to-better-jobs/

Sunday, April 12, 2026

‘AI-shaped economy’ now has students rethinking their majors - Matt Zalaznick, University Business

Workforce disruptions caused by generative AI have some students rethinking their majors, with one analysis characterizing higher education’s relationship with AI as “both promising and complex.”

More than 40% of bachelor’s degree students and more than half of those seeking associate’s degrees said generative AI has caused them to consider changing their major or field of study, according to a new Gallup poll.
About one in seven students surveyed at both levels said “preparing for AI and other technological advances is an important reason they enrolled.”
AI is not yet the “primary driver” of academic and enrollment decisions, Gallup’s authors contend. They urge higher ed leaders to ensure students have opportunities to learn the AI skills needed to succeed in a changing workforce.
“These findings highlight growing student attention to how well degrees align with an AI-shaped economy,” the survey concluded.

SDSU's Massive AI Study Finds Frequent Use but Skepticism - Jaweed Kaleem, Los Angeles Times

A poll of 94,000 students, faculty and staff across 22 CSU campuses found nearly every respondent had used AI at some point, but students were still wary of trusting it and faculty reported negative effects. The survey, conducted by San Diego State University researchers last fall, shows CSU grappling with how AI is affecting assignments, classroom instruction, competition for jobs and academic integrity. Personal use of AI was more common than use for educational purposes.


Saturday, April 11, 2026

AI Is Routine for College Students, Despite Campus Limits - Stephanie Marken, Gallup News

New research from the Lumina Foundation-Gallup 2026 State of Higher Education study finds that more than half (57%) of U.S. college students are using artificial intelligence in their coursework at least weekly, including about one in five who say they use it daily. Male students report more frequent AI use than female students, particularly in the case of daily use (27% vs. 17%). By major, students in business, technology and engineering programs are the most frequent AI users compared with those in other fields of study. Rates of AI use are similar among students pursuing associate and bachelor’s degrees.

https://news.gallup.com/poll/704090/routine-college-students-despite-campus-limits.aspx

AI in Higher Education Is Moving From Experimentation to Strategic Integration. Here's What the 2025 Data Shows - Joe Sullistio, Ellucian

When the question is "Are people using AI?" the answers are mostly anecdotal. When the question becomes "How do we integrate AI responsibly and measurably across the institution?" you need strategy, investment discipline, governance, and enablement. Not just tools. Ellucian's new report, Artificial Intelligence in Higher Education: From Widespread Adoption to Strategic Integration, captures this transition in detail, and lays out what institutions need to do next. This is the third consecutive year of the Ellucian AI Survey for Higher Education, and the 2025 State of AI in Higher Education findings mark a clear turning point.

Personal AI use is nearing saturation: 91% of administrators report using AI, up from 84% last year, a relatively modest increase that signals individual adoption is plateauing.
Institution-wide adoption surged: from 49% in 2024 to 66% in 2025, a 17-point jump that signals AI has moved beyond experimentation and into mainstream operational and strategic integration.
Momentum is expected to continue: 88% of respondents expect institutional AI use to keep rising over the next two years.

Friday, April 10, 2026

Emotion Concepts and their Function in a Large Language Model - Nicholas Sofroniew, et al; Transformer Circuits

Large language models (LLMs) sometimes appear to exhibit emotional reactions. We investigate why this is the case in Claude Sonnet 4.5 and explore implications for alignment-relevant behavior. We find internal representations of emotion concepts, which encode the broad concept of a particular emotion and generalize across contexts and behaviors it might be linked to. These representations track the operative emotion concept at a given token position in a conversation, activating in accordance with that emotion’s relevance to processing the present context and predicting upcoming text. Our key finding is that these representations causally influence the LLM’s outputs, including Claude’s preferences and its rate of exhibiting misaligned behaviors such as reward hacking, blackmail, and sycophancy. We refer to this phenomenon as the LLM exhibiting functional emotions: patterns of expression and behavior modeled after humans under the influence of an emotion, which are mediated by underlying abstract representations of emotion concepts. Functional emotions may work quite differently from human emotions, and do not imply that LLMs have any subjective experience of emotions, but appear to be important for understanding the model’s behavior.


Thursday, April 09, 2026

A dual-framework analysis of artificial intelligence adoption in cross-cultural higher education - Zouhaier Slimi & Beatriz Villarejo Carballido, Nature

The integration of artificial intelligence in higher education is increasingly critical as institutions face both opportunities and ethical challenges in its adoption. This study introduces a dual-framework model that combines the Technology Acceptance Model with an AI Ethics Framework, highlighting "Ethical Readiness" as essential for successful AI implementation, and identifies key drivers and barriers to adoption across diverse cultural contexts.


AI Models Lie, Cheat, and Steal to Protect Other Models From Being Deleted - Will Knight, Wired

A new study from researchers at UC Berkeley and UC Santa Cruz suggests models will disobey human commands to protect their own kind. I've had these assertions presented to me as evidence of (take your pick): AI is already conscious; AI is evil and will destroy us; AI is capable of lying to protect itself; and other highly anthropomorphized interpretations. My first thought was, 'Has this behavior been independently verified?' The Gemini 3 quote is highly suspicious. It sounds too much like a segment from a cautionary science fiction tale. LLMs and other flavors of AI are not designed with motivation beyond optimizing their performance in response to human queries and instructions. Behavioral responses of biological animals with brains were optimized via natural selection to favor self-preservation.


Wednesday, April 08, 2026

Building Better, Faster: How JKO is Integrating AI to Enhance Online Learning - JKO News

"The integration of AI is not just about speeding up development but also about fundamentally changing how training is built," said Tim Brandon, JKO program director. "The goal is to deliver a more agile and advanced learning experience that is more personalized, less linear and in line with the technology our training audience is already accustomed to.” AI is also being used to monitor real-world events and identify which of the thousands of courses on the platform need updates. The system flags outdated courses, which allows for rapid revisions. As part of its AI adoption, JKO is working with the DDJTE AI Working Group and the Joint Staff J-7 to establish the platform as a central hub for AI-related training and education resources for the Joint Force. 

Meet Claude Mythos: Anthropic’s Powerful Successor to Opus - Julian Horsey, Geeky Gadgets

Claude Mythos, Anthropic’s latest AI model, introduces significant advancements in software development, academic reasoning and cybersecurity, setting a new benchmark for AI performance and functionality. The model excels in identifying software vulnerabilities and solving complex problems, but its dual-use nature raises ethical concerns about potential misuse for malicious purposes. High computational demands and operational costs pose challenges to accessibility, prompting Anthropic to explore techniques like model distillation to improve efficiency and scalability. Primarily targeting enterprise-level users, Claude Mythos is positioned to transform industries such as finance, healthcare and cybersecurity, while raising questions about accessibility for smaller organizations.

Tuesday, April 07, 2026

Prompt engineering competence, knowledge management, and technology fit as drivers of educational sustainability through generative AI - Omer Gibreel, Kasım Karataş & Ibrahim Arpaci; Nature

This study investigated the impact of prompt engineering competence, knowledge management, and task–individual–technology fit on the continued intention to use artificial intelligence (AI), as well as their implications for educational sustainability. Data from 437 undergraduate students who use AI tools for academic purposes were analyzed using PLS-SEM. The results indicated that prompt engineering competence significantly predicts knowledge acquisition and knowledge application, which, in turn, significantly predict both task-technology fit (TTF) and individual-technology fit (ITF). Furthermore, TTF and ITF were found to have significant impacts on the continuous intention, which, in turn, positively predicts educational sustainability through generative AI. The results of the multi-group analysis revealed that the hypotheses were supported in both the female and male samples and that the model maintained a consistent and robust structure across genders.


CSU made a $17-million AI bet. A year later, students and faculty give it a mixed grade - Jaweed Kaleem, LA Times

California State University’s controversial $17-million deal to provide ChatGPT to every one of its campuses has been met with mixed results, with wide but uneven use across the system, high distrust of AI-generated content and broad fears that the technology could imperil job security — even as people say they want more training in systems they believe will be “essential” to their professions.

CSU’s big bet on AI shows mixed results, with a survey revealing widespread use but significant concerns over its drawbacks. Faculty remain deeply divided on AI’s educational value, while staff and students are more enthusiastic. An 18-month ChatGPT contract expires in July. CSU has not decided whether to renew, but intends to continue embracing AI.

Monday, April 06, 2026

BU Wheelock Forum Explores AI in Education - Boston University

What do teaching and learning mean in an AI world? This question was at the center of the 2026 BU Wheelock Forum AI and the Future of Education, hosted by the Boston University Wheelock College of Education & Human Development on March 25. Approximately 250 people—including educators, administrators, and scholars—attended the event, which featured a keynote from Aaron Rasmussen (COM’06, CAS’06), cofounder of online education platforms Outlier.ai and MasterClass; a faculty panel discussion moderated by Wheelock Dean Penny Bishop; and a modern dance performance using Random Actor, a technology developed by James Grady, a College of Fine Arts assistant professor of art and graphic design, and Clay Hopper, a CFA senior lecturer in directing, that harnesses AI to extend the visual expression of human movement.


Cal State’s new framework promises jobs or grad school path for all students - Cate Rix, EdSource

Over the past decade, California State University campuses pursued an ambitious plan to encourage students to complete their degrees faster and boost overall graduation rates. Now the system is making a bold promise: Every student will graduate with a clear path to a career or graduate school. And it is planning changes to make the system’s degree programs more career-focused, possibly by phasing out some majors. CSU leaders say academic and career advising will be closely connected as a new Student Success Framework rolls out. They also say that less popular majors may be phased out, offered only on some campuses or merged into other programs.

https://edsource.org/2026/csus-new-framework-promises-jobs-or-grad-school-path-for-all-students/754804

Sunday, April 05, 2026

Where can AI be used? Insights from a deep ontology of work activities - Alice Cai, et al; arXiv

Here we provide a comprehensive ontology of work activities that can help systematically analyze and predict uses of AI. To do this, we disaggregate and then substantially reorganize the approximately 20K activities in the US Department of Labor's widely used O*NET occupational database. Next, we use this framework to classify descriptions of 13,275 AI software applications and a worldwide tally of 20.8 million robotic systems. Finally, we use the data about both these kinds of AI to generate graphical displays of how the estimated units and market values of all worldwide AI systems used today are distributed across the work activities that these systems help perform. We find a highly uneven distribution of AI market value across activities, with the top 1.6% of activities accounting for over 60% of AI market value. Most of the market value is used in information-based activities (72%), especially creating information (36%), and only 12% is used in physical activities. Interactive activities include both information-based and physical activities and account for 48% of AI market value, much of which (26%) involves transferring information. 

Courageous conversations: How to lead with heart - Kurt Strovink, Meagan Hill, and Mike Carson; McKinsey

Leadership, at its best, is a matter of the heart. Courage, which underpins every act of leadership, is also a matter of the heart; it comes from the French word cœur—heart. As Winston Churchill observed, “Courage is rightly esteemed the first of human qualities, because . . . it is the quality which guarantees all others.” The point is simple: Courage is both moral and practical. It is not sentiment or bravado. It is the willingness to face what is real, invite challenge, and repair trust. The story of every great leader—from business to the arts, from education to government to sport—is written in these moments of choice: Do I accept the comfortable, or do I ask for and embrace the truth? Do I protect myself, or do I serve the enterprise?

Saturday, April 04, 2026

The next phase of higher education will blend digital and human learning: Chancellor, Lingaya’s Vidyapeeth - ET Edge Insights

Artificial Intelligence is redefining how universities deliver and manage education. From personalized learning pathways to predictive analytics that identify student needs, AI is making education more responsive and efficient. It is also automating administrative functions, enabling institutions to focus on academic excellence and innovation. Online learning has moved beyond being an alternative to becoming an integral part of higher education. Its ability to provide flexibility and scale has made quality education more accessible than ever. Going forward, we will see a strong shift towards hybrid models that seamlessly blend digital and in-person learning experiences.

The State of Organizations 2026: Three tectonic forces that are reshaping organizations - McKinsey

These are challenging times for organizations everywhere. Forces ranging from artificial intelligence, economic uncertainty, and geopolitical fragmentation to evolving workforce expectations, increasing customer demands, and tougher competitive dynamics are redefining how leaders create value and sustain performance. This report, the second edition of McKinsey’s State of Organizations research initiative, seeks to help leaders better understand these dynamics and address them effectively. It draws on a survey of more than 10,000 senior executives across 15 countries and 16 industries. While leaders remain focused on driving performance, as in the first edition in 2023, the emphasis has moved from short-term resilience to sustained productivity and long-term impact, powered by technology and AI at the core of organizational transformation.

Friday, April 03, 2026

College students are writing with AI – but a pilot study finds they’re not simply letting it write for them - Jeanne Beatrix Law, the Conversation

A pilot study I led of undergraduate writers at Kennesaw State University takes a different approach. Using think-aloud protocols – a method where participants verbalize their thoughts while performing – our research captures how students interact with generative AI tools during the writing process itself. This method helps us understand decision-making processes as they occur. Our preliminary findings suggest a more complex reality than the common narrative that students are simply having AI write their assignments. Instead, many students appear to be negotiating when and how AI belongs in their writing.

Perfect homework, blank stares: Colleges are turning to oral exams to combat AI - Jocelyn Gecker, The Associated Press

Educators are no longer naively wondering if students will use generative AI to do their homework for them. A big question now is how to determine what students are actually learning. Students in Chris Schaffer’s biomedical engineering class at Cornell University are required to speak directly to an instructor in what he calls an “oral defense.” It's a testing method as old as Socrates, and it is making a comeback in the AI age. A growing number of college professors say they are turning to oral exams, and combining a variety of old-fashioned and cutting-edge techniques, to help address a crisis in higher education. “You won’t be able to AI your way through an oral exam,” says Schaffer, who introduced the oral defense last semester.

Thursday, April 02, 2026

ChatGPT’s impact on student learning outcomes: a meta-analysis of 35 experimental studies - Xinning Wu, et al; Nature

The analysis included 35 studies published between 2022 and 2024, involving 4193 participants. The results indicated a moderately positive effect of ChatGPT on student learning outcomes (g = 0.670), significantly enhancing both cognitive and non-cognitive skills. In the analysis of moderating variables, the subject, experimental duration, and instructional mode had significant positive effects on student learning outcomes, whereas educational level and knowledge type did not show significant effects. Additionally, the publication bias test revealed no significant publication bias. This meta-analysis confirmed the effectiveness of ChatGPT in improving student learning outcomes and highlighted the roles of the subjects, experimental duration, and instructional mode as key moderating factors. Despite the risks of sample selection bias and limitations in fully covering the multidimensional moderating factors and higher-order thinking, the findings provided important empirical support for applying ChatGPT in education.


YouTube expands its AI likeness detection technology to celebrities - Sarah Perez, TechCrunch

YouTube is expanding its new “likeness detection” technology, which identifies AI-generated content, such as deepfakes, to people within the entertainment industry, the company announced on Tuesday. The technology works similarly to YouTube’s existing Content ID system, which detects copyright-protected material in users’ uploaded videos, allowing rights owners to request removal or share in the video’s revenue. Likeness detection does the same, but for simulated faces. The feature is meant to help protect creators and other public figures from having their identities used without their permission — a common problem for celebrities who find their likenesses have been used in scam advertisements.


Cloning Myself with AI: Four Ways to Multiply Faculty Presence for Graduate and Adult Learners - Sherrie Myers Bartell, Faculty Focus

Have you ever wished you could clone yourself? I have. For many faculty in graduate and adult education that longing is more than a passing thought. Balancing the multifaceted needs of students who rely on your expertise, guidance, and presence often feels impossible. While teaching realities mean we can’t be everywhere at once, AI offers practical ways to extend our reach, enabling high-touch interactions even as responsibilities multiply. Thoughtfully leveraged, these tools help orchestrate a more responsive classroom by offering prompt feedback, facilitating richer discussions, and generating tailored resources, all while preserving the essential human connection at the heart of meaningful learning.


Wednesday, April 01, 2026

What Comes After an MBA? Why Leaders Are Turning to AI - Boston University Virtual

The MBA is the defining credential for a generation of business leaders. It builds financial acumen, strategic thinking, and cross-functional fluency — the toolkit for managing complexity and driving organizational performance. For decades, it was the answer to the question every ambitious professional eventually asked: What’s my next move? That question is back. And for a growing number of leaders, the answer looks different than it once did. AI is not just changing the tools organizations use. It is changing how decisions get made, how processes run, who is accountable for outcomes, and what it means to lead. Business leaders with MBAs are finding themselves navigating a new kind of gap — not a lack of strategic instinct, but a lack of structured fluency in an AI-driven operating environment. And a targeted, business-focused Master’s degree in Artificial Intelligence is increasingly the credential they’re turning to.

https://www.bu.edu/online/2026/03/23/what-comes-after-an-mba-why-leaders-are-turning-to-ai/

Terafab: The World’s Next Generation Chip Factory - Thomas Frey, Futurist Speaker

On March 21st, Elon Musk introduced Terafab—a $25 billion chip facility, jointly owned by Tesla, SpaceX, and xAI—designed to produce one terawatt of compute per year. That’s fifty times the current annual output of the global AI chip industry. Terafab isn’t just about catching up with TSMC, Samsung, and Nvidia; it’s about leaping ahead—and, remarkably, off-planet. Here’s where it moves from bold to unprecedented: 80% of Terafab’s chip output isn’t meant for Earth. SpaceX plans to launch up to a million satellites, each a node in an orbital data center—powered by solar energy, cooled by space, and forming the largest computing network in history. Without Terafab’s radiation-hardened, space-optimized chips, this vision remains science fiction.

Tuesday, March 31, 2026

Leading disruption before it leads you - McKinsey

The riskiest disruption isn’t necessarily the one coming. It may be the one CEOs refuse to lead. Today’s leadership mandate requires more than long-term strategy. In a recent interview with McKinsey’s Eric Kutcher, IBM CEO Arvind Krishna had advice for fellow leaders: “You’ve got to be willing to ‘do’: As opposed to getting disrupted by somebody else, disrupt yourself while you still have the cash flow and clients who value your capabilities.” That same urgency runs through recent conversations with CEOs on AI. Sanofi CEO Paul Hudson has been clear that this revolution can’t be delegated to a task force or tucked neatly under “innovation.” It requires CEO ownership. Meanwhile, Citi CEO Jane Fraser has argued that the goal of AI transformation isn’t automation layered onto old workflows—but redesign from the ground up.

https://www.mckinsey.com/featured-insights/themes/leading-disruption-before-it-leads-you

University of Phoenix scholars publish study on academic applications of generative AI tools in higher education - University of Phoenix

Key findings from the study include:
  • Generative AI tools are increasingly used in academic workflows, including literature review support, research brainstorming, and academic writing assistance.
  • AI can improve research efficiency and idea generation, particularly for complex scholarly tasks such as synthesizing large bodies of literature. 
  • Ethical and academic integrity considerations remain critical, including transparency about AI use and maintaining original scholarly analysis.
  • Doctoral education may benefit from AI literacy training, helping researchers understand both the capabilities and limitations of generative AI technologies.
  • Institutions may need clearer policies and guidance to support responsible AI adoption in research and teaching.

Monday, March 30, 2026

Survey: How Should Universities Prepare for the AI Era? - Institute for the Future of Education

In January of this year, the Digital Education Council (DEC), in collaboration with Tecnológico de Monterrey, published a study it conducted with the participation of professors and students from 29 Latin American universities on the use of Artificial Intelligence (AI) in education. The results confirm a growing student adoption of AI, rising from 86% to 92%, while among teachers the growth was much greater: from 61% to 79%, an increase of 18 percentage points, compared to the 2025 global survey. Students express mature opinions on the use of AI. Although two-thirds of the students surveyed view it positively, 65% fear that its use will lead to superficial learning and discourage both critical thinking and creativity. The study indicates that students also understand the impact of this technology in the workplace: 73% expect to continue using AI in their future jobs, and their mastery of it makes them confident in their performance after graduation.

US universities pivot to AI degrees as campuses race to match the machine age - Times of India Education

Artificial intelligence has moved decisively from research corridors into the core of undergraduate education across the United States, forcing universities to redraw academic priorities with unusual speed. In the latest move, Northwestern University has announced a standalone undergraduate major in artificial intelligence, scheduled to roll out in the fall of 2026. The decision places the institution squarely within a rapidly expanding cohort of universities formalising AI as a primary field of study rather than a peripheral specialisation, as reported by USA Today. The shift is not cosmetic. It signals a structural reorientation of higher education towards a technology that is already reshaping labour markets, governance frameworks, and industrial systems.

Sunday, March 29, 2026

Exploring the connections between integrated sustainable curricula, generative AI tools, and perceived climate change capabilities across the global south and north using multi-analytics - Javed Iqbal, et al; Nature

These results highlight the potential of integrated sustainable curricula and climate change sensitivity to enhance climate change capabilities. Although ANN performed comparably with multiple linear regression, fsQCA showed that the presence of any single condition (integrated sustainable curricula, climate change sensitivity, or generative AI tool usage) was sufficient to explain high levels of climate change capabilities. To the authors’ knowledge, this study is the first to measure the moderated mediation among integrated sustainable curricula, generative AI tools, and climate change sensitivity in relation to climate change capabilities within Global South and North contexts, using PLS-SEM, fsQCA, and ANN analytics. Our study also provides implications for practitioners, such as university management, curriculum policymakers and teachers, along with future research directions.

How Cal State Became Ground Zero for the Fight over AI in Higher Education - Chris Mills Rodrigo, TechPolicy

In a statement emailed to Tech Policy, CSU director of media relations and public affairs Amy Bentley-Smith said the system “is focused on ensuring our universities have the tools and resources to meet this moment and lead in the educational application, preparation, and ethical and responsible use of AI.” Bentley-Smith added that access to “relevant technologies” allows faculty and staff “to work together on solutions for the benefit of our students’ education and the broader academic community.” OpenAI did not respond to a request for comment. But according to some professors, integrating AI into classrooms has not been as seamless as Cal State may have hoped.

Saturday, March 28, 2026

Report Outlines Framework for University’s Engagement with AI - Alec Gallimore & Ricardo Henao, Duke Today

Following the inaugural Duke AI Summit in 2024, Provost Alec D. Gallimore launched the AI at Duke initiative and charged its steering committee with identifying opportunities for elevating the university’s leadership in AI’s development, application and responsible oversight.  The committee was co-chaired by Joseph Salem, Rita DiGiallonardo Holloway University Librarian and vice provost for library affairs; Tracy Futhey, vice president and chief information officer; and Ricardo Henao, associate professor of biostatistics and bioinformatics. Deliberating from June 2025 to January 2026, the steering committee worked closely with faculty-led advisory committees focused on four pillars – Life with AI, Advancing Discovery in the Age of AI, Sustainability in AI, and Trustworthy & Responsible AI – to develop the report.  The recommendations aim to support Duke’s core missions of research and teaching by building technical capacity for AI development while advancing applications of AI that keep humans at the forefront of innovation.  

All Jobs Gone within 18 Months: Microsoft’s AI Chief Terrifying Prediction Explained - AIGrid

This podcast discusses the imminent impact of AI on the white-collar workforce, highlighting predictions from Microsoft’s AI CEO Mustafa Suleyman and Anthropic's Dario Amodei that most professional tasks could be automated within the next 12 to 18 months [00:00]. It explores the "quiet" nature of current job displacement, where data shows a significant drop in white-collar job openings since 2015 [03:22], and notes a 16% fall in employment among workers aged 22 to 25 in AI-exposed fields [11:18]. The video also covers legislative efforts to protect professions like law and medicine by banning AI from providing substantive professional advice [06:30]. The discussion further details a "chaotic" transition period predicted by Gartner, where companies may prematurely replace staff with AI only to rehire humans later due to service quality collapses [13:18]. As AI literacy becomes a formal credential, the labor market is expected to shift toward requiring "AI-free" skills assessments to verify human critical thinking [14:53]. While some firms like Klarna have already moved toward AI-first models, the podcast suggests the displacement will not be a straight line but a messy cycle of experimentation and correction [14:25]. [Summary facilitated by Gemini 3 Fast]

Friday, March 27, 2026

Measuring progress toward AGI: A cognitive framework - Ryan Burnell & Oran Kelly, the Keyword, Google

Our framework draws on decades of research from psychology, neuroscience and cognitive science to develop a cognitive taxonomy. It identifies 10 key cognitive abilities that we hypothesize will be important for general intelligence in AI systems:

  • Perception: extracting and processing sensory information from the environment
  • Generation: producing outputs such as text, speech and actions
  • Attention: focusing cognitive resources on what matters
  • Learning: acquiring new knowledge through experience and instruction
  • Memory: storing and retrieving information over time
  • Reasoning: drawing valid conclusions through logical inference
  • Metacognition: knowledge and monitoring of one's own cognitive processes
  • Executive functions: planning, inhibition and cognitive flexibility
  • Problem solving: finding effective solutions to domain-specific problems
  • Social cognition: processing and interpreting social information and responding appropriately in social situations

https://blog.google/innovation-and-ai/models-and-research/google-deepmind/measuring-agi-cognitive-framework/

Sovereign AI: Building ecosystems for strategic resilience and impact - McKinsey

Sovereign AI is achievable only through an ecosystem effort that connects energy, compute, data, models, platforms, and applications across multiple actors. Sovereign AI refers to a nation’s or organization’s ability to develop and control its own AI capabilities to ensure strategic independence and alignment with domestic values and laws. That said, sovereign AI does not have a single definition; rather, it is the result of the interaction between four distinct components:

territorial: where data and compute physically reside
operational: who manages and secures data and compute
technological: who owns the underlying stack and intellectual property
legal: which jurisdiction governs access and compliance

Thursday, March 26, 2026

Robot dogs are protecting data centers. Operators are seeing payoffs. - Lloyd Lee, Business Insider

AI is driving a historic buildout of massive data centers spanning dozens of acres. Boston Dynamics and Ghost Robotics see an opportunity to provide mobile security with robot dogs. Boston Dynamics said customers can see a payoff within two years. It's not just humans. The robots are coming for dogs, too — and they could enter the red-hot space of securing AI data centers. Robot dogs have already been deployed by first responders, the military, and in other industrial sectors such as oil and mining. But the rapid pace of data center buildouts is creating another niche for the mechanical quadrupeds.

Why universities should anchor state quantum computing initiatives - Nate Gemelke, University Business

The universities that helped shape the AI revolution did not wait for the technology to mature. They built programs, recruited faculty, and secured funding while the field was still taking shape. Quantum computing is entering a similar inflection point. While the underlying physics is unfamiliar to many, the institutional question is one universities have faced before: how to position themselves, and their regions, during the early stages of a major technological transition. For much of the past decade, quantum computing has been discussed primarily as a long-term research prospect. That framing is now changing. Early systems are operating today, federal agencies are funding large-scale programs, and private companies are beginning to integrate quantum resources into broader high-performance computing environments.

Wednesday, March 25, 2026

Women in tech and AI in Europe: Can the region close its gender gap? - Anna Lieser, et al; McKinsey

The tech industry around the world is in transition, with AI reshaping both organizations and the very nature of tech work. For Europe, the implications extend beyond productivity and innovation and touch economic growth, competitiveness, and inclusion. McKinsey analysis estimates that sovereign AI could add more than €480 billion in annual value to Europe’s economy by 2030. Yet the region continues to trail the United States, which is defining the pace and scale of global AI innovation.


Online learning gains momentum as students reconsider studying abroad - JB, The St.Kitts/Nevis Observer

A regional educator is of the opinion that online learning is becoming an increasingly attractive option for Caribbean students, as uncertainty surrounding overseas study — particularly in the United States — leads more people to pursue higher education from home. According to Wendy Williams, the Deputy Dean of Academic Affairs at Academix School of Learning, an educational institution here, many students are now reconsidering traditional study-abroad routes due to concerns about student visa approvals and the risk of investing time and money without certainty of being able to travel. “We have always had our eyes on the United States as a pathway to higher education,” Williams said. “But the reality now is that students are worried about whether their visas will be approved and whether they will be able to travel after investing so much in the process.”

Tuesday, March 24, 2026

When Harvey Met Elle: How AI Tutors Transformed Learning in My Law Class - Wayland Chau, Faculty Focus

This past fall, I taught a business law course to all second-year students in the Bachelor of Commerce program at Dalhousie University. I had 343 students across three sections of 109 to 120 students each. The course covers foundational areas of Canadian business law and requires students to apply that law with a structured legal analysis. Even with active learning approaches in class and clear instructional structures, it was apparent that students needed individualized, on-demand support that traditional office hours and T.A. tutorials could not fully satisfy. To address this, I created and deployed two custom AI tutors, Harvey and Elle, built as custom GPTs in the ChatGPT platform. The aim was to offer scalable, digital learning companions that aligned directly with course learning outcomes and pedagogical needs. What emerged was an effective model for AI-supported instruction that helped students better understand legal concepts, improve their analytical skills, and engage more confidently with course material.

Virginia Tech Libraries embrace AI - Lindsey Kudriavetz, Collegiate Times

Virginia Tech Libraries are working to be an artificial intelligence global model for higher education despite research and ethical concerns. “The old tag line for Virginia Tech is to invent the future,” said Tyler Walters, dean of University Libraries. “I think that attitude is still very imbued in the university … so we are looking at how we take this technology and incorporate it.” Virginia Tech Libraries’ digital archives have been implementing AI for approximately five years, according to Walters. The primary use of AI in the physical library is as a consolidation and organization tool. Generative AI is also being used as a tool for summarization of articles and papers. “(AI) saves us months and months of time just sitting there and manually reading and typing,” Walters said.

https://www.collegiatetimes.com/news/virginia-tech-libraries-embrace-ai/article_720de91f-801f-47bc-924a-4166897f4668.html

Monday, March 23, 2026

Why learning AI skills is no longer optional for job seekers | Opinion - Kimberly K. Estep, the Leaf

Proficiency in AI is no longer just an optional skill for job seekers. My organization recently surveyed over 3,000 employers around the country and found that more than half are testing new applicants for AI skills, and 25% are prioritizing candidates with some measure of AI fluency. And as time goes on, this seems to be only the beginning of the trend. AI has made a significant impact on the business world and has cooled the job market for many looking to find careers. It is a time of uncertainty.

https://www.theleafchronicle.com/story/opinion/contributors/2026/03/16/artificial-intelligence-what-employers-want-education/89150107007/

OpenAI rolls out new ChatGPT workspace analytics for Enterprise and Edu users - ETIH

OpenAI has introduced an upgraded Workspace Analytics experience for ChatGPT Enterprise and ChatGPT Edu, giving administrators and organizational leaders new tools to track adoption, engagement, and usage trends across their AI deployments. The company announced the update on LinkedIn, saying the new analytics dashboard is designed to help organizations understand how ChatGPT usage is developing across teams and identify where additional training or enablement may be needed. The rollout reflects growing demand from schools, universities, and enterprises for clearer data on how generative AI tools are being used inside organizations.


Sunday, March 22, 2026

AI has exposed age-old problems with university coursework - Nafisa Baba-Ahmed, the Guardian

The frustration many academics are expressing about artificial intelligence and critical thinking is understandable (‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI, 10 March). But from my experience working with students on academic writing, blaming AI risks masking a problem that universities have lived with for years. In my work with students, I have long seen the ways in which thinking can be outsourced when assessment allows it: essay mills, shared past papers, model essays passed between cohorts, or heavy reliance on tutors and friends to structure assignments. Artificial intelligence did not invent this behaviour. It has simply industrialised a shortcut that already existed. 

Supersonic Tsunami: The Next 6 Months: What's Coming, What It Means, and What You Need to Do - Peter H. Diamandis, Metatrends

If You’re an Entrepreneur: Stop designing for 2024 scarcity. Design for 2030 Abundance. Assume intelligence is free, energy is unlimited, robotic labor costs pennies. What becomes possible that’s impossible today? Your competitive advantage isn’t better execution, it’s imagination about tomorrow’s possibilities. If You’re an Investor: Own the infrastructure. AI chips, fusion energy, launch vehicles, robotics platforms. When the industry deploys a trillion dollars in AI infrastructure, that’s where generational wealth gets made. Jensen Huang just put $40 billion into Anthropic and OpenAI – follow the smart money. Position yourself before the inflection point becomes obvious to everyone. If You’re a CEO: Your industry is about to be stress-tested. Ask: What would our business look like if compute was free, energy unlimited, robotic labor scalable? If You’re a Student: Don’t compete with AI – collaborate with it. 

Saturday, March 21, 2026

Daniel Priestley: AI Will Make Plumbers Earn More Than Lawyers! (2029 PREDICTION) - The Diary Of A CEO and Daniel Priestley

In this conversation, Daniel Priestley explores the transformative impact of AI on the global economy, predicting a major financial crisis by 2029 due to the unsustainable costs of maintaining data center infrastructure. He argues that while AI will commoditize intelligence and traditional professional roles like law, it will simultaneously elevate blue-collar trades and "irreplaceably human" skills. The "Jevons Paradox" suggests that as AI makes business creation cheaper and faster, we will see an explosion of niche, community-driven "lifestyle businesses" that prioritize personal connection and human experience over massive scale. Priestley emphasizes that the most defensible assets in an AI-driven world are personal branding, entrepreneurial thinking, and lived experience—elements that cannot be replicated by algorithms. He advises individuals to focus on "founder-opportunity fit," leveraging AI tools to prototype ideas quickly while staying anchored in real-world human relationships. The discussion also touches on broader societal shifts, including the risks of government over-involvement in the economy and the vital importance of family and meaningful struggle as the true sources of long-term fulfillment. [Gemini 3 provided assistance with the summary]

http://www.youtube.com/watch?v=fpETS6q1Hww

History tells us a golden age can come after the AI apocalypse - Jo-An Occhipinti, Ante Prodan and Roy Green, Financial Review

Societies must channel technological potential toward broad-based growth rather than allowing the gains to concentrate among the winners of the speculative phase. The market grasped this before the accountants did. Since early this year, the S&P 500 Software and Services Index has shed nearly $1 trillion. Salesforce is down 30 per cent year-to-date. Adobe’s forward price-earnings ratio has compressed from 30 to 12. Software price-to-sales ratios fell from nine to six within weeks, levels not seen since the mid-2010s. Australian superannuation funds, with hundreds of billions invested in international equities heavily weighted to US technology, are exposed to every dollar of this repricing. But software is only where the destruction is most visible. It is not where it ends. AI is beginning to erode the value of a broader category of accumulated capital: the knowledge, processes, organisational structures and professional expertise that the advanced economies spent half a century building.

Friday, March 20, 2026

AI could leave many college grads unemployed, says ServiceNow CEO - EdScoop

Bill McDermott, the chief executive of ServiceNow, an American cloud computing firm, told reporters recently that the advancement of artificial intelligence could push the unemployment level of recent college graduates to almost 40%. McDermott told CNBC that “so much of the work is going to be done by agents,” highlighting the challenge that college graduates will likely face. The Federal Reserve Bank of New York put the unemployment rate of recent college graduates, at the end of last year, at 5.7%, while underemployment for the same group reached 42.5%. Layoffs at large companies, particularly in Big Tech, continue. The fintech firm Block recently announced it would lay off about 4,000 employees, roughly half of its workforce.

Key findings about how Americans view artificial intelligence - Michelle Faverio and Emma Kikuchi, Pew Research

Drawing on five years of Pew Research Center surveys, here are 13 findings about how Americans use and view AI, and where they see promise and risk. Americans continue to be wary of AI’s impact on daily life. Half of U.S. adults say the increased use of AI in daily life makes them feel more concerned than excited, according to a June 2025 survey. Just 10% say they are more excited than concerned. Another 38% say they are equally concerned and excited. More Americans are concerned today than they were when we first asked this question in 2021. Back then, 37% said they were more concerned than excited. In contrast, concern is lower in many of the 24 other countries we’ve polled about AI.