Monday, December 22, 2025

Purdue unveils comprehensive AI strategy; trustees approve ‘AI working competency’ graduation requirement - Phillip Fiorini, Purdue

Purdue University on Friday (Dec. 12) unveiled a broad AI@Purdue strategy spanning five functional areas: Learning with AI, Learning about AI, Research AI, Using AI and Partnering in AI. A key element of the comprehensive plan came as the Board of Trustees approved a first-in-the-nation plan to introduce an “AI working competency” graduation requirement for all undergraduate students on main campus (Indianapolis and West Lafayette). “The reach and pace of AI’s impact to society, including many dimensions of higher education, means that we at Purdue must lean in and lean forward and do so across different functions at the university,” Purdue President Mung Chiang said. “AI@Purdue strategic actions are part of the Purdue Computes strategic initiative, and will continue to be refreshed to advance the missions and impact of our university.”

2025: The State of Generative AI in the Enterprise - Tim Tully, Joff Redfern, Deedy Das, Derek Xiao, Menlo

Venture funding surged back toward all-time highs, with nearly half of it concentrated in just a handful of frontier AI labs. Then the euphoria peaked. An MIT study claiming that 95% of generative AI initiatives fail rattled markets over the summer, exposing how quickly sentiment could shift beneath the weight of AI’s massive capex spend. The whispers of a bubble became a din. The concerns aren’t unfounded given the magnitude of the numbers being thrown around. But the demand side tells a different story: Our latest market data shows broad adoption, real revenue, and productivity gains at scale, signaling a boom rather than a bubble.

Sunday, December 21, 2025

The Psychology of AI Doom - Andrew Ng, The Batch

In this letter, I’d like to explore why some people who are knowledgeable in AI take extreme positions on AI “safety” that warn of human extinction and describe scenarios, such as AI deciding to “take over,” based less on science than science fiction. As I wrote in last year’s Halloween edition, exaggerated fears of AI cause real harm. I’d like to share my observations on the psychology behind some of the fear mongering. Companies that are training large models have pushed governments to place large regulatory burdens on competitors, including open source/open weights models. A few enterprising entrepreneurs have used the supposed dangers of their technology to gin up investor interest. After all, if your technology is so powerful that it can destroy the world, it has to be worth a lot! Fear mongering attracts a lot of attention and is an inexpensive way to get people talking about you or your company. This makes individuals and companies more visible and apparently more relevant to conversations around AI. It also allows one to play savior: “Unlike the dangerous AI products of my competitors, mine will be safe!” Or “unlike all other legislators who callously ignore the risk that AI could cause human extinction, I will pass laws to protect you!” To be clear, AI has problems and potentially harmful applications that we should address. But excessive hype about science-fiction dangers is also harmful.


GPT-5.2 is the first human replacer - Wes Roth, YouTube

This video by Wes Roth, published in December 2025, discusses the release of OpenAI's GPT-5.2, describing it as a massive leap forward rather than a small incremental update. The second half of the video focuses on the economic implications, specifically analyzing a new benchmark called "GDPval," which measures performance on real-world, economically valuable tasks. In this benchmark, GPT-5.2 Pro achieved a 74% win/tie rate against human industry experts—a significant jump from the ~39% score of previous models just months prior. Roth argues this signals a critical turning point where AI is beginning to outperform experienced professionals (with an average of 14 years of experience) at a fraction of the cost, citing a 400x cost reduction in one year. The video concludes with a discussion on the potential for "catastrophic job loss" as AI intelligence per dollar continues to skyrocket, validating fears that human labor in many sectors could soon be replaced. (Gemini 3 Pro assisted with this summary.)


Saturday, December 20, 2025

Introducing GPT-5.2: The most advanced frontier model for professional work and long-running agents - OpenAI

We are introducing GPT‑5.2, the most capable model series yet for professional knowledge work. Already, the average ChatGPT Enterprise user says AI saves them 40–60 minutes a day, and heavy users say it saves them more than 10 hours a week. We designed GPT‑5.2 to unlock even more economic value for people; it’s better at creating spreadsheets, building presentations, writing code, perceiving images, understanding long contexts, using tools, and handling complex, multi-step projects. GPT‑5.2 sets a new state of the art across many benchmarks, including GDPval, where it outperforms industry professionals at well-specified knowledge work tasks spanning 44 occupations.


Texas Christian University Commits $10M to Expand AI Use - Samuel O'Neal, Fort Worth Star-Telegram

A private research university in Texas announced a partnership with Dell to accelerate the use of artificial intelligence on campus and implement an AI system that keeps critical data in-house. The partnership, called AI², is one of TCU’s largest-ever research and technology commitments. AI² will enrich the student learning experience and career preparation by expanding responsible AI usage across campus, TCU said. “AI is considered to be the fifth industrial stage in the world,” Reuben Burch, TCU’s vice provost for research, told the Star-Telegram. “There was fire and steel and now there’s AI. It’s on us that we need to include it for all faculty and students because there’s a world where they don’t use it, and they’re going to get left behind.”

Friday, December 19, 2025

Universities must respond to students’ emotional reliance on AI - Agnieszka Piotrowska, Times Higher Ed

If a student feels remembered by a machine but overlooked by humans, something in the educational contract has broken, says Agnieszka Piotrowska. One of my research students told me recently, almost apologetically, that he sometimes turns to ChatGPT “as an emotional crutch”. He said it seemed to understand him better than his therapist. When I asked why, he said, “It remembers me, my problems and my stories better.” He did not tell me which model he used. I did not ask. We both felt faintly embarrassed, and I am sure this conversation was only possible because psychoanalysis is one of my core disciplines. Students are not supposed to form emotional attachments to software. Academics are not supposed to recognise the loneliness that makes such attachments imaginable. And yet here we are.

https://www.timeshighereducation.com/opinion/universities-must-respond-students-emotional-reliance-ai

AI in Higher Education: A Guide for Teachers - Alexandra Shimalla, EdTech

For many faculty members in higher ed, conversations about artificial intelligence in academia often include the same concerns: There isn’t enough time in the day, AI will erode critical thinking, educators are already stretched thin, and we have to consider compromised data and privacy concerns. The list of fears and frustrations from faculty goes on, but as universities explore the benefits of generative AI in higher education and look to the future of their classrooms and what’s best for students, it’s obvious that AI needs to find a place on the syllabus. “At a time when everybody’s overwhelmed, having to do more new things is hard,” says Laura Morrow, senior director for the Center for Teaching and Learning at Lipscomb University. “Fear of what’s going to happen is a big barrier.”

Thursday, December 18, 2025

To AI-proof exams, professors turn to the oldest technique of all - Joanna Slater, Washington Post

A growing number of educators are finding that oral exams allow them to test their students’ learning without the benefit of AI platforms such as ChatGPT. When students in Catherine Hartmann’s honors seminar at the University of Wyoming took their final exams this month, they encountered a testing method as old as the ancient philosophers whose ideas they were studying. For 30 minutes, each student sat opposite Hartmann in her office. Hartmann asked probing questions. The student answered. Hartmann, a religious studies professor who started using oral examinations last year, is not alone in turning to a decidedly old-fashioned way to grade student performance.

How AI is redefining the COO’s role - McKinsey Podcast

Productivity across sectors is slowing, and labor shortages persist. COOs are in an exceptional position to help their companies address these and other macro trends using AI. From gen AI pilots to automated supply chains, technology is reshaping how operations leaders create efficiencies, build resilience, and encourage teamwork. On this episode of The McKinsey Podcast, McKinsey Senior Partner Daniel Swan speaks with Editorial Director Roberta Fusaro about how COOs can embed technology, particularly AI, into their company’s culture. It requires balancing the urgency of today with the transformation of tomorrow.


Wednesday, December 17, 2025

OpenAI boasts enterprise win days after internal ‘code red’ on Google threat - Rebecca Bellan, TechCrunch

OpenAI released new data Monday showing enterprise usage of its AI tools has surged dramatically over the past year, with ChatGPT message volume growing 8x since November 2024 and workers reporting they’re saving up to an hour daily. The findings arrive a week after CEO Sam Altman sent an internal “code red” memo about the competitive threat of Google. The timing underscores OpenAI’s push to reframe its position as the enterprise AI leader, even as it faces mounting pressures. While close to 36% of U.S. businesses are ChatGPT Enterprise customers compared to 14.3% for Anthropic, per Ramp AI Index, the majority of OpenAI’s revenue still comes from consumer subscriptions — a base that’s being threatened by Google’s Gemini.

Becoming a tech-savvy leader - McKinsey

The importance of technology in modern business has put increased pressure on leaders to become more tech savvy. So far, so good. But what being “tech savvy” actually means for today’s business leaders is hard to define. Neesha Hathi, managing director and head of Wealth & Advice Solutions at Charles Schwab and its former chief digital officer, didn’t begin her career as a techie. She started on the finance side but quickly realized the need for a firm grasp of technology to solve important business problems and address client needs. Hathi recently spoke to McKinsey editorial director Barr Seitz about her journey to tech savviness by moving beyond conceptual understandings of technology to its practical applications.

https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/becoming-a-tech-savvy-leader

Tuesday, December 16, 2025

QUANTUM WILL ECLIPSE AI: Why Everyone’s Betting on the Wrong Horse - Julia McCoy, YouTube

The video argues that while the world is focused on artificial intelligence, quantum computing is poised to become the most powerful technology in history. The host explains that current AI development is hitting a "power ceiling" where it requires exponentially more energy and infrastructure to make incremental gains. In contrast, quantum computing operates on a fundamentally different paradigm; instead of processing information sequentially like classical computers (and AI), quantum computers can access multiple states simultaneously. This allows them to solve complex problems in minutes that would take traditional supercomputers—and by extension, current AI—trillions of years to compute [01:59]. The transcript outlines a timeline where quantum technology overtakes AI, predicting that by 2030, the most valuable companies will be those building quantum infrastructure rather than AI models [12:26]. It highlights transformative applications in fields like drug discovery, climate modeling, and financial logistics, but also warns of "Q-Day"—a future moment when quantum computers will be powerful enough to break all current digital encryption [06:43]. The video concludes that AI is merely "middleware" or a warm-up act designed to prepare humanity for the true endgame: a quantum age where technology can solve problems currently beyond human comprehension.

What and How to Teach When Google Knows Everything and ChatGPT Explains It All Very Well - Ángel Cabrera, President, Georgia Tech

In higher education, we have no choice but to accept that machines already are — or very soon will be — better than humans at virtually every intellectual and cognitive task. We can resist, we can throw tantrums, we can ban AI in classrooms. It is a futile battle — and, in fact, it’s the wrong battle. It's true that, after the Industrial Revolution, a few artisanal shoemakers remained, and beautiful Steinway pianos (which take a year to build and cost $200,000) are still made by hand. But they are exceptions — luxury niche products for nostalgics and enthusiasts. Meanwhile, Pearl River in China produces 150,000 pianos per year (400 per day) that sound excellent and cost a fraction of the price. 

If resistance is pointless, what is the right battle, so we do not become relics of the past?

Teach AI.

Teach with AI.

Research AI.

Help others benefit from AI.

https://president.gatech.edu/blog/what-and-how-teach-when-google-knows-everything-and-chatgpt-explains-it-all-very-well