A peculiar quiet has settled over higher education, the sort that arrives when everyone is speaking at once. We have, by now, produced a small library of earnest memos on “AI in the classroom”: academic integrity, assessment redesign, and the general worry that students will use chatbots to avoid thinking. Our institutions have been doing the sensible things: guidance documents, pilot projects, professional development, conversations that oscillate between curiosity and fatigue. Much ink has been spilled, many human-hours of meetings invested, and many strategic plans written. All of this is necessary. It is also, perhaps, insufficient. What if the core challenge is not that students can outsource an essay, but that expertise itself (the scarce, expensive thing universities have historically concentrated, credentialled, and sold back to society) may become cheap, abundant, and uncomfortably good?
Saturday, February 14, 2026
ChatGPT is in classrooms. How should educators now assess student learning? - Sarah Elaine Eaton, et al; the Conversation
Friday, February 13, 2026
Google’s AI Tools Explained (Gemini, Photos, Gmail, Android & More) | Complete Guide - BitBiasedAI, YouTube
This podcast provides a comprehensive overview of how Google has integrated Gemini-powered AI across its entire ecosystem, highlighting tools for productivity, creativity, and daily navigation. It details advancements in Gemini as a conversational assistant, the generative editing capabilities in Google Photos like Magic Eraser and Magic Editor, and time-saving features in Gmail and Docs such as email summarization and "Help Me Write." Additionally, the guide covers mobile-specific innovations like Circle to Search on Android, AI-enhanced navigation in Google Maps, and real-time translation tools, framing these developments as a cohesive shift toward more intuitive and context-aware technology for everyday users. (Summary assisted by Gemini 3 Pro Fast)
HUSKY: Humanoid Skateboarding System via Physics-Aware Whole-Body Control - Jinrui Han, et al; arXiv
Thursday, February 12, 2026
Moltbook Mania Exposed - Kevin Roose and Casey Newton, New York Times
The Only Thing Standing Between Humanity and AI Apocalypse Is … Claude? - Steven Levy, Wired
Anthropic is locked in a paradox: among the top AI companies, it is the most obsessed with safety and leads the pack in researching how models can go wrong. But even though the safety issues it has identified are far from resolved, Anthropic is pushing just as aggressively as its rivals toward the next, potentially more dangerous, level of artificial intelligence. Its core mission is figuring out how to resolve that contradiction. OpenAI and Anthropic are pursuing the same thing: AGI (artificial general intelligence), AI that is sentient and self-aware. The difference is that Anthropic is seeking AGI with guardrails, an approach known as "alignment" (or, in Anthropic's terms, "Constitutional AI"). The fear is that, without alignment, an AGI might decide that humanity and all of Earth's resources are needed to achieve whatever task it was designed to solve, and that once it is sentient, this would happen too quickly for humanity to pull the plug. So Anthropic wants alignment. The real question is whether it could ever achieve AGI at all.
https://www.wired.com/story/the-only-thing-standing-between-humanity-and-ai-apocalypse-is-claude/
Wednesday, February 11, 2026
AI-powered search is changing how students choose colleges - Michelle Centamore, University Business
Universities And States Lead Charge On AI Education - Evrim Ağacı, Grand Pinnacle Tribune
Tuesday, February 10, 2026
Working with AI: Measuring the Applicability of Generative AI to Occupations - Kiran Tomlinson, Sonia Jaffe, Will Wang, Scott Counts, Siddharth Suri; Microsoft
Given the rapid adoption of generative AI and its potential to impact a wide range of tasks, understanding the effects of AI on the economy is one of society’s most important questions. In this work, we take a step toward that goal by analyzing the work activities people do with AI and how successfully and broadly those activities are done, combining that with data on which occupations perform those activities. We analyze a dataset of 200k anonymized and privacy-scrubbed conversations between users and Microsoft Bing Copilot, a publicly available generative AI system. We find the most common work activities people seek AI assistance for involve gathering information and writing, while the most common activities that AI itself is performing are providing information and assistance, writing, teaching, and advising.
AI Can Raise the Floor for Higher Ed Policymaking - Jacob B. Gross, Inside Higher Ed
Monday, February 09, 2026
Evaluating AI-powered learning assistants in engineering higher education with implications for student engagement, ethics, and policy - Ramteja Sajja, et al, Nature
How custom AI bots are changing the classroom: Faculty share cutting-edge AI tools enhancing student learning at the business school - Molly Loonam, WP Carey ASU
Sunday, February 08, 2026
Artificial Intelligence panel demonstrates breadth of teaching, research, and industry collaboration across the Universities of Wisconsin - University of Wisconsin
The Universities of Wisconsin underscored their growing leadership in artificial intelligence (AI) innovation today as representatives from all 13 public universities convened for a panel discussion before the Board of Regents. The conversation highlighted the universities’ shared commitment to shaping the future of AI in education, research, and workforce development. “As AI reshapes our world, the Universities of Wisconsin are not standing on the sidelines. We are helping define what responsible and innovative use of AI looks like for higher education,” said Universities of Wisconsin President Jay Rothman. “This panel today demonstrated how the Universities of Wisconsin are embracing AI in strategic, collaborative, and responsible ways.”
What generative AI reveals about assessment reform in higher education - Higher Education Policy Institute
Assessment is fast becoming a central focus in the higher education debate as we move into an era of generative AI, but too often institutions are responding through compliance and risk-management actions rather than fundamental pedagogical reform. Tightened regulations, expanded scrutiny and mechanistic controls may reassure quality assurance systems, but they run the risk of diluting genuine transformation and placing unsustainable pressure on staff and students alike. Assessment is not simply a procedural hurdle; it is a pivotal experience that shapes what students learn, how they engage with content and what universities and employers prioritise as valuable knowledge and skills. If reform is driven through compliance, we will miss opportunities to align assessments with the learning needs of a graduate entering the gen-AI era.