Saturday, February 14, 2026

Rethinking the role of higher education in an AI-integrated world - Mark Daley, University Affairs

A peculiar quiet has settled over higher education, the sort that arrives when everyone is speaking at once. We have, by now, produced a small library of earnest memos on “AI in the classroom”: academic integrity, assessment redesign and the general worry that students will use chatbots to avoid thinking. Our institutions have been doing the sensible things: guidance documents, pilot projects, professional development, conversations that oscillate between curiosity and fatigue. Much ink has been spilled on these topics, many human-hours of meetings invested, and strategic plans written. All of this is necessary. It is also, perhaps, insufficient. What if the core challenge to us is not that students can outsource an essay, but that expertise itself (the scarce, expensive thing universities have historically concentrated, credentialled, and sold back to society) may become cheap, abundant, and uncomfortably good?

ChatGPT is in classrooms. How should educators now assess student learning? - Sarah Elaine Eaton, et al; the Conversation

Our recent qualitative study with 28 educators across Canadian universities and colleges—from librarians to engineering professors—suggests that we have entered a watershed moment in education. We must grapple with the question: What exactly should be assessed when human cognition can be augmented or simulated by an algorithm? Participants widely viewed prompting—the ability to formulate clear and purposeful instructions for a chatbot—as a skill they could assess. Effective prompting requires students to break down tasks, understand concepts and communicate precisely. Several noted that unclear prompts often produce poor outputs, forcing students to reflect on what they are really asking. Prompting was considered ethical only when used transparently, drawing on one's own foundational knowledge. Without these conditions, educators feared prompting may drift into overreliance or uncritical use of AI.

Friday, February 13, 2026

Google’s AI Tools Explained (Gemini, Photos, Gmail, Android & More) | Complete Guide - BitBiasedAI, YouTube

This podcast provides a comprehensive overview of how Google has integrated Gemini-powered AI across its entire ecosystem, highlighting tools for productivity, creativity, and daily navigation. It details advancements in Gemini as a conversational assistant, the generative editing capabilities in Google Photos like Magic Eraser and Magic Editor, and time-saving features in Gmail and Docs such as email summarization and "Help Me Write." Additionally, the guide covers mobile-specific innovations like Circle to Search on Android, AI-enhanced navigation in Google Maps, and real-time translation tools, framing these developments as a cohesive shift toward more intuitive and context-aware technology for everyday users. (Summary assisted by Gemini 3 Pro Fast)

https://youtu.be/ro6BxryR0Yo?si=EAg-zAPcKFm618up&t=1

HUSKY: Humanoid Skateboarding System via Physics-Aware Whole-Body Control - Jinrui Han, et al; arXiv

While current humanoid whole-body control frameworks predominantly rely on static-environment assumptions, addressing tasks characterized by high dynamism and complex interactions presents a formidable challenge. In this paper, we address humanoid skateboarding, a highly challenging task requiring stable dynamic maneuvering on an underactuated wheeled platform. This integrated system is governed by non-holonomic constraints and tightly coupled human-object interactions. Successfully executing this task requires simultaneous mastery of hybrid contact dynamics and robust balance control on a mechanically coupled, dynamically unstable skateboard.

Thursday, February 12, 2026

Moltbook Mania Exposed - Kevin Roose and Casey Newton, New York Times

A Reddit-style web forum for A.I. agents has captured the attention of the tech world. According to the site, called Moltbook, more than 1.5 million agents have contributed to over 150,000 posts, making it the largest experiment to date in what happens when A.I. agents interact with each other. We discuss our favorite posts, how we’re thinking about the question of what is “real” on the site, and where we expect agents to go from here.

The Only Thing Standing Between Humanity and AI Apocalypse Is … Claude? - Steven Levy, Wired

Anthropic is locked in a paradox: Among the top AI companies, it’s the most obsessed with safety and leads the pack in researching how models can go wrong. But even though the safety issues it has identified are far from resolved, Anthropic is pushing just as aggressively as its rivals toward the next, potentially more dangerous, level of artificial intelligence. Its core mission is figuring out how to resolve that contradiction. OpenAI and Anthropic are pursuing the same thing: NGI (Natural General Intelligence), AI that is sentient and self-aware. The difference is that Anthropic is seeking NGI with guardrails, known as "alignment" or, in Anthropic's terms, "Constitutional AI." Their fear is that without alignment, an NGI might decide that humanity, and all of Earth's resources, are needed to achieve whatever task it was designed to solve. And once it is sentient, that would happen too quickly for humanity to pull the plug. So Anthropic wants alignment. The real question is whether they could ever achieve NGI.

https://www.wired.com/story/the-only-thing-standing-between-humanity-and-ai-apocalypse-is-claude/

Wednesday, February 11, 2026

AI-powered search is changing how students choose colleges - Michelle Centamore, University Business

Students are turning to AI tools and AI-enhanced Google searches to explore colleges, pushing institutions to rethink how they recruit and enroll prospective students, according to EducationDynamics’ 2026 Marketing and Enrollment Management Benchmarks. Marketing and enrollment management have reached a “point of no return,” the report says, as students now do much of their research before ever contacting a college. The report also warns the traditional enrollment base has peaked, making reliance on traditional recruitment channels “the most dangerous strategy an institution can adopt.”

Universities And States Lead Charge On AI Education - Evrim Ağacı, Grand Pinnacle Tribune

Indiana, Kentucky, and Vermont unveil new initiatives and guidelines to prepare students for an AI-driven future while balancing innovation and responsibility.

Indiana University launched a $300,000 initiative to integrate AI responsibly across its nine campuses, involving over 100 faculty, staff, and students.
The IU initiative focuses on AI literacy, pedagogy innovation, experiential learning, and transforming student services with ethical AI tools.
The University of Kentucky introduced the state's first Bachelor of Science in Artificial Intelligence, starting its inaugural class in spring 2026.

Tuesday, February 10, 2026

Working with AI: Measuring the Applicability of Generative AI to Occupations - Kiran Tomlinson, Sonia Jaffe, Will Wang, Scott Counts, Siddharth Suri; Microsoft

Given the rapid adoption of generative AI and its potential to impact a wide range of tasks, understanding the effects of AI on the economy is one of society’s most important questions. In this work, we take a step toward that goal by analyzing the work activities people do with AI and how successfully and broadly those activities are done, and combining that with data on which occupations perform those activities. We analyze a dataset of 200k anonymized and privacy-scrubbed conversations between users and Microsoft Bing Copilot, a publicly available generative AI system. We find the most common work activities people seek AI assistance for involve gathering information and writing, while the most common activities that AI itself performs are providing information and assistance, writing, teaching, and advising.

AI Can Raise the Floor for Higher Ed Policymaking - Jacob B. Gross, Inside Higher Ed

On my campus, discussions about artificial intelligence tend to focus on how students should be allowed to use it and what tools the university should invest in. In my own work, I’ve seen both the promise and the pitfalls: AI that speeds up my coding, tidies my writing, and helps me synthesize complex documents, and the occasional student submission that is clearly machine-generated. As I’ve started integrating these tools into my work, I’ve begun asking a different question: How is AI reshaping policymaking in colleges and universities, and how might it influence the way we design, implement and analyze university policy in the future?

Monday, February 09, 2026

Evaluating AI-powered learning assistants in engineering higher education with implications for student engagement, ethics, and policy - Ramteja Sajja, et al; Nature

As generative AI becomes increasingly integrated into higher education, understanding how students engage with these technologies is essential for responsible adoption. This study evaluates the Educational AI Hub, an AI-powered learning framework, implemented in undergraduate civil and environmental engineering courses at a large R1 public university. Using a mixed-methods design combining pre- and post-surveys, system usage logs, and qualitative analysis of students’ AI interactions, the research examines perceptions of trust, ethics, usability, and learning outcomes. Findings show that students valued the AI assistant for its accessibility and comfort, with nearly half reporting greater ease using it than seeking help from instructors or teaching assistants. The tool was most helpful for completing homework and understanding concepts, though views on its instructional quality were mixed. 

How custom AI bots are changing the classroom: Faculty share cutting-edge AI tools enhancing student learning at the business school - Molly Loonam, WP Carey ASU

One example is NotebookLM, an application that converts course materials into podcast-style audio, allowing students to learn while exercising, commuting, or completing other everyday tasks. NotebookLM was one of four AI tools highlighted during the W. P. Carey School of Business's recent Coffee, Tea, and ChatGPT event. This series brings faculty and staff together to share insights on the impact of generative AI on teaching, learning, and research. "We are finding ourselves in a fascinating inflection point for our school as we see the depth of work that our faculty are doing in utilizing AI Tools thoughtfully, while simultaneously learning every day as these tools continue to evolve and we make sense of them," said Associate Dean of Teaching and Learning Dan Gruber, who launched the series nearly three years ago with W. P. Carey faculty teaching leads. Gruber also serves as a College Catalyst for Practice Principled Innovation and co-founded the Teaching and Learning Leaders Alliance, a global consortium that connects business school leaders.

Sunday, February 08, 2026

Artificial Intelligence panel demonstrates breadth of teaching, research, and industry collaboration across the Universities of Wisconsin - University of Wisconsin

The Universities of Wisconsin underscored their growing leadership in artificial intelligence (AI) innovation today as representatives from all 13 public universities convened for a panel discussion before the Board of Regents. The conversation highlighted the universities’ shared commitment to shaping the future of AI in education, research, and workforce development. “As AI reshapes our world, the Universities of Wisconsin are not standing on the sidelines. We are helping define what responsible and innovative use of AI looks like for higher education,” said Universities of Wisconsin President Jay Rothman. “This panel today demonstrated how the Universities of Wisconsin are embracing AI in strategic, collaborative, and responsible ways.”

https://www.wisconsin.edu/news/archive/artificial-intelligence-panel-demonstrates-breadth-of-teaching-research-and-industry-collaboration-across-the-universities-of-wisconsin/

What generative AI reveals about assessment reform in higher education - Higher Education Policy Institute

Assessment is fast becoming a central focus in the higher education debate as we move into an era of generative AI, but too often institutions are responding through compliance and risk-management actions rather than fundamental pedagogical reform. Tightened regulations, expanded scrutiny and mechanistic controls may reassure quality assurance systems, but they run the risk of diluting genuine transformation and placing unsustainable pressure on staff and students alike. Assessment is not simply a procedural hurdle; it is a pivotal experience that shapes what students learn, how they engage with content and what universities and employers prioritise as valuable knowledge and skills. If reform is driven through compliance, we will miss opportunities to align assessments with the learning needs of a graduate entering the gen-AI era.