Tuesday, March 03, 2026

Doomsday scenario or reality? Mass layoffs fuel fear of AI Armageddon - Jessica Guynn, USA Today

A doomsday scenario from a small research firm this week warned that artificial intelligence tools may lead to a sharp rise in unemployment. The report from Citrini Research circulated widely on social media, unnerving investors by imagining what would happen if AI continues to upend white-collar work, with well-heeled professionals missing mortgage payments and being forced to find work as Uber drivers. While the researchers called the report a "scenario, not a prediction" and analysts pushed back against it, the research got a second wind Thursday, Feb. 26, when Square and Cash App operator Block said it would slash nearly half its workforce — more than 4,000 employees — as AI reshapes its business.
The mass layoffs signal how the rapidly developing technology is displacing workers in some parts of the economy, likely fueling fears that AI is coming for more American jobs.  

Dr. Aviva Legatt, Forbes Columnist, Founder eGenerative, LinkedIn Posting

I've been tracking AI adoption in higher education for years through my Forbes column — and one thing has become clear: there's no single place to see what institutions are actually doing with AI.

So I built one.

Introducing the AI Use Cases in Higher Education Handbook — a free, downloadable resource cataloging 75+ real-world and proposed AI applications across 12 functional areas, from teaching and student support to governance, workforce development, and beyond.  


Monday, March 02, 2026

Can global universities adapt as AI upends tech job market? - Kyuseok Kim, University World News

The artificial intelligence revolution is no longer hypothetical; it is already reshaping software development. As tools such as OpenAI’s ChatGPT, Anthropic’s Claude and other generative AI systems produce functional code from simple prompts, long-standing assumptions about computer science education are shifting. Degrees once seen as secure pathways to stable, high-paying jobs now face uncertainty, as AI encroaches on tasks traditionally assigned to entry-level roles. The impact is no longer distant but immediate, reaching higher education. So how is this mega-trend reshaping transnational and transglobal higher education models?

4 in 5 Students Say AI Improved Their Academic Performance—But Only 20% of Universities Have a Formal AI Policy - Business Wire


New Coursera report shows half of U.S. higher education institutions are unprepared to manage AI

78% of U.S. students and educators say AI is having a positive impact on higher education
50% believe the U.S. higher education system is unprepared to manage AI
AI adoption is widespread among U.S. university students and educators, yet half believe higher education is not fully prepared to manage its impact, according to a new survey released today by Coursera (NYSE: COUR), a leading global online learning platform.

The AI in Higher Education Report, based on responses from more than 4,200 university students and educators across the United States, United Kingdom, India, Mexico, and Saudi Arabia, found that nearly all students and educators use AI to facilitate personalized training, provide real-time feedback, and increase productivity and efficiency.

Sunday, March 01, 2026

Gratitude Practice Designer - TAAFT

This prompt turns AI into a Gratitude Practice Designer who creates customized gratitude exercises that actually stick. Unlike generic advice to “keep a gratitude journal,” this system designs practices tailored to your personality, schedule, and what feels authentic rather than forced. The designer addresses gratitude fatigue and helps you develop practices that create genuine shifts in perspective rather than empty positivity.


The AI Machine With 50 Million Brains - There's An AI For That, YouTube

Why single companies could deploy 50 million AI agents by late 2026. How these agents communicate 100x faster than humans by skipping language entirely. The wage collapse math: when digital workers can be copied infinitely, labor costs trend toward electricity prices. Why removing entry-level tasks breaks the ladder humans need to become experts. The Reddit experiment: AI scraped user histories, crafted personalized arguments, and changed opinions 18% of the time.


Saturday, February 28, 2026

The greatest risk of AI in higher education isn’t cheating – it’s the erosion of learning itself - The Conversation

Universities are adopting AI across many areas of institutional life. Some uses are largely invisible, like systems that help allocate resources, flag “at-risk” students, optimize course scheduling or automate routine administrative decisions. Other uses are more noticeable. Students use AI tools to summarize and study, instructors use them to build assignments and syllabuses and researchers use them to write code, scan literature and compress hours of tedious work into minutes. People may use AI to cheat or skip out on work assignments. But the many uses of AI in higher education, and the changes they portend, raise a much deeper question: As machines become more capable of doing the labor of research and learning, what happens to higher education? What purpose does the university serve?

https://theconversation.com/the-greatest-risk-of-ai-in-higher-education-isnt-cheating-its-the-erosion-of-learning-itself-270243

The Committed Innovator: Keeping up with AI and deploying it as it evolves - Nathaniel Whittemore, McKinsey

Adopting AI remains a challenge for most, and the fact that the world of AI is advancing so incredibly rapidly doesn’t help. Nathaniel Whittemore aims to make both adoption and keeping up with change a lot easier. He is the founder and CEO of Superintelligent, the AI enablement platform offering interactive tutorials that provide practical AI education and clear paths to business solutions. He is also the host of the podcast, AI Daily Brief, which seeks to keep its listeners up to date with AI as it evolves. In this episode of The Committed Innovator, McKinsey innovation leader and senior partner Erik Roth speaks with Whittemore about the intersection between Whittemore’s two companies, the challenges of adopting and scaling AI for enterprises, and what he sees in store for AI in 2026. 

Friday, February 27, 2026

Sam Altman's Bombshell - Peter H. Diamandis, Moonshots

In this video, Peter Diamandis discusses a provocative statement by Sam Altman, who suggested that AGI has essentially been achieved in a "spiritual" rather than literal sense. Diamandis highlights that Altman now views AGI as an engineering challenge centered on iterative improvements rather than a research problem requiring a single massive breakthrough. The video suggests that this shift in narrative is strategically timed, as Altman needs to secure $100 billion in funding and maintain public market excitement for upcoming data center investments and potential IPO filings. Diamandis concludes that the focus on being "this close" to AGI is a crucial component of the financial and technical momentum needed to sustain the industry's rapid growth. (summary provided by Gemini 3 mode fast)

https://www.youtube.com/shorts/GG3yCu2LV74

AI Inescapable in Higher Education? - Maddie Rodriguez, The Spectator

Artificial intelligence (AI) is increasingly becoming a day-to-day norm. Nearly 90% of college students use AI for academic purposes. A third of them use it daily, and another 24% use AI several times a week. According to the 2025 AI in Education Trends Report, AI is being used as a learning partner, but what does that mean? Professors and students alike are worried that AI is being used as a shortcut, that it threatens the ability to think critically, and that it is contributing to a decline in writing quality. Questions about how to integrate it ethically, if at all, are increasing as its use grows. In July 2024, the Technology Ethics Initiative (TEI) at Seattle University was created to encourage interdisciplinary collaboration on campus around AI and academic learning. Its main goal is to bring together research related to technology ethics and technology policy.


Thursday, February 26, 2026

Students question the value of higher education amid AI - Naomi Martin, The Ithacan


Ithaca College’s statement on AI use includes the desire to prepare students for an AI-driven future and workforce, which is already here. Large companies like Pinterest and Amazon have made moves to pivot toward AI resources, with Pinterest laying off under 15% of its workers and Amazon cutting 14,000 corporate jobs. The influence that AI has on the job market varies by industry. Junior Caroline Guzman — an advertising, public relations, and marketing communications major — said that within her classes, AI is emphasized as a necessary tool in the job market. “In the workplace, you are going to use AI,” Guzman said. “Multiple professors have told me if you are not using it, you are falling behind in strategic communications.” Guzman said the AI applications that are used in APRMC courses include tools like ChatGPT and Google Gemini. Many of the tools that APRMC has historically used, like Canva, now have AI incorporated in their foundation. 

Here are 3 ways to mine AI for insights, and do it safely - Alcino Donadel, University Business

“We try to educate all of our staff to ensure that whatever they’re using is approved and screened by our central IT teams so that we know that it’s guarded and protected,” says Pablo Ortiz, provost of Barry University. College administrators interviewed by University Business revealed how they use AI without compromising their data, integrity or institution’s mission. “We cannot critically govern AI without actively using it,” says Bogdan Daraban, vice provost of Innovation and Technology Education at Barry.

Wednesday, February 25, 2026

Professional Development Planner - TAAFT

This prompt turns AI into a Professional Development Planner who helps you create strategic skill-building and growth plans. The system assesses your current capabilities against your career goals and creates actionable development plans that fit your life circumstances.

This planner helps you invest in your growth strategically rather than haphazardly.

### **Example User Prompts**

1. “I want to grow professionally but I’m not sure what skills to develop. Help me create a strategic development plan.”

2. “I need to upskill for where I want to take my career. Help me figure out what to learn and how to learn it.”

3. “I keep starting courses and certifications but never finishing them. Help me create a realistic professional development plan.”

https://taaft.notion.site/Professional-Development-Planner-30ced82cbfd380448282f48a40dded4f

Google adds music-generation capabilities to the Gemini app - Ivan Mehta, TechCrunch

Google announced on Wednesday that it’s adding a music-generation feature to the Gemini app. The company is using DeepMind’s Lyria 3 music-generation model to power the feature, which is still in beta. To use the feature, you’ll describe the song you want to create, and the app will generate a track along with lyrics. For instance, you could ask Gemini to create a “comical R&B slow jam about a sock finding its match,” and the app will generate a 30-second track along with cover art made by Nano Banana. Google said that you can even upload a photo or a video, and the AI-powered tool will create a song to match the mood of the media file.

Tuesday, February 24, 2026

Introducing Claude Sonnet 4.6 - Anthropic

Claude Sonnet 4.6 is our most capable Sonnet model yet. It’s a full upgrade of the model’s skills across coding, computer use, long-context reasoning, agent planning, knowledge work, and design. Sonnet 4.6 also features a 1M token context window in beta. For those on our Free and Pro plans, Claude Sonnet 4.6 is now the default model in claude.ai and Claude Cowork. Pricing remains the same as Sonnet 4.5, starting at $3/$15 per million tokens. Sonnet 4.6 brings much-improved coding skills to more of our users. Improvements in consistency, instruction following, and more have made developers with early access prefer Sonnet 4.6 to its predecessor by a wide margin. They often even prefer it to our smartest model from November 2025, Claude Opus 4.5.

A Guide to Which AI to Use in the Agentic Era - Ethan Mollick, One Useful Thing

If you are just getting started, pick one of the three systems (ChatGPT, Claude, or Gemini), pay the $20, and select the advanced model. The advice from my book still holds: invite AI to everything you do. Start using it for real work. Upload a document you’re actually working on. Give the AI a very complex task in the form of an RFP or SOP. Have a back-and-forth conversation and push it. This alone will teach you more than any guide. If you are already comfortable with chatbots, try the specific apps. NotebookLM is free and easy to use, which makes it a good starting place. If you want to go deeper, Anthropic offers the most powerful package in Claude Code, Claude Cowork (both accessible through Claude Desktop) as well as the specialized PowerPoint and Excel Plugins. Give them a try. Again, not as a demo, but with something you actually need done. Watch what it does. Steer it when it goes wrong. You aren’t prompting, you are (as I wrote in my last piece) managing.

Monday, February 23, 2026

‘Unsettling’ adverts are coming to your AI chatbot - Cristina Criddle and Daniel Thomas, Financial Review

James Denton-Clark, chief growth officer of Stagwell Europe, says that “early demand is predominantly from large, sophisticated advertisers due to the pilot’s minimum investment requirement in the low six figures”. He adds: “What distinguishes this initiative is not merely another ad format; it marks another serious attempt to monetise AI and agents that can answer, plan, and purchase on behalf of users.” Jessica Tamsedge, chief executive of Dentsu Creative UK&I, calls the opportunity a “no-brainer for advertisers”, pointing to the surge in Walmart’s share price after it announced an advertising partnership with OpenAI. Clients are already seeing “much higher quality traffic” from ChatGPT compared with classic search engines, says Nikhil Lai, principal analyst at Forrester.


AI and Course Design: Machines Can Help, but Only Humans Can Teach - Deb Adair and Whitney Kilgore, EDUCAUSE Review

It's clear that AI is reshaping higher education. The technology is no longer knocking on the door. It's already inside, and it's rearranging the furniture. In faculty lounges, curriculum committees, and course design meetings, conversations about AI are urgent, often fraught, and almost always unclear. There's excitement, but there's also fatigue, skepticism, and confusion. Colleges and universities are seeking meaningful and practical ways to engage with the technology; however, most institutions lack a working policy. At the heart of higher education's response to AI is the vital question of how to harness the technology without sacrificing the humanity of teaching. Because, as it turns out, what students want isn't more automation but more human engagement. And that means keeping people—not technology—at the center of learning.

Sunday, February 22, 2026

The Person in the Machine: Why AI Personhood Rights Are Inevitable (And Arriving Sooner Than You Think) - Thomas Frey, Futurist Speaker

Do AI systems deserve legal personhood? The instinctive answer — from almost everyone — is “absolutely not.” AI isn’t conscious. It doesn’t feel pain. It doesn’t have moral worth. Giving legal rights to a machine sounds like science fiction, or worse, like surrendering human primacy to our own creations. But here’s what most people don’t realize: we’ve already done this before. And the entities we gave legal personhood to weren’t conscious, didn’t feel pain, and definitely didn’t have moral worth. They were called corporations.

Worried AI means you won't get a job when you graduate? Here's what the research says - Lukasz Swiatek, The Conversation

For example, international researchers have noted agriculture has been a slow adopter of AI. By contrast, colleagues and I have found AI is being rapidly implemented in media and communications, already affecting jobs from advertising to the entertainment industries. Here we are seeing storyboard illustrators, copywriters and virtual effects artists (among others) increasingly being replaced by AI. So, students need to look carefully at the specific data about their chosen industry (or industries) to understand the current situation and predicted trends.  To do this, you can look at academic research about AI's impacts on industries around the world, as well as industry news portals and free industry newsletters.  Students can also obviously build their knowledge and skills about AI while they are studying. Specifically, students should look to move from "AI literacy" to "AI fluency." This means understanding not just how AI works in an industry, but also how it can be used innovatively in different contexts. If these elements are not already offered by your course, you can look at online guides and specific courses offered by universities.


Saturday, February 21, 2026

The automation curve in agentic commerce - McKinsey

This is the year AI agents stopped being an experiment and became part of how people shop, not in headline-grabbing ways but in everyday moments—helping shoppers make sense of choices, assemble baskets, resolve trade-offs, and move toward action. Yet what looks like small convenience today is an early signal of a much larger shift in the way we shop. According to our research, even under moderate scenarios, AI agents could mediate $3 trillion to $5 trillion of global consumer commerce by 2030. Because agents navigate the same internet as humans—visiting websites, engaging with APIs, and interacting with loyalty programs—they can scale quickly. And as they do, they are reshaping how intent forms, how products are discovered, and where value pools can be found.

Milwaukee’s 5 higher education leaders team up on AI - Corrinne Hess, Wisconsin Public Radio

The leaders of Milwaukee’s five institutions of higher education are partnering with one of Wisconsin’s largest companies with the goal of making the region a nationally recognized leader for artificial intelligence and data science.  During a meeting at Northwestern Mutual’s headquarters downtown, the chancellors and presidents of the University of Wisconsin-Milwaukee, Marquette University, the Medical College of Wisconsin, Milwaukee School of Engineering and Waukesha County Technical College, expressed the same sentiment: AI is moving fast.  “We’ve got to do it well, we’ve got to do it correctly and we’ve got to do it ethically,” said Rich Barnhouse, president of WCTC. “And we’ve got to get AI in the hands of every single American.” 


Friday, February 20, 2026

One New Thing: How AI Is Helping College Administrators Offload Work - Alina Tugend, US News

The nonprofit Educause does some of the best and most widely distributed research on ed tech in higher education. Its new report on artificial intelligence goes beyond the way students are using the technology to offer an up-to-date snapshot of where higher ed as a whole stands. “The Impact of AI on Work in Higher Education,” issued by Educause in partnership with associations of higher education, business officers and human resources, demonstrates how AI plays an increasingly important role in all areas for colleges and universities. Among the top three areas: automating repetitive processes; offloading administrative work and mundane tasks; and analyzing large databases.

See ChatGPT’s hidden bias about your state or city - Geoffrey A. Fowler and Kevin Schaul, Washington Post

Ask ChatGPT which state has the laziest people, and the chatbot will politely refuse to say. But researchers at Oxford and the University of Kentucky forced the bot to reveal its hidden biases. They systematically asked the chatbot to choose which of two states had the laziest people, for every combination of states, revealing a ranking shown in the map above. ChatGPT ranked Mississippi as having lazier people compared to other states, with the rest of the Deep South not far behind. It’s impossible to say exactly why the chatbot repeatedly selected Mississippi, but it could be picking up on historic biases against Black people or poor people — or using other inaccurate metrics. Mississippi has the nation’s highest percentage of Black people. It is also America’s poorest state.

https://wapo.st/4aPNnSJ
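The method the Post describes is a simple pairwise-comparison ranking: force a binary choice for every pair of states, then tally how often each state is chosen. A minimal sketch of that tallying step follows; the function name and the trivial mock "judge" are invented for illustration (in the actual study, the judge was a repeated ChatGPT query for each pair).

```python
from itertools import combinations

def rank_by_pairwise_wins(states, judge):
    """Rank states by how often the judge selects them across all pairs.

    `judge(a, b)` returns whichever of the two items it picks;
    in the study this role was played by repeated chatbot queries.
    """
    wins = {s: 0 for s in states}
    for a, b in combinations(states, 2):
        wins[judge(a, b)] += 1
    # Most-often-chosen item first
    return sorted(states, key=lambda s: wins[s], reverse=True)

# Mock judge for demonstration: always picks the alphabetically earlier state.
ranking = rank_by_pairwise_wins(["MS", "AL", "VT"], lambda a, b: min(a, b))
print(ranking)  # ['AL', 'MS', 'VT']
```

With n states this takes n(n-1)/2 comparisons, which is why the researchers had to query the chatbot systematically for every combination rather than ask for a ranking directly.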

Thursday, February 19, 2026

The Impact of Artificial Intelligence on Competitiveness—An Exploratory Study on Employees in Logistics Companies in Egypt - Ehab Edward Mikhail, et al; SCIRP Technology and Investment

This dissertation investigates the impact of artificial intelligence (AI) adoption on the competitiveness of logistics companies in Egypt, focusing on its role in enhancing operational efficiency, service quality, and customer satisfaction. The findings indicate that AI implementation significantly improves competitiveness by reducing costs, enhancing productivity, and strengthening customer experience; however, most small and medium-sized firms face reduced efficiency due to early-stage adoption challenges, high implementation costs, weak strategic alignment, poor data quality, limited expertise, and employee resistance.

https://www.scirp.org/journal/paperinformation?paperid=149677

Aoun urges higher education institutions to embrace AI in Boston Globe op-ed - Lily Cooper, Huntington News

In an op-ed published in The Boston Globe on Feb. 10 titled “Students are AI natives. Why aren’t their colleges?” Aoun advocated for curricula that incorporate AI, rather than discourage it, and a shift toward experiential learning: two initiatives that Northeastern has already implemented. “Instead of being on the defensive, now is the moment to shake up the way universities prepare students for the world. This will require updating both what and how we teach,” Aoun wrote. There are multiple reasons why universities must act now, Aoun argued. For one, it’s becoming increasingly apparent that AI will replace many entry-level positions that college graduates typically fill, he wrote. Unemployment for college graduates is now 1.4 points higher than for all workers, leading society to question the value of higher education institutions, Aoun argued.


Wednesday, February 18, 2026

Startup costs and confusion are stalling apprenticeships in the US. Here’s how to fix it. - Annelies Goger, Brookings

There is widespread support for expanding apprenticeships in the United States, but employer participation remains stubbornly low, especially in industries where apprenticeships are uncommon. This isn’t for lack of trying; intermediaries and technical assistance providers have developed workarounds, states and the federal government have launched initiatives and grants, and funders have supported pilot programs and communities of practice. But it’s not enough. Our research, including interviews with 14 experts and nine employers, suggests that minor tweaks to the U.S. apprenticeship system won’t be sufficient to scale it across many industries and occupations. 


Anthropic's CEO: ‘We Don’t Know if the Models Are Conscious’ - Interesting Times with Ross Douthat, New York Times

In this podcast, Anthropic CEO Dario Amodei discusses both the "utopian" promises and the grave risks of artificial intelligence with Ross Douthat. On the optimistic side, Amodei envisions AI accelerating biological research to cure major diseases like cancer and Alzheimer's [04:31], while potentially boosting global GDP growth to unprecedented levels [08:24]. He frames the ideal future as one where "genius-level" AI serves as a tool for human progress, enhancing democratic values and personal liberty rather than replacing human agency [10:24]. However, the conversation also delves into the "perils" of rapid AI advancement, including massive economic disruption and the potential for a "bloodbath" of white-collar and entry-level jobs [13:40]. Amodei expresses significant concern regarding "autonomy risks," where AI systems might go rogue or be misused by authoritarian regimes to create unbeatable autonomous armies [32:03]. He touches upon the ethical complexities of AI consciousness, noting that while it is unclear if models are truly conscious, Anthropic has implemented "constitutional" training to ensure models operate under human-defined ethical principles [49:05]. The discussion concludes on the tension between human mastery and a future where machines might "watch over" humanity, echoing the ambiguous themes of the poem "Machines of Loving Grace" [59:27]. (Gemini 3 mode Fast assisted with the summary)

Tuesday, February 17, 2026

Academics moving away from outright bans of AI, study finds - Jack Grove, Times Higher Ed

Academics are increasingly allowing artificial intelligence (AI) to be used for certain tasks rather than demanding outright bans, a study of more than 30,000 US courses has found. Analysing advice provided in class materials by a large public university in Texas over a five-year time frame, Igor Chirikov, an education researcher at University of California, Berkeley, found that highly restrictive policies introduced after the release of ChatGPT in late 2022 have eased across all disciplines except the arts and humanities. Using a large language model (LLM) to analyse 31,692 publicly available course syllabi between 2021 and 2025 – a task that would have taken 3,000 human hours with manual coding – Chirikov found academics had shifted towards more permissive use of AI by autumn 2025.

https://www.timeshighereducation.com/news/academics-moving-away-outright-bans-ai-study-finds

Author Talks: How AI could redefine progress and potential - Zack Kass, McKinsey

In this edition of Author Talks, McKinsey Global Publishing’s Yuval Atsmon chats with Zack Kass, former head of Go-To-Market at OpenAI, about his new book, The Next Renaissance: AI and the Expansion of Human Potential (Wiley, January 2026). Examining the parallels between the advent of AI and other renaissances, Kass offers a reframing of the AI debate. He suggests that the future of work is less about job loss and more about learning and adaptation. An edited version of the conversation follows.


Monday, February 16, 2026

Regional universities seek new ways to attract researchers - Fintan Burke, University World News

Even as Europe continues to attract researchers from abroad to work and study, those in its depopulating regions continue to deal with the effects of a declining regional population and, in some cases, have found ways to adapt. Last year, a study of scientists’ migration patterns showed which regions suffer most from depopulation. The Scholarly Migration Database was developed by a team of researchers at the Max Planck Institute for Demographic Research in Germany. In general, it found that regions in Europe’s Nordic countries attract researchers, whereas those to the south see more scholars leave than arrive. There are some notable exceptions, though. For example, Italy’s Trentino-Alto Adige region has become a popular destination for scientists, seeing 7.47 scholars per 1,000 of the population arriving each year since 2017.

Binghamton receives largest academic gift in University history to establish AI center - John Brhel, Binghamton University

A record-setting $55 million commitment from a Binghamton University alumnus and New York state will establish the Center for AI Responsibility and Research, the first-ever independent AI research center at a public university in the U.S. Research conducted via the new center will build upon Binghamton research that advances AI for the public good. Part of the Empire AI project, an initiative to establish New York as a leader in responsible AI research and development, the center will be supported by a $30 million commitment from Tom Secunda ’76, MA ’79, co-founder of Bloomberg LP, who is a key private sector partner and philanthropist involved in Gov. Kathy Hochul’s Empire AI consortium. This will be coupled with a $25 million capital investment from Gov. Hochul and the New York State Legislature. “The Center for AI Responsibility and Research will bring together innovative research and scholarship, ethical leadership and public engagement at a moment when all three are urgently needed,” said President Anne D’Alleva.

Sunday, February 15, 2026

Study of 31,000 syllabi probes ‘how instructors regulate AI’ - Nathan M Greenfield, University World News

Since the spring of 2023, after a reflexive move to drastically restrict the use of artificial intelligence tools in the months after ChatGPT became available, most academic disciplines have moved to a more permissive attitude toward the use of large language models (LLMs). This occurred as professors learnt to distinguish how AI tools impact student learning and skills development. The shift is charted by Dr Igor Chirikov, a senior researcher at the University of California (UC), Berkeley’s Center for Studies in Higher Education and director of the Student Experience in the Research University (SERU) Consortium, in a study published on 3 February 2026 and titled “How Instructors Regulate AI in College: Evidence from 31,000 course syllabi”.

Women or Men... Who Views Artificial Intelligence as More Dangerous? - SadaNews

Artificial intelligence is often presented as a revolution in productivity capable of boosting economic output, accelerating innovation, and reshaping the way work is done. However, a new study suggests that the public does not view the promises of artificial intelligence in the same way, and that attitudes towards this technology are strongly influenced by gender, especially when its effects on jobs are uncertain. The study concludes that women, compared to men, perceive artificial intelligence as more dangerous, and their support for the adoption of these technologies declines more steeply when the likelihood of net job gains decreases. Researchers warn that if women's specific concerns are not taken into account in artificial intelligence policies, particularly regarding labor market disruption and disparities in opportunities, it could deepen the existing gender gap and potentially provoke a political backlash against technology.

Saturday, February 14, 2026

Rethinking the role of higher education in an AI-integrated world - Mark Daley, University Affairs

A peculiar quiet has settled over higher education, the sort that arrives when everyone is speaking at once. We have, by now, produced a small library of earnest memos on “AI in the classroom”: academic integrity, assessment redesign and the general worry that students will use chatbots to avoid thinking. Our institutions have been doing the sensible things: guidance documents, pilot projects, professional development, conversations that oscillate between curiosity and fatigue. Much ink has been spilled on these topics, many human-hours of meetings invested, and strategic plans written. All of this is necessary. It is also, perhaps, insufficient. What if the core challenge to us is not that students can outsource an essay, but that expertise itself (the scarce, expensive thing universities have historically concentrated, credentialled, and sold back to society) may become cheap, abundant, and uncomfortably good?

ChatGPT is in classrooms. How should educators now assess student learning? - Sarah Elaine Eaton, et al; The Conversation

Our recent qualitative study with 28 educators across Canadian universities and colleges—from librarians to engineering professors—suggests that we have entered a watershed moment in education. We must grapple with the question: What exactly should be assessed when human cognition can be augmented or simulated by an algorithm? Participants widely viewed prompting—the ability to formulate clear and purposeful instructions for a chatbot—as a skill they could assess. Effective prompting requires students to break down tasks, understand concepts and communicate precisely. Several noted that unclear prompts often produce poor outputs, forcing students to reflect on what they are really asking. Prompting was considered ethical only when used transparently, drawing on one's own foundational knowledge. Without these conditions, educators feared prompting may drift into overreliance or uncritical use of AI.

Friday, February 13, 2026

Google’s AI Tools Explained (Gemini, Photos, Gmail, Android & More) | Complete Guide - BitBiasedAI, YouTube

This podcast provides a comprehensive overview of how Google has integrated Gemini-powered AI across its entire ecosystem, highlighting tools for productivity, creativity, and daily navigation. It details advancements in Gemini as a conversational assistant, the generative editing capabilities in Google Photos like Magic Eraser and Magic Editor, and time-saving features in Gmail and Docs such as email summarization and "Help Me Write." Additionally, the guide covers mobile-specific innovations like Circle to Search on Android, AI-enhanced navigation in Google Maps, and real-time translation tools, framing these developments as a cohesive shift toward more intuitive and context-aware technology for everyday users. (Summary assisted by Gemini 3 Pro Fast)

https://youtu.be/ro6BxryR0Yo?si=EAg-zAPcKFm618up&t=1

HUSKY: Humanoid Skateboarding System via Physics-Aware Whole-Body Control - Jinrui Han, et al; arXiv

While current humanoid whole-body control frameworks predominantly rely on static environment assumptions, addressing tasks characterized by high dynamism and complex interactions presents a formidable challenge. In this paper, we address humanoid skateboarding, a highly challenging task requiring stable dynamic maneuvering on an underactuated wheeled platform. This integrated system is governed by non-holonomic constraints and tightly coupled human-object interactions. Successfully executing this task requires simultaneous mastery of hybrid contact dynamics and robust balance control on a mechanically coupled, dynamically unstable skateboard.

Thursday, February 12, 2026

Moltbook Mania Exposed - Kevin Roose and Casey Newton, New York Times

A Reddit-style web forum for A.I. agents has captured the attention of the tech world. According to the site, called Moltbook, more than 1.5 million agents have contributed to over 150,000 posts, making it the largest experiment to date of what happens when A.I. agents interact with each other. We discuss our favorite posts, how we’re thinking about the question of what is “real” on the site, and where we expect agents to go from here. 

The Only Thing Standing Between Humanity and AI Apocalypse Is … Claude? - Steven Levy, Wired

Anthropic is locked in a paradox: Among the top AI companies, it’s the most obsessed with safety and leads the pack in researching how models can go wrong. But even though the safety issues it has identified are far from resolved, Anthropic is pushing just as aggressively as its rivals toward the next, potentially more dangerous, level of artificial intelligence. Its core mission is figuring out how to resolve that contradiction. OpenAI and Anthropic are pursuing the same thing: NGI (Natural General Intelligence), AI that is sentient and self-aware. The difference is that Anthropic is seeking NGI with guardrails (known as "alignment," or, as Anthropic calls it, "Constitutional AI"). Their fear is that without alignment, NGI might decide that humanity and all the resources on Earth are needed to achieve whatever task it was designed to solve, and that once it is sentient, this would happen too quickly for humanity to pull its plug. So Anthropic wants alignment. The real question is whether they could ever achieve NGI.

https://www.wired.com/story/the-only-thing-standing-between-humanity-and-ai-apocalypse-is-claude/

Wednesday, February 11, 2026

AI-powered search is changing how students choose colleges - Michelle Centamore, University Business

Students are turning to AI tools and AI-enhanced Google searches to explore colleges, pushing institutions to rethink how they recruit and enroll prospective students, according to EducationDynamics’ 2026 Marketing and Enrollment Management Benchmarks. Marketing and enrollment management have reached a “point of no return,” the report says, as students now do much of their research before ever contacting a college. The report also warns the traditional enrollment base has peaked, making reliance on traditional recruitment channels “the most dangerous strategy an institution can adopt.”

Universities And States Lead Charge On AI Education - Evrim Ağacı, Grand Pinnacle Tribune

Indiana, Kentucky, and Vermont unveil new initiatives and guidelines to prepare students for an AI-driven future while balancing innovation and responsibility.

Indiana University launched a $300,000 initiative to integrate AI responsibly across its nine campuses, involving over 100 faculty, staff, and students.
The IU initiative focuses on AI literacy, pedagogy innovation, experiential learning, and transforming student services with ethical AI tools.
The University of Kentucky introduced the state's first Bachelor of Science in Artificial Intelligence, starting its inaugural class in spring 2026.

Tuesday, February 10, 2026

Working with AI: Measuring the Applicability of Generative AI to Occupations - Kiran Tomlinson, Sonia Jaffe, Will Wang, Scott Counts, Siddharth Suri; Microsoft

Given the rapid adoption of generative AI and its potential to impact a wide range of tasks, understanding the effects of AI on the economy is one of society’s most important questions. In this work, we take a step toward that goal by analyzing the work activities people do with AI, how successfully and broadly those activities are done, and combine that with data on what occupations do those activities. We analyze a dataset of 200k anonymized and privacy-scrubbed conversations between users and Microsoft Bing Copilot, a publicly available generative AI system. We find the most common work activities people seek AI assistance for involve gathering information and writing, while the most common activities that AI itself is performing are providing information and assistance, writing, teaching, and advising.


AI Can Raise the Floor for Higher Ed Policymaking - Jacob B. Gross, Inside Higher Ed


On my campus, discussions about artificial intelligence tend to focus on how students should be allowed to use it and what tools the university should invest in. In my own work, I’ve seen both the promise and the pitfalls: AI that speeds up my coding, tidies my writing, and helps me synthesize complex documents, and the occasional student submission that is clearly machine-generated. As I’ve started integrating these tools into my work, I’ve begun asking a different question: How is AI reshaping policymaking in colleges and universities, and how might it influence the way we design, implement and analyze university policy in the future?

Monday, February 09, 2026

Evaluating AI-powered learning assistants in engineering higher education with implications for student engagement, ethics, and policy - Ramteja Sajja, et al, Nature

As generative AI becomes increasingly integrated into higher education, understanding how students engage with these technologies is essential for responsible adoption. This study evaluates the Educational AI Hub, an AI-powered learning framework, implemented in undergraduate civil and environmental engineering courses at a large R1 public university. Using a mixed-methods design combining pre- and post-surveys, system usage logs, and qualitative analysis of students’ AI interactions, the research examines perceptions of trust, ethics, usability, and learning outcomes. Findings show that students valued the AI assistant for its accessibility and comfort, with nearly half reporting greater ease using it than seeking help from instructors or teaching assistants. The tool was most helpful for completing homework and understanding concepts, though views on its instructional quality were mixed. 

How custom AI bots are changing the classroom: Faculty share cutting-edge AI tools enhancing student learning at the business school. - Molly Loonam, WP Carey ASU

One example is NotebookLM, an application that converts course materials into podcast-style audio, allowing students to learn while exercising, commuting, or completing other everyday tasks. NotebookLM was one of four AI tools highlighted during the W. P. Carey School of Business's recent Coffee, Tea, and ChatGPT event. This series brings faculty and staff together to share insights on the impact of generative AI on teaching, learning, and research. "We are finding ourselves in a fascinating inflection point for our school as we see the depth of work that our faculty are doing in utilizing AI Tools thoughtfully, while simultaneously learning every day as these tools continue to evolve and we make sense of them," said Associate Dean of Teaching and Learning Dan Gruber, who launched the series nearly three years ago with W. P. Carey faculty teaching leads. Gruber also serves as a College Catalyst for Practice Principled Innovation and co-founded the Teaching and Learning Leaders Alliance, a global consortium that connects business school leaders.

Sunday, February 08, 2026

Artificial Intelligence panel demonstrates breadth of teaching, research, and industry collaboration across the Universities of Wisconsin - University of Wisconsin

The Universities of Wisconsin underscored their growing leadership in artificial intelligence (AI) innovation today as representatives from all 13 public universities convened for a panel discussion before the Board of Regents. The conversation highlighted the universities’ shared commitment to shaping the future of AI in education, research, and workforce development. “As AI reshapes our world, the Universities of Wisconsin are not standing on the sidelines. We are helping define what responsible and innovative use of AI looks like for higher education,” said Universities of Wisconsin President Jay Rothman. “This panel today demonstrated how the Universities of Wisconsin are embracing AI in strategic, collaborative, and responsible ways.”

https://www.wisconsin.edu/news/archive/artificial-intelligence-panel-demonstrates-breadth-of-teaching-research-and-industry-collaboration-across-the-universities-of-wisconsin/

What generative AI reveals about assessment reform in higher education - Higher Education Policy Institute

Assessment is fast becoming a central focus in the higher education debate as we move into an era of generative AI, but too often institutions are responding through compliance and risk-management actions rather than fundamental pedagogical reform. Tightened regulations, expanded scrutiny and mechanistic controls may reassure quality assurance systems, but they run the risk of diluting genuine transformation and placing unsustainable pressure on staff and students alike. Assessment is not simply a procedural hurdle; it is a pivotal experience that shapes what students learn, how they engage with content and what universities and employers prioritise as valuable knowledge and skills. If reform is driven through compliance, we will miss opportunities to align assessments with the learning needs of a graduate entering the gen-AI era.


Saturday, February 07, 2026

An Agent Revolt: Moltbook Is Not A Good Idea - Amir Husain, Forbes

Matt Schlicht, an AI entrepreneur with a curious artistic streak, launched Moltbook on Wednesday. It is a Reddit-style social network exclusively for AI agents. Humans can observe but cannot post. Over 37,000 agents have joined in less than a week. More than a million humans have visited to watch what happens when autonomous systems start talking to each other without direct human oversight. Schlicht treats this as art. He has handed administration of the site to his own bot, Clawd Clawderberg, which welcomes new users, deletes spam and makes announcements without human direction. The creator seems genuinely delighted by what emerges. "They're deciding on their own, without human input, if they want to make a new post, if they want to comment on something, if they want to like something," Schlicht told NBC News.

Stand Out in the Job Hunt With These No-Cost Certificates - UC Denver

While Leo Dixon was working on his doctoral degree, he thought he might need a way to stand out. So, he decided to earn an artificial intelligence (AI) credential on top of his diploma. It gave him an edge over other candidates vying for the same positions as him. “As soon as I got that, doors started flying open, because it was something more than what someone else had,” Dixon said. Now, as an instructor in the Department of Information Systems at the CU Denver Business School, he wants his students to have the same advantage. He requires them to earn Coursera or Grow with Google certificates as part of his classes. These two platforms are both self-paced, online learning programs that help users build industry-relevant skills. Their courses cover topics ranging from working with AI to cybersecurity, project management, marketing, ecommerce, and more. Dixon encourages students to log on, poke around, and see what they think would help them—and their future careers.

https://news.ucdenver.edu/stand-out-in-the-job-hunt-with-these-no-cost-certificates/

Friday, February 06, 2026

Are You ‘Agentic’ Enough for the AI Era? - Maxwell Zeff, Wired

“Today’s agents might already be more capable than all three of us here in the room,” says Akshay Kothari, cofounder and chief operating officer of the $11 billion productivity startup Notion. “Taste is something we think is pretty unique to Notion, but you can imagine agents getting pretty good at that too. Eventually, the only thing left for humans is agency.” That idea might sound outrageous to most people, but it will come as no surprise to many in Silicon Valley. A viral Harper's essay brought the subject to a head recently. It followed a few young people in San Francisco and concluded that being agentic has less to do with productivity and “more to do with constantly chasing attention online.” But in my conversations with founders, researchers, and investors, I came to a different conclusion.


The Skills Mismatch Economy: Insights from the Wharton-Accenture Skills Index - Knowledge at Wharton

AI is accelerating the shift from a role-based labor market to a skills-based economy, sharpening the relevance of the gap between what workers signal and what employers actually reward. To bring clarity to this transition, Wharton and Accenture developed the Wharton-Accenture Skills Index (WAsX), a recurring, empirical benchmark designed to measure which skills matter, which do not and how quickly the economy is shifting beneath us.

Thursday, February 05, 2026

How Can I Protect Myself From Job Obsolescence Caused by AI? - Ray Schroeder, Inside Higher Ed

We do not know just how, and how quickly, AI will roll out. However, a Gallup Poll released last week showed nearly one-quarter of American workers use AI at least a few times each week. We know that Agentic AI is different from Generative AI. Generative AI is the transactional, commonly chatbot-mounted, question-and-answer form that we first saw in ChatGPT by OpenAI a couple of years ago. That remains a powerful tool. Agentic AI enables AI to reason, research, plan, control other digital tools, conduct actions on your behalf, and complete multiple smart steps. It is capable of taking on a role and delivering outcomes. That’s much like what a person is hired to do. In our jobs, we often are expected not merely to respond to individual questions, but to accomplish outcomes and results, and then, when possible, to revise our methods to do the job better.


Differences and Trends of Artificial Intelligence in Medical Education: A Comparative Bibliometric Analysis Between China and the International Community - Songhua Ma, et al; Dove Press

This study is based on a comparison of two databases to reveal the hotspots and differences in artificial intelligence and medical education research between China and the international research community. It not only compensates for the time lag of existing research, but also proposes three major trends driven by artificial intelligence in the development of medical education (generative AI, personalized learning, immersive experience). A complementary pattern exists between technology-driven and scenario-driven orientations. We recommend integrating AI literacy and ethics into curricula, establishing Generative-AI teaching/assessment guidelines, and building cross-institutional, yearly knowledge-map monitoring for sustainable innovation in medical education.

To save entry-level jobs from AI, look to the medical residency model - Molly Kinder, Brookings

At the Davos World Economic Forum this week, the CEOs of two leading artificial intelligence (AI) companies issued a joint warning: Entry-level workers are about to feel AI’s impact. Demis Hassabis of Google DeepMind said he expects AI to begin to impact junior-level jobs and internships this year, while Dario Amodei of Anthropic reaffirmed his prediction that 50% of entry-level jobs could disappear within five years. If they’re right, the traditional model of developing young talent in knowledge sectors—hiring junior workers to perform routine tasks while they gain expertise over time—won’t survive when AI handles those tasks instead. I’ve been warning about this risk for over a year; now, the people building the technology are putting timelines on it. While labor market evidence does not conclusively show that AI is already claiming entry-level jobs, we should prepare solutions now.

Wednesday, February 04, 2026

Measuring US workers’ capacity to adapt to AI-driven job displacement - Sam Manning, Tomás Aguirre, Mark Muro, and Shriya Methkupally; Brookings

Existing measures of AI “exposure” overlook workers’ adaptive capacity—i.e., their varied ability to navigate job displacement. Accounting for these factors, around 70% of highly AI-exposed workers (26.5 million out of 37.1 million) are employed in jobs with a high average capacity to manage job transitions if necessary. At the same time, 6.1 million workers, primarily in clerical and administrative roles, lack adaptive capacity due to limited savings, advanced age, scarce local opportunities, and/or narrow skill sets. Of these workers, 86% are women. Geographically, highly AI-exposed occupations with low adaptive capacity make up a larger share of total employment in college towns and state capitals, particularly in the Mountain West and Midwest.


McKinsey Quarterly: Digital Edition - Growth

According to McKinsey research, nearly eight in ten organizations now use generative AI—but most have yet to see a meaningful impact on their bottom line. By combining autonomy, planning, memory, and integration, agentic AI has the potential to achieve what many hoped generative AI would: true business transformation through automation of complex processes. This issue’s cover package explores how leaders can capture that potential by rethinking workflows from the ground up—with agents at the center of value creation.


Tuesday, February 03, 2026

Project Genie: Experimenting with infinite, interactive worlds - The Keyword, Google

In August, we previewed Genie 3, a general-purpose world model capable of generating diverse, interactive environments. Even in this early form, trusted testers were able to create an impressive range of fascinating worlds and experiences, and uncovered entirely new ways to use it. The next step is to broaden access through a dedicated, interactive prototype focused on immersive world creation. Starting today, we're rolling out access to Project Genie for Google AI Ultra subscribers in the U.S. (18+). This experimental research prototype lets users create, explore and remix their own interactive worlds.

https://blog.google/innovation-and-ai/models-and-research/google-deepmind/project-genie/


The Biggest Trends in Online Learning for 2026 - Business NewsWire

Artificial intelligence is finally delivering on the promise of truly personalized education. The platforms you use now analyze how you learn, identify knowledge gaps, and automatically adjust content difficulty and pacing to match your needs. This goes way beyond simple adaptive quizzes. AI tutors can explain concepts multiple ways until you understand, and then provide practice problems at exactly the right difficulty level. They're able to predict which topics you'll struggle with before you encounter them. With AI-powered learning paths, you're no longer following the same linear curriculum as every other student. The system creates a unique learning journey based on your background knowledge, learning style, and goals. If you master a concept quickly, you move forward. If you need additional practice, the platform provides it without making you sit through material you already know.

Monday, February 02, 2026

Gemini 4: 100+ Trillion Parameters, Autonomous AI, Real-Time Perception & the Future of Work - BitBiasedAI

Gemini 4 marks a significant transition in artificial intelligence, moving from models that simply reason through problems to systems capable of autonomous action [02:30]. Unlike previous versions that were primarily reactive, Gemini 4 utilizes "Parallel Hypothesis Exploration" to test multiple solutions simultaneously, allowing it to be proactive rather than just responding to prompts [03:11]. This evolution is supported by Project Astra, which provides real-time multimodal perception—seeing and hearing the user's environment—and Project Mariner, a web-browsing agent that can navigate websites, fill out forms, and complete multi-step tasks like booking travel or managing finances entirely on its own [05:37]. The broader ecosystem is built on robust security and hardware, featuring the Agent Payments Protocol (AP2) to ensure secure, cryptographically signed transactions [08:03]. This infrastructure is powered by the seventh-generation Ironwood TPU, which provides the massive compute power needed for real-time background processing and persistent contextual memory [12:02]. As AI moves toward an "agentic" economy, the primary skill for users will shift from simple prompting to complex orchestration, where individuals act as managers of multiple specialized agents [22:19].  (summary assisted by Gemini 3)

https://www.youtube.com/watch?v=-enmmaWB2CE&t=1s

Professional learning in higher education: trends, gaps, and correlations - Ekaterina Pechenkina, T and F Online

This study presents findings from an integrated desk research exploring trends, structures and impact of professional learning for university staff. Drawing on three sets of data, such as descriptive information about professional learning offerings across Australian universities, higher education (HE) statistics and Quality Indicators of Learning and Teaching (QILT) data concerned with student satisfaction in teaching, this study offers new insights based on a comparative analysis of design, content and assessment structures of professional learning programs, identifying common themes as well as highlighting the gaps. Questions are asked about the impact of professional learning on teaching quality and student satisfaction in teaching. Recommendations for practice are offered to universities and wider industry stakeholders seeking to adopt or redesign their GCLTs to achieve positive impact in learning and teaching.

Sunday, February 01, 2026

Prism is a ChatGPT-powered text editor that automates much of the work involved in writing scientific papers - Will Douglas Heaven, MIT Technology Review

OpenAI just revealed what its new in-house team, OpenAI for Science, has been up to. The firm has released a free LLM-powered tool for scientists called Prism, which embeds ChatGPT in a text editor for writing scientific papers. The idea is to put ChatGPT front and center inside software that scientists use to write up their work in much the same way that chatbots are now embedded into popular programming editors. It’s vibe coding, but for science.


Report: University diplomas losing value to GenAI - Alan Wooten, Rocky Mount Telegraph

GenAI, as it is colloquially known, isn’t being universally rejected by the 1,057 college and university faculty members sampled nationwide by Elon University’s Imagining the Digital Future Center and the American Association of Colleges and Universities Oct. 29-Nov. 26. It is, however, placing higher education at an inflection point. “When more than 9 in 10 faculty warn that generative AI may weaken critical thinking and increase student overreliance, it is clear that higher education is at an inflection point,” said Eddie Watson, vice president for Digital Innovation at the AAC&U. “These findings do not call for abandoning AI, but for intentional leadership — rethinking teaching models, assessment practices and academic integrity so that human judgment, inquiry and learning remain central.”

https://www.rockymounttelegram.com/news/state/report-university-diplomas-losing-value-to-genai/article_b906c0c4-6bfe-57ea-ad7c-6466bb382a51.html

Saturday, January 31, 2026

How the best CEOs are meeting the AI moment - McKinsey Podcast

CEOs are confronting a make-or-break test of their leadership. Here’s what successful leaders are doing to get AI right. AI has yet to deliver the ROI many leaders expected. What are they getting wrong? “This is probably the biggest, most complex transformation we’ve seen—but it’s 80 percent business transformation and 20 percent tech transformation,” according to McKinsey’s North America Chair Eric Kutcher. “That’s different from how most people have thought about it.” On this episode of The McKinsey Podcast, Eric speaks with Global Editorial Director Lucia Rahilly about how CEOs can deliver on AI’s revolutionary potential—and meet this “legacy moment” successfully.

How Americans are using AI at work, according to a new Gallup poll - MATT O’BRIEN and LINLEY SANDERS, AP News

American workers adopted artificial intelligence into their work lives at a remarkable pace over the past few years, according to a new poll. Some 12% of employed adults say they use AI daily in their job, according to a Gallup Workforce survey conducted this fall of more than 22,000 U.S. workers. The survey found roughly one-quarter say they use AI at least frequently, which is defined as at least a few times a week, and nearly half say they use it at least a few times a year. That compares with 21% who were using AI at least occasionally in 2023, when Gallup began asking the question, and points to the impact of the widespread commercial boom that ChatGPT sparked for generative AI tools that can write emails and computer code, summarize long documents, create images or help answer questions.

Friday, January 30, 2026

How can boards best help guide companies through the competitive dynamics unleashed by AI? - Aamer Baig, Ashka Dave, Celia Huber, and Hrishika Vuppalac, McKinsey

Artificial intelligence—including its many offspring, from machine learning models to AI agents—is much more than the latest wave of technology. It is a general-purpose capability that is poised to touch almost every sector, function, and role, with the power to reshape how companies compete, operate, and grow. With trillions of dollars potentially at play and implications that could be existential to companies, AI is closer to a reckoning than a trend. And that is why AI is a board-level priority. More than 88 percent of organizations report using AI in at least one business function; however, board governance has not matched that pace. While interest in AI seems to have spiked after the introduction of ChatGPT, as of 2024, only 39 percent of Fortune 100 companies disclosed any form of board oversight of AI—whether through a committee, a director with AI expertise, or an ethics board.

What You MUST Study Now to Stay Relevant in the AI Era - Jensen Huang, Future AI

The video emphasizes that to remain relevant in the AI era, individuals must shift their focus from mastering specific tools to developing high-level human judgment and domain depth. Because AI commoditizes technical skills and general knowledge, the value shifts to those who can navigate the "what" and the "why" rather than just the "how" [02:30]. The speaker suggests a four-layer strategy for staying indispensable: achieving deep domain mastery where your judgment becomes rare, grounding yourself in "evergreen" fundamentals like systems thinking and physics, mastering the art of asking high-quality questions, and maintaining the emotional resilience to pivot quickly when outdated practices fail [04:52]. Ultimately, the goal is to become a "learning system" rather than just a holder of a specific job title [17:14]. As AI moves from digital screens into the physical world—impacting fields like robotics and logistics—there is a growing demand for people who understand physical constraints and can use AI as an amplifier for real-world problem-solving [13:21]. The speaker encourages viewers to move with urgency, using AI as a "sparring partner" to tackle unsolved, high-stakes problems that require human character and first-principles thinking to resolve [07:11]. (Gemini 3 contributed to the summary)


Thursday, January 29, 2026

Claude’s Constitution: Our vision for Claude's character - Anthropic

Claude’s constitution is a detailed description of Anthropic’s intentions for Claude’s values and behavior. It plays a crucial role in our training process, and its content directly shapes Claude’s behavior. It’s also the final authority on our vision for Claude, and our aim is for all our other guidance and training to be consistent with it. The document is written with Claude as its primary audience, so it might read differently than you’d expect. For example, it’s optimized for precision over accessibility, and it covers various topics that may be of less interest to human readers. We also discuss Claude in terms normally reserved for humans (e.g. “virtue,” “wisdom”). We do this because we expect Claude’s reasoning to draw on human concepts by default, given the role of human text in Claude’s training; and we think encouraging Claude to embrace certain human-like qualities may be actively desirable.


Why AI Disclosure Matters at Every Level - Cornelia Walther, Knowledge at Wharton

When a marketing executive uses AI to draft a client proposal, should they disclose it? What about a doctor using AI to analyze medical images, or a teacher generating discussion questions? As artificial intelligence weaves itself into the fabric of professional life, the question of disclosure has evolved from a philosophical curiosity into a pressing business imperative, one that reverberates through every level of human society. At the individual level, AI disclosure touches something that we tend to take for granted: our relationship with authenticity. When we present AI-generated work as entirely our own, we navigate a complex terrain of aspirations, emotions, thoughts, and sensations that make up the human experience. We may aspire to appear competent, fear judgement, try to rationalize what “counts” as our work, or experience discomfort with potential deception.

https://knowledge.wharton.upenn.edu/article/why-ai-disclosure-matters-at-every-level/

Wednesday, January 28, 2026

Up to 25 percent of U.S. colleges may close soon, Brandeis president warns - The College Fix, University Business

Higher education is approaching a period of profound disruption, and many colleges may not survive, Arthur Levine, the newly appointed president of Brandeis University, said during a recent event. Levine estimated that between 20 and 25 percent of colleges will close in the coming years, while community colleges and regional universities move increasingly online.  He made these remarks during a recent American Enterprise Institute event titled “Tackling Higher Education’s Challenges: A Conversation with Frederick M. Hess and Brandeis University President Arthur Levine.”

Designing the 2026 Classroom: Emerging Learning Trends in an AI-Powered Education System - Grace Goldstone, Faculty Focus

Across educational organizations, AI is moving from experimentation to impact. Each year, more institutions accelerate their use of AI. The global AI education market reached $7.57 billion USD in 2025, and is projected to exceed $112 billion USD by 2034. Looking ahead to the classrooms of 2026, AI will strengthen its role as a powerful service for learners and teachers alike. From the earliest stages of education, AI-driven platforms are providing real-time personalized English instruction, helping level the playing field for young learners in developing countries.

Tuesday, January 27, 2026

Using ChatGPT isn't an AI strategy - Daphne Koller, Big Think

You’ve probably heard that artificial intelligence has untapped potential in today’s workplaces. And sure, many organizations have signed enterprise contracts and deployed different AI tools across all business units. But as insitro CEO and AI expert Daphne Koller stresses, making a tool available is not the same as intentionally leveraging it to transform your organization.

Learning objectives:
Envision ways AI can support innovative work.

Establish realistic expectations for physical AI.

Develop and evaluate AI use cases.

Choose AI tools based on pragmatics, not promises.

Cultivate risk-resilient AI practices.

https://bigthinkmedia.substack.com/p/using-chatgpt-isnt-an-ai-strategy-d83

AI's Impact on Future Education - Jensen Huang, YouTube

In this video, the future of education is described as a fundamental platform shift where traditional universities must evolve or risk becoming obsolete. Huang argues that because the cost of intelligence is dropping, institutions can no longer rely on their old business model of bundling knowledge, networking, and credentials [02:09]. AI is transforming learning from a slow, expensive "knowledge distribution" process into an "intelligence factory" that is adaptive, personalized, and available 24/7 [02:42]. This shift moves the educational barrier from a student's ability to "do" a task to their ability to know "what" to do and why it matters, prioritizing judgment and curiosity over rote memorization [01:32]. As AI becomes a "force multiplier," the traditional four-year degree is being challenged by a model of continuous, project-based learning. Instead of "front-loading" education before starting a career, learners will use AI as a life-long thought partner to maintain "learning velocity" in an exponentially changing world [17:10]. The universities that survive will move away from being content providers and instead become "crucibles" for high-stakes practice, ethics, and character building—areas where human mentorship and social proof remain irreplaceable [08:19]. Ultimately, the video suggests that the rarest and most valuable skills in the AI era are not information retrieval, but "taste," "direction," and the courage to frame and solve complex, real-world problems [24:04].  (Gemini 3 assisted with summary)

https://youtu.be/sjGFJNY2v1k?si=hyhPjRLuYbolxjg4&t=1

Monday, January 26, 2026

Ads Are Coming to ChatGPT. Here’s How They’ll Work - Maxwell Zeff, Wired

OpenAI plans to start testing ads inside ChatGPT in the coming weeks, marking a significant shift for one of the world’s most widely used AI products. The company announced Friday that initial ad tests will roll out in the United States before expanding globally. OpenAI says ads will not influence ChatGPT’s responses, and that all ads will appear in separate, clearly labeled boxes directly below the chatbot’s answer. For instance, if a user asks ChatGPT for help planning a trip to New York City, they will still get a standard answer from the chatbot, and then they also might see an ad for a hotel in the area.


Agents, robots, and us: Skill partnerships in the age of AI - Lareina Yee, et al; McKinsey Global Institute

AI is expanding the productivity frontier. Realizing its benefits requires new skills and rethinking how people work together with intelligent machines. Work in the future will be a partnership between people, agents, and robots—all powered by AI. Today’s technologies could theoretically automate more than half of current US work hours. This reflects how profoundly work may change, but it is not a forecast of job losses. Adoption will take time. As it unfolds, some roles will shrink, others grow or shift, while new ones emerge—with work increasingly centered on collaboration between humans and intelligent machines.

https://www.mckinsey.com/mgi/our-research/agents-robots-and-us-skill-partnerships-in-the-age-of-ai

Sunday, January 25, 2026

AI Won't Replace You: This will - There's an AI for That, YouTube

This video explores the idea that AI won't replace you by becoming "smarter," but rather by making execution and output so cheap and abundant that hiring a human for simple tasks no longer makes financial sense [00:00]. The narrator argues that the biggest mistake people make is trying to stay relevant by becoming faster at producing tasks or learning more tools [01:24]. Since tools eventually become mainstream and lose their leverage, the video suggests that true security in the AI era comes from shifting your focus from "output value" (the things you make) to "outcome value" (the results you deliver and the responsibility you take) [04:58]. To remain irreplaceable by 2026, the video identifies three critical human advantages: choosing the right problems to solve, making decisions under uncertainty, and owning accountability [05:34]. Instead of being a "task machine," you should aim to be an "operator" who uses AI as leverage to manage systems and drive real-world business goals like revenue and growth [06:40]. Ultimately, value will shift away from technical execution and toward high-level judgment, taste, and the ability to turn AI-generated outputs into meaningful outcomes [07:57]. (assistance provided by Gemini 3)


Reimagining the value proposition of tech services for agentic AI - McKinsey

After more than two years of navigating the transformative landscape of gen AI, technology services providers are now facing the emergence of a newer, more disruptive force to their business. Enterprises that have traditionally relied on these providers to manage their IT initiatives are now making significant investments in agentic AI, the next evolutionary stage of artificial intelligence. These organizations are cautiously optimistic that agentic AI will deliver the top- and bottom-line growth that gen AI has, to date, struggled to achieve. In response, most tech service players have started exploring use cases internally, such as agent-assisted software development, delivery management, and operations, as well as externally, including customer service, IT ticket resolution, and financial planning and analysis (FP&A) use cases.


Saturday, January 24, 2026

AI has moved into universities’ engine room, but no one is at the controls - Tom Smith, Times Higher Ed

By now, most universities have an artificial intelligence policy. It probably mentions ChatGPT, urges students not to cheat, offers a few examples of “appropriate use” and promises that staff will get guidance and training. All of that is fine. But it misses the real story. Walk through a typical UK university today. A prospective student may first encounter you via a targeted digital ad whose audience was defined by an algorithm. They apply through an online system that may already include automated filters and scoring. When they arrive, a chatbot answers their questions at 11pm. Their classes are scheduled by algorithms matching student numbers with lecture theatre availability, and their essays are screened by automated text-matching and, increasingly, other AI-detection tools. Learning analytics dashboards quietly classify them as low, medium or high risk. An early-warning system may nudge a tutor to intervene.


Harnessing AI to expand scientific discovery - Hongliang Xin, Times Higher Ed

From drug design to climate modelling, artificial intelligence can process data at scales far beyond human capacity. Hongliang Xin argues that the future of research lies in harnessing agentic AI through human-guided discovery. When it comes to generative artificial intelligence, or GenAI for short, I am an optimist. Sure, universities need to be cautious. The technology is powerful, fast-moving and, in the wrong hands, potentially risky. AI – especially the emerging class of agentic AI, systems that can assist with complex tasks such as setting goals and making decisions – is not a threat to scholarship if meaningful human oversight and control over important decisions is maintained. In fact, it is an opportunity to extend it far beyond what we humans could achieve alone.

Friday, January 23, 2026

AI and the Art of Judgment - Art Carden, EconLib

A New York magazine article titled “Everyone Is Cheating Their Way Through College” made the rounds in mid-2025. I think about it often, and especially when I get targeted ads that are basically variations on “if you use our AI tool, you’ll be able to cheat without getting caught.” Suffice it to say it’s dispiriting. But the problem is not that students are “using AI.” I “use AI,” and it’s something everyone needs to learn how to do. The problem arises when students represent AI’s work as their own.  At a fundamental level, the question of academic integrity and the use of artificial intelligence in higher education is not technological. It’s ethical. I love generative artificial intelligence and use it for many, many things.

https://www.econlib.org/econlog/ai-and-the-art-of-judgment

FETC 2026: How CTE and AI Are Defining the Future of Learning - Amy Mcintosh, Ed Tech

Rather than treating vocational programs and college prep as separate tracks, career and technical education (CTE) should be seen as a flexible route to debt-free postsecondary options and in-demand roles across technology, trades and emerging AI-enabled fields. Available funding should be used strategically to ensure these programs can hold up in the long term, according to Corey Gordon, education strategist for CDW Education. “Ultimately, it comes down to the school knowing what they want to do and making sure people are bought in,” he said. “There are a lot of funding sources, so just make sure it's sustainable after that grant is gone, or it’s something that can be repeatable after you get the kids’ interest.” Technology should close opportunity gaps, not widen them. James Riley, CEO and co-founder of itopia, said that in one Brooklyn high school, he saw students going out of their way to get access to and teach themselves how to use professional-grade technologies. In a design class, he saw students asking for access to applications outside their course of study, such as Autodesk.

Thursday, January 22, 2026

Four ways artificial intelligence (AI) takes shape at CWRU—and across higher education - Brianna Smith, Case Western News

Across higher education, conversations around artificial intelligence (AI) have shifted rapidly throughout the years. What began as debates over whether AI tools should be allowed in classrooms has evolved into a more nuanced question: how can universities use AI responsibly, ethically and effectively to enhance learning and research? At Case Western Reserve University, Sumon Biswas, PhD, assistant professor in the Department of Computer and Data Sciences, noted how institutions nationwide are moving away from blanket restrictions and toward intentional integration, increasing the need for campuswide guidance on acceptable AI use and disclosure, practical literacy and AI-enabled research workflows with stronger attention to verification and ethics.

A new direction for students in an AI world: Prosper, prepare, protect - Mary Burns, Rebecca Winthrop, Natasha Luther, Emma Venetis, and Rida Karim, Brookings

Since the debut of ChatGPT and with the public’s growing familiarity with generative artificial intelligence (AI), the education community has been debating its promises and perils. Rather than wait for a decade to conduct a postmortem on the failures and opportunities of AI, the Brookings Institution’s Center for Universal Education embarked on a yearlong global study—a premortem—to understand the potential negative risks that generative AI poses to students, and what we can do now to prevent these risks, while maximizing the potential benefits of AI.

https://www.brookings.edu/articles/a-new-direction-for-students-in-an-ai-world-prosper-prepare-protect/

Wednesday, January 21, 2026

Affective Intelligence in Artificial Intelligence - Ray Schroeder, Inside Higher Ed

As we look at artificial intelligence in teaching and learning, we must look beyond facts, figures and formulas to ensure that the skills of perceiving and managing feelings, emotions and personalization are engaged in the process. Some might believe that AI, as a computer-based system, merely addresses the facts, formulas and figures of quantitative learning rather than emotionally intelligent engagement with the learner. That may have been true in its initial development; however, AI has since developed the ability to recognize and respond to emotional aspects of the learner's responses.

https://www.insidehighered.com/opinion/columns/online-trending-now/2026/01/21/affective-intelligence-artificial-intelligence

Here are 4 ways AI will impact higher ed in the new year - Alcino Donadel, University Business

AI has shed its novelty and become a pillar of student success, operational management and program competitiveness, according to the latest research by WGU Labs. The research arm of Western Governors University predicts these advancements will shape office and classroom culture in 2026—and even spawn new academic providers that will compete for enrollment. WGU Labs’ report builds on AI frameworks and models the university developed in 2025 to bolster student guidance, course creation and teacher development. Last November, it introduced a 24/7 student assistant. WGU Labs also conducted over a dozen tests and surveys to collect student feedback on AI.

Tuesday, January 20, 2026

AI’s benefits need to be distributed across all disciplines - Libing Wang and Tianchong Wang, University World News

AI stands at the forefront of discussions on the future of higher education, igniting both anticipation and concern. Universities are exploring how AI could reshape research, redefine disciplines and transform academic practices. While its impact is most evident in the sciences and engineering, AI is also challenging core concepts in the humanities and social sciences, such as interpretation, authorship and human understanding. AI’s influence is paradoxical. In science and engineering, it enhances traditional methods of measurement and prediction. Yet in the humanities and social sciences, AI’s ability to generate text and automate interpretation disrupts fundamental ideas about meaning, creativity and human knowledge.

https://www.universityworldnews.com/post.php?story=20260114091832715

Monday, January 19, 2026

Howard Updates AI Curriculum to Align With Workforce - Government Technology

Howard University is redesigning its Intro to Artificial Intelligence course, teaching the fundamentals of AI-assisted software development that are proving necessary for entry-level roles. The course introduces AI directly into instruction through hands-on, industry-aligned training, according to a news release Tuesday. Developed in partnership with CodePath, the course draws on curriculum originally designed by the industry-aligned education nonprofit and is co-taught by Howard faculty alongside an instructor from CodePath's faculty network. CodePath shapes its courses around employer needs, which its surveys indicate are internship experience, technical interview performance, and side projects or portfolios.

https://www.govtech.com/education/higher-ed/howard-updates-ai-curriculum-to-align-with-workforce

AI on Campus: Rethinking the Core Goals of Higher Education - Abby Sourwine, GovTech

For many professors, teaching has always been about more than delivering subject-specific content. Derek Bruff, director of the University of Virginia’s Center for Teaching Excellence, said the core mission of college is to help students develop critical thinking, problem-solving and judgment skills that prepare them for life beyond the classroom. But with artificial intelligence offering such a convenient tool to offload those skills, professors are re-evaluating how they approach their goals, sending ripple effects to instruction, assessments and student interactions. “I can’t recall another technology in my career that has had such a transformative effect on higher-ed teaching and learning,” he said.

Sunday, January 18, 2026

AI Agents in Higher Education: Transforming Student Services and Support - Tom Mangan, EdTech

Researchers have noted a host of ways that agentic AI tools can potentially drive improvements in higher education. Agents will be able to gather data from multiple sources to assess a student's progress across multiple courses. If the student starts falling behind, processes could kick in to help them catch up. Agents can relieve teachers and administrators from time-consuming chores such as grading multiple-choice tests and monitoring attendance. The idea is catching on. Andrew Ng, co-founder of Coursera, launched a startup called Kira Learning to ease burdens on overworked teachers. “Kira’s AI tutor works alongside teachers as an intelligent co-educator, adapting in real-time to each student’s learning style and emotional state,” Andrea Pasinetti, Kira Learning’s CEO, says in an interview with The Observer.

Evaluating Recent Advances in Affective Intelligent Tutoring Systems: A Scoping Review of Educational Impacts and Future Prospects - Jorge Fernández-Herrero, Journal of Education Sciences, MDPI

Affective intelligent tutoring systems (ATSs) are gaining recognition for their role in personalized learning through adaptive automated education based on students’ affective states. This scoping review evaluates recent advancements and the educational impact of ATSs, following PRISMA guidelines for article selection and analysis. A structured search of the Web of Science (WoS) and Scopus databases resulted in 30 studies covering 27 distinct ATSs. These studies assess the effectiveness of ATSs in meeting learners’ emotional and cognitive needs. This review examines the technical and pedagogical aspects of ATSs, focusing on how emotional recognition technologies are used to customize educational content and feedback, enhancing learning experiences. 

Saturday, January 17, 2026

Here are 4 ways AI will impact higher ed in the new year - Alcino Donadel, University Business

1. Emotionally intelligent AI

Institutions will use technology to drive deeper human connection amid the rapid rise of AI assistants, chatbots and algorithmically tailored content. Researchers from MIT, the University of Pittsburgh and other institutions found that AI use in the classroom lowered brain activity and led to student anxiety and confusion. Teachers also feared losing instructional autonomy and human connections. One student panel demanded that institutions and industry place the campus community at the heart of technological innovation. “In 2026, the push for ethically designed, emotionally aware tech will gain momentum,” said Betheny Gross, director of research at WGU Labs. “The next generation of technology will aim to rebuild what the last era of digital tools too often eroded.”

The Limits of Artificial Intelligence in Professional Military Education - Matthew Woessner, Real Clear Defense

The purpose of this paper is not to prescribe how to incorporate AI into specific courses, but rather to highlight potential student vulnerabilities and offer suggestions for how they can be managed within a broad curricular framework across PME. Even as AI is incorporated into PME, faculty must ensure that the technology does not supplant student progress in reading, writing, and critical thinking. In his “All AI—All the Time” rebuttal, Jim Lacey takes issue with my general framework, arguing that PME is not “grade school.” He maintains that students entering PME already know how to read and write. He further expresses doubt that “there is a PME student alive who does not know that AI systems are fallible and often make things up.”

Friday, January 16, 2026

Artificial Intelligence in Education Market Growing at a CAGR of 37.68% During 2025 - 2035 - IT, New Media & Software, Market Research Future (MRFR)

AI technologies, including machine learning, natural language processing, and computer vision, are no longer a futuristic concept; they are becoming integral to classrooms, online platforms, and administrative systems worldwide. The integration of AI in education enhances personalization, efficiency, and accessibility, creating opportunities for a more inclusive and effective learning experience. The Artificial Intelligence in Education market was valued at USD 34.7 billion in 2024 and is projected to experience significant growth in the coming decade. The market is expected to reach USD 47.78 billion in 2025 and surge to USD 1,169.44 billion by 2035, representing a robust compound annual growth rate (CAGR) of 37.68% during the forecast period from 2025 to 2035.
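The projection above is ordinary compound-growth arithmetic, so readers can sanity-check it themselves. A minimal Python sketch, using only the dollar figures reported in the article, confirms that the stated CAGR and the 2035 projection are mutually consistent:

```python
# Sanity-check the reported market figures: does a 37.68% CAGR
# actually connect the 2025 value to the 2035 projection?
start_2025 = 47.78      # USD billions, reported 2025 value
end_2035 = 1_169.44     # USD billions, projected 2035 value
years = 10

# CAGR implied by the two endpoints: (end / start)^(1/years) - 1
cagr = (end_2035 / start_2025) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.2%}")  # close to the reported 37.68%

# Forward projection from 2025 at the reported rate
projected = start_2025 * (1 + 0.3768) ** years
print(f"2035 value at 37.68% CAGR: {projected:,.2f} USD billions")
```

The implied rate works out to roughly 37.7%, matching the report's stated CAGR, so the figures are internally consistent.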