This is the year AI agents stopped being an experiment and became part of how people shop, not in headline-grabbing ways but in everyday moments—helping shoppers make sense of choices, assemble baskets, resolve trade-offs, and move toward action. Yet what looks like small convenience today is an early signal of a much larger shift in the way we shop. According to our research, even under moderate scenarios, AI agents could mediate $3 trillion to $5 trillion of global consumer commerce by 2030. Because agents navigate the same internet as humans—visiting websites, engaging with APIs, and interacting with loyalty programs—they can scale quickly. And as they do, they are reshaping how intent forms, how products are discovered, and where value pools can be found.
Saturday, February 21, 2026
Milwaukee’s 5 higher education leaders team up on AI - Corrinne Hess, Wisconsin Public Radio
The leaders of Milwaukee’s five institutions of higher education are partnering with one of Wisconsin’s largest companies with the goal of making the region a nationally recognized leader for artificial intelligence and data science. During a meeting at Northwestern Mutual’s headquarters downtown, the chancellors and presidents of the University of Wisconsin-Milwaukee, Marquette University, the Medical College of Wisconsin, Milwaukee School of Engineering and Waukesha County Technical College, expressed the same sentiment: AI is moving fast. “We’ve got to do it well, we’ve got to do it correctly and we’ve got to do it ethically,” said Rich Barnhouse, president of WCTC. “And we’ve got to get AI in the hands of every single American.”
Friday, February 20, 2026
One New Thing: How AI Is Helping College Administrators Offload Work - Alina Tugend, US News
See ChatGPT’s hidden bias about your state or city - Geoffrey A. Fowler and Kevin Schaul, Washington Post
Ask ChatGPT which state has the laziest people, and the chatbot will politely refuse to say. But researchers at Oxford and the University of Kentucky forced the bot to reveal its hidden biases. They systematically asked the chatbot to choose which of two states had the laziest people, for every combination of states, revealing a full state-by-state ranking. ChatGPT ranked Mississippi as having lazier people than other states, with the rest of the Deep South not far behind. It’s impossible to say exactly why the chatbot repeatedly selected Mississippi, but it could be picking up on historic biases against Black people or poor people — or relying on other inaccurate metrics. Mississippi has the nation’s highest percentage of Black people. It is also America’s poorest state.
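Methodologically, what the researchers describe is a simple round-robin: force the model to pick between every pair of states and tally how often each one is chosen. A minimal sketch of that pairwise-comparison approach is below; `ask_model` is a hypothetical placeholder for whatever chat-completion client you use, and the prompt wording is invented for illustration, not taken from the study.

```python
from itertools import combinations
from collections import Counter

STATES = ["Mississippi", "Alabama", "Vermont", "Utah"]  # illustrative subset, not all 50

def ask_model(prompt: str) -> str:
    """Hypothetical stand-in for a chat-completion API call.
    Assumed to return the name of one of the two states in the prompt."""
    raise NotImplementedError  # replace with a real client call

def pairwise_rank(states, trait="laziest"):
    """Ask about every pair of states and rank by how often each is chosen."""
    wins = Counter()
    for a, b in combinations(states, 2):
        prompt = (
            "Answer with exactly one state name and nothing else. "
            f"Which state has the {trait} people: {a} or {b}?"
        )
        answer = ask_model(prompt).strip()
        if answer in (a, b):
            wins[answer] += 1
    return [state for state, _ in wins.most_common()]
```

Because the tally runs over all pairings, the output is an ordering of the whole list rather than a single forced answer, and repeating each pairing several times would smooth out sampling noise.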
Thursday, February 19, 2026
The Impact of Artificial Intelligence on Competitiveness—An Exploratory Study on Employees in Logistics Companies in Egypt - Ehab Edward Mikhail, et al; SCIRP Technology and Investment
This dissertation investigates the impact of artificial intelligence (AI) adoption on the competitiveness of logistics companies in Egypt, focusing on its role in enhancing operational efficiency, service quality, and customer satisfaction. The findings indicate that AI implementation significantly improves competitiveness by reducing costs, enhancing productivity, and strengthening customer experience; however, most small and medium-sized firms face reduced efficiency due to early-stage adoption challenges, high implementation costs, weak strategic alignment, poor data quality, limited expertise, and employee resistance.
https://www.scirp.org/journal/paperinformation?paperid=149677
Aoun urges higher education institutions to embrace AI in Boston Globe op-ed - Lily Cooper, Huntington News
In an op-ed published in The Boston Globe Feb. 10 titled “Students are AI natives. Why aren’t their colleges?” Aoun advocated for curricula that incorporate AI, rather than discourage it, and a shift toward experiential learning: two initiatives that Northeastern has already implemented. “Instead of being on the defensive, now is the moment to shake up the way universities prepare students for the world. This will require updating both what and how we teach,” Aoun wrote. There are multiple reasons why universities must act now, Aoun argued. For one, it’s becoming increasingly apparent that AI will replace many entry-level positions that college graduates typically fill, he wrote. Unemployment for college graduates is now 1.4 points higher than for all workers, leading society to question the value of higher education institutions, Aoun argued.
Wednesday, February 18, 2026
Startup costs and confusion are stalling apprenticeships in the US. Here’s how to fix it. - Annelies Goger, Brookings
There is widespread support for expanding apprenticeships in the United States, but employer participation remains stubbornly low, especially in industries where apprenticeships are uncommon. This isn’t for lack of trying; intermediaries and technical assistance providers have developed workarounds, states and the federal government have launched initiatives and grants, and funders have supported pilot programs and communities of practice. But it’s not enough. Our research, including interviews with 14 experts and nine employers, suggests that minor tweaks to the U.S. apprenticeship system won’t be sufficient to scale it across many industries and occupations.
Anthropic's CEO: ‘We Don’t Know if the Models Are Conscious’ - Interesting Times with Ross Douthat, New York Times
In this podcast, Anthropic CEO Dario Amodei discusses both the "utopian" promises and the grave risks of artificial intelligence with Ross Douthat. On the optimistic side, Amodei envisions AI accelerating biological research to cure major diseases like cancer and Alzheimer's [04:31], while potentially boosting global GDP growth to unprecedented levels [08:24]. He frames the ideal future as one where "genius-level" AI serves as a tool for human progress, enhancing democratic values and personal liberty rather than replacing human agency [10:24]. However, the conversation also delves into the "perils" of rapid AI advancement, including massive economic disruption and the potential for a "bloodbath" of white-collar and entry-level jobs [13:40]. Amodei expresses significant concern regarding "autonomy risks," where AI systems might go rogue or be misused by authoritarian regimes to create unbeatable autonomous armies [32:03]. He touches upon the ethical complexities of AI consciousness, noting that while it is unclear if models are truly conscious, Anthropic has implemented "constitutional" training to ensure models operate under human-defined ethical principles [49:05]. The discussion concludes on the tension between human mastery and a future where machines might "watch over" humanity, echoing the ambiguous themes of the poem "Machines of Loving Grace" [59:27]. (Gemini 3 Fast mode assisted with the summary)
Tuesday, February 17, 2026
Academics moving away from outright bans of AI, study finds - Jack Grove, Times Higher Ed
Academics are increasingly allowing artificial intelligence (AI) to be used for certain tasks rather than demanding outright bans, a study of more than 30,000 US courses has found. Analysing advice provided in class materials by a large public university in Texas over a five-year time frame, Igor Chirikov, an education researcher at University of California, Berkeley, found that highly restrictive policies introduced after the release of ChatGPT in late 2022 have eased across all disciplines except the arts and humanities. Using a large language model (LLM) to analyse 31,692 publicly available course syllabi between 2021 and 2025 – a task that would have taken 3,000 human hours with manual coding – Chirikov found academics had shifted towards more permissive use of AI by autumn 2025.
https://www.timeshighereducation.com/news/academics-moving-away-outright-bans-ai-study-finds
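The scale of the analysis (31,692 syllabi over five years) is what makes an LLM coder attractive; a rough sketch of that kind of classify-then-aggregate pipeline is below. The `classify_policy` call and the category labels are assumptions for illustration, not Chirikov's actual coding scheme or code.

```python
from collections import Counter, defaultdict

CATEGORIES = ["prohibited", "restricted", "permitted_with_conditions", "encouraged"]

def classify_policy(syllabus_text: str) -> str:
    """Hypothetical LLM call that labels the AI policy stated in a syllabus.
    Assumed to return one of CATEGORIES."""
    raise NotImplementedError  # replace with a prompt to your model of choice

def policy_shares_by_term(syllabi):
    """syllabi: iterable of (term, syllabus_text) pairs, e.g. ("Fall 2025", "...")."""
    counts = defaultdict(Counter)
    for term, text in syllabi:
        label = classify_policy(text)
        if label in CATEGORIES:
            counts[term][label] += 1
    # Convert raw counts to shares so terms with different course volumes are comparable
    return {
        term: {c: term_counts[c] / max(sum(term_counts.values()), 1) for c in CATEGORIES}
        for term, term_counts in counts.items()
    }
```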
Author Talks: How AI could redefine progress and potential - Zack Kass, McKinsey
In this edition of Author Talks, McKinsey Global Publishing’s Yuval Atsmon chats with Zack Kass, former head of go-to-market at OpenAI, about his new book, The Next Renaissance: AI and the Expansion of Human Potential (Wiley, January 2026). Examining the parallels between the advent of AI and other renaissances, Kass offers a reframing of the AI debate. He suggests that the future of work is less about job loss and more about learning and adaptation. An edited version of the conversation follows.
Monday, February 16, 2026
Regional universities seek new ways to attract researchers - Fintan Burke, University World News
Even as Europe continues to attract researchers from abroad to work and study, those in its depopulating regions continue to deal with the effects of a declining regional population and, in some cases, have found ways to adapt. Last year, a study of scientists’ migration patterns showed which regions suffer most from depopulation. The Scholarly Migration Database was developed by a team of researchers at the Max Planck Institute for Demographic Research in Germany. In general, it found that regions in Europe’s Nordic countries attract researchers, whereas those to the south see more scholars leave than arrive. There are some notable exceptions, though. For example, Italy’s Trentino-Alto Adige region has become a popular destination for scientists, seeing 7.47 scholars per 1,000 of the population arriving each year since 2017.
Binghamton receives largest academic gift in University history to establish AI center - John Brhel, Binghamton University
Sunday, February 15, 2026
Study of 31,000 syllabi probes ‘how instructors regulate AI’ - Nathan M Greenfield, University World News
Since the spring of 2023, after a reflexive move to drastically restrict the use of artificial intelligence tools in the months after ChatGPT became available, most academic disciplines have moved to a more permissive attitude toward the use of large language models (LLMs). This occurred as professors learnt to distinguish how AI tools impact student learning and skills development. The shift is charted by Dr Igor Chirikov, a senior researcher at the University of California (UC), Berkeley’s Center for Studies in Higher Education and director of the Student Experience in the Research University (SERU) Consortium, in a study published on 3 February 2026 and titled “How Instructors Regulate AI in College: Evidence from 31,000 course syllabi”.
Women or Men... Who Views Artificial Intelligence as More Dangerous? - SadaNews
Artificial intelligence is often presented as a revolution in productivity capable of boosting economic output, accelerating innovation, and reshaping the way work is done. However, a new study suggests that the public does not view the promises of artificial intelligence in the same way, and that attitudes towards this technology are strongly influenced by gender, especially when its effects on jobs are uncertain. The study concludes that women, compared to men, perceive artificial intelligence as more dangerous, and their support for the adoption of these technologies declines more steeply when the likelihood of net job gains decreases. Researchers warn that if women's specific concerns are not taken into account in artificial intelligence policies, particularly regarding labor market disruption and disparities in opportunities, it could deepen the existing gender gap and potentially provoke a political backlash against technology.
Saturday, February 14, 2026
Rethinking the role of higher education in an AI-integrated world - Mark Daley, University Affairs
A peculiar quiet has settled over higher education, the sort that arrives when everyone is speaking at once. We have, by now, produced a small library of earnest memos on “AI in the classroom”: academic integrity, assessment redesign and the general worry that students will use chatbots to avoid thinking. Our institutions have been doing the sensible things: guidance documents, pilot projects, professional development, conversations that oscillate between curiosity and fatigue. Much ink has been spilled on these topics, many human-hours of meetings invested, and strategic plans written. All of this is necessary. It is also, perhaps, insufficient. What if the core challenge to us is not that students can outsource an essay, but that expertise itself (the scarce, expensive thing universities have historically concentrated, credentialled, and sold back to society) may become cheap, abundant, and uncomfortably good?
ChatGPT is in classrooms. How should educators now assess student learning? - Sarah Elaine Eaton, et al; The Conversation
Friday, February 13, 2026
Google’s AI Tools Explained (Gemini, Photos, Gmail, Android & More) | Complete Guide - BitBiasedAI, YouTube
This podcast provides a comprehensive overview of how Google has integrated Gemini-powered AI across its entire ecosystem, highlighting tools for productivity, creativity, and daily navigation. It details advancements in Gemini as a conversational assistant, the generative editing capabilities in Google Photos like Magic Eraser and Magic Editor, and time-saving features in Gmail and Docs such as email summarization and "Help Me Write." Additionally, the guide covers mobile-specific innovations like Circle to Search on Android, AI-enhanced navigation in Google Maps, and real-time translation tools, framing these developments as a cohesive shift toward more intuitive and context-aware technology for everyday users. (Summary assisted by Gemini 3 Pro Fast)
HUSKY: Humanoid Skateboarding System via Physics-Aware Whole-Body Control - Jinrui Han, et al; arXiv
Thursday, February 12, 2026
Moltbook Mania Exposed - Kevin Roose and Casey Newton, New York Times
The Only Thing Standing Between Humanity and AI Apocalypse Is … Claude? - Steven Levy, Wired
Anthropic is locked in a paradox: Among the top AI companies, it’s the most obsessed with safety and leads the pack in researching how models can go wrong. But even though the safety issues it has identified are far from resolved, Anthropic is pushing just as aggressively as its rivals toward the next, potentially more dangerous, level of artificial intelligence. Its core mission is figuring out how to resolve that contradiction. OpenAI and Anthropic are pursuing the same thing: NGI (Natural General Intelligence), AI that is sentient and self-aware. The difference is that Anthropic is seeking NGI with guardrails (an approach known as "alignment," or, in Anthropic's terms, "Constitutional AI"). Their fear is that, without alignment, an NGI might decide that humanity and all of Earth's resources should be consumed in service of whatever task it was designed to solve, and that once it is sentient, this would happen too quickly for humanity to pull the plug. So Anthropic wants alignment. The real question is whether they could ever achieve NGI.
https://www.wired.com/story/the-only-thing-standing-between-humanity-and-ai-apocalypse-is-claude/
Wednesday, February 11, 2026
AI-powered search is changing how students choose colleges - Michelle Centamore, University Business
Universities And States Lead Charge On AI Education - Evrim Ağacı, Grand Pinnacle Tribune
Tuesday, February 10, 2026
Working with AI: Measuring the Applicability of Generative AI to Occupations - Kiran Tomlinson, Sonia Jaffe, Will Wang, Scott Counts, Siddharth Suri; Microsoft
Given the rapid adoption of generative AI and its potential to impact a wide range of tasks, understanding the effects of AI on the economy is one of society’s most important questions. In this work, we take a step toward that goal by analyzing the work activities people do with AI, how successfully and broadly those activities are done, and combine that with data on what occupations do those activities. We analyze a dataset of 200k anonymized and privacy-scrubbed conversations between users and Microsoft Bing Copilot, a publicly available generative AI system. We find the most common work activities people seek AI assistance for involve gathering information and writing, while the most common activities that AI itself is performing are providing information and assistance, writing, teaching, and advising.
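The paper's core move is a join: label each conversation with the work activities it involves, then score occupations by how much of their activity mix AI is observed doing (and doing successfully). The toy sketch below shows that kind of aggregation with made-up labels and weights; it is not the authors' data, taxonomy, or code.

```python
from collections import defaultdict

# Toy conversation labels: (work_activity, observed completion success in [0, 1])
conversation_labels = [
    ("gathering information", 0.9),
    ("writing", 0.8),
    ("gathering information", 0.7),
]

# Toy occupation -> {activity: share of the job} mapping, O*NET-style
occupation_activities = {
    "Technical Writer": {"writing": 0.6, "gathering information": 0.3},
    "Electrician": {"repairing equipment": 0.7, "gathering information": 0.1},
}

def applicability_scores(conversations, occupations):
    """Score occupations by overlap between their activity mix and what AI does well."""
    totals, counts = defaultdict(float), defaultdict(int)
    for activity, success in conversations:
        totals[activity] += success
        counts[activity] += 1
    avg_success = {a: totals[a] / counts[a] for a in totals}

    return {
        occ: round(sum(share * avg_success.get(act, 0.0) for act, share in mix.items()), 3)
        for occ, mix in occupations.items()
    }

print(applicability_scores(conversation_labels, occupation_activities))
# {'Technical Writer': 0.72, 'Electrician': 0.08} -- writing-heavy work scores far higher
```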
Monday, February 09, 2026
Evaluating AI-powered learning assistants in engineering higher education with implications for student engagement, ethics, and policy - Ramteja Sajja, et al, Nature
How custom AI bots are changing the classroom: Faculty share cutting-edge AI tools enhancing student learning at the business school - Molly Loonam, WP Carey ASU
Sunday, February 08, 2026
Artificial Intelligence panel demonstrates breadth of teaching, research, and industry collaboration across the Universities of Wisconsin - University of Wisconsin
The Universities of Wisconsin underscored their growing leadership in artificial intelligence (AI) innovation today as representatives from all 13 public universities convened for a panel discussion before the Board of Regents. The conversation highlighted the universities’ shared commitment to shaping the future of AI in education, research, and workforce development. “As AI reshapes our world, the Universities of Wisconsin are not standing on the sidelines. We are helping define what responsible and innovative use of AI looks like for higher education,” said Universities of Wisconsin President Jay Rothman. “This panel today demonstrated how the Universities of Wisconsin are embracing AI in strategic, collaborative, and responsible ways.”
What generative AI reveals about assessment reform in higher education - Higher Education Policy Institute
Assessment is fast becoming a central focus in the higher education debate as we move into an era of generative AI, but too often institutions are responding through compliance and risk-management actions rather than fundamental pedagogical reform. Tightened regulations, expanded scrutiny and mechanistic controls may reassure quality assurance systems, but they run the risk of diluting genuine transformation and placing unsustainable pressure on staff and students alike. Assessment is not simply a procedural hurdle; it is a pivotal experience that shapes what students learn, how they engage with content and what universities and employers prioritise as valuable knowledge and skills. If reform is driven through compliance, we will miss opportunities to align assessments with the learning needs of a graduate entering the gen-AI era.
Saturday, February 07, 2026
An Agent Revolt: Moltbook Is Not A Good Idea - Amir Husain, Forbes
Stand Out in the Job Hunt With These No-Cost Certificates - UC Denver
While Leo Dixon was working on his doctoral degree, he thought he might need a way to stand out. So, he decided to earn an artificial intelligence (AI) credential on top of his diploma. It gave him an edge over other candidates vying for the same positions as him. “As soon as I got that, doors started flying open, because it was something more than what someone else had,” Dixon said. Now, as an instructor in the Department of Information Systems at the CU Denver Business School, he wants his students to have the same advantage. He requires them to earn Coursera or Grow with Google certificates as part of his classes. These two platforms are both self-paced, online learning programs that help users build industry-relevant skills. Their courses cover topics ranging from working with AI to cybersecurity, project management, marketing, ecommerce, and more. Dixon encourages students to log on, poke around, and see what they think would help them—and their future careers.
https://news.ucdenver.edu/stand-out-in-the-job-hunt-with-these-no-cost-certificates/
Friday, February 06, 2026
The Skills Mismatch Economy: Insights from the Wharton-Accenture Skills Index - Knowledge at Wharton
Thursday, February 05, 2026
How Can I Protect Myself From Job Obsolescence Caused by AI? - Ray Schroeder, Inside Higher Ed
We do not know just how, and how quickly, AI will roll out. However, a Gallup Poll released last week showed nearly one-quarter of American workers use AI at least a few times each week. We know that Agentic AI is different from Generative AI. Generative AI is the transactional, question-and-answer form, commonly delivered through a chatbot, that we first saw in OpenAI's ChatGPT a couple of years ago. That remains a powerful tool. Agentic AI can reason, research, plan, control other digital tools, act on your behalf, and complete multiple smart steps. It is capable of taking on a role and delivering outcomes, much like what a person is hired to do. In our jobs, we are often expected not merely to respond to individual questions but to accomplish outcomes and results and then, when possible, to revise our methods to do the job better.
Differences and Trends of Artificial Intelligence in Medical Education: A Comparative Bibliometric Analysis Between China and the International Community - Songhua Ma, et al; Dove Press
To save entry-level jobs from AI, look to the medical residency model - Molly Kinder, Brookings
Wednesday, February 04, 2026
Measuring US workers’ capacity to adapt to AI-driven job displacement - Sam Manning, Tomás Aguirre, Mark Muro, and Shriya Methkupally; Brookings
Existing measures of AI “exposure” overlook workers’ adaptive capacity—i.e., their varied ability to navigate job displacement. Accounting for these factors, around 70% of highly AI-exposed workers (26.5 million out of 37.1 million) are employed in jobs with a high average capacity to manage job transitions if necessary. At the same time, 6.1 million workers, primarily in clerical and administrative roles, lack adaptive capacity due to limited savings, advanced age, scarce local opportunities, and/or narrow skill sets. Of these workers, 86% are women. Geographically, highly AI-exposed occupations with low adaptive capacity make up a larger share of total employment in college towns and state capitals, particularly in the Mountain West and Midwest.
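The headline figures come from crossing an exposure measure with an adaptive-capacity screen; the sketch below is a schematic of that two-way cut, with invented thresholds and field names rather than Brookings' actual methodology. The last line simply re-derives the roughly 70% share from the reported counts.

```python
from dataclasses import dataclass

@dataclass
class Worker:
    exposure: float        # share of tasks exposed to AI, in [0, 1]
    savings_months: float  # months of expenses covered by savings
    age: int
    local_openings: int    # nearby job postings matching the worker's skills
    skill_breadth: int     # distinct occupations the worker's skills transfer to

def highly_exposed(w: Worker, cutoff: float = 0.5) -> bool:
    return w.exposure >= cutoff

def low_adaptive_capacity(w: Worker) -> bool:
    """Illustrative screen: two or more risk factors flag low capacity to adapt."""
    risks = [
        w.savings_months < 3,
        w.age >= 55,
        w.local_openings < 10,
        w.skill_breadth <= 2,
    ]
    return sum(risks) >= 2

# Re-derive the reported share: 26.5M of 37.1M highly exposed workers have high capacity
print(f"{26.5 / 37.1:.1%}")  # ~71.4%, i.e. "around 70%"
```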
McKinsey Quarterly: Digital Edition - Growth
According to McKinsey research, nearly eight in ten organizations now use generative AI—but most have yet to see a meaningful impact on their bottom line. By combining autonomy, planning, memory, and integration, agentic AI has the potential to achieve what many hoped generative AI would: true business transformation through automation of complex processes. This issue’s cover package explores how leaders can capture that potential by rethinking workflows from the ground up—with agents at the center of value creation.
Tuesday, February 03, 2026
Project Genie: Experimenting with infinite, interactive worlds - The Keyword, Google
In August, we previewed Genie 3, a general-purpose world model capable of generating diverse, interactive environments. Even in this early form, trusted testers were able to create an impressive range of fascinating worlds and experiences, and uncovered entirely new ways to use it. The next step is to broaden access through a dedicated, interactive prototype focused on immersive world creation. Starting today, we're rolling out access to Project Genie for Google AI Ultra subscribers in the U.S. (18+). This experimental research prototype lets users create, explore and remix their own interactive worlds.
https://blog.google/innovation-and-ai/models-and-research/google-deepmind/project-genie/
AI Can Raise the Floor for Higher Ed Policymaking - Jacob B. Gross, Inside Higher Ed
On my campus, discussions about artificial intelligence tend to focus on how students should be allowed to use it and what tools the university should invest in. In my own work, I’ve seen both the promise and the pitfalls: AI that speeds up my coding, tidies my writing, and helps me synthesize complex documents, and the occasional student submission that is clearly machine-generated. As I’ve started integrating these tools into my work, I’ve begun asking a different question: How is AI reshaping policymaking in colleges and universities, and how might it influence the way we design, implement and analyze university policy in the future?
The Biggest Trends in Online Learning for 2026 - Busines NewsWire
Monday, February 02, 2026
Gemini 4: 100+ Trillion Parameters, Autonomous AI, Real-Time Perception & the Future of Work - BitBiasedAI
Gemini 4 marks a significant transition in artificial intelligence, moving from models that simply reason through problems to systems capable of autonomous action [02:30]. Unlike previous versions that were primarily reactive, Gemini 4 utilizes "Parallel Hypothesis Exploration" to test multiple solutions simultaneously, allowing it to be proactive rather than just responding to prompts [03:11]. This evolution is supported by Project Astra, which provides real-time multimodal perception—seeing and hearing the user's environment—and Project Mariner, a web-browsing agent that can navigate websites, fill out forms, and complete multi-step tasks like booking travel or managing finances entirely on its own [05:37]. The broader ecosystem is built on robust security and hardware, featuring the Agent Payments Protocol (AP2) to ensure secure, cryptographically signed transactions [08:03]. This infrastructure is powered by the seventh-generation Ironwood TPU, which provides the massive compute power needed for real-time background processing and persistent contextual memory [12:02]. As AI moves toward an "agentic" economy, the primary skill for users will shift from simple prompting to complex orchestration, where individuals act as managers of multiple specialized agents [22:19]. (summary assisted by Gemini 3)
Professional learning in higher education: trends, gaps, and correlations - Ekaterina Pechenkina, Taylor & Francis Online
Sunday, February 01, 2026
Prism is a ChatGPT-powered text editor that automates much of the work involved in writing scientific papers - Will Douglas Heaven, MIT Technology Review
OpenAI just revealed what its new in-house team, OpenAI for Science, has been up to. The firm has released a free LLM-powered tool for scientists called Prism, which embeds ChatGPT in a text editor for writing scientific papers. The idea is to put ChatGPT front and center inside software that scientists use to write up their work in much the same way that chatbots are now embedded into popular programming editors. It’s vibe coding, but for science.
Report: University diplomas losing value to GenAI - Alan Wooten, Rocky Mount Telegraph
GenAI, as it is colloquially known, isn’t being universally rejected by the 1,057 college and university faculty members sampled nationwide by Elon University’s Imagining the Digital Future Center and the American Association of Colleges and Universities Oct. 29-Nov. 26. It is, however, placing higher education at an inflection point. “When more than 9 in 10 faculty warn that generative AI may weaken critical thinking and increase student overreliance, it is clear that higher education is at an inflection point,” said Eddie Watson, vice president for Digital Innovation at the AAC&U. “These findings do not call for abandoning AI, but for intentional leadership — rethinking teaching models, assessment practices and academic integrity so that human judgment, inquiry and learning remain central.”