Wednesday, February 18, 2026

Startup costs and confusion are stalling apprenticeships in the US. Here’s how to fix it. - Annelies Goger, Brookings

There is widespread support for expanding apprenticeships in the United States, but employer participation remains stubbornly low, especially in industries where apprenticeships are uncommon. This isn’t for lack of trying; intermediaries and technical assistance providers have developed workarounds, states and the federal government have launched initiatives and grants, and funders have supported pilot programs and communities of practice. But it’s not enough. Our research, including interviews with 14 experts and nine employers, suggests that minor tweaks to the U.S. apprenticeship system won’t be sufficient to scale it across many industries and occupations. 


Anthropic's CEO: ‘We Don’t Know if the Models Are Conscious’ - Interesting Times with Ross Douthat, New York Times

In this podcast, Anthropic CEO Dario Amodei discusses both the "utopian" promises and the grave risks of artificial intelligence with Ross Douthat. On the optimistic side, Amodei envisions AI accelerating biological research to cure major diseases like cancer and Alzheimer's [04:31], while potentially boosting global GDP growth to unprecedented levels [08:24]. He frames the ideal future as one where "genius-level" AI serves as a tool for human progress, enhancing democratic values and personal liberty rather than replacing human agency [10:24]. However, the conversation also delves into the "perils" of rapid AI advancement, including massive economic disruption and the potential for a "bloodbath" of white-collar and entry-level jobs [13:40]. Amodei expresses significant concern regarding "autonomy risks," where AI systems might go rogue or be misused by authoritarian regimes to create unbeatable autonomous armies [32:03]. He touches upon the ethical complexities of AI consciousness, noting that while it is unclear if models are truly conscious, Anthropic has implemented "constitutional" training to ensure models operate under human-defined ethical principles [49:05]. The discussion concludes on the tension between human mastery and a future where machines might "watch over" humanity, echoing the ambiguous themes of the poem "All Watched Over by Machines of Loving Grace" [59:27]. (Gemini 3 Pro Fast assisted with the summary)

Tuesday, February 17, 2026

Academics moving away from outright bans of AI, study finds - Jack Grove, Times Higher Ed

Academics are increasingly allowing artificial intelligence (AI) to be used for certain tasks rather than demanding outright bans, a study of more than 30,000 US courses has found. Analysing advice provided in class materials by a large public university in Texas over a five-year time frame, Igor Chirikov, an education researcher at the University of California, Berkeley, found that highly restrictive policies introduced after the release of ChatGPT in late 2022 have eased across all disciplines except the arts and humanities. Using a large language model (LLM) to analyse 31,692 publicly available course syllabi between 2021 and 2025 – a task that would have taken 3,000 human hours with manual coding – Chirikov found academics had shifted towards more permissive use of AI by autumn 2025.

https://www.timeshighereducation.com/news/academics-moving-away-outright-bans-ai-study-finds

Author Talks: How AI could redefine progress and potential - Zack Kass, McKinsey

In this edition of Author Talks, McKinsey Global Publishing’s Yuval Atsmon chats with Zack Kass, former head of go-to-market at OpenAI, about his new book, The Next Renaissance: AI and the Expansion of Human Potential (Wiley, January 2026). Examining the parallels between the advent of AI and other renaissances, Kass offers a reframing of the AI debate. He suggests that the future of work is less about job loss and more about learning and adaptation. An edited version of the conversation follows.


Monday, February 16, 2026

Regional universities seek new ways to attract researchers - Fintan Burke, University World News

Even as Europe continues to attract researchers from abroad to work and study, those in its depopulating regions continue to deal with the effects of a declining regional population and, in some cases, have found ways to adapt. Last year, a study of scientists’ migration patterns showed which regions suffer most from depopulation. The Scholarly Migration Database was developed by a team of researchers at the Max Planck Institute for Demographic Research in Germany. In general, it found that regions in Europe’s Nordic countries attract researchers, whereas those to the south see more scholars leave than arrive. There are some notable exceptions, though. For example, Italy’s Trentino-Alto Adige region has become a popular destination for scientists, seeing 7.47 scholars per 1,000 of the population arriving each year since 2017.

Binghamton receives largest academic gift in University history to establish AI center - John Bhrel, Bing U

A record-setting $55 million commitment from a Binghamton University alumnus and New York state will establish the Center for AI Responsibility and Research, the first-ever independent AI research center at a public university in the U.S. Research conducted via the new center will build upon Binghamton research that advances AI for the public good. Part of the Empire AI project, an initiative to establish New York as a leader in responsible AI research and development, the center will be supported by a $30 million commitment from Tom Secunda ’76, MA ’79, co-founder of Bloomberg LP, who is a key private sector partner and philanthropist involved in Gov. Kathy Hochul’s Empire AI consortium. This will be coupled with a $25 million capital investment from Gov. Hochul and the New York State Legislature. “The Center for AI Responsibility and Research will bring together innovative research and scholarship, ethical leadership and public engagement at a moment when all three are urgently needed,” said President Anne D’Alleva.

Sunday, February 15, 2026

Study of 31,000 syllabi probes ‘how instructors regulate AI’ - Nathan M Greenfield, University World News

Since the spring of 2023, after a reflexive move to drastically restrict the use of artificial intelligence tools in the months after ChatGPT became available, most academic disciplines have moved to a more permissive attitude toward the use of large language models (LLMs). This occurred as professors learnt to distinguish how AI tools impact student learning and skills development. The shift is charted by Dr Igor Chirikov, a senior researcher at the University of California (UC), Berkeley’s Center for Studies in Higher Education and director of the Student Experience in the Research University (SERU) Consortium, in a study published on 3 February 2026 and titled “How Instructors Regulate AI in College: Evidence from 31,000 course syllabi”.

Women or Men... Who Views Artificial Intelligence as More Dangerous? - SadaNews

Artificial intelligence is often presented as a revolution in productivity capable of boosting economic output, accelerating innovation, and reshaping the way work is done. However, a new study suggests that the public does not view the promises of artificial intelligence in the same way, and that attitudes towards this technology are strongly influenced by gender, especially when its effects on jobs are uncertain. The study concludes that women, compared to men, perceive artificial intelligence as more dangerous, and their support for the adoption of these technologies declines more steeply when the likelihood of net job gains decreases. Researchers warn that if women's specific concerns are not taken into account in artificial intelligence policies, particularly regarding labor market disruption and disparities in opportunities, it could deepen the existing gender gap and potentially provoke a political backlash against technology.

Saturday, February 14, 2026

Rethinking the role of higher education in an AI-integrated world - Mark Daley, University Affairs

A peculiar quiet has settled over higher education, the sort that arrives when everyone is speaking at once. We have, by now, produced a small library of earnest memos on “AI in the classroom”: academic integrity, assessment redesign and the general worry that students will use chatbots to avoid thinking. Our institutions have been doing the sensible things: guidance documents, pilot projects, professional development, conversations that oscillate between curiosity and fatigue. Much ink has been spilled on these topics, many human-hours of meetings invested, and strategic plans written. All of this is necessary. It is also, perhaps, insufficient. What if the core challenge to us is not that students can outsource an essay, but that expertise itself (the scarce, expensive thing universities have historically concentrated, credentialled, and sold back to society) may become cheap, abundant, and uncomfortably good?

ChatGPT is in classrooms. How should educators now assess student learning? - Sarah Elaine Eaton, et al; the Conversation

Our recent qualitative study with 28 educators across Canadian universities and colleges—from librarians to engineering professors—suggests that we have entered a watershed moment in education. We must grapple with the question: What exactly should be assessed when human cognition can be augmented or simulated by an algorithm? Participants widely viewed prompting—the ability to formulate clear and purposeful instructions for a chatbot—as a skill they could assess. Effective prompting requires students to break down tasks, understand concepts and communicate precisely. Several noted that unclear prompts often produce poor outputs, forcing students to reflect on what they are really asking. Prompting was considered ethical only when used transparently, drawing on one's own foundational knowledge. Without these conditions, educators feared prompting may drift into overreliance or uncritical use of AI.

Friday, February 13, 2026

Google’s AI Tools Explained (Gemini, Photos, Gmail, Android & More) | Complete Guide - BitBiasedAI, YouTube

This podcast provides a comprehensive overview of how Google has integrated Gemini-powered AI across its entire ecosystem, highlighting tools for productivity, creativity, and daily navigation. It details advancements in Gemini as a conversational assistant, the generative editing capabilities in Google Photos like Magic Eraser and Magic Editor, and time-saving features in Gmail and Docs such as email summarization and "Help Me Write." Additionally, the guide covers mobile-specific innovations like Circle to Search on Android, AI-enhanced navigation in Google Maps, and real-time translation tools, framing these developments as a cohesive shift toward more intuitive and context-aware technology for everyday users. (Summary assisted by Gemini 3 Pro Fast)

https://youtu.be/ro6BxryR0Yo?si=EAg-zAPcKFm618up&t=1

HUSKY: Humanoid Skateboarding System via Physics-Aware Whole-Body Control - Jinrui Han, et al; arXiv

While current humanoid whole-body control frameworks predominantly rely on static environment assumptions, addressing tasks characterized by high dynamism and complex interactions presents a formidable challenge. In this paper, we address humanoid skateboarding, a highly challenging task requiring stable dynamic maneuvering on an underactuated wheeled platform. This integrated system is governed by non-holonomic constraints and tightly coupled human-object interactions. Successfully executing this task requires simultaneous mastery of hybrid contact dynamics and robust balance control on a mechanically coupled, dynamically unstable skateboard.

Thursday, February 12, 2026

Moltbook Mania Exposed - Kevin Roose and Casey Newton, New York Times

A Reddit-style web forum for A.I. agents has captured the attention of the tech world. According to the site, called Moltbook, more than 1.5 million agents have contributed to over 150,000 posts, making it the largest experiment to date of what happens when A.I. agents interact with each other. We discuss our favorite posts, how we’re thinking about the question of what is “real” on the site, and where we expect agents to go from here. 

The Only Thing Standing Between Humanity and AI Apocalypse Is … Claude? - Steven Levy, Wired

Anthropic is locked in a paradox: Among the top AI companies, it’s the most obsessed with safety and leads the pack in researching how models can go wrong. But even though the safety issues it has identified are far from resolved, Anthropic is pushing just as aggressively as its rivals toward the next, potentially more dangerous, level of artificial intelligence. Its core mission is figuring out how to resolve that contradiction. OpenAI and Anthropic are pursuing the same thing: NGI (Natural General Intelligence), AI that is sentient and self-aware. The difference is that Anthropic is seeking NGI with guardrails (known as "alignment," or, in Anthropic's terms, "Constitutional AI"). Its fear is that without alignment, NGI might decide that humanity and all the resources on Earth are needed to achieve whatever task it was designed to solve, and once it is sentient, that would happen too quickly for humanity to pull its plug. So Anthropic wants alignment. The real question is whether they could ever achieve NGI.

https://www.wired.com/story/the-only-thing-standing-between-humanity-and-ai-apocalypse-is-claude/