Existing measures of AI “exposure” overlook workers’ adaptive capacity—i.e., their varied ability to navigate job displacement. Accounting for these factors, around 70% of highly AI-exposed workers (26.5 million out of 37.1 million) are employed in jobs with a high average capacity to manage job transitions if necessary. At the same time, 6.1 million workers, primarily in clerical and administrative roles, lack adaptive capacity due to limited savings, advanced age, scarce local opportunities, and/or narrow skill sets. Of these workers, 86% are women. Geographically, highly AI-exposed occupations with low adaptive capacity make up a larger share of total employment in college towns and state capitals, particularly in the Mountain West and Midwest.
Wednesday, February 04, 2026
McKinsey Quarterly: Digital Edition - Growth
According to McKinsey research, nearly eight in ten organizations now use generative AI—but most have yet to see a meaningful impact on their bottom line. By combining autonomy, planning, memory, and integration, agentic AI has the potential to achieve what many hoped generative AI would: true business transformation through automation of complex processes. This issue’s cover package explores how leaders can capture that potential by rethinking workflows from the ground up—with agents at the center of value creation.
Tuesday, February 03, 2026
Project Genie: Experimenting with infinite, interactive worlds - The Keyword, Google
In August, we previewed Genie 3, a general-purpose world model capable of generating diverse, interactive environments. Even in this early form, trusted testers were able to create an impressive range of fascinating worlds and experiences, and uncovered entirely new ways to use it. The next step is to broaden access through a dedicated, interactive prototype focused on immersive world creation. Starting today, we're rolling out access to Project Genie for Google AI Ultra subscribers in the U.S. (18+). This experimental research prototype lets users create, explore and remix their own interactive worlds.
https://blog.google/innovation-and-ai/models-and-research/google-deepmind/project-genie/
The Biggest Trends in Online Learning for 2026 - Business NewsWire
Monday, February 02, 2026
Gemini 4: 100+ Trillion Parameters, Autonomous AI, Real-Time Perception & the Future of Work - BitBiasedAI
Gemini 4 marks a significant transition in artificial intelligence, moving from models that simply reason through problems to systems capable of autonomous action [02:30]. Unlike previous versions that were primarily reactive, Gemini 4 utilizes "Parallel Hypothesis Exploration" to test multiple solutions simultaneously, allowing it to be proactive rather than just responding to prompts [03:11]. This evolution is supported by Project Astra, which provides real-time multimodal perception—seeing and hearing the user's environment—and Project Mariner, a web-browsing agent that can navigate websites, fill out forms, and complete multi-step tasks like booking travel or managing finances entirely on its own [05:37]. The broader ecosystem is built on robust security and hardware, featuring the Agent Payments Protocol (AP2) to ensure secure, cryptographically signed transactions [08:03]. This infrastructure is powered by the seventh-generation Ironwood TPU, which provides the massive compute power needed for real-time background processing and persistent contextual memory [12:02]. As AI moves toward an "agentic" economy, the primary skill for users will shift from simple prompting to complex orchestration, where individuals act as managers of multiple specialized agents [22:19]. (summary assisted by Gemini 3)
Professional learning in higher education: trends, gaps, and correlations - Ekaterina Pechenkina, Taylor & Francis Online
Sunday, February 01, 2026
Prism is a ChatGPT-powered text editor that automates much of the work involved in writing scientific papers - Will Douglas Heaven, MIT Technology Review
OpenAI just revealed what its new in-house team, OpenAI for Science, has been up to. The firm has released a free LLM-powered tool for scientists called Prism, which embeds ChatGPT in a text editor for writing scientific papers. The idea is to put ChatGPT front and center inside software that scientists use to write up their work in much the same way that chatbots are now embedded into popular programming editors. It’s vibe coding, but for science.
Report: University diplomas losing value to GenAI - Alan Wooten, Rocky Mount Telegraph
GenAI, as it is colloquially known, isn’t being universally rejected by the 1,057 college and university faculty members sampled nationwide Oct. 29-Nov. 26 by Elon University’s Imagining the Digital Future Center and the American Association of Colleges and Universities. It is, however, placing higher education at an inflection point. “When more than 9 in 10 faculty warn that generative AI may weaken critical thinking and increase student overreliance, it is clear that higher education is at an inflection point,” said Eddie Watson, vice president for Digital Innovation at the AAC&U. “These findings do not call for abandoning AI, but for intentional leadership — rethinking teaching models, assessment practices and academic integrity so that human judgment, inquiry and learning remain central.”
Saturday, January 31, 2026
How the best CEOs are meeting the AI moment - McKinsey Podcast
How Americans are using AI at work, according to a new Gallup poll - MATT O’BRIEN and LINLEY SANDERS, AP News
Friday, January 30, 2026
How can boards best help guide companies through the competitive dynamics unleashed by AI? - Aamer Baig, Ashka Dave, Celia Huber, and Hrishika Vuppala, McKinsey
What You MUST Study Now to Stay Relevant in the AI Era - Jensen Huang, Future AI
The video emphasizes that to remain relevant in the AI era, individuals must shift their focus from mastering specific tools to developing high-level human judgment and domain depth. Because AI commoditizes technical skills and general knowledge, the value shifts to those who can navigate the "what" and the "why" rather than just the "how" [02:30]. The speaker suggests a four-layer strategy for staying indispensable: achieving deep domain mastery where your judgment becomes rare, grounding yourself in "evergreen" fundamentals like systems thinking and physics, mastering the art of asking high-quality questions, and maintaining the emotional resilience to pivot quickly when outdated practices fail [04:52]. Ultimately, the goal is to become a "learning system" rather than just a holder of a specific job title [17:14]. As AI moves from digital screens into the physical world—impacting fields like robotics and logistics—there is a growing demand for people who understand physical constraints and can use AI as an amplifier for real-world problem-solving [13:21]. The speaker encourages viewers to move with urgency, using AI as a "sparring partner" to tackle unsolved, high-stakes problems that require human character and first-principles thinking to resolve [07:11]. (Gemini 3 contributed to the summary)
Thursday, January 29, 2026
Claude’s Constitution: Our vision for Claude's character - Anthropic
Claude’s constitution is a detailed description of Anthropic’s intentions for Claude’s values and behavior. It plays a crucial role in our training process, and its content directly shapes Claude’s behavior. It’s also the final authority on our vision for Claude, and our aim is for all our other guidance and training to be consistent with it. The document is written with Claude as its primary audience, so it might read differently than you’d expect. For example, it’s optimized for precision over accessibility, and it covers various topics that may be of less interest to human readers. We also discuss Claude in terms normally reserved for humans (e.g. “virtue,” “wisdom”). We do this because we expect Claude’s reasoning to draw on human concepts by default, given the role of human text in Claude’s training; and we think encouraging Claude to embrace certain human-like qualities may be actively desirable.
Why AI Disclosure Matters at Every Level - Cornelia Walther, Knowledge at Wharton
When a marketing executive uses AI to draft a client proposal, should they disclose it? What about a doctor using AI to analyze medical images, or a teacher generating discussion questions? As artificial intelligence weaves itself into the fabric of professional life, the question of disclosure has evolved from a philosophical curiosity into a pressing business imperative, one that reverberates through every level of human society. At the individual level, AI disclosure touches something that we tend to take for granted: our relationship with authenticity. When we present AI-generated work as entirely our own, we navigate a complex terrain of aspirations, emotions, thoughts, and sensations that make up the human experience. We may aspire to appear competent, fear judgement, try to rationalize what “counts” as our work, or experience discomfort with potential deception.
https://knowledge.wharton.upenn.edu/article/why-ai-disclosure-matters-at-every-level/