Thursday, October 09, 2025

Governor Newsom signs SB 53, advancing California’s world-leading artificial intelligence industry - Governor Gavin Newsom

Governor Newsom today signed into law Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA), authored by Senator Scott Wiener (D-San Francisco) – legislation carefully designed to enhance online safety by installing commonsense guardrails on the development of frontier artificial intelligence models, helping build public trust while also continuing to spur innovation in these new technologies. The new law builds on recommendations from California’s first-in-the-nation report, called for by Governor Newsom and published earlier this year — and helps advance California’s position as a national leader in responsible and ethical AI, the world’s fourth-largest economy, the birthplace of new technology, and the top pipeline for tech talent.

Udemy Banks on Artificial Intelligence to Power Online Learning - Bloomberg Businessweek

Udemy is in the midst of pivoting from a leading online learning platform to an AI-powered skills acceleration platform built for individuals and organizations. The company says it has expanded its focus from learning to "reskilling," which includes assessment, role plays, and new learning experiences. The company runs what it calls a "two-sided" business: a marketplace for consumers as well as Udemy Business for enterprises, which is designed to help businesses become more competitive.

Wednesday, October 08, 2025

50 AI agents get their first annual performance review - 6 lessons learned - Joe McKendrick, ZDnet

"Agentic AI efforts that focus on fundamentally reimagining entire workflows -- that is, the steps that involve people, processes, and technology -- are more likely to deliver a positive outcome," according to the review. Start with addressing key user pain points, the co-authors suggest. Organizations with document-intensive workflows, such as insurance companies or legal firms, for example, benefit from having agents handle tedious steps. There will always be a need for human workers to "oversee model accuracy, ensure compliance, use judgment, and handle edge cases," the co-authors emphasized. Redesign work "so that people and agents can collaborate well together. Without that focus, even the most advanced agentic programs risk silent failures, compounding errors, and user rejection." 

The future of work is agentic - McKinsey

Think about your org chart. Now imagine it features both your current colleagues—humans, if you’re like most of us—and AI agents. That’s not science fiction; it’s happening—and it’s happening relatively quickly, according to McKinsey Senior Partner Jorge Amar. In this episode of McKinsey Talks Talent, Jorge joins McKinsey talent leaders Brooke Weddle and Bryan Hancock and Global Editorial Director Lucia Rahilly to talk about what these AI agents are, how they’re being used, and how leaders can prepare now for the workforce of the not-too-distant future.

Tuesday, October 07, 2025

Factors influencing undergraduates’ ethical use of ChatGPT: a reasoned goal pursuit approach - Radu Bogdan Toma & Iraya Yánez-Pérez, Interactive Learning Environments

The widespread use of large language models, such as ChatGPT, has changed the learning behaviours of undergraduate students, raising issues of academic dishonesty. This study investigates the factors that influence the ethical use of ChatGPT among undergraduates using the recently proposed Theory of Reasoned Goal Pursuit. Through a qualitative elicitation procedure, 26 salient beliefs were identified, representing procurement and approval goals, advantages and disadvantages, the social influence, and factors facilitating and hindering ethical use of ChatGPT. A subsequent, two-wave quantitative survey provided promising evidence for the theory, revealing that positive attitudes and subjective norms emerged as key antecedents of motivation to use ChatGPT ethically.

Linking digital competence, self-efficacy, and digital stress to perceived interactivity in AI-supported learning contexts - Jiaxin Ren, Juncheng Guo & Huanxi Li, Nature

As artificial intelligence technologies become more integrated into educational contexts, understanding how learners perceive and interact with such systems remains an important area of inquiry. This study investigated associations between digital competence and learners’ perceived interactivity with artificial intelligence, considering the potential mediating roles of information retrieval self-efficacy and self-efficacy for human–robot interaction, as well as the potential moderating role of digital stress. Drawing on constructivist learning theory, the technology acceptance model, cognitive load theory, the identical elements theory, and the control–value theory of achievement emotions, a moderated serial mediation model was tested using data from 921 Chinese university students. The results indicated that digital competence was positively associated with perceived interactivity, both directly and indirectly through a sequential pathway involving the two forms of self-efficacy.

Monday, October 06, 2025

Sans Safeguards, AI in Education Risks Deepening Inequality - Government Technology

A new UNESCO report cautions that artificial intelligence has the potential to threaten students’ access to quality education. The organization calls for a focus on people, to ensure digital tools enhance education. While AI and other digital technology hold enormous potential to improve education, a new UNESCO report warns they also risk eroding human rights and worsening inequality if deployed without deliberately robust safeguards. Digitalization and AI in education must be anchored in human rights, UNESCO argued in the report, AI and Education: Protecting the Rights of Learners, and the organization urged governments and international organizations to focus on people, not technology, to ensure digital tools enhance rather than endanger the right to education.

https://www.govtech.com/education/k-12/sans-safeguards-ai-in-education-risks-deepening-inequality

What's your college's AI policy? Find out here. - Chase DiBenedetto, Mashable

As part of its ChatGPT for Education initiative, OpenAI has announced educational partnerships with Harvard Business School, the University of Pennsylvania's Wharton School, Duke, University of California, Los Angeles (UCLA), UC San Diego, UC Davis, Indiana University, Arizona State University, Mount Sinai's Icahn School of Medicine, and the entire California State University (CSU) System — OpenAI's collaboration with CSU schools is the largest ChatGPT deployment yet. But there are dozens more, an OpenAI spokesperson told Mashable, that haven't made their ChatGPT partnerships public. Ed Clark, chief information officer for CSU, told Mashable that the decision to partner with OpenAI came from a survey of students that showed many were already signing up for AI accounts using their student emails — faculty and staff were too.

Sunday, October 05, 2025

What your students are thinking about artificial intelligence - Florencia Moore & Agostina Arbia, Times Higher Education

Students have been quick to adopt and integrate GenAI into their study practices, using it as a virtual assistant to enhance and enrich their learning. At the same time, they sometimes rely on it as a substitute for their own ideas and thinking, since GenAI can complete academic tasks in a matter of seconds. While the first or even second iteration may yield a hallucinated or biased response, with prompt refinement and guidance, it can produce results very close to our expectations almost instantly.

https://www.timeshighereducation.com/campus/what-your-students-are-thinking-about-artificial-intelligence

Saturday, October 04, 2025

Syracuse University adopts Claude for Education - EdScoop

Syracuse University, the private research institution in New York, this week announced that it’s formed a partnership with Anthropic, the company behind the popular Claude chatbot, to provide students, faculty and staff with a version of the software designed for use in higher education. “Expanding access to Claude for all members of our community is another step in making Syracuse University the most digitally connected campus in America,” Jeff Rubin, senior vice president and chief digital officer, said in a press release. “By equipping every student, faculty member and staff member with Claude, we’re not only fueling innovation, but also preparing our community to navigate, critique and co-create with AI in real-world contexts.”

Colleges are giving students ChatGPT. Is it safe? - Rebecca Ruiz and Chase DiBenedetto - Mashable

This fall, hundreds of thousands of students will get free access to ChatGPT, thanks to a licensing agreement between their school or university and the chatbot's maker, OpenAI. When the partnerships in higher education became public earlier this year, they were lauded as a way for universities to help their students familiarize themselves with an AI tool that experts say will define their future careers. At California State University (CSU), a system of 23 campuses with 460,000 students, administrators were eager to team up with OpenAI for the 2025-2026 school year. Their deal provides students and faculty access to a variety of OpenAI tools and models, making it the largest deployment of ChatGPT for Education, or ChatGPT Edu, in the country. 

Friday, October 03, 2025

We’re introducing GDPval, a new evaluation that measures model performance on economically valuable, real-world tasks across 44 occupations. - OpenAI

We found that today’s best frontier models are already approaching the quality of work produced by industry experts. To test this, we ran blind evaluations where industry experts compared deliverables from several leading models—GPT‑4o, o4-mini, OpenAI o3, GPT‑5, Claude Opus 4.1, Gemini 2.5 Pro, and Grok 4—against human-produced work. Across 220 tasks in the GDPval gold set, we recorded when model outputs were rated as better than (“wins”) or on par with (“ties”) the deliverables from industry experts, as shown in the bar chart below.... We also see clear progress over time on these tasks. Performance has more than doubled from GPT‑4o (released spring 2024) to GPT‑5 (released summer 2025), following a clear linear trend. In addition, we found that frontier models can complete GDPval tasks roughly 100x faster and 100x cheaper than industry experts.

The AI Institute for Adult Learning and Online Education - Georgia Tech

The AI Institute for Adult Learning and Online Education (AI-ALOE), led by Georgia Tech and funded by the National Science Foundation, is a multi-institutional research initiative advancing the use of artificial intelligence (AI) to transform adult learning and online education. Through collaborative research and innovation, AI-ALOE develops AI technologies and strategies to enhance teaching, personalize learning, and expand educational opportunities at scale. Since its launch, AI-ALOE has developed seven innovative AI technologies, deployed across more than 360 classes at multiple institutions, reaching over 30,000 students. Recent research news indicated that Jill Watson, our virtual teaching assistant, outperforms ChatGPT in real classrooms. In addition, our collaborative teams have produced about 160 peer-reviewed publications, advancing both research and practice in AI-augmented learning. We invite you to join us for our upcoming virtual research showcase and discover the latest innovations and breakthroughs in AI for education.

Thursday, October 02, 2025

Operationalize AI Accountability: A Leadership Playbook - Kevin Werbach, Knowledge at Wharton

Goal: Deploy AI systems with confidence by ensuring they are fair, transparent, and accountable — minimizing risk and maximizing long-term value.
Nano Tool: As organizations accelerate their use of AI, the pressure is on leaders to ensure these systems are not only effective but also responsible. A misstep can result in regulatory penalties, reputational damage, and loss of trust. Accountability must be designed in from the start — not bolted on after deployment.

Strengthening our Frontier Safety Framework - Four Flynn, Helen King, Anca Dragan, Google DeepMind

AI breakthroughs are transforming our everyday lives, from advancing mathematics, biology and astronomy to realizing the potential of personalized education. As we build increasingly powerful AI models, we’re committed to responsibly developing our technologies and taking an evidence-based approach to staying ahead of emerging risks. Today, we’re publishing the third iteration of our Frontier Safety Framework (FSF) — our most comprehensive approach yet to identifying and mitigating severe risks from advanced AI models. This update builds upon our ongoing collaborations with experts across industry, academia and government. We’ve also incorporated lessons learned from implementing previous versions and evolving best practices in frontier AI safety.

We urgently call for international red lines to prevent unacceptable AI risks. - AI Red Lines

Some advanced AI systems have already exhibited deceptive and harmful behavior, and yet these systems are being given more autonomy to take actions and make decisions in the world. Many experts, including those at the forefront of development, warn that, left unchecked, it will become increasingly difficult to exert meaningful human control in the coming years. Governments must act decisively before the window for meaningful intervention closes. An international agreement on clear and verifiable red lines is necessary for preventing universally unacceptable risks. These red lines should build upon and enforce existing global frameworks and voluntary corporate commitments, ensuring that all advanced AI providers are accountable to shared thresholds. We urge governments to reach an international agreement on red lines for AI — ensuring they are operational, with robust enforcement mechanisms — by the end of 2026.

Wednesday, October 01, 2025

AI Hallucinations May Soon Be History - Ray Schroeder, Inside Higher Ed

On Sept. 14, OpenAI researchers published a not-yet-peer-reviewed paper, “Why Language Models Hallucinate,” on arXiv. Gemini 2.5 Flash summarized the findings of the paper: "Systemic Problem: Hallucinations are not simply bugs but a systemic consequence of how AI models are trained and evaluated. Evaluation Incentives: Standard evaluation methods, particularly binary grading systems, reward models for generating an answer, even if it’s incorrect, and punish them for admitting uncertainty. Pressure to Guess: This creates a statistical pressure for large language models (LLMs) to guess rather than say “I don’t know,” as guessing can improve test scores even with the risk of being wrong."
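
To make that incentive concrete (illustrative arithmetic only, not figures from the paper): under a binary grader that awards 1 point for a correct answer and 0 otherwise, guessing with any chance of being right beats abstaining; only when wrong answers carry a penalty does "I don't know" ever become the better strategy, as the sketch below shows.

```latex
% Illustrative scoring assumptions, not the paper's own notation:
% binary grading awards +1 for a correct answer, 0 otherwise (including "I don't know").
\[
\mathbb{E}[\text{score} \mid \text{guess}] = p \cdot 1 + (1-p)\cdot 0 = p \;\ge\; 0 = \mathbb{E}[\text{score} \mid \text{abstain}]
\]
% With a penalty of \lambda for each wrong answer and 0 for abstaining,
% guessing is only worthwhile when confidence p clears a threshold.
\[
\mathbb{E}[\text{score} \mid \text{guess}] = p - \lambda(1-p) > 0
\iff p > \frac{\lambda}{1+\lambda}
\]
```

With an error penalty of, say, λ = 3, guessing only pays off when the model is right more than 75 percent of the time; under the binary rule it pays off at any confidence above zero, which is the "pressure to guess" the summary describes.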

AI is changing how Harvard students learn: Professors balance technology with academic integrity - MSN

AI has quickly become ubiquitous at Harvard. According to The Crimson’s 2025 Faculty of Arts and Sciences survey, nearly 80% of instructors reported encountering student work they suspected was AI-generated—a dramatic jump from just two years ago. Despite this, faculty confidence in identifying AI output remains low. Only 14% of respondents felt “very confident” in their ability to distinguish human from AI work. Research from Pennsylvania State University underscores this challenge: humans can correctly detect AI-generated text roughly 53% of the time, only slightly better than flipping a coin.


Tuesday, September 30, 2025

Who’s funding the AI data center boom? - McKinsey

Millions of servers run 24/7 to power the AI boom in data centers across the globe, and demand isn’t slowing down. McKinsey research shows that by 2030, data centers are projected to require $6.7 trillion in capital expenditures worldwide to keep pace. Who are the investors behind this multitrillion-dollar race to fund AI compute power? McKinsey’s Mark Patel, Pankaj Sachdeva, and coauthors share five key investor archetypes, each with unique opportunities, challenges, and potential solutions.

Making the Case for Technology To Drive Higher Ed Enrollment - Tony Digrazia, Ed Tech

The past nine months have only worsened the issues higher education is facing, leaving university IT teams with fewer staffers working for lower salaries amid overall tightening budgets, some of it in response to an array of new financial challenges. That’s not to mention the challenges that were ever-present in the years leading up to 2025. The enrollment cliff is here, and while overall enrollments have steadied after a yearslong decline, the full impacts of the enrollment cliff may not be felt until a couple more freshman classes are enrolled. And faith in higher education has never been lower, with a meager 35% of Americans saying college is “very important,” according to a newly released Gallup poll.

Monday, September 29, 2025

College Students’ Test Scores Soared After ChatGPT. Their Writing? Not So Much - Steve Fink, Study Finds

Exam scores jumped nearly 22 points after ChatGPT’s launch, while writing project marks dropped by about 10.
Passing students generally improved, but failing students showed mixed results—better exams but lower overall marks.
Creative research proposals showed no change, highlighting tasks where AI offers little advantage.
Universities face a dilemma: AI boosts the easiest-to-grade assessments, while deeper tasks require costly human review.

Author Talks: The key to ideation? Start with the answer, not the problem - McKinsey

What do you mean by ‘begin with the answer’? They don’t call it a “creative leap” for nothing. Nobody talks about a series of steps that lead to a creation, a genuinely creative idea. The concept of divergent thinking, or “going wide,” is key to creating truly new ideas. Most of what we typically do is convergent thinking: reducing, criticizing, judging, deciding. Once you have an answer, that’s exciting. Then you can usually work back through it to prove it should work in theory. That’s what I mean by starting with the answer. Everyone likes to think that you can start with an analysis of the data and come up with an insight. Then you can start talking about possible solutions, proceed in a linear process of steps, and arrive at a great idea. I just haven’t experienced that ideas happen in that way. Great ideas come out of generating lots of ideas, most of which will be bad, one of which—just one of which—could be brilliant.

https://www.mckinsey.com/featured-insights/mckinsey-on-books/author-talks-the-key-to-ideation-start-with-the-answer-not-the-problem

Sunday, September 28, 2025

Detecting and reducing scheming in AI models - OpenAI

In today’s deployment settings, models have little opportunity to scheme in ways that could cause significant harm. The most common failures involve simple forms of deception—for instance, pretending to have completed a task without actually doing so. We've put significant effort into studying and mitigating deception and have made meaningful improvements in GPT‑5⁠ compared to previous models. For example, we’ve taken steps to limit GPT‑5’s propensity to deceive, cheat, or hack problems—training it to acknowledge its limits or ask for clarification when faced with impossibly large or under-specified tasks and to be more robust to environment failures—though these mitigations are not perfect and continued research is needed.


Public views on being human in 2035 - Lee Rainie, Elon University

This July 2025 survey by the Imagining the Digital Future Center found that American adults expect the changes in many human capacities in the coming AI-influenced decade will be potent and mostly negative. Americans said the widespread adoption of AI systems will have significant impact overall on human capacities in the coming decade. About half (52%) of American adults surveyed said the impact AI will have on human capacities by the year 2035 will be revolutionary or deep and meaningful; 38% said the changes will be clear and distinct; and 7% said they expect that the changes will be barely perceptible. Just 3% said the impact will be inconsequential.

Saturday, September 27, 2025

Want to future-proof your campus? Start here - Kevin Sanders, University Business

Higher education is at a crossroads. Our institutions are wrestling with enrollment cliffs, questions of relevance, technological disruption and the age-old challenge of governance. Boards debate how to remain solvent. Presidents strategize about new programs and partnerships. Provosts explore AI, online expansion or micro-credentials. Everyone is reaching for levers they hope will strengthen the institution. Yet beneath all these efforts lies a single, urgent question: How do we make our institutions stronger in a time of change? In my experience, the answer is deceptively simple: develop leaders.

https://universitybusiness.com/want-to-future-proof-your-campus-start-here/

A day in the life of a student, 2045 - John Johnston, eCampus News

It is 6:45 a.m. in the year 2045, and Maya wakes to the gentle chime of her AI-integrated learning assistant. The device, embedded into her home’s wall system, has already analyzed her biometric data, sleep cycle, and class schedule to recommend a custom morning routine. Today’s recommendation is a brief guided meditation, followed by a protein-based breakfast delivered via drone from the university’s dining cooperative. Before her feet touch the floor, her education has already begun.

https://www.ecampusnews.com/ai-in-education/2025/09/05/a-day-in-the-life-of-a-student-2045/

Friday, September 26, 2025

Students are using AI tools instead of building foundational skills - but resistance is growing - Joe McKendrick, ZDnet

Whether you are studying information technology, teaching it, or creating the software that powers learning, it's clear that artificial intelligence is challenging and changing education. Now, questions are being asked about using AI to boost learning, an approach that has implications for long-term career skills and privacy.

There is growing concern about student dependence on AI.
Today's computer science grads might understand less about IT systems.
Some technology professors are pushing back against AI in classrooms.

Google narrows the gap with ChatGPT as millions tap Nano Banana to make hyperrealistic 3D figurines - Robert Hart, The Verge

The surge has likely propelled Gemini to the top of various app stores around the world. At the time of writing, Gemini is the leading iPhone app on Apple’s App Stores in the US, UK, Canada, France, Australia, Germany, and Italy. In many cases, it reached the prime position by surpassing OpenAI’s ChatGPT, which now sits in second place. On September 11th, Woodward said “India has found” the image editor and later said that Google was going to have to implement “temporary limits” on usage in order to manage extreme demand. “It’s a full-on stampede to use” Gemini, he said, adding that the “team is doing heroics to keep the system up and running.” So, what’s driving the surge? While a variety of edits have been popular, the runaway hit of Nano Banana has people turning themselves — or their pets — into 3D figurines. 

https://www.theverge.com/news/778106/google-gemini-nano-banana-image-editor

Thursday, September 25, 2025

How this AI chatbot helps students navigate their first semester - Alcino Donadel, University Business

 Western New England University’s newest staff member is working around the clock to check in on students and instantly connect them with the campus resources they need. Luckily, no human is losing any sleep. Spirit, the university’s generative AI chatbot, is entering its second full academic year of fielding mobile text messages from students. Over the past 12 months, machine and man exchanged over 17,000 messages, with students initiating over 2,200 questions. “Sometimes it’s challenging to go find a staff member in some random building, but messaging the chatbot ensures that their voice is going to be heard,” says Jeanne Powers, executive director of the Student Hub, the university’s student support center. 

Google Notebook LM’s Capabilities and Impact: Expert analysis - Agentic Brain, AI Report

The rapid expansion of artificial-intelligence tools has produced dozens of note-taking and research assistants, but few have delivered a coherent, end-to-end learning experience. Google’s Notebook LM stands out because it blends multimodal analysis, grounded responses and interactive learning aids into a single platform. Released in 2023 and continuously updated, Notebook LM has quickly become one of the most impressive AI-enhanced research agents available today. Unlike traditional chatbots that draw on general internet knowledge, Notebook LM grounds every response in the documents you provide. Uploads can include PDFs, Google Docs, Google Slides, websites, YouTube videos, audio files or plain text. Once added, the system becomes an “instant expert” on your materials. You can converse with it in a familiar chat interface or draw on any of its incredibly diverse capabilities.

Wednesday, September 24, 2025

Here’s how to tackle this root cause for tech burnout - Alcino Donadel, University Business

Back-end operations are undergoing a period of upheaval as campus business units adopt new technology to enhance staff productivity. Uneven implementation can isolate staff and cause burnout, blunting the promise of new tools, according to an analysis of four recent reports covered by University Business. The reports examined staff sentiment and their work environments across various offices, including financial aid, cybersecurity, IT, enrollment management and teaching and learning. While the scope of each report differed, the surveys painted a picture of staff who are aware of (and often willing to adopt) new technologies, but are frequently hampered by insufficient institutional support. Among the most common staff demands was professional development in artificial intelligence.

https://universitybusiness.com/heres-how-to-tackle-this-root-cause-for-tech-burnout/

Researchers ‘polarised’ over use of AI in peer review - Tom Williams, Times Higher Ed

A poll by IOP Publishing found that there has been a big increase in the number of scholars who are positive about the potential impact of new technologies on the process, which is often criticised for being slow and overly burdensome for those involved. A total of 41 per cent of respondents now see the benefits of AI, up from 12 per cent in a similar survey carried out last year. But this is almost equal to the proportion with negative opinions, which stands at 37 per cent after a 2 per cent year-on-year increase.


Tuesday, September 23, 2025

First-of-its-kind AI tool to save 75% of academics’ time - Sara AlKuwari, University World News

Hamdan Bin Mohammed Smart University (HBMSU) in the United Arab Emirates has announced the launch of the region’s first AI-powered academic agent, a pioneering tool designed to save up to 75% of faculty members’ time while enhancing students’ academic achievement by 40%, marking a significant step in reshaping the future of higher education, writes Sara AlKuwari for Khaleej Times. The initiative, titled Artificial Intelligence Agent for Every Faculty, is the first of its kind in the UAE and the wider region. It integrates advanced AI capabilities into higher education in line with the UAE National Artificial Intelligence Strategy 2031 and Education Strategy 2033.

White House AI Task Force Positions AI as Top Education Priority - Julia Gilban-Cohen, GovTech

When Trump administration officials met with ed-tech leaders at the White House last week to discuss the nation’s vision for artificial intelligence in American life, they repeatedly underscored one central message: Education must be at the heart of the nation’s AI strategy. Established by President Trump’s April 2025 executive order, the White House Task Force on AI Education is chaired by director of science and technology policy Michael Kratsios, and is tasked with promoting AI literacy and proficiency among America’s youth and educators, organizing a nationwide AI challenge and forging public-private partnerships to provide AI education resources to K-12 students.

Monday, September 22, 2025

How to use ChatGPT at university without cheating: ‘Now it’s more like a study partner’ - the Guardian

According to a recent report from the Higher Education Policy Institute, almost 92% of students are now using generative AI in some form, a jump from 66% the previous year. “Honestly, everyone is using it,” says Magan Chin, a master’s student in technology policy at Cambridge, who shares her favourite AI study hacks on TikTok, where tips range from chat-based study sessions to clever note-sifting prompts. “It’s evolved. At first, people saw ChatGPT as cheating and [thought] that it was damaging our critical thinking skills. But now, it’s more like a study partner and a conversational tool to help us improve.”


OpenAI's fix for hallucinations is simpler than you think - Webb Wright, ZDnet

The solution, according to OpenAI, is therefore to focus not on feeding models more accurate information, but on adjusting how their performance is assessed. Since a binary system of grading a model’s output as either right or wrong is supposedly fueling hallucination, the OpenAI researchers say that the AI industry must instead start rewarding models when they express uncertainty. After all, truth does not exist in black-and-white in the real world, so why should AI be trained as if it does? Running a model through millions of examples on the proper arrangement of subjects, verbs, and predicates will make it more fluent in its use of natural language, but as any living human being knows, reality is open to interpretation. In order to live functionally in the world, we routinely have to say, "I don't know."

Sunday, September 21, 2025

AI a 'Game Changer' for Assistance, Q&As in NJ Classrooms - Brianna Kudisch, GovTech

An explosion of startups and established companies are offering slick new AI products and targeted training to educators and school administrators. For instance, the nation’s second largest teachers’ union recently announced a $23 million initiative with Microsoft and OpenAI, an artificial intelligence company, to provide free access to AI and training to all American Federation of Teachers members, starting with K-12 educators.

https://www.govtech.com/education/k-12/ai-a-game-changer-for-assistance-q-as-in-nj-classrooms

Gemini for Education catching on with higher ed, Google says - Edscoop

Google announced this week that its Gemini for Education product is catching on with higher education institutions, having been adopted into “academic and administrative frameworks” by more than 1,000 colleges and universities across the United States. Gemini for Education, which the search giant claims will “transform teaching and learning,” offers a generative artificial intelligence chatbot that is designed to provide assistance to students, help prepare for exams, create practice materials and design lesson plans. According to Google, the tool is now available free of charge.

Saturday, September 20, 2025

The Perceived Impact of Artificial Intelligence on Academic Learning - Mariana Dogaru, Frontiers in Artificial Intelligence

The results indicate that ChatGPT helps students work faster and understand concepts better, but the difficulty in checking sources raises ethical concerns like plagiarism. By examining ChatGPT's role in STEM education, this study points out the need for AI literacy training and institutional policies to ensure responsible use. The findings offer practical insights for educators to integrate AI tools effectively, fostering critical thinking and academic integrity in technology-driven learning environments.

Got AI skills? You can earn 43% more in your next job - and not just for tech work - Webb Wright, ZDnet

Demand for AI skills is on the rise across industries. A single AI skill makes a huge difference in listed salaries. Different industries are looking for different AI skills. As businesses race to adopt AI, they're placing a higher premium on job candidates who know their way around the technology. A recent study from labor market research firm Lightcast found that jobs requiring AI-related skills offer higher annual salaries than those that don't. This is true not only in tech-heavy industries like IT and computer science but also across a range of other sectors.

Friday, September 19, 2025

Did OpenAI just solve hallucinations? - Matthew Berman, YouTube

The video explains that hallucinations are ingrained in the models' construction, functioning more as features than bugs. This is compared to human behavior, where guessing on a test might be rewarded, leading models to guess rather than admit uncertainty. The core issue is the absence of a system that rewards models for expressing uncertainty or providing partially correct answers. The proposed solution involves creating models that only answer questions when they meet a certain confidence threshold and implementing a new evaluation system. This system would reward correct answers, penalize incorrect ones, and assign a neutral score for "I don't know" responses. The video concludes by suggesting that the solution lies in revising how models are evaluated and how reinforcement learning is applied. (summary provided in part by Gemini 2.5 Pro)
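
As a rough illustration of the evaluation change described above, here is a minimal sketch in Python (hypothetical function names, threshold, and penalty values, not OpenAI's actual grader): the model answers only when its confidence clears a threshold, and the grader rewards correct answers, penalizes wrong ones, and scores "I don't know" as neutral.

```python
def decide(answer: str, confidence: float, threshold: float = 0.75) -> str:
    """Answer only when confidence clears the threshold; otherwise abstain."""
    return answer if confidence >= threshold else "I don't know"

def grade(response: str, correct_answer: str, penalty: float = 3.0) -> float:
    """Reward correct answers, penalize wrong ones, score abstentions as neutral."""
    if response == "I don't know":
        return 0.0  # abstaining is neither rewarded nor punished
    return 1.0 if response == correct_answer else -penalty

# Example: a low-confidence guess is worse in expectation than abstaining.
print(grade(decide("Paris", 0.9), "Paris"))   # 1.0  -> confident and correct
print(grade(decide("Lyon", 0.4), "Paris"))    # 0.0  -> abstains instead of guessing
print(grade("Lyon", "Paris"))                 # -3.0 -> a wrong guess is penalized
```

With an error penalty of 3, guessing only pays off in expectation when the model is right more than 75 percent of the time, which is the kind of confidence-threshold behavior the video describes.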

Sam Altman says that bots are making social media feel ‘fake’ - Julie Bort, Tech Crunch

He then live-analyzed his reasoning. “I think there are a bunch of things going on: real people have picked up quirks of LLM-speak, the Extremely Online crowd drifts together in very correlated ways, the hype cycle has a very ‘it’s so over/we’re so back’ extremism, optimization pressure from social platforms on juicing engagement and the related way that creator monetization works, other companies have astroturfed us so i’m extra sensitive to it, and a bunch more (including probably some bots).” To decode that a little, he’s accusing humans of starting to sound like LLMs, even though LLMs — spearheaded by OpenAI — were literally invented to mimic human communication, right down to the em dash.

https://techcrunch.com/2025/09/08/sam-altman-says-that-bots-are-making-social-media-feel-fake/

Thursday, September 18, 2025

AI Teaching Learners Today: Pick Your Pedagogy! - Ray Schroeder, Inside Higher Ed

The cost of developing, designing and teaching classes is often largely determined by the faculty and staff costs. Long-running lower-division classes at some universities may be taught by supervised teaching assistants or adjunct faculty whose salaries are lower than tenure-track faculty’s. However, we are now confronted with highly capable technologies that require little to no additional investment and can bring immediate revenue positive opportunities. Each university very soon will have to determine to what extent AI will be permitted to design and deliver classes, and under what oversight and supervision. A well-written, detailed prompt can be the equal of many of our teaching assistants, adjunct faculty and, yes, full-time faculty members who have not been deeply trained in effective pedagogy and current practice.

How should universities teach leadership now that teams include humans and autonomous AI agents? - Alex Zarifis, Times Higher Education

So, how should university teachers prepare a new generation of modern leaders to approach these mixed teams? Teaching leadership styles that are effective at motivating people is no longer enough. In addition, students must now learn how to build their team’s trust in AI, then they will need to know how to combine leadership styles in a way that gets the most out of both humans and AI.

Wednesday, September 17, 2025

Georgia Tech’s Jill Watson Outperforms ChatGPT in Real Classrooms - Georgia Institute of Technology

 A new version of Georgia Tech’s virtual teaching assistant, Jill Watson, has demonstrated that artificial intelligence can significantly improve the online classroom experience. Developed by the Design Intelligence Laboratory (DILab) and the U.S. National Science Foundation AI Institute for Adult Learning and Online Education (AI-ALOE), the latest version of Jill Watson integrates OpenAI’s ChatGPT and is outperforming OpenAI’s own assistant in real-world educational settings. Jill Watson not only answers student questions with high accuracy. It also improves teaching presence and correlates with better academic performance. Researchers believe this is the first documented instance of a chatbot enhancing teaching presence in online learning for adult students.

OPINION: AI can be a great equalizer, but it remains out of reach for millions of Americans; we cannot let that continue - Erin Mote, Hechinger Report

This digital divide is a persistent crisis that deepens societal inequities, and we must rally around one of the most effective tools we have to combat it: the Universal Service Fund. The USF is a long-standing national commitment built on a foundation of bipartisan support and born from the principle that every American, regardless of their location or income, deserves access to communications services. Without this essential program, over 54 million students, 16,000 healthcare providers and 7.5 million high-need subscribers would lose internet service that connects classrooms, rural communities (including their hospitals) and libraries to the internet.

Tuesday, September 16, 2025

AI for Next Generation Science Education - Xiaoming Zhai, Georgia Tech

September 24, via Zoom. This talk explores the transformative role of artificial intelligence (AI) in advancing next generation science education, particularly through assessment and instructional support aligned with the Next Generation Science Standards (NGSS). Xiaoming Zhai presents how AI technologies—including machine learning, natural language processing, computer vision, and generative AI—can enhance the assessment of complex, three-dimensional student learning outcomes such as modeling, argumentation, and scientific explanation. By leveraging tools like fine-tuned language models and computer vision networks, the talk demonstrates the potential for scalable, accurate, and equitable automatic scoring of student work, both written and drawn.

Tech leadership is business leadership - McKinsey

As the line between technology and business disappears, corporate leaders of enterprise tech, digital, and information face a new mandate: transform innovation into measurable value. The modern tech officer not only has to understand how the landscape is shifting but also must manage initiatives across a broad range of stakeholders by playing the role of orchestrator, builder, protector, or operator. Check out this interview series hosted by Gayatri Shenai and Ann Carver, conveners of McKinsey’s Women in Tech conference, to hear from trailblazing leaders who are not only breaking barriers but also reshaping the tech landscape.

Monday, September 15, 2025

Duke University pilot project examining pros and cons of using artificial intelligence in college - AP

As part of a new pilot with OpenAI, all Duke undergraduate students, as well as staff, faculty and students across the University’s professional schools, gained free, unlimited access to ChatGPT-4o beginning June 2. The University also announced DukeGPT, a University-managed AI interface that connects users to resources for learning and research and ensures “maximum privacy and robust data protection.” Duke launched a new Provost’s Initiative to examine the opportunities and challenges AI brings to student life on May 23. The initiative will foster campus discourse on the use of AI tools and present recommendations in a report by the end of the fall 2025 semester. 

Anthropic Agrees to Pay Authors at Least $1.5 Billion in AI Copyright Settlement - Kate Knibbs, Wired

Anthropic will pay at least $3,000 for each copyrighted work that it pirated. The company downloaded unauthorized copies of books in early efforts to gather training data for its AI tools. This is the first class action settlement centered on AI and copyright in the United States, and the outcome may shape how regulators and creative industries approach the legal debate over generative AI and intellectual property. According to the settlement agreement, the class action will apply to approximately 500,000 works, but that number may go up once the list of pirated materials is finalized. For every additional work, the artificial intelligence company will pay an extra $3,000. Plaintiffs plan to deliver a final list of works to the court by October.

Sunday, September 14, 2025

Should AI Get Legal Rights? - Kylie Robison, Wired

In the often strange world of AI research, some people are exploring whether the machines should be able to unionize. I’m joking, sort of. In Silicon Valley, there’s a small but growing field called model welfare, which is working to figure out whether AI models are conscious and deserving of moral considerations, such as legal rights. Within the past year, two research organizations studying model welfare have popped up: Conscium and Eleos AI Research. Anthropic also hired its first AI welfare researcher last year. Earlier this month, Anthropic said it gave its Claude chatbot the ability to terminate “persistently harmful or abusive user interactions” that could be “potentially distressing.”

Responsible AI in higher education: Building skills, trust and integrity - Alexander Shevchenko, World Economic Forum

Many institutions are moving from policing AI use to partnering with students. This transition emphasizes trust, transparency and ongoing skill development, mirroring the realities of modern careers where AI is ubiquitous. It also highlights the crucial role of faculty in guiding responsible and meaningful AI use. One practical example of this approach is Grammarly for Education. Seamlessly integrating with learning management systems and writing platforms, it supports students through brainstorming, research, drafting and revision. In doing so, the conversation has matured beyond simply detecting AI use; educators and students are now exploring how AI can deepen learning, sharpen critical thinking and inspire creativity.

Saturday, September 13, 2025

Why liberal arts schools are now hopping on skills-based microcredentials - Alcino Donadel, University Business

New market demands are pushing small, four-year liberal arts colleges to offer microcredentials, indicating growing momentum across sectors of higher education to elevate workforce readiness within their academic offerings. Chief learning officers at community colleges are leading the charge in expanding non-degree offerings, reporting the highest levels of institutional investment in this area. Meanwhile, large research universities—like the University of Colorado Boulder and the University of Tennessee at Knoxville—are catching up. However, strict faculty governance and curriculum processes and different accreditation standards have caused some liberal arts schools to lag, says Mike Simmons, an associate executive director at the American Association of Collegiate Registrars and Admissions Officers.


Academics must be open to changing their minds on acceptable AI use - Ava Doherty, Times Higher Education

Honest and open-ended conversations over how AI can be productively used in the learning journey are needed, not ChatGPT bans, says Ava Doherty. Students today face a striking paradox: they are among the most technologically literate generations in history, yet they are deeply anxious about their career prospects in an artificial intelligence-driven future. Since the launch of ChatGPT, the rapid advance of artificial intelligence (AI) has fundamentally reshaped the graduate job market. This shift presents unique challenges and opportunities for students, universities and the broader higher education sector.

Friday, September 12, 2025

Oxford becomes first UK university to offer ChatGPT Edu to all staff and students - University of Oxford

The University of Oxford will become the first university in the UK to provide free ChatGPT Edu access to all staff and students, starting this academic year. OpenAI’s flagship GPT-5 model will be provided across the University and Oxford Colleges through ChatGPT Edu, a version of ChatGPT built for universities that includes enterprise-level security and controls. This university-wide rollout follows a successful year-long pilot involving around 750 academics, research staff, postgraduate research students and professional services staff in a wide range of roles across the University and Colleges.

https://www.ox.ac.uk/news/2025-09-19-oxford-becomes-first-uk-university-offer-chatgpt-edu-all-staff-and-students

Navigating the AI Revolution in Higher Education - Alyse Jordan, Frontiers in Education

A systematic review conducted in the first nine months following ChatGPT's release provides valuable early insights into how AI has affected teaching, curriculum design, and assessment practices in higher education. The review identified both benefits and threats of AI integration, offering preliminary evidence to inform institutional policies and faculty practices (Liang et al., 2025). As the authors note, this represents "a first wave" of research, acknowledging how quickly AI systems are evolving and changing educational landscapes. Additionally, in specialized fields such as Mechanical Engineering Education (MEE), AI integration demonstrates unique applications and challenges. Research shows that AI significantly enhances learning experiences through technologies like computer-aided translation and natural language processing, making education more accessible and interactive.

https://www.frontiersin.org/journals/education/articles/10.3389/feduc.2025.1682901/abstract

Research: Teachers now outpace students in K12 AI use - Matt Zalaznick, University Business

In 2025, approximately 85% of high school and college instructors, as well as students aged 14 to 22, say they’ve tested AI, a significant leap from 66% in 2024. Nearly 90% of students report relying on AI for school, up from 77% in 2024, with the top three uses being:
Summarizing or synthesizing information (56%)
Research (46%)
Generating study guides or materials (45%)
Positive attitudes toward AI’s role in education (43%) also outpaced negative attitudes (37%) in 2025, a 3% increase from 2024.

Thursday, September 11, 2025

Insights on today’s labor market: Uncertainty, agentic AI, and more - McKinsey

It’s Labor Day in North America, a day to recognize and celebrate the contributions of workers across the continent—and also reflect on the continued uncertainty and rapid change in today’s labor market. In a recent episode of McKinsey Talks Talent, Indeed’s chief economist Svenja Gudell joined McKinsey’s Brooke Weddle, Bryan Hancock, and Lucia Rahilly to help business leaders make sense of the current collision of labor market trends: gen AI, agentic AI, an aging workforce, shifting priorities, and more. Tune in to the episode, then explore more of our insights on how to navigate the new world of work.

Getting Ahead of EU AI Literacy Requirements – How Businesses Can Stay Compliant and Competitive - Jonathan Armstrong, European Business Review

In most companies, AI is being used in business functions from HR and marketing to customer service. Figures reveal 78% of global companies use AI, with 71% deploying GenAI in at least one function. However, often employees don’t fully understand how these tools work, and this gap can no longer be ignored. The EU AI Act, particularly Article 4, addresses this by making AI literacy a legal requirement. Since February 2025, any organisation operating in the EU, or offering AI-enabled services to EU markets, must ensure their employees, contractors, and suppliers have a sufficient understanding of the AI tools they use. It is not enough to deploy technology responsibly; organisations must demonstrate that their workforce knows what they are doing.

Wednesday, September 10, 2025

The future of work is agentic - McKinsey

Think about your org chart. Now imagine it features both your current colleagues—humans, if you’re like most of us—and AI agents. That’s not science fiction; it’s happening—and it’s happening relatively quickly, according to McKinsey Senior Partner Jorge Amar. In this episode of McKinsey Talks Talent, Jorge joins McKinsey talent leaders Brooke Weddle and Bryan Hancock and Global Editorial Director Lucia Rahilly to talk about what these AI agents are, how they’re being used, and how leaders can prepare now for the workforce of the not-too-distant future.

Seizing the agentic AI advantage - McKinsey

Nearly eight in ten companies report using gen AI—yet just as many report no significant bottom-line impact. Think of it as the “gen AI paradox.” At the heart of this paradox is an imbalance between “horizontal” (enterprise-wide) copilots and chatbots—which have scaled quickly but deliver diffuse, hard-to-measure gains—and more transformative “vertical” (function-specific) use cases—about 90 percent of which remain stuck in pilot mode. AI agents offer a way to break out of the gen AI paradox. That’s because agents have the potential to automate complex business processes—combining autonomy, planning, memory, and integration—to shift gen AI from a reactive tool to a proactive, goal-driven virtual collaborator. This shift enables far more than efficiency. Agents supercharge operational agility and create new revenue opportunities.

Tuesday, September 09, 2025

Ep. 11 AGI and the Future of Higher Ed: Talking with Ray Schroeder - Unfixed: How AI is Reshaping Higher Education with Nick Janos and Zach Justus, Podcast

In this episode of Unfixed, we talk with Ray Schroeder—Senior Fellow at UPCEA and Professor Emeritus at the University of Illinois Springfield—about Artificial General Intelligence (AGI) and what it means for the future of higher education. While most of academia is still grappling with ChatGPT and basic AI tools, Schroeder is thinking ahead to AI agents, human displacement, and AGI’s existential implications for teaching, learning, and the university itself. We explore why AGI is so controversial, what institutions should be doing now to prepare, and how we can respond responsibly—even while we’re already overwhelmed.

Artificial Intelligence: Three top experts share advice on how to implement AI tools into your business today — Executive Insights, Louisville Business First

Certainly, AI can be used across the board and in all those functional areas within a company. A real-world example is a customer of ours that literally spent 40 hours a month in each of its regions compiling data and providing some analysis of that data back up to the executive team. It was a very manual process and, quite frankly, a pain point for each of the leaders that ran the regions across the country. They were able to install an AI project that allowed for the aggregation of data across all the offices within the region automatically and gain intelligence off of those operational metrics. It cut the time for report creation from 40 hours down to under an hour. So, operationally, they saved themselves a week of someone’s time every month.

Monday, September 08, 2025

San José Completes First City-Led AI Startup Grants - Scarlett Evans, AI Business

The city of San José, California, has announced the winners of its inaugural AI Incentive Program, the first city-run grant program of its kind in the U.S. Under the initiative, early-stage startups using AI to tackle everything from maternal health to food waste competed for public funding and professional services. From a pool of 170 competitors, three won a $50,000 grant, while one received $25,000 in funding. The winners were maternal health company Elythea, which uses AI voice agents to connect with at-risk patients; smart kitchen platform Metafoodx, which uses embodied AI to cut waste and optimize operations at restaurants; and hardware optimization company Clika, which simplifies AI models into low-power formats to improve accessibility and efficiency across edge devices.

Google's New Universal Translator AI is FREE & More AI Use Cases - The AI Advantage, YouTube

In the first part of this podcast, the host discusses Google's new universal voice translator, a significant improvement in translation technology. This new feature, available as a free update to the Google Translate app, allows for real-time conversations between two people speaking different languages through a "conversation" feature. The translator is noted for its impressive speed and low latency, which makes for a much smoother user experience. The app also includes a mode that can be used on a table between two people, with each person seeing the translated text facing them. Users can tap a microphone button to speak, and their words are instantly translated and displayed on the other side of the screen. The host expresses excitement about this development, highlighting its potential to create more meaningful connections between people from different linguistic backgrounds.

Sunday, September 07, 2025

How AI Is Changing—Not ‘Killing’—College - Colleen Flaherty, Inside Higher Ed

Key findings from Inside Higher Ed’s student survey on generative AI show that using the evolving technology hasn’t diminished the value of college in their view, but it could affect their critical thinking skills. Some of the results are perhaps surprising: Relatively few students say that generative AI has diminished the value of college, in their view, and nearly all of them want their institutions to address academic integrity concerns—albeit via a proactive approach rather than a punitive one. Another standout: Half of students who use AI for coursework say it’s having mixed effects on their critical thinking abilities, while a quarter report it’s helping them learn better.

New AI-powered live translation and language learning tools in Google Translate - Matt Sheets, Google Keyword

Building on our existing live conversation experience, our advanced AI models are now making it even easier to have a live conversation in more than 70 languages — including Arabic, French, Hindi, Korean, Spanish, and Tamil. Whether you’re an early learner looking to begin practicing conversation or an advanced speaker looking to brush up on vocabulary for an upcoming trip, Translate can now create tailored listening and speaking practice sessions just for you. These interactive practices are generated on-the-fly and intelligently adapt to your skill level.

Saturday, September 06, 2025

Mass Intelligence: From GPT-5 to nano banana: everyone is getting access to powerful AI - Ethan Mollick, One Useful Thing

There have been two barriers to accessing powerful AI for most users. The first was confusion. Few people knew to select an AI model. Even fewer knew that picking o3 from a menu in ChatGPT would get them access to an excellent Reasoner AI model, while picking 4o (which seems like a higher number) would give them something far less capable. According to OpenAI, less than 7% of paying customers selected o3 on a regular basis, meaning even power users were missing out on what Reasoners could do. Another factor was cost. Because the best models are expensive, free users were often not given access to them, or else given very limited access. Google led the way in giving some free access to its best models, but OpenAI stated that almost none of its free customers had regular access to reasoning models prior to the launch of GPT-5.

Why did the CSU spend millions on ChatGPT amid a budget crisis? We asked school leaders - Julia Barajas, LAist

CSU CIO Ed Clark explained: “We were [also] seeing that some universities in our own system were starting to negotiate deals with these vendors, but then others couldn't afford to do that. So, we're thinking: ‘We're not going to create a digital divide within our own system. We're going to make sure that everybody has access to these tools.’ And we buttress that with: We believe that these tools are going to become fundamental, just like the internet is today — every industry, every academic field, every discipline is going to be using these tools. So, we need our students, our community members, to engage with them now. We're not going to wait until we're far behind everybody else ... to give this access. And on the workforce side, in terms of student preparation, we already know that employers are expecting students to graduate with [AI] skills. ... We want our students to be prepared for the workforce or graduate school or whatever they're going to do when they leave the CSU.”

Friday, September 05, 2025

China Is Building a Brain-Computer Interface Industry - Emily Mullen, Wired

In a policy document released this month, China has signaled its ambition to become a world leader in brain-computer interfaces, the same technology that Elon Musk’s Neuralink and other US startups are developing. Brain-computer interfaces, or BCIs, read and decode neural activity to translate it into commands. Because they provide a direct link between the brain and an external device, such as a computer or robotic arm, BCIs have tremendous potential as assistive devices for people with severe physical disabilities.

Preparing students for a world shaped by artificial intelligence - the Guardian

Prof Leo McCann and Prof Simon Sweeney are right to warn that uncritical reliance on artificial intelligence risks bypassing deep learning (Letters, 16 September). But that does not mean large language models have no place in higher education. Used thoughtfully, they can enhance teaching and learning. Graduates will enter a workforce where AI is ubiquitous. To exclude it from education is to send students out unprepared. The task is not to ignore AI, but to teach students how to use it critically.

https://www.theguardian.com/technology/2025/sep/24/preparing-students-for-a-world-shaped-by-artificial-intelligence

Here are 4 pain points amid the new normal of online learning - Alcino Donadel, University Business

The rapid pace of AI development is transforming the online experience. More students will access AI tutoring and adaptive learning that create personalized programs. At the same time, respondents predicted that full-time faculty and lecture-based instruction will decline in prominence as central components of online learners’ experiences, as students rely instead on a mix of adjunct faculty and technology-mediated experiences. “Over the past decade, we’ve witnessed a profound shift: what began as an exception has become a baseline expectation,” said Bethany Simunich, vice president for innovation and research at Quality Matters, an education quality assurance agency. “Today’s students—across every age and background—expect learning to be flexible and accessible.”

Thursday, September 04, 2025

Teaching Online Podcast - Tom Cavanagh and Kelvin Thompson, University of Central Florida

Episode 193. Guests Ray Schroeder and Dr. Melissa Vito unpack decades of practical wisdom on leadership vision in conversation with hosts Tom and Kelvin. This episode is the first in a mini-series of “pillar panels” offering distilled insights from esteemed community members on key, “structural support” topics essential in the future of strategic online/digital education. This episode includes links and reflections synthesizing the advent and growth of online learning.

https://dub.sh/topcasts11e193

Opinion: Cutting Through the Hype for GenAI in Higher Education - Stephan Geering, GovTech

Amid so much promotion, news coverage and forecasting about artificial intelligence, the priority for the university CIO is clear: distinguish between practical, impactful applications and those driven by hype or outweighed by risk. The goal is to adopt AI that enhances teaching, learning and operational efficiency without compromising academic standards. Before assessing what's practical versus aspirational, CIOs must first ground their strategy in a clear understanding of responsible AI frameworks, such as the National Institute of Standards and Technology AI Risk Management Framework, while keeping an eye on upcoming federal and state regulation such as the Colorado AI Act.

Wednesday, September 03, 2025

AI Companies Roll Out Educational Tools - Ray Schroeder, Inside Higher Ed

Fall semesters are just beginning, and the companies offering three leading AI models—Gemini by Google, Claude by Anthropic and ChatGPT by OpenAI—have rolled out tools to facilitate AI-enhanced learning. Here’s a comparison and how to get them. Each of the three leading AI providers has taken a somewhat different approach to providing an array of educational tools and support for students, faculty and administrators. We can expect these tools to improve, proliferate and become a competitive battleground among the three. At stake is, at least in part, the future marketplace for their products. 

AI Is Eliminating Jobs for Younger Workers - Will Knight, Wired

Economists at Stanford University have found the strongest evidence yet that artificial intelligence is starting to eliminate certain jobs. But the story isn’t that simple: While younger workers are being replaced by AI in some industries, more experienced workers are seeing new opportunities emerge. Erik Brynjolfsson, a professor at Stanford University, Ruyu Chen, a research scientist, and Bharat Chandar, a postgraduate student, examined data from ADP, the largest payroll provider in the US, from late 2022, when ChatGPT debuted, to mid-2025. The researchers discovered several strong signals in the data—most notably that the adoption of generative AI coincided with a decrease in job opportunities for younger workers in sectors previously identified as particularly vulnerable to AI-powered automation (think customer service and software development). In these industries, they found a 16 percent decline in employment for workers aged 22 to 25.

Tuesday, September 02, 2025

Taking AI Welfare Seriously - Robert Long, et al; arXiv

In this report, we argue that there is a realistic possibility that some AI systems will be conscious and/or robustly agentic in the near future. That means that the prospect of AI welfare and moral patienthood, i.e. of AI systems with their own interests and moral significance, is no longer an issue only for sci-fi or the distant future. It is an issue for the near future, and AI companies and other actors have a responsibility to start taking it seriously. We also recommend three early steps that AI companies and other actors can take: They can (1) acknowledge that AI welfare is an important and difficult issue (and ensure that language model outputs do the same), (2) start assessing AI systems for evidence of consciousness and robust agency, and (3) prepare policies and procedures for treating AI systems with an appropriate level of moral concern. To be clear, our argument in this report is not that AI systems definitely are, or will be, conscious, robustly agentic, or otherwise morally significant.

Microsoft AI CEO Warns "Seemingly Conscious AI is Coming" - Wes Roth, YouTube

Get ready for this discussion about AI to start popping up a lot more, and that discussion will be: is AI conscious? Now, you might have an immediate response to that, like "of course it's not," or maybe it is; there are some people who are convinced that it is. My take, for the record, has always been that while it's maybe not the current chatbots we have right now, we might stumble upon something as we scale up, improve it, and add more systems. But as you'll see, the issue isn't whether it is or not, but more that we have no idea. Not knowing will cause problems. Now, here you're going to see me talking to Nick Bostrom. I was absolutely blown away by the fact that he took some time to talk to us.

Monday, September 01, 2025

ChatGPT-5 Gets Warmer, Friendlier Update - Scarlett Evans, AI Business

OpenAI has announced an update to its latest ChatGPT model to make it “warmer and friendlier” in response to user complaints. The announcement follows OpenAI’s launch of its latest GPT model earlier this month, which has been met with user complaints and calls for a return to the GPT-4o model. The update is intended to alleviate some of these concerns, with a push for a more approachable chatbot that doesn’t increase in sycophancy (another recent complaint about the program). “Changes are subtle, but ChatGPT should feel more approachable now,” the company wrote on X.

How to remain resilient, focused, and effective in uncertain times - McKinsey

Disruption isn’t an occasional hurdle; it’s the new normal. According to McKinsey research, 84 percent of leaders report feeling underprepared for future disruptions, with geopolitical tensions topping the list of concerns. Leaders today are called to steer through shifting trade policies, international conflict, and internal organizational pressures—all while keeping their people engaged and their strategies on track. McKinsey’s Ida Kristensen and coauthors outline four dimensions of resilience that can help organizations stay grounded and agile when the path ahead is unclear:

Financial
Operational
Organizational
External

Sunday, August 31, 2025

A ‘Great Defection’ threatens to empty universities and colleges of top teaching talent - Jon Marcus, Hechinger Report

An exodus of Ph.D.s and faculty appears to be under way, as they leave academia in the face of political, financial and enrollment crises. It’s a trend that federal data and other sources show began even before Trump returned to the White House. On top of everything else affecting higher education, this is likely to reduce the quality of education for undergraduates, experts say. Nearly 70 percent of people receiving doctorates were already leaving higher education for industry, government and other sectors, not including those without job offers or who opted to continue their studies, according to the most recent available figures from the National Science Foundation — up from fewer than 50 percent decades ago.

Anthropic’s Higher Ed AI Board Signals Shift From Tools To Guardrails - Dan Fitzpatrick, Forbes

Today, the company behind the AI chatbot Claude announced two initiatives designed to shape how institutions adopt AI. The first is the creation of a Higher Education Advisory Board made up of distinguished academic leaders. The second is the launch of three new AI Fluency courses aimed at both students and faculty. The moves underscore Anthropic’s dual strategy to influence policy through academic leadership while providing practical tools to accelerate adoption. Anthropic was founded in 2021 by former OpenAI employees and is known for its “safety-first” approach to AI. Its foray into education seems to reflect this ethos. “The choices made in the next few years about how AI enters the classroom will shape a generation’s relationship with both technology and learning,” the company said in its announcement.

Saturday, August 30, 2025

More Schools Are Considering Education-Focused AI Tools. What’s the Best Way to Use Them? - Lauren Coffey, EdSurge

Torney recommends that institutions set guardrails for these tools early, based on the goals they hope to achieve. “My main takeaway is that this is not a go-it-alone technology,” he says. “If you're a school leader and you as a staff haven't had a conversation on how to use these things and what they’re good at and not good at, that’s where you get into these potential dangers.” Paul Shovlin, an AI faculty fellow at the Center for Teaching and Learning at Ohio University, says the K-12 sector seems to have adopted the new tools at a quicker pace than its higher education counterparts.

https://www.edsurge.com/news/2025-08-22-more-schools-are-considering-education-focused-ai-tools-what-s-the-best-way-to-use-them

College students, educators worldwide begin fall semester using Elon University’s Student Guide to Artificial Intelligence - Elon University

As artificial intelligence transforms higher education, more than 21,000 educators and students worldwide are turning to a new resource to navigate this digital revolution. The Student Guide to Artificial Intelligence, published by Elon University in partnership with the American Association of Colleges and Universities, has been adopted at approximately 4,000 colleges, universities, schools and organizations globally, with more than 60,000 visitors to the guide’s website. The widespread adoption of this free resource demonstrates the urgent need for AI literacy in today’s academic environment. By mid-August, 97 colleges and universities had requested custom editions for distribution on their campuses, ranging from community colleges and small private institutions to flagship state universities. Many other institutions link to the guide as part of their learning resources for students.

Friday, August 29, 2025

Reconfiguring work: Change management in the age of gen AI - Erik Roth, McKinsey

Gen AI has the potential to completely change how employees work. Its natural language interface makes it easy to use, while its burgeoning reasoning and agentic capabilities allow it to perform increasingly complex tasks such as interpreting large volumes of information, coding, and answering queries. The most advanced agents are even starting to perform tasks such as creating spreadsheets and navigating web pages. And employees are clearly eager to use it; they are already doing so three times more than their leaders realize. But simply putting new technology into people’s hands does not ensure they will use it effectively, nor does it profoundly change the way a company works. Instead, CEOs need to deploy a novel change management approach that mobilizes their people, turning them from gen AI experimenters into gen AI accelerators. This is not a linear process. Change management in the gen AI age asks employees to become active participants rather than just users.

There are no entry-level jobs anymore. What now? - Dana Stephenson, the Hill

One recent report estimates that more than 90 percent of information technology jobs will be transformed by AI, and that nearly 40 percent of those roles will be at the entry level. The U.S. Bureau of Labor Statistics predicts that about 1 million office and administrative support jobs will be lost by 2029 due to technological advancements. Once staples of a first job, tasks like drafting a press release, organizing information or conducting basic research are now increasingly handled by AI agents. As a result, today’s graduates face a steeper climb into meaningful, sustainable careers. It is no longer enough to be merely hireable. Students can’t even start on the ground floor — they’re expected to skip a level to get in the door. 

Thursday, August 28, 2025

The Radical Changes AI Is Bringing To Higher Education - Nick Ladany, Forbes

AI, in the form of a fully interactive avatar professor, will equal and exceed the best versions of the best professors at any university: responsive, responsible, available day and night, informed by staggering amounts of knowledge, and able to match its pedagogical approach to the personalized needs of any student. The fear that faculty jobs are at stake is a real one, and the roles of faculty members are sure to change. Rather than the sage on the stage, a professor’s role will be the guide on the side. Their job will be to build community among groups of students and offer additional, personalized ways to introduce students to using AI in the workplace, such as using AI in health research. In this redefined role, professors will serve as subject matter experts and ensure that the AI professor’s responses are sound and don’t go off the rails. Finally, they will play a role, at least initially, in determining what students will be assessed on.

We must build AI for people; not to be a person: Seemingly Conscious AI is Coming - Mustafa Suleyman, Mustafa-Suleyman.ai

My life’s mission has been to create safe and beneficial AI that will make the world a better place. Today at Microsoft AI we build AI to empower people, and I’m focused on making products like Copilot responsible technologies that enable people to achieve far more than they ever thought possible, be more creative, and feel more supported. I want to create AI that makes us more human, that deepens our trust and understanding of one another, and that strengthens our connections to the real world. Copilot creates millions of positive, even life-changing, interactions every single day. This involves a lot of careful design choices to ensure it truly delivers an incredible experience. We won’t always get it right, but this humanist frame provides us with a clear north star to keep working towards.

Wednesday, August 27, 2025

Ex-Google exec says degrees in law and medicine are a waste of time because they take so long to complete that AI will catch up by graduation - Preston Fore, Fortune

“AI itself is going to be gone by the time you finish a PhD. Even things like applying AI to robotics will be solved by then,” Jad Tarifi, the founder of Google’s first generative-AI team, told Business Insider. Tarifi himself graduated with a PhD in AI in 2012, when the subject was far less mainstream. But today, the 42-year-old says, time would be better spent studying a more niche topic intertwined with AI, like AI for biology—or maybe not a degree at all. “Higher education as we know it is on the verge of becoming obsolete,” Tarifi said to Fortune. “Thriving in the future will come not from collecting credentials but from cultivating unique perspectives, agency, emotional awareness, and strong human bonds. I encourage young people to focus on two things: the art of connecting deeply with others, and the inner work of connecting with themselves.”

At one elite college, over 80% of students now use AI – but it’s not all about outsourcing their work - Germán Reyes, Middlebury, The Conversation

Over 80% of Middlebury College students use generative AI for coursework, according to a recent survey I conducted with my colleague and fellow economist Zara Contractor. This is one of the fastest technology adoption rates on record, far outpacing the 40% adoption rate among U.S. adults, and it happened in less than two years after ChatGPT’s public launch. What we found challenges the panic-driven narrative around AI in higher education and instead suggests that institutional policy should focus on how AI is used, not whether it should be banned.

https://brookingsregister.com/premium/theconversation/stories/at-one-elite-college-over-80-of-students-now-use-ai-but-its-not-all-about-outsourcing,148387

Tuesday, August 26, 2025

AI is already displacing these jobs - Madison Mills, Axios

Industries that are considered advanced adopters of AI see the nearest-term labor impact. Over 80% of executives surveyed within tech and media, the only two sectors that showed clear signs of AI disruption, anticipate reduced hiring volumes in the next two years. Still, most companies surveyed are currently backfilling workers with AI, for example, rather than replacing them. By the numbers: For now, companies aren't firing employees but just canceling contracts that involve outsourced labor, a strategy that's leading to financial gains. Back-office automations also have a higher return on investment, with $2 million to $10 million in BPO expenditures eliminated for the firms studied by MIT researchers.
One company studied saved $8 million a year by spending $8,000 on an AI tool.
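As a quick back-of-envelope check of that last figure, here is a small illustrative calculation using only the numbers cited above:

```python
# Rough arithmetic on the cited example: $8,000,000 saved per year on an
# $8,000 tool. The figures come from the summary above; everything else is
# just a back-of-envelope illustration.
tool_cost = 8_000            # annual spend on the AI tool (USD)
annual_savings = 8_000_000   # reported annual savings (USD)

net_benefit = annual_savings - tool_cost
return_multiple = annual_savings / tool_cost

print(f"Net benefit: ${net_benefit:,}")             # Net benefit: $7,992,000
print(f"Return multiple: {return_multiple:,.0f}x")  # Return multiple: 1,000x
```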

Smarter Support: How to Use AI in Online Courses and Teach Your Students to Use It Too - Joel Greene, Faculty Focus

Whether we were ready or not, AI is in the room. And if you’re teaching online, you’ve probably already seen it at work in discussion posts, essays, or that strangely perfect email. Instead of panicking or pretending it’s not happening, we’ve got a better option. We can help students learn how to use AI responsibly, because it’s not going away. Honestly, some of them are relying on it more than we realize (Colvard 2024). If you’re going to teach with AI, you’ve got to know what it can (and can’t) do. I’m talking about tools like ChatGPT, Grammarly, QuillBot, or even Microsoft Copilot. Give yourself a little “playtime” with them. Open one up and ask it to write a discussion post. Then see what it gets right and what falls flat. 

https://www.facultyfocus.com/articles/teaching-with-technology-articles/smarter-support-how-to-use-ai-in-online-courses-and-teach-your-students-to-use-it-too/
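One way to run the “playtime” exercise Greene describes is simply in the chat interface itself; as a minimal sketch, the same prompt could also be sent programmatically through the OpenAI Python SDK. The prompt wording and model name below are illustrative assumptions, and an OPENAI_API_KEY is assumed to be set in the environment.

```python
# Minimal sketch of the "ask it to write a discussion post" exercise via the
# OpenAI Python SDK (pip install openai). The prompt and model name are
# illustrative; the article itself describes using the chat interfaces directly.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Write a 150-word discussion post for an online research-methods course "
    "answering: 'What makes a survey question biased?'"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)

# Read the draft, then judge what it gets right and where it falls flat.
print(response.choices[0].message.content)
```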

Monday, August 25, 2025

'This stuff is moving so quickly': Utah Tech leaders discuss AI, unveil new cybersecurity degree - Nick Fiala, St. George News / KSL

Utah Tech University spent part of its recent Fall Academic Convocation discussing the evolving use of artificial intelligence in both business and academia and how best to implement it in the future. The university also announced on Wednesday that it will be launching a new program this fall for students to acquire a bachelor of science degree in cybersecurity. The new cybersecurity degree will reportedly offer coursework in areas including ethical hacking, cloud and IoT security, cyber law and infrastructure defense. The university added that the program anticipates enrolling 35 students by its third year. "IoT" refers to "Internet of Things," meaning devices equipped to exchange data over the internet.

Does GenAI provide the opportunity for creativity to take centre stage? - Ioannis Glinavos, Times Higher Education

For centuries, universities have delivered scarce expertise. We stacked programmes like layer cakes: first theory, then practice, finally – if there was time – a sprinkle of creativity. Generative AI flips that order. Because routine skills are on tap, the bottleneck shifts upstream to ideation: spotting problems worth solving and framing them so the machine can help.

That demands divergent thinking, curiosity and ethical judgement – qualities our assessment regimes often squeeze out. We need to treat creativity as a core literacy, not a decorative extra. Don’t get me wrong, skills are not irrelevant – they just look different. Prompt craft, data stewardship and model critique replace manual citation and calculator drills. But they are means, not ends.

https://www.timeshighereducation.com/campus/does-genai-provide-opportunity-creativity-take-centre-stage

Sunday, August 24, 2025

AI’s Rapid Integration into Higher Education Transforming Student Experiences and Faculty Challenges - SSB Crack News

A college senior entering the new academic year has experienced nearly their entire undergraduate journey alongside the rise of generative AI. The landscape shifted dramatically with the launch of ChatGPT in November 2022, coinciding with this student’s freshman year. Faculty and administration at institutions like Washington University in St. Louis have witnessed this rapid transformation. Fast forward three years, and the reliance on AI tools is startling. Reports indicate that nearly two-thirds of Harvard undergraduates were using generative AI at least once a week as of spring 2024. In a UK survey, 92% of full-time university students acknowledged employing AI in some manner, with a significant portion believing that AI-generated content could earn good grades in their subjects. Alarmingly, about 20% of those surveyed have experimented with AI to complete assignments, a trend expected to grow.

A scaffolded approach to teaching with GenAI - Rena Beatrice Alcalay, Times Higher Education

As GenAI continues to reshape higher education, this four-phase framework by Rena Beatrice Alcalay offers educators ways to guide students to use these tools critically and ethically, fostering agency, bias awareness and deeper engagement in philosophical writing assignments. This pedagogical stance emphasises agency: students learn to critically assess what to include or exclude from AI-generated suggestions and to distinguish between factual repetition and genuine conceptual development. At the heart of this approach is a commitment to helping students articulate ideas that reflect their values, a central goal in philosophy education.

Saturday, August 23, 2025

Claude Opus 4 and 4.1 can now end a rare subset of conversations - Anthropic

We recently gave Claude Opus 4 and 4.1 the ability to end conversations in our consumer chat interfaces. This ability is intended for use in rare, extreme cases of persistently harmful or abusive user interactions. This feature was developed primarily as part of our exploratory work on potential AI welfare, though it has broader relevance to model alignment and safeguards. In pre-deployment testing of Claude Opus 4, we included a preliminary model welfare assessment. As part of that assessment, we investigated Claude’s self-reported and behavioral preferences, and found a robust and consistent aversion to harm. This aversion extended to, for example, requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror. Claude Opus 4 showed:

A strong preference against engaging with harmful tasks;
A pattern of apparent distress when engaging with real-world users seeking harmful content; and
A tendency to end harmful conversations when given the ability to do so in simulated user interactions.

AI Is Designing Bizarre New Physics Experiments That Actually Work - Anil Ananthaswamy, Wired

Although AI has not yet led to new discoveries in physics, it’s becoming a powerful tool across the field. Along with helping researchers to design experiments, it can find nontrivial patterns in complex data. For example, AI algorithms have gleaned symmetries of nature from the data collected at the Large Hadron Collider in Switzerland. These symmetries aren’t new—they were key to Einstein’s theories of relativity—but the AI’s finding serves as a proof of principle for what’s to come. Physicists have also used AI to find a new equation for describing the clumping of the universe’s unseen dark matter. “Humans can start learning from these solutions,” Adhikari said.

Friday, August 22, 2025

Sam Altman, OpenAI will reportedly back a startup that takes on Musk’s Neuralink - Julie Bort, Tech Crunch

Sam Altman is in the process of co-founding a new brain-to-computer interface startup called Merge Labs and raising funds for it with the capital possibly coming largely from OpenAI’s ventures team, unnamed sources told the Financial Times. The startup is expected to be valued at $850 million. A source familiar with the deal tells TechCrunch that talks are still early and OpenAI has not yet committed to participation, so terms could change. Merge Labs is also reportedly working with Alex Blania, who runs Tools for Humanity (formerly World) — Altman’s eye-scanning digital ID project that “allows anyone to verify their humanness,” as the company describes.

Google Pledges $1 Billion to Bring AI Training and Tools to US Colleges - CDO Magazine

Google has committed $1 billion over the next three years to equip U.S. higher education institutions and nonprofits with artificial intelligence training, research resources, and advanced tools. More than 100 universities, including major public systems like Texas A&M and the University of North Carolina, have already joined the initiative. Participating schools may receive direct funding, cloud computing credits, and free access to Google’s advanced Gemini chatbot for students. The investment—which covers both cash support and the value of Google’s paid AI services—aims to eventually reach every accredited nonprofit college in the U.S., with similar programs under discussion abroad, Senior Vice President James Manyika said.

https://www.cdomagazine.tech/aiml/google-pledges-1-billion-to-bring-ai-training-and-tools-to-us-colleges

Thursday, August 21, 2025

MIT's new AI can teach itself to control robots by watching the world through their eyes — it only needs a single camera - Tristan Greene, Live Science

Scientists at MIT have developed a novel vision-based artificial intelligence (AI) system that can teach itself how to control virtually any robot without the use of sensors or pretraining. The system gathers data about a given robot’s architecture using cameras, in much the same way that humans use their eyes to learn about themselves as they move. This allows the AI controller to develop a self-learning model for operating any robot — essentially giving machines a humanlike sense of physical self-awareness.

https://www.livescience.com/technology/robotics/mits-new-ai-can-teach-itself-to-control-robots-by-watching-the-world-through-their-eyes-it-only-needs-a-single-camera

Gemini just got two of ChatGPT's best features - and they're free - Sabrina Ortiz, ZDnet

Gemini can now remember chat context for personalized answers. Users can use Temporary Chat for added privacy. Google also added new data control settings you'll want to look at now. You can now reference your past chats with Google's Gemini AI chatbot for more personalized responses, the company said Wednesday. Google also added a Temporary Chat feature and new data control settings. Everyone, including free users, can take advantage of the features in the Gemini app. While every major AI company is constantly racing to release the latest and greatest AI models, sometimes the most impactful updates are actually the less flashy features that improve the experience of using the chatbot. These new Gemini features aim to make users' lives easier in ways ChatGPT already has.