Friday, September 19, 2025

Did OpenAI just solve hallucinations? - Matthew Berman, YouTube

The video explains that hallucinations are ingrained in how the models are built, functioning more as features than bugs. It draws an analogy to test-taking, where guessing is often rewarded over leaving an answer blank; trained under similar incentives, models learn to guess rather than admit uncertainty. The core issue is the absence of a system that rewards models for expressing uncertainty or providing partially correct answers. The proposed solution involves creating models that answer only when they meet a certain confidence threshold, paired with a new evaluation system that rewards correct answers, penalizes incorrect ones, and assigns a neutral score to "I don't know" responses. The video concludes that the fix lies in revising how models are evaluated and how reinforcement learning is applied. (summary provided in part by Gemini 2.5 Pro)
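The scoring rule the video describes can be sketched in a few lines. This is a minimal illustration, not OpenAI's actual evaluation code; the specific values (+1, -1, 0) and the 0.75 confidence threshold are assumptions for the example:

```python
# Sketch of the proposed evaluation scheme: reward correct answers,
# penalize wrong ones, and give a neutral score when the model abstains.
# The threshold and score values are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.75  # answer only when at least this confident

def decide(confidence: float, proposed_answer: str) -> str:
    """Abstain unless the model's confidence clears the threshold."""
    return proposed_answer if confidence >= CONFIDENCE_THRESHOLD else "I don't know"

def grade(response: str, correct_answer: str) -> int:
    """Score a response: +1 correct, -1 wrong, 0 for abstaining."""
    if response == "I don't know":
        return 0
    return 1 if response == correct_answer else -1
```

Under a rule like this, a confident wrong guess scores strictly worse than abstaining, which removes the incentive to bluff that current always-answer benchmarks create.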

Sam Altman says that bots are making social media feel ‘fake’ - Julie Bort, TechCrunch

He then live-analyzed his reasoning. “I think there are a bunch of things going on: real people have picked up quirks of LLM-speak, the Extremely Online crowd drifts together in very correlated ways, the hype cycle has a very ‘it’s so over/we’re so back’ extremism, optimization pressure from social platforms on juicing engagement and the related way that creator monetization works, other companies have astroturfed us so i’m extra sensitive to it, and a bunch more (including probably some bots).” To decode that a little, he’s accusing humans of starting to sound like LLMs, even though LLMs — spearheaded by OpenAI — were literally invented to mimic human communication, right down to the em dash.

https://techcrunch.com/2025/09/08/sam-altman-says-that-bots-are-making-social-media-feel-fake/

Thursday, September 18, 2025

AI Teaching Learners Today: Pick Your Pedagogy! - Ray Schroeder, Inside Higher Ed

The cost of developing, designing, and teaching classes is largely determined by faculty and staff costs. Long-running lower-division classes at some universities may be taught by supervised teaching assistants or adjunct faculty whose salaries are lower than tenure-track faculty’s. However, we are now confronted with highly capable technologies that require little to no additional investment and can bring immediate revenue-positive opportunities. Each university very soon will have to determine to what extent AI will be permitted to design and deliver classes, and under what oversight and supervision. A well-written, detailed prompt can be the equal of many of our teaching assistants, adjunct faculty and, yes, full-time faculty members who have not been deeply trained in effective pedagogy and current practice.

How should universities teach leadership now that teams include humans and autonomous AI agents? - Alex Zarifis, Times Higher Education

So, how should university teachers prepare a new generation of modern leaders to approach these mixed teams? Teaching leadership styles that are effective at motivating people is no longer enough. Students must now learn how to build their team’s trust in AI, and then how to combine leadership styles in a way that gets the most out of both humans and AI.

Wednesday, September 17, 2025

Georgia Tech’s Jill Watson Outperforms ChatGPT in Real Classrooms - Georgia Institute of Technology

A new version of Georgia Tech’s virtual teaching assistant, Jill Watson, has demonstrated that artificial intelligence can significantly improve the online classroom experience. Developed by the Design Intelligence Laboratory (DILab) and the U.S. National Science Foundation AI Institute for Adult Learning and Online Education (AI-ALOE), the latest version of Jill Watson integrates OpenAI’s ChatGPT and is outperforming OpenAI’s own assistant in real-world educational settings. Jill Watson not only answers student questions with high accuracy; it also improves teaching presence and correlates with better academic performance. Researchers believe this is the first documented instance of a chatbot enhancing teaching presence in online learning for adult students.

OPINION: AI can be a great equalizer, but it remains out of reach for millions of Americans; we cannot let that continue - Erin Mote, Hechinger Report

This digital divide is a persistent crisis that deepens societal inequities, and we must rally around one of the most effective tools we have to combat it: the Universal Service Fund. The USF is a long-standing national commitment built on a foundation of bipartisan support and born from the principle that every American, regardless of their location or income, deserves access to communications services. Without this essential program, over 54 million students, 16,000 healthcare providers and 7.5 million high-need subscribers would lose the internet service that connects classrooms, rural communities (including their hospitals) and libraries.

Tuesday, September 16, 2025

AI for Next Generation Science Education - Xiaoming Zhai, Georgia Tech

September 24, via Zoom. This talk explores the transformative role of artificial intelligence (AI) in advancing next generation science education, particularly through assessment and instructional support aligned with the Next Generation Science Standards (NGSS). Xiaoming Zhai presents how AI technologies—including machine learning, natural language processing, computer vision, and generative AI—can enhance the assessment of complex, three-dimensional student learning outcomes such as modeling, argumentation, and scientific explanation. By leveraging tools like fine-tuned language models and computer vision networks, the talk demonstrates the potential for scalable, accurate, and equitable automatic scoring of student work, both written and drawn.

Tech leadership is business leadership - McKinsey

As the line between technology and business disappears, corporate leaders of enterprise tech, digital, and information face a new mandate: transform innovation into measurable value. The modern tech officer not only has to understand how the landscape is shifting but also must manage initiatives across a broad range of stakeholders by playing the role of orchestrator, builder, protector, or operator. Check out this interview series hosted by Gayatri Shenai and Ann Carver, conveners of McKinsey’s Women in Tech conference, to hear from trailblazing leaders who are not only breaking barriers but also reshaping the tech landscape.

Monday, September 15, 2025

Duke University pilot project examining pros and cons of using artificial intelligence in college - AP

As part of a new pilot with OpenAI, all Duke undergraduate students, as well as staff, faculty and students across the University’s professional schools, gained free, unlimited access to ChatGPT-4o beginning June 2. The University also announced DukeGPT, a University-managed AI interface that connects users to resources for learning and research and ensures “maximum privacy and robust data protection.” Duke launched a new Provost’s Initiative to examine the opportunities and challenges AI brings to student life on May 23. The initiative will foster campus discourse on the use of AI tools and present recommendations in a report by the end of the fall 2025 semester. 

Anthropic Agrees to Pay Authors at Least $1.5 Billion in AI Copyright Settlement - Kate Knibbs, Wired

Anthropic will pay at least $3,000 for each copyrighted work that it pirated. The company downloaded unauthorized copies of books in early efforts to gather training data for its AI tools. This is the first class action settlement centered on AI and copyright in the United States, and the outcome may shape how regulators and creative industries approach the legal debate over generative AI and intellectual property. According to the settlement agreement, the class action will apply to approximately 500,000 works, but that number may go up once the list of pirated materials is finalized. For every additional work, the artificial intelligence company will pay an extra $3,000. Plaintiffs plan to deliver a final list of works to the court by October.
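The settlement figures above are consistent with each other, as a quick back-of-the-envelope check shows (using only the numbers reported in the article):

```python
# Sanity-check the reported settlement arithmetic:
# at least $3,000 per pirated work across roughly 500,000 works.
per_work = 3_000   # minimum payment per work, in dollars
works = 500_000    # approximate size of the class
total = per_work * works
print(f"${total:,}")  # $1,500,000,000 — the reported $1.5 billion floor
```

Each additional work added to the finalized list raises the total by another $3,000, which is why the article frames $1.5 billion as a minimum.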

Sunday, September 14, 2025

Should AI Get Legal Rights? - Kylie Robison, Wired

In the often strange world of AI research, some people are exploring whether the machines should be able to unionize. I’m joking, sort of. In Silicon Valley, there’s a small but growing field called model welfare, which is working to figure out whether AI models are conscious and deserving of moral considerations, such as legal rights. Within the past year, two research organizations studying model welfare have popped up: Conscium and Eleos AI Research. Anthropic also hired its first AI welfare researcher last year. Earlier this month, Anthropic said it gave its Claude chatbot the ability to terminate “persistently harmful or abusive user interactions” that could be “potentially distressing.”

Responsible AI in higher education: Building skills, trust and integrity - Alexander Shevchenko, World Economic Forum

Many institutions are moving from policing AI use to partnering with students. This transition emphasizes trust, transparency and ongoing skill development, mirroring the realities of modern careers where AI is ubiquitous. It also highlights the crucial role of faculty in guiding responsible and meaningful AI use. One practical example of this approach is Grammarly for Education. Seamlessly integrating with learning management systems and writing platforms, it supports students through brainstorming, research, drafting and revision. In doing so, the conversation has matured beyond simply detecting AI use; educators and students are now exploring how AI can deepen learning, sharpen critical thinking and inspire creativity.

Saturday, September 13, 2025

Why liberal arts schools are now hopping on skills-based microcredentials - Alcino Donadel, University Business

New market demands are pushing small, four-year liberal arts colleges to offer microcredentials, indicating growing momentum across sectors of higher education to elevate workforce readiness within their academic offerings. Chief learning officers at community colleges are leading the charge in expanding non-degree offerings, reporting the highest levels of institutional investment in this area. Meanwhile, large research universities—like the University of Colorado Boulder and the University of Tennessee at Knoxville—are catching up. However, strict faculty governance and curriculum processes and different accreditation standards have caused some liberal arts schools to lag, says Mike Simmons, an associate executive director at the American Association of Collegiate Registrars and Admissions Officers.


Academics must be open to changing their minds on acceptable AI use - Ava Doherty, Times Higher Education

Honest and open-ended conversations over how AI can be productively used in the learning journey are needed, not ChatGPT bans, says Ava Doherty. Students today face a striking paradox: they are among the most technologically literate generations in history, yet they are deeply anxious about their career prospects in an artificial intelligence-driven future. Since the launch of ChatGPT, the rapid advance of artificial intelligence (AI) has fundamentally reshaped the graduate job market. This shift presents unique challenges and opportunities for students, universities and the broader higher education sector.