In an era where headlines about artificial intelligence swing between utopian promises and dystopian warnings, we're missing perhaps the most profound opportunity of all: using AI to help us become better humans. This isn't about outsourcing our humanity. It's about leveraging technology to amplify our uniquely human capacities for care, support, learning, engagement, and love. As AI systems grow more capable, our ability to be deeply, authentically human becomes not just valuable; it becomes imperative.
Friday, April 04, 2025
"Superagency: What Could Possibly Go Right with Our AI Future" - Reid Hoffman and Greg Beato
Superagency, by Reid Hoffman and Greg Beato, presents an optimistic view of AI's future, focusing on its potential to amplify human capabilities and improve society. Rather than dwelling on dystopian scenarios, the book explores how AI can enhance individual agency, enabling people to achieve more in areas like education, healthcare, and problem-solving. It advocates for an inclusive and adaptive approach to AI, emphasizing its role as a tool for positive change and encouraging readers to actively participate in shaping a future where human ingenuity and AI work in synergy. (summary by Gemini 2.0 Flash)
Thursday, April 03, 2025
New Auburn Engineering research center combines expertise in artificial intelligence, cybersecurity - Joe McAdory, Auburn University
Innovation and Collaboration in Higher Education During Challenging Times - Ray Schroeder, Inside Higher Ed
Wednesday, April 02, 2025
AI's Moore's Law: Measuring AI Ability to Complete Long Tasks
We propose measuring AI performance in terms of the length of tasks AI agents can complete. We show that this metric has been consistently exponentially increasing over the past 6 years, with a doubling time of around 7 months. Extrapolating this trend predicts that, in under five years, we will see AI agents that can independently complete a large fraction of software tasks that currently take humans days or weeks.
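The extrapolation above is simple compound growth. As an illustration only (the 1-hour starting horizon below is a hypothetical baseline, not a figure from the paper), a quick sketch of how a 7-month doubling time compounds over five years:

```python
# Illustrative sketch: exponential growth in the length of tasks AI agents
# can complete, assuming a clean 7-month doubling time as described above.
# The 1-hour starting horizon is a made-up baseline for demonstration.

def task_horizon(months: float, start_hours: float = 1.0,
                 doubling_months: float = 7.0) -> float:
    """Task length (in hours) after `months`, given exponential growth."""
    return start_hours * 2 ** (months / doubling_months)

# Over 5 years (60 months), the horizon multiplies by 2**(60/7), i.e.
# roughly 380x, turning an hour-long task horizon into weeks of human work.
growth = task_horizon(60) / task_horizon(0)
print(f"Growth over 5 years: {growth:.0f}x")
```

The point of the sketch is that even a modest-sounding doubling time compounds quickly: each additional 7 months doubles the horizon again, which is why the authors' five-year window covers the jump from hours to weeks.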
Artificial Intelligence in Medical Education: Transforming Learning and Practice - Aadhitya Sriram, et al; Cureus
Artificial intelligence (AI) is reshaping medical education by enhancing learning strategies, improving training efficiency, and offering personalized educational experiences. Traditional teaching methods, such as classroom lectures and clinical apprenticeships, face numerous challenges, including information overload, variability in teaching quality, and difficulties with standardization. AI presents innovative, data-driven, and adaptive solutions to overcome these limitations, making medical training more effective and engaging. This study explores how AI can be applied in medical education, focusing on personalized learning, virtual simulations, assessment methods, and curriculum development.
Tuesday, April 01, 2025
AI Ethics in Higher Education: How Schools Are Proceeding - Adam Stone, EdTech
Higher education is uniquely positioned to deal with AI’s ethical considerations, partly because AI adoption is already prevalent in academia. At Miami University in Ohio, “there are courses about AI, and there are courses that use AI,” says Vice President for IT Services and CIO David Seidl. As AI use widens, colleges and universities need to give students “an ethical foundation, a conceptual foundation to prepare them for the future,” he says. Many schools have the institutional expertise on campus needed to lay that foundation. “We have people who are very thoughtful, who bring subject matter expertise from a lot of lenses, so that you can have well-informed conversations about the ethics of AI,” says Tom Andriola, University of California, Irvine’s vice chancellor for IT and data.
Making AI work for workers - McKinsey Quarterly
Monday, March 31, 2025
Publishers Embrace AI as Research Integrity Tool - Kathryn Palmer, Inside Higher Ed
The $19 billion academic publishing industry is increasingly adopting AI-powered tools to improve the quality of peer-reviewed research and speed up production. The latter goal yields "obvious financial benefit" for publishers, one expert said; advocates argue the tools also enhance research quality. Since the start of the year, Wiley, Elsevier and Springer Nature have all announced the adoption of generative AI–powered tools or guidelines, including those designed to aid scientists in research, writing and peer review.
Supporting the Instructional Design Process: Stress-Testing Assignments with AI - Faculty Focus
One of the challenges of course design is that all our work can seem perfectly clear and effective when we are knee-deep in the design process, but everything somehow falls apart when deployed in the wild. From simple misunderstandings to complex misconceptions, these issues typically don’t reveal themselves until we see actual student work—often when it’s too late to prevent frustration. While there’s no substitute for real-world testing, I began wondering if AI could help with this iterative refinement. I didn’t want AI to refine or tweak my prompts. I wanted to see if I could task AI with modelling hundreds of student responses to my prompts in the hope that this process might yield the kind of insight I was too close to see.
Sunday, March 30, 2025
AI that can match humans at any task will be here in five to 10 years, Google DeepMind CEO says - Ryan Browne, CNBC
Google DeepMind CEO Demis Hassabis said he thinks artificial general intelligence, or AGI, will emerge in the next five to 10 years. AGI broadly refers to AI that is as smart as or smarter than humans. "We’re not quite there yet. These systems are very impressive at certain things. But there are other things they can’t do yet, and we’ve still got quite a lot of research work to go before that," Hassabis said. Dario Amodei, CEO of AI startup Anthropic, told CNBC at the World Economic Forum in Davos, Switzerland in January that he sees a form of AI that’s "better than almost all humans at almost all tasks" emerging in the "next two or three years." Other tech leaders see AGI arriving even sooner. Cisco’s Chief Product Officer Jeetu Patel thinks there’s a chance we could see an example of AGI emerge as soon as this year.
Quantum Supremacy Claimed for Real-World Problem Solving - Berenice Baker, IOT World Today
D-Wave Quantum said that its Advantage2 annealing quantum computer achieved quantum supremacy on a practical, real-world problem. A new peer-reviewed paper published in Science, "Beyond-Classical Computation in Quantum Simulation," said D-Wave's system outperformed classical supercomputers in simulating quantum dynamics in programmable spin glasses: complex magnetic-material simulations with significant business and scientific applications. In this context, quantum supremacy refers to a quantum computer performing a computational task that is not feasible for even the most powerful classical supercomputers within a practical timeframe. D-Wave said the magnetic materials simulation problem would take nearly 1 million years and more energy than the world's annual electricity consumption if attempted on a classical GPU-based supercomputer, whereas Advantage2 completed it in minutes.
Saturday, March 29, 2025
Beyond big models: Why AI needs more than just scale to reach AGI - Sascha Brodsky, IBM
While today’s AI models can generate fluent text, recognize images and even perform complex problem-solving tasks, they still fall short of human intelligence in key ways. Most surveyed AI researchers believe that deep learning alone isn’t enough to reach AGI. Instead, they argue that AI must integrate structured reasoning and a deeper understanding of cause and effect. IBM Fellow Francesca Rossi, past president of the Association for the Advancement of Artificial Intelligence, which published the survey, is among the experts who question whether bigger models will ever be enough. “We’ve made huge advances, but AI still struggles with fundamental reasoning tasks,” Rossi tells IBM Think. “To get anywhere near AGI, models need to truly understand, not just predict.”
OpenAI and Google ask the government to let them train AI on content they don’t own - Emma Roth, the Verge
OpenAI and Google are pushing the US government to allow their AI models to train on copyrighted material. Both companies outlined their stances in proposals published this week, with OpenAI arguing that applying fair use protections to AI “is a matter of national security.” The proposals come in response to a request from the White House, which asked governments, industry groups, private sector organizations, and others for input on President Donald Trump’s “AI Action Plan.” The initiative is supposed to “enhance America’s position as an AI powerhouse,” while preventing “burdensome requirements” from impacting innovation.