Choosing the right large language model can feel overwhelming with so many options out there, especially if you’re not exactly living and breathing AI. But as we’ve worked through each one, we’ve gotten a real sense of what they’re good at (and where they fall short). So, let’s talk about what to use, when.
Saturday, April 05, 2025
Gemini 2.5: Our most intelligent AI model - Koray Kavukcuoglu, Google
Gemini 2.5 is a thinking model, designed to tackle increasingly complex problems. Our first 2.5 model, Gemini 2.5 Pro Experimental, leads common benchmarks by meaningful margins and showcases strong reasoning and code capabilities. Gemini 2.5 models can reason through their thoughts before responding, resulting in enhanced performance and improved accuracy. In the field of AI, a system’s capacity for “reasoning” refers to more than just classification and prediction. It refers to its ability to analyze information, draw logical conclusions, incorporate context and nuance, and make informed decisions. For a long time, we’ve explored ways of making AI smarter and more capable of reasoning through techniques like reinforcement learning and chain-of-thought prompting. Building on this, we recently introduced our first thinking model, Gemini 2.0 Flash Thinking.
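As a concrete illustration of the chain-of-thought prompting mentioned above, here is a minimal sketch that asks a Gemini model to lay out its intermediate steps before answering. It assumes the Google Gen AI Python SDK (google-genai); the API key handling and the experimental model id are placeholders, and it says nothing about how the 2.5 models were actually trained.

```python
# Minimal chain-of-thought-style prompt sent to a Gemini model.
# Assumes the google-genai SDK (pip install google-genai) and a valid API key;
# the model id below is an assumed placeholder.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

question = "A train leaves at 9:40 and arrives at 13:05. How long is the trip?"

# Asking for intermediate steps before the final answer is the core of
# chain-of-thought prompting.
prompt = (
    "Work through the following problem step by step, showing each "
    f"intermediate calculation, then state the final answer.\n\n{question}"
)

response = client.models.generate_content(
    model="gemini-2.5-pro-exp-03-25",  # assumed experimental model id
    contents=prompt,
)
print(response.text)
```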
Friday, April 04, 2025
Amplified Humanity: How AI Can Expand Our Capacity to Do Good and Be Good - Tawnya Means, University of Illinois Assistant Dean for Educational Innovation, via LinkedIn
In an era where headlines about artificial intelligence swing between utopian promises and dystopian warnings, we're missing perhaps the most profound opportunity of all: using AI to help us become better humans. This isn't about outsourcing our humanity. It's about leveraging technology to amplify our uniquely human capacities for care, support, learning, engagement, and love. As AI systems grow more capable, our ability to be deeply, authentically human becomes not just valuable; it becomes imperative.
Superagency: What Could Possibly Go Right with Our AI Future? - Reid Hoffman and Greg Beato, Superagency
Superagency, by Reid Hoffman and Greg Beato, presents an optimistic view of AI's future, focusing on its potential to amplify human capabilities and improve society. Rather than dwelling on dystopian scenarios, the book explores how AI can enhance individual agency, enabling people to achieve more in areas like education, healthcare, and problem-solving. It advocates for an inclusive and adaptive approach to AI, emphasizing its role as a tool for positive change and encouraging readers to actively participate in shaping a future where human ingenuity and AI work in synergy. (summary by Gemini 2.0 Flash)
Thursday, April 03, 2025
New Auburn Engineering research center combines expertise in artificial intelligence, cybersecurity - Joe McAdory, Auburn University
Innovation and Collaboration in Higher Education During Challenging Times - Ray Schroeder, Inside Higher Ed
Wednesday, April 02, 2025
AI's Moore's Law: Measuring AI Ability to Complete Long Tasks
We propose measuring AI performance in terms of the length of tasks AI agents can complete. We show that this metric has been consistently exponentially increasing over the past 6 years, with a doubling time of around 7 months. Extrapolating this trend predicts that, in under five years, we will see AI agents that can independently complete a large fraction of software tasks that currently take humans days or weeks.
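To make that extrapolation concrete, a quick back-of-the-envelope sketch: with a 7-month doubling time, the task length an agent can handle grows by a factor of 2^(months/7). The one-hour starting point below is purely illustrative, not a figure from the paper.

```python
# Back-of-the-envelope extrapolation of agent task length under a fixed
# doubling time. The 7-month doubling time comes from the excerpt above;
# the 1-hour starting task length is an illustrative placeholder.
DOUBLING_TIME_MONTHS = 7
START_TASK_HOURS = 1.0

for years in range(1, 6):
    months = 12 * years
    factor = 2 ** (months / DOUBLING_TIME_MONTHS)
    print(f"after {years} yr: x{factor:5.0f}  ->  ~{START_TASK_HOURS * factor:6.0f} hours")

# After 5 years the multiplier is roughly 2**(60/7), about 380x, which is how
# a one-hour task turns into weeks of human work.
```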
Artificial Intelligence in Medical Education: Transforming Learning and Practice - Aadhitya Sriram et al., Cureus
Artificial intelligence (AI) is reshaping medical education by enhancing learning strategies, improving training efficiency, and offering personalized educational experiences. Traditional teaching methods, such as classroom lectures and clinical apprenticeships, face numerous challenges, including information overload, variability in teaching quality, and difficulties with standardization. AI presents innovative, data-driven, and adaptive solutions to overcome these limitations, making medical training more effective and engaging. This study explores how AI can be applied in medical education, focusing on personalized learning, virtual simulations, assessment methods, and curriculum development.
Tuesday, April 01, 2025
AI Ethics in Higher Education: How Schools Are Proceeding - Adam Stone, EdTech
Higher education is uniquely positioned to deal with AI’s ethical considerations, partly because AI adoption is already prevalent in academia. At Miami University in Ohio, “there are courses about AI, and there are courses that use AI,” says Vice President for IT Services and CIO David Seidl. As AI use widens, colleges and universities need to give students “an ethical foundation, a conceptual foundation to prepare them for the future,” he says. Many schools have the institutional expertise on campus needed to lay that foundation. “We have people who are very thoughtful, who bring subject matter expertise from a lot of lenses, so that you can have well-informed conversations about the ethics of AI,” says Tom Andriola, University of California, Irvine’s vice chancellor for IT and data.
Making AI work for workers - McKinsey Quarterly
Monday, March 31, 2025
Publishers Embrace AI as Research Integrity Tool - Kathryn Palmer, Inside Higher Ed
The academic publishing industry is adopting AI-powered tools to improve the quality of peer-reviewed research and speed up production, the latter goal yielding an “obvious financial benefit” for publishers, one expert said. The $19 billion industry is increasingly turning to artificial intelligence, and since the start of the year Wiley, Elsevier and Springer Nature have all announced the adoption of generative AI–powered tools or guidelines, including those designed to aid scientists in research, writing and peer review.
Supporting the Instructional Design Process: Stress-Testing Assignments with AI - Faculty Focus
One of the challenges of course design is that all our work can seem perfectly clear and effective when we are knee-deep in the design process, but everything somehow falls apart when deployed in the wild. From simple misunderstandings to complex misconceptions, these issues typically don’t reveal themselves until we see actual student work—often when it’s too late to prevent frustration. While there’s no substitute for real-world testing, I began wondering if AI could help with this iterative refinement. I didn’t want AI to refine or tweak my prompts. I wanted to see if I could task AI with modelling hundreds of student responses to my prompts in the hope that this process might yield the kind of insight I was too close to see.
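As a rough sketch of how that kind of stress test could be scripted, the following generates simulated responses from a few assumed student personas using the OpenAI Python SDK. The model id, personas, and assignment text are placeholders, and this is a reconstruction of the general idea, not the author's actual workflow.

```python
# Rough sketch: generate simulated student responses to an assignment prompt so
# an instructor can scan them for recurring misunderstandings. Assumes the
# OpenAI Python SDK (pip install openai) with OPENAI_API_KEY set; the model id,
# personas, and assignment text are all illustrative placeholders.
from openai import OpenAI

client = OpenAI()

ASSIGNMENT = (
    "Compare two primary sources from this unit and explain which one is more "
    "reliable, and why."
)

PERSONAS = [
    "a well-prepared student who read every assigned text",
    "a student who only skimmed the readings the night before",
    "a student who confuses 'reliable' with 'recently published'",
]

def simulated_response(persona: str) -> str:
    """Ask the model to answer the assignment as a particular kind of student."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model id
        messages=[
            {
                "role": "system",
                "content": f"You are {persona}. Answer the assignment exactly as "
                           "that student would, including any plausible misreadings.",
            },
            {"role": "user", "content": ASSIGNMENT},
        ],
    )
    return completion.choices[0].message.content

# A few samples per persona is usually enough to surface patterns worth fixing
# in the assignment wording before real students ever see it.
for persona in PERSONAS:
    for _ in range(3):
        print(f"--- {persona} ---\n{simulated_response(persona)}\n")
```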
Sunday, March 30, 2025
AI that can match humans at any task will be here in five to 10 years, Google DeepMind CEO says - Ryan Browne, CNBC
Google DeepMind CEO Demis Hassabis said he thinks artificial general intelligence, or AGI, will emerge in the next five or 10 years. AGI broadly relates to AI that is as smart or smarter than humans. “We’re not quite there yet. These systems are very impressive at certain things. But there are other things they can’t do yet, and we’ve still got quite a lot of research work to go before that,” Hassabis said. Dario Amodei, CEO of AI startup Anthropic, told CNBC at the World Economic Forum in Davos, Switzerland in January that he sees a form of AI that’s “better than almost all humans at almost all tasks” emerging in the “next two or three years.” Other tech leaders see AGI arriving even sooner. Cisco’s Chief Product Officer Jeetu Patel thinks there’s a chance we could see an example of AGI emerge as soon as this year.
Quantum Supremacy Claimed for Real-World Problem Solving - Berenice Baker, IoT World Today
D-Wave Quantum said that its Advantage2 annealing quantum computer achieved quantum supremacy on a practical, real-world problem. A new peer-reviewed paper published in Science, "Beyond-Classical Computation in Quantum Simulation," reports that D-Wave's system outperformed classical supercomputers in simulating quantum dynamics in programmable spin glasses, complex magnetic-material simulations with significant business and scientific applications. In this context, quantum supremacy refers to a quantum computer performing a computational task that is not feasible for even the most powerful classical supercomputers within a practical timeframe. D-Wave said the magnetic materials simulation problem would take nearly 1 million years and more energy than the world's annual electricity consumption if attempted on a classical GPU-based supercomputer, while Advantage2 completed it in minutes.
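For readers unfamiliar with the term, a spin glass is a collection of binary spins coupled by random interactions, which makes its energy landscape rugged and hard to explore. The toy sketch below shows the classical energy function and a crude simulated-annealing search in plain NumPy; it is purely illustrative and says nothing about D-Wave's hardware or the quantum dynamics studied in the Science paper.

```python
# Toy classical spin glass: random couplings J_ij over n binary spins, with
# energy E(s) = -sum_{i<j} J_ij * s_i * s_j, explored by simple Metropolis
# annealing. Illustrative only; not related to D-Wave's quantum simulation.
import numpy as np

rng = np.random.default_rng(0)

n = 16                                   # tiny toy instance
J = np.triu(rng.normal(size=(n, n)), 1)  # random couplings, each pair counted once
spins = rng.choice([-1, 1], size=n)      # random initial configuration

def energy(s: np.ndarray) -> float:
    """Spin-glass energy E(s) = -sum_{i<j} J_ij * s_i * s_j."""
    return -float(s @ J @ s)

def metropolis_sweep(s: np.ndarray, beta: float) -> None:
    """One pass of single-spin-flip Metropolis updates at inverse temperature beta."""
    for i in range(len(s)):
        before = energy(s)
        s[i] *= -1                       # propose flipping spin i
        delta = energy(s) - before
        if delta > 0 and rng.random() >= np.exp(-beta * delta):
            s[i] *= -1                   # reject the uphill move

# Crude annealing schedule: cool the system and watch the energy settle.
for beta in np.linspace(0.1, 3.0, 30):
    metropolis_sweep(spins, beta)

print("final energy:", energy(spins))
```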