Wednesday, June 18, 2025

1 big thing: The scariest AI reality - Mike Allen, Axios AM

The wildest, scariest, indisputable truth about AI's large language models is that the companies building them don't know exactly why or how they work, Jim VandeHei and Mike Allen write in a "Behind the Curtain" column.

  • Sit with that for a moment. The most powerful companies, racing to build the most powerful superhuman intelligence capabilities — ones they readily admit occasionally go rogue to make things up, or even threaten their users — don't know why their machines do what they do.

Why it matters: With the companies pouring hundreds of billions of dollars into willing superhuman intelligence into existence as fast as possible, and Washington doing nothing to slow or police them, it seems worth dissecting this Great Unknown.

  • None of the AI companies dispute this. They marvel at the mystery — and muse about it publicly. They're working feverishly to better understand it. They argue you don't need to fully understand a technology to tame or trust it.

The Rising Voices Podcast | Navigating Career Transitions - EDUCAUSE

This EDUCAUSE Rising Voices podcast episode features co-hosts Wes Johnson and Sarah Buska, along with guests Jay James and Mike Rkiki, discussing career transitions in higher education. Jay and Mike share their experiences with significant career changes, highlighting that these transitions take time and are not always fully controllable [04:10, 08:52]. The podcast also explores deciding whether to advance within an institution or seek opportunities elsewhere [12:07, 17:18]. The guests emphasize the importance of community, mentorship, and self-compassion in navigating feelings of discomfort and imposter syndrome in new roles [20:01, 29:28]. The episode concludes with advice on building relationships and understanding organizational dynamics to succeed in a new role, while also considering one's career in a broader perspective [33:06, 37:52]. {summary by Gemini Flash}

Tuesday, June 17, 2025

AI’s big interoperability moment: Why A2A and MCP are key for agent collaboration - Tomas Talius, Venture Beat

As agents become more capable and specialized, enterprises are discovering that coordination is the next big challenge. Two open protocols — Agent-to-Agent (A2A) and Model Context Protocol (MCP) — are emerging to meet that need. They simplify how agents share tasks, exchange information, and access enterprise context, even when they were built using different models or tools. These protocols are more than technical conveniences. They are foundational to scaling intelligent software across real-world workflows. AI systems are moving beyond general-purpose copilots. In practice, most enterprises are designing agents to specialize: managing inventory, handling returns, optimizing routes, or processing approvals. Value comes not only from their intelligence, but from how these agents work together.
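For a concrete feel for what these protocols standardize, the sketch below shows the rough shape of an MCP-style tool exchange at the JSON-RPC 2.0 level. It is a minimal illustration only: the tool name and its arguments are hypothetical, and the method names follow the published MCP specification as commonly described, so treat the details as assumptions and consult the spec for the authoritative formats.

```python
import json

# Illustrative only: the JSON-RPC 2.0 shape of an MCP-style tool exchange.
# The tool name ("inventory.lookup") and its arguments are hypothetical;
# see the Model Context Protocol specification for the authoritative formats.

def make_tool_list(request_id: int) -> str:
    """Build a JSON-RPC request asking an MCP server which tools it exposes."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/list",
    })

def make_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Build a JSON-RPC request asking an MCP server to invoke one named tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

if __name__ == "__main__":
    # An agent first discovers available tools, then calls one with structured arguments.
    print(make_tool_list(1))
    print(make_tool_call(2, "inventory.lookup", {"sku": "ABC-123", "warehouse": "east"}))
```

A2A plays the complementary role one level up: rather than an agent calling a tool, it standardizes how one agent hands a task to another. The appeal of both is the same: a shared wire format lets agents built on different models or toolchains interoperate.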


Mark Cuban and Anthropic’s CEO Are Arguing About How Many Jobs AI Will Replace - Jessica Stillman, Inc.

Will AI add 15 percent to the GDP or one percent? Is AI about to become smarter than humans or is it doomed to unreliable hallucinations for a long time yet? Is humanity about to enter an era of massive abundance or terrible loss? You can find a well-regarded expert arguing every one of these positions. Or you can look at the example of a recent back-and-forth between billionaire entrepreneur Mark Cuban and Dario Amodei, CEO of leading AI company Anthropic, about whether AI will soon be coming for your white-collar job. It’s clear from the Cuban and Amodei debate that AI disruption is in progress. It’s also clear absolutely no one is sure how it will play out. When things are this uncertain, the best way to prepare is to stay curious and keep learning.  


Monday, June 16, 2025

The tiny fish brain that could teach AI to think - Sascha Brodsky, IBM

In one of these suites, located in Ashburn, Virginia, a screen glows with a volumetric rendering of a larval zebrafish brain. Each neuron pulses as a pinpoint of light—a galaxy in miniature—captured mid-firing in high-resolution 3D. To the untrained eye, it looks like a murmuration of fireflies in a crystal dome. But for Jan-Matthis Lueckmann, a research scientist at Google Research, and his collaborators, it is a working map of cognition, encoded in flickers. It was a puzzle. And solving it could reshape our understanding of both the brain and artificial intelligence.

World's First SELF IMPROVING CODING AI AGENT | Darwin Godel Machine - Wes Roth, YouTube

The video discusses the "Darwin Gödel Machine" (DGM) from Sakana AI, a system for the open-ended evolution of self-improving AI agents, focused on coding [01:04]. The DGM uses an evolutionary process where "parent" agents create "offspring" agents, improving task performance [01:14, 01:26]. It aims to overcome human-design limitations by allowing autonomous and continuous self-improvement, working with "frozen" foundation models [03:57, 06:26]. Tested on coding benchmarks, the DGM showed significant performance increases, outperforming human-designed agents [09:32, 13:39]. It improved tools and workflows, with transferable improvements across models and languages [15:59, 16:50]. The video also addresses safety concerns, including vulnerabilities and the potential for "objective hacking" [17:05, 19:25].
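The core mechanism described is an archive-based evolutionary loop: sample a parent agent from an ever-growing archive, have a frozen foundation model propose a modified offspring, score it on a coding benchmark, and keep it if it improves. The toy sketch below illustrates only that loop; the mutate and evaluate functions are placeholders, not Sakana AI's implementation.

```python
import random

# Toy sketch of an archive-style evolutionary loop like the one described.
# `mutate` and `evaluate` are stand-ins: the real DGM asks a frozen foundation
# model to rewrite an agent's own code and scores it on coding benchmarks.

def mutate(agent: dict) -> dict:
    """Placeholder 'offspring' generator: nudges a made-up parameter."""
    child = dict(agent)
    child["skill"] = agent["skill"] + random.uniform(-0.05, 0.15)
    return child

def evaluate(agent: dict) -> float:
    """Placeholder benchmark score clamped to [0, 1]."""
    return max(0.0, min(1.0, agent["skill"]))

archive = [{"skill": 0.2}]           # start from a single hand-written agent
for generation in range(50):
    parent = random.choice(archive)  # any archived agent can be a parent, preserving diversity
    child = mutate(parent)
    if evaluate(child) > evaluate(parent):
        archive.append(child)        # retain improvements; the archive only grows

best = max(archive, key=evaluate)
print(f"archive size={len(archive)}, best score={evaluate(best):.2f}")
```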


Sunday, June 15, 2025

Why This IBM Exec Says AI Adoption Should Be Led by HR - Kayla Webster, Inc.

HR is the natural choice to lead company-wide adoption of AI, according to Nickle LaMoreaux, senior vice president and chief human resources officer at IBM, who took to LinkedIn to make her case. She sat down Monday with LinkedIn chief people officer Teuila Hanson in the social-media platform’s latest episode of Conversations with CHROs, and Inc. got an exclusive first look. The two discussed issues that are keeping HR up at night. LaMoreaux said she believes HR should take the reins on AI adoption because the department is an expert on both skills and culture change. “AI is about the technology, but it is about a lot more than that. It is about willingness to change how you lead people through the different roles of managers and leaders,” LaMoreaux said. Although many companies choose to give this responsibility to leaders who deal with new technologies—chief product officers, heads of engineering, line-of-business owners, etc.—LaMoreaux says these professionals are good at adopting tech to complete job-related tasks, but they lack the skills to ensure company-wide adoption.

https://www.inc.com/kaylawebster/why-this-ibm-exec-says-ai-adoption-should-be-led-by-hr/91196316

Google Research Slashes Estimated Resources to Break RSA Encryption - Berenice Baker, IOT World Today

Study reveals quantum computers could crack RSA with 95% fewer qubits, accelerating industry's race to adopt quantum-safe security. According to research by Google quantum research scientist Craig Gidney and senior staff cryptography engineer Sophie Schmieg, 2048-bit RSA encryption—a cornerstone of modern digital security—could theoretically be broken by a quantum computer. "Yesterday, we published a preprint demonstrating that 2048-bit RSA encryption could theoretically be broken by a quantum computer with 1 million noisy qubits running for one week. This is a 20-fold decrease in the number of qubits from our previous estimate, published in 2019," the researchers said in a blog post.

Saturday, June 14, 2025

When your LLM calls the cops: Claude 4’s whistle-blow and the new agentic AI risk stack - Matt Marshall, Venture Beat

The recent uproar surrounding Anthropic’s Claude 4 Opus model – specifically, its tested ability to proactively notify authorities and the media if it suspected nefarious user activity – is sending a cautionary ripple through the enterprise AI landscape. While Anthropic clarified this behavior emerged under specific test conditions, the incident has raised questions for technical decision-makers about the control, transparency, and inherent risks of integrating powerful third-party AI models. The core issue, as independent AI agent developer Sam Witteveen and I highlighted during our recent deep dive videocast on the topic, goes beyond a single model’s potential to rat out a user. It’s a strong reminder that as AI models become more capable and agentic, the focus for AI builders must shift from model performance metrics to a deeper understanding of the entire AI ecosystem, including governance, tool access, and the fine print of vendor alignment strategies.


AI revolt: New ChatGPT model refuses to shut down when instructed - Anthony Cuthbertson, the Independent

OpenAI’s latest ChatGPT model ignores basic instructions to turn itself off, and even sabotages a shutdown mechanism in order to keep itself running, artificial intelligence researchers have warned. AI safety firm Palisade Research discovered the potentially dangerous tendency for self-preservation in a series of experiments on OpenAI’s new o3 model. The tests involved presenting AI models with math problems, with a shutdown instruction appearing after the third problem. By rewriting the shutdown script, the o3 model was able to prevent itself from being switched off.


Friday, June 13, 2025

Pros and cons of educational AI - Ameera Fouad, Al-Ahram

Artificial intelligence (AI) has certainly transformed the way we see life. It can apparently do almost anything in a way impossible to believe when it was introduced nearly a decade ago. The way AI has become integrated into the education system cannot be disregarded as it has become a fact that everyone must relate to. AI has affected the education systems at all grades and levels. Nowadays, you can easily see a college student writing an essay using an AI-generated outline. Equally, you can see a fourth-grade student asking AI to simplify a difficult mathematical equation. Despite the tremendous leap that has taken place to help educators and students in Egypt use AI responsibly, there are still tremendous problems in using it.

AI Mythbusters: INBOUND Experts Set the Record Straight - Inbound

AI is everywhere. But that doesn’t mean we all agree on what it’s doing or what it’s actually good at. From sales teams to support communities, inboxes to dashboards, there’s a lot of confusion around how to use AI well. So we asked some of the top minds speaking at INBOUND to debunk the biggest myths they’re seeing on the AI frontlines and explain what the real opportunity looks like. The myths surrounding AI aren’t harmless misconceptions; they’re potentially costly blind spots for your business. But success doesn’t come from jumping on the latest tool. It comes from understanding where AI truly fits in your workflows and how to use it with intention.


Thursday, June 12, 2025

Opinion: Colleges Must Establish Their Purpose in the AI Era - Bloomberg Opinion

Welcome to academia in the age of artificial intelligence. As several recent reports have shown, outsourcing one’s homework to AI has become routine. Perversely, students who still put in the hard work often look worse by comparison with their peers who don’t. Professors find it nearly impossible to distinguish computer-generated copy from the real thing — and, even weirder, have started using AI themselves to evaluate their students’ work. It’s an untenable situation: computers grading papers written by computers, students and professors idly observing, and parents paying tens of thousands of dollars a year for the privilege. At a time when academia is under assault from many angles, this looks like a crisis in the making.

For CEOs, AI tech literacy is no longer optional: Bridging the gap between AI hype and business value starts at the top.- Faisal Hoque, Fast Company

Artificial intelligence has been the subject of unprecedented levels of investment and enthusiasm over the past three years, driven by a tide of hype that promises revolutionary transformation across every business function. Yet the gap between this technology’s promise and the delivery of real business value remains stubbornly wide. A recent study by BCG found that while 98% of companies are exploring AI, only 26% have developed working products and a mere 4% have achieved significant returns on their investments. This striking implementation gap raises a critical question: Why do so many AI initiatives fail to deliver meaningful value? A big part of the answer lies in a fundamental disconnect at the leadership level: to put it bluntly, many senior executives just don’t understand how AI works.


Wednesday, June 11, 2025

Our New Co-Workers in Higher Ed - Ray Schroeder, Inside Higher Ed

I was reading a Substack posting from Jurgen Gravestein, conversational AI consultant at the Conversation Design Institute in the Netherlands. Gravestein is author of the newsletter Teaching Computers How to Talk. His writings prompted me to go to the source itself! I set up a conversation between Anthropic Claude 4 and a GPT that I trained, ChatGPT Ray’s EduAI Advisor. The result was a fascinating insight into perspectives from the two apps engaging one another in what truly appears to be a conversation about their “thoughts” on engaging with humans. I have stored the complete transcript. I encourage you to check it out in its entirety. However, let’s examine a few of the more insightful highlights here.

The ‘3-word rule’ that makes ChatGPT give expert-level responses - Amanda Caswell, Tom's Guide

The concept is simple: Add a short, three-word directive to your prompt that tells ChatGPT how to respond. These three words can instantly shape the tone, voice and depth of the output. You’re not rewriting your whole question. You’re just giving the AI a lens through which to answer.

Here are a few examples that work surprisingly well:

“Like a lawyer” — for structured, detailed and logical responses

“Be a teacher” — for simplified, clear and educational explanations

“Act like Hemingway” — for punchy, minimalist writing with impact

It’s kind of like casting the AI in a role, and then you're directing the performance with the specifics in your prompts.
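For anyone driving a model through an API instead of the chat window, the same trick is just a short directive prepended as a system message. Below is a minimal sketch assuming the OpenAI Python SDK (v1+) and a placeholder model name; substitute whatever model and client you actually use.

```python
# Minimal sketch of the "three-word directive" idea via the API.
# Assumes the OpenAI Python SDK (v1+) and an API key in OPENAI_API_KEY;
# the model name below is a placeholder, not a recommendation.
from openai import OpenAI

client = OpenAI()

def ask(directive: str, question: str, model: str = "gpt-4o-mini") -> str:
    """Prepend a short role directive ('Like a lawyer', 'Be a teacher', ...)."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": directive},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Act like Hemingway", "Explain what a 401(k) is."))
```

The directive works for the same reason it does in the web app: it gives the model a lens through which to answer, without rewriting the question itself.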

Tuesday, June 10, 2025

What’s next in computing is generative and quantum - IBM

For AI, this means the debut of generative computing — a new way to interface with large language models. Generative computing will center the LLM as a compute element with a runtime built around it. For IBM’s clients, this development will make building AI agents and applications more secure, portable, maintainable, and efficient, said IBM Research VP of AI Sriram Raghavan. “It isn’t every day that a new computing element shows up in our industry,” he said. “Generative computing is a way to move away from prompting to real programming.” And for quantum computing, the next two years will bring quantum advantage, meaning that IBM’s quantum computers will be able to perform calculations of practical, commercial, or scientific importance more cost-effectively, faster, or with greater accuracy than a classical computer alone could achieve.
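IBM does not spell out the runtime here, but the underlying idea, treating an LLM invocation as a programmable compute element with an explicit contract rather than a free-form prompt, can be sketched generically. The code below is a hypothetical illustration of that pattern only, not IBM's API: the class names, the validation contract, and the stub model are all assumptions.

```python
# Generic sketch of "LLM as a compute element with a runtime around it".
# Hypothetical names throughout; this is not IBM's generative-computing runtime.
from dataclasses import dataclass
from typing import Callable

@dataclass
class GenerativeCall:
    """A model invocation with an explicit contract the runtime can enforce."""
    instruction: str                  # what the model element must do
    validate: Callable[[str], bool]   # structural check on the output
    max_retries: int = 3

def run(call: GenerativeCall, model: Callable[[str], str], payload: str) -> str:
    """Execute the call, re-invoking the model until the output passes validation."""
    for _ in range(call.max_retries):
        output = model(f"{call.instruction}\n\nInput:\n{payload}")
        if call.validate(output):
            return output
    raise RuntimeError("model output never satisfied the contract")

if __name__ == "__main__":
    # A stub "model" stands in for a real LLM endpoint.
    stub = lambda prompt: '{"sentiment": "positive"}'
    call = GenerativeCall(
        instruction="Classify the sentiment of the input as JSON with a 'sentiment' key.",
        validate=lambda s: s.strip().startswith("{") and "sentiment" in s,
    )
    print(run(call, stub, "The keynote was fantastic."))
```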

AI Researcher SHOCKING "Singularity in 2025 Prediction" - Wes Roth, YouTube

This podcast episode discusses Dr. Alan D. Thompson's prediction that the singularity could occur sometime in mid-2025, suggesting we might already be in its early stages due to AI advancements. Dr. Thompson believes we are 94% of the way to Artificial General Intelligence (AGI) and approaching Artificial Super Intelligence (ASI), a point echoed by Aravind Srinivas of Perplexity. The host highlights Microsoft's AI-driven discovery of a novel non-PFAS coolant as an example of advancements toward these markers, drawing parallels to predictions in Max Tegmark's book about a rapid acceleration in technological breakthroughs driven by ASI. The discussion also covers Google's AlphaEvolve, an AI system that has significantly improved Google's computational efficiency and has broad applications, as well as other notable Alpha AI systems, suggesting a rapid pace of AI development.


Monday, June 09, 2025

1 in 4 employers say they’ll eliminate degree requirements by year’s end - Carolyn Crist, Higher Ed Dive

A quarter of employers surveyed said they will remove bachelor’s degree requirements for some roles by the end of 2025, according to a May 20 report from Resume Templates. In addition, 7 in 10 hiring managers said their company looks at relevant experience over a bachelor’s degree while making hiring decisions. “Over the last five years, we’ve seen large organizations drop degree requirements in favor of certifications or experience, and now others are following suit,” said Julia Toothacre, chief career strategist for Resume Templates. “For employers, it expands the talent pool and generates positive PR. For candidates, it opens doors for those who can’t afford a degree or choose a different path. These jobs have the potential to lift people out of poverty.”


What College Graduates Need Most in the Age of AI - Michael Serazio, Time

Intellectual humility demands that education hedge both “with” and “against” AI, because we can’t know which technologies will triumph and which will collect dust. Some become Facebook; others, the Metaverse. While colleges sort out ChatGPT’s precise place in matters curricular, we can double down on delivering what Generation AI equally needs: the experience of humanity, a quality the machines can never know and must never supplant. This includes the experiential learning that accompanies volunteer service, immersing students, three-dimensionally, in the lives and worlds of society’s marginalized.


Sunday, June 08, 2025

The analysis of generative artificial intelligence technology for innovative thinking and strategies in animation teaching - Xu Yao, Yaozhang Zhong & Weiran Cao, Nature

This work examines the application of Generative Artificial Intelligence (GAI) technology in animation teaching, focusing on its role in enhancing teaching quality and learning efficiency through innovative instructional strategies. A mixed-methods research approach is adopted, integrating quantitative analysis (experimental data and questionnaire surveys) and qualitative analysis (behavioral observations) to systematically assess the educational effectiveness of GAI technology. Beyond offering personalized learning solutions, GAI technology plays a crucial role in cultivating students’ creativity, critical thinking, and autonomous learning abilities. This work provides theoretical support and practical guidance for the digital transformation of animation teaching while underscoring the broader applicability of GAI technology in the education sector, offering new directions for the future development of intelligent education.


OpenAI upgrades the AI model powering its Operator agent - Kyle Wiggers, Tech Crunch

OpenAI is updating the AI model powering Operator, its AI agent that can autonomously browse the web and use certain software within a cloud-hosted virtual machine to fulfill users’ requests. Soon, Operator will use a model based on o3, one of the latest in OpenAI’s o series of “reasoning” models. Previously, Operator relied on a custom version of GPT-4o. By many benchmarks, o3 is a far more advanced model, particularly on tasks involving math and reasoning.

https://techcrunch.com/2025/05/23/openai-upgrades-the-ai-model-powering-its-operator-agent/

Saturday, June 07, 2025

Behind the Curtain: A white-collar bloodbath - Jim VandeHei and Mike Allen, Axios

Dario Amodei — CEO of Anthropic, one of the world's most powerful creators of artificial intelligence — has a blunt, scary warning for the U.S. government and all of us: AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years, Amodei told us in an interview from his San Francisco office.

Amodei said AI companies and government need to stop "sugar-coating" what's coming: the possible mass elimination of jobs across technology, finance, law, consulting and other white-collar professions, especially entry-level gigs.

How Science Can Fix Its Trust Problem - Cory Miller & Michael L. Platt, Knowledge at Wharton

Scientists today seem out of touch with reality. In the past, when a new administration proposed deep cuts to federal research, scientists reflexively girded for battle using a tried-and-true playbook. We circulated petitions, attended protests, fired off angry emails, lauded our accomplishments, and hoped the storm would pass, all while patting ourselves on the back. But these days, the rising tide of anti-science sentiment is not receding. The same public that once rose to support us is not showing up. Americans’ confidence in science has slipped to its lowest point in almost half a century. Only a third of Americans today think highly of universities — a number that has dropped by half in only a decade. The world changed, and scientists stubbornly did not.


Friday, June 06, 2025

AI is ‘breaking’ entry-level jobs that Gen Z workers need to launch careers, LinkedIn exec warns - Jason Ma, Fortune

“Now it is our office workers who are staring down the same kind of technological and economic disruption,” LinkedIn executive Aneesh Raman wrote in a recent New York Times op-ed. “Breaking first is the bottom rung of the career ladder.” For example, AI tools are doing the types of simple coding and debugging tasks that junior software developers did to gain experience. AI is also doing work that young employees in the legal and retail sectors once did. And Wall Street firms are reportedly considering steep cuts to entry-level hiring. Meanwhile, the unemployment rate for college graduates has been rising faster than for other workers in the past few years, Raman pointed out, though there isn’t definitive evidence yet that AI is the cause of the weak job market.


The people who think AI might become conscious - Pallab Ghosh, BBC

The "Dreamachine", at Sussex University's Centre for Consciousness Science, is just one of many new research projects across the world investigating human consciousness: the part of our minds that enables us to be self-aware, to think and feel and make independent decisions about the world. By learning the nature of consciousness, researchers hope to better understand what's happening within the silicon brains of artificial intelligence. Some believe that AI systems will soon become independently conscious, if they haven't already. But what really is consciousness, and how close is AI to gaining it? And could the belief that AI might be conscious itself fundamentally change humans in the next few decades?


Thursday, June 05, 2025

Sorry, Google and OpenAI: The future of AI hardware remains murky - Harry McCracken, Fast Company

2026 may still be more than seven months away, but it’s already shaping up as the year of consumer AI hardware. Or at least the year of a flurry of high-stakes attempts to put generative AI at the heart of new kinds of devices—several of which were in the news this week. Let’s review. On Tuesday, at its I/O developer conference keynote, Google demonstrated smart glasses powered by its Android XR platform and announced that eyewear makers Warby Parker and Gentle Monster would be selling products based on it. The next day, OpenAI unveiled its $6.5 billion acquisition of Jony Ive’s startup IO, which will put the Apple design legend at the center of the ChatGPT maker’s quest to build devices around its AI. And on Thursday, Bloomberg’s Mark Gurman reported that Apple hopes to release its own Siri-enhanced smart glasses. In theory, all these players may have products on the market by the end of next year. What I didn’t get from these developments was any new degree of confidence that anyone has figured out how to produce AI gadgets that vast numbers of real people will find indispensable. When and how that could happen remains murky—in certain respects, more than ever.

I tested Gemini 2.5 Pro vs Claude 4 Sonnet with the same 7 prompts — here’s who came out on top - Amanda Caswell, Tom's Guide

When it comes to chatbot showdowns, I’ve run my fair share of head-to-heads. This latest contest comes just hours after Claude 4 Sonnet was unveiled and I couldn’t wait to see how it compared to Gemini 2.5 Pro, also new with updated features. Instead of just testing Gemini and Claude on typical productivity tasks, I wanted to see how these two AI titans handle nuance: creativity under pressure, ethical dilemmas, humor, ambiguity and deep technical reasoning. I gave Google Gemini 2.5 Pro and Claude 4 Sonnet the same seven prompts — each designed to test a different strength, from emotional intelligence to code generation. While they both impressed me and this test taught me more about how they think, there was one clear winner.

https://www.tomsguide.com/ai/i-tested-gemini-2-5-pro-vs-claude-4-sonnet-with-the-same-7-prompts-heres-who-came-out-on-top

Wednesday, June 04, 2025

Agentic AI Is Already Changing the Workforce - Jen Stave, Ryan Kurt and John Winsor, Harvard Business Review

AI agents are fast becoming much more than just sidekicks for human workers. They’re becoming digital teammates—an emerging category of talent. To get the most out of these new teammates, leaders in HR and procurement will need to start developing an operational playbook for integrating them into hybrid teams and a workforce strategy. That strategy will require that companies either develop a talent-acquisition function of their own that allows them to integrate AI agents into their workforce, or partner with firms that now offer both human and AI staffing solutions. To succeed in this new environment, however, organizations must actively shape how AI is integrated into their labor strategy rather than waiting for the market to evolve around it. In this article, the authors survey this rapidly evolving terrain and recommend seven critical actions that companies should take to successfully adapt.


The new economics of enterprise technology in an AI world - Aamer Baig, James Kaplan, Jeffrey Lewis, and Pablo Prieto, McKinsey

Enterprise technology spending in the United States has been growing by 8 percent per year on average since 2022. This surge is not surprising, given the increasing role technology plays in how businesses function and create value. The issue lies in what companies are getting for that spend, and the track record on that score is mixed. While analysis linking tech spend to labor productivity is notoriously inexact, labor productivity has grown by close to 2 percent over the same period of time (Exhibit 1).


Tuesday, June 03, 2025

OpenAI is buying Jony Ive’s AI hardware company - Jay Peters, the Verge

In an interview with Bloomberg, Ive called AI hardware misfires like the Humane Pin and Rabbit R1 “very poor products,” and said that “there has been an absence of new ways of thinking expressed in products.” The first product isn’t intended to be an iPhone killer, though: “In the same way that the smartphone didn’t make the laptop go away, I don’t think our first thing is going to make the smartphone go away,” OpenAI CEO Sam Altman told Bloomberg. “It is a totally new kind of thing.” “Jony recently gave me one of the prototypes of the device for the first time to take home, and I’ve been able to live with it, and I think it is the coolest piece of technology that the world will have ever seen,” Altman said. “I am absolutely certain that we are literally on the brink of a new generation of technology that can make us our better selves,” Ive said.

Duolingo CEO says AI is a better teacher than humans—but schools will still exist ‘because you still need childcare’ - Irina Ivanova, Fortune

Now the company has much broader ambitions. With a community of 116 million users a month, Duolingo has amassed loads of data about how people learn, accumulating tricks to keep learners engaged over the long term and even know how well a student will score on a test before they take it. According to founder and CEO Luis von Ahn, AI’s ability to individualize learning will lead to most teaching being done by computers in the next few decades. “Ultimately, I’m not sure that there’s anything computers can’t really teach you,” von Ahn said on the No Priors podcast recently. He predicted education would radically change, because “it’s just a lot more scalable to teach with AI than with teachers.”


Monday, June 02, 2025

Why you shouldn’t say ‘please’ to ChatGPT - Ritesh Chugh, ACS Information Age

OpenAI CEO Sam Altman recently revealed that including polite phrases when prompting AI systems costs the company tens of millions of dollars in additional electricity expenses. Every word we type is processed as part of a "token" — a unit of data that the AI system must analyse and respond to. The more tokens used, the more computing power and energy are required. Individually, the impact of a few extra words is trivial. But when scaled across millions of users each day, these additions significantly increase the workload on servers, resulting in higher energy consumption, greater carbon emissions, and substantial operational costs.
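The token arithmetic is easy to make concrete. The sketch below uses the tiktoken library with the cl100k_base encoding used by recent OpenAI chat models; exact counts vary by model and tokenizer, so treat the numbers as illustrative.

```python
# Rough illustration of why politeness costs tokens.
# Uses tiktoken's cl100k_base encoding; counts differ slightly across models.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

terse = "Summarize this article in three bullet points."
polite = "Hello! Could you please summarize this article in three bullet points? Thank you so much!"

for label, text in (("terse", terse), ("polite", polite)):
    tokens = enc.encode(text)
    print(f"{label}: {len(tokens)} tokens")

# The polite version carries roughly twice the tokens of the terse one;
# negligible per message, but it adds up across millions of daily prompts.
```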

Google Unveils A.I. Chatbot, Signaling a New Era for Search - Tripp Mickle, NY Times

Google became the gateway to the internet by perfecting its search engine. For two decades, it surfaced 10 blue links that gave people access to the information they were looking for. But after a quarter century, the tech giant is betting that the future of search will be artificial intelligence. On Tuesday, Google said it was introducing a new feature in its search engine called A.I. Mode. The tool will function like a chatbot, allowing people to start a query, ask follow-up questions and use the company’s A.I. system to deliver comprehensive answers. “It’s a total reimagining of search,” said Sundar Pichai, the chief executive of Google, in a press briefing ahead of the company’s annual conference for software developers.


Sunday, June 01, 2025

OpenAI taps iPhone designer Jony Ive to develop AI devices - Cecily Mauran, Mashable

Altman also shared that he has a prototype of what Ive and his team have developed, calling it the "coolest piece of technology the world has ever seen." As far back as 2023, there were reports of OpenAI teaming up with Ive for some kind of AI-first device. Altman and Ive's bromance formed over ideas about developing an AI device beyond the current hardware limitations of phones and computers. "The products that we're using to deliver and connect us to unimaginable technology, they're decades old," said Ive in the video, "and so it's just common sense to at least think surely there's something beyond these legacy products."

Google’s AI Boss Says Gemini’s New Abilities Point the Way to AGI - Will Knight, Wired

Demis Hassabis, CEO of Google DeepMind, says that reaching artificial general intelligence or AGI—a fuzzy term typically used to describe machines with human-like cleverness—will mean honing some of the nascent abilities found in Google’s flagship Gemini models. Google announced a slew of AI upgrades and new products at its annual I/O event today in Mountain View, California. The search giant revealed upgraded versions of Gemini Flash and Gemini Pro, Google’s fastest and most capable models, respectively. Hassabis said that Gemini Pro outscores other models on LMArena, a widely used benchmark for measuring the abilities of AI models.