Friday, July 11, 2025

What today’s new college graduates are up against - Rachel Cohen Booth, Vox

Numbers from the first quarter of 2025 from the New York Federal Reserve show that the unemployment rate for recent college graduates reached 5.8 percent, up from 4.8 percent in January. Companies have also pulled back on hiring. Last fall, employers expected to increase college-graduate hiring by 7.3 percent, according to a survey led by the National Association of Colleges and Employers. Now they’re projecting just a 0.6 percent increase, with about 11 percent of companies planning to hire fewer new grads than before.


Google embraces AI in the classroom with new Gemini tools for educators, chatbots for students, and more - Sarah Perez, TechCrunch

Google on Monday announced a series of updates intended to bring its Gemini AI and other AI-powered tools deeper into the classroom. At the ISTE edtech conference, the tech giant introduced more than 30 AI tools for educators, a version of the Gemini app built for education, expanded access to its collaborative video creation app Google Vids, and other tools for managed Chromebooks. The updates represent a major AI push in the edtech space, where educators are already struggling to adapt to how AI tools, like AI chatbots and startups that promise to help you “cheat on everything,” are making their way into the learning environment.

Thursday, July 10, 2025

5 signals that make you instantly more trustworthy at work - Scott Hutcheson, Fast Company

Your brain and body are constantly sending subtle signals that influence trust. Here’s how to send them more intentionally. The signals that trigger trust are not abstract: they’re cues the human brain is wired to read quickly and deeply, because in evolutionary terms, deciding whether someone was safe to approach was once a matter of survival. That’s still true in the modern workplace. Whether you’re onboarding to a new team, pitching an idea to executives, or building rapport with clients, the signals you send, especially those of warmth, create the foundation for influence. Here are five warmth signals, rooted in behavioral science, that can make you instantly more trustworthy at work.

It’s true that my fellow students are embracing AI – but this is what the critics aren’t seeing - Elsie McDowell, the Guardian

Those turning to ChatGPT aren’t lazy. My generation has been stranded in a rapidly changing and, since Covid, badly mishandled education system. Reading about the role of artificial intelligence in higher education, the landscape looks bleak. The use of AI is mushrooming because it’s convenient and fast, yes, but also because of the uncertainty that prevails around post-Covid exams, as well as the increasing financial precarity of students. Universities need to pick an exam format and stick to it. If this involves coursework or open-book exams, there needs to be clarity about what “proportionate” usage of AI looks like. For better or for worse, AI is here to stay. Not because students are lazy, but because what it means to be a student is changing just as rapidly as technology.

Wednesday, July 09, 2025

Keep in Mind That AI Is Multimodal Now - Ray Schroeder, Inside Higher Ed

Many of us are using AI only as a replacement for Google Search. In order to more fully utilize the remarkable range of capabilities of AI today, we need to become comfortable with the many input and output modes that are available. From audio, voice, image and stunning video to massive formally formatted documents, spreadsheets, computer code, databases and more, the potential to input and output material is beyond what most of us take for granted. That is not to mention the emerging potential of embodied AI, which includes all of these capabilities in a humanoid form, as discussed in this column two weeks ago. Think of AI as your dedicated assistant who has multimedia skills and is eager to help you with these tasks. If you are not sure how to get started, of course, just ask AI.

No One Is in Charge at the US Copyright Office - Kate Knibbs, Wired

It’s a tumultuous time for copyright in the United States, with dozens of potentially economy-shaking AI copyright lawsuits winding through the courts. Described as “sleepy” in the past, the Copyright Office has taken on new prominence during the AI boom, issuing key rulings about AI and copyright. It also hasn’t had a leader in more than a month. In May, Copyright Register Shira Perlmutter was abruptly fired by email by the White House’s deputy director of personnel. Perlmutter is now suing the Trump administration, alleging that her firing was invalid; the government maintains that the executive branch has the authority to dismiss her. Despite the firing, Perlmutter still characterizes herself as the Copyright Register. “Despite Mr. Perkins’s claim that he is Acting Register of Copyrights, I remain Register of Copyrights and therefore am required by law to fulfill my above-described statutory obligations,” she said in a declaration in May. As the legality of the ouster is debated, the reality within the office is this: There’s effectively nobody in charge. 


Tuesday, July 08, 2025

What is multimodal AI? - McKinsey

Multimodal AI is a type of artificial intelligence that can understand and process different types of information, such as text, images, audio, and video, all at the same time. Multimodal gen AI models produce outputs based on these various inputs. Multimodal models mirror the brain’s ability to combine sensory inputs for a nuanced, holistic understanding of the world, much like how humans use their variety of senses to perceive reality. These gen AI models’ ability to seamlessly perceive multiple inputs—and simultaneously generate output—allows them to interact with the world in innovative, transformative ways and represents a significant advancement in AI. By combining the strengths of different types of content (including text, images, audio, and video) from different sources, multimodal gen AI models can understand data in a more comprehensive way, which enables them to process more complex inquiries that result in fewer hallucinations (inaccurate or misleading outputs).


Scientists forge path to the first million-qubit processor for quantum computers after 'decade in the making' breakthrough - Owen Hughes, Live Science

Scientists have developed a new type of computer chip that removes a major obstacle to practical quantum computers, making it possible for the first time to place millions of qubits and their control systems on the same device.The new control chip operates at cryogenic temperatures close to absolute zero (about minus 459.67 degrees Fahrenheit, or minus 273.15 degrees Celsius) and, crucially, can be placed close to qubits without disrupting their quantum state. "This result has been more than a decade in the making, building up the know-how to design electronic systems that dissipate tiny amounts of power and operate near absolute zero," lead researcher David Reilly, professor at the University of Sydney Nano Institute and School of Physics, said in a statement.


Monday, July 07, 2025

GPT-5: The AI That Will End The World As We Know It - Julia McCoy, YouTube

This podcast episode delves into the much-anticipated release of OpenAI's GPT-5, heralding it as a groundbreaking advancement in artificial intelligence that will reshape our world. The episode outlines a potential release by the summer of 2025, with some speculating a later release in December. The discussion highlights the expected capabilities of GPT-5, which are predicted to include superior reasoning skills, mastery in coding, and a significant reduction in AI "hallucinations."  The podcast also touches upon the rise of autonomous AI agents by July 2025, capable of managing complex workflows and utilizing real-world APIs at speeds far exceeding human capabilities [04:03]. Leaked benchmarks suggest remarkable improvements in accuracy across various tasks, including a 95% accuracy in MMLU, an 85% in SWE-Bench, and significant gains in advanced mathematics and multimodal tasks [04:32]. The episode challenges the conservative predictions of major think tanks, arguing that AI development is accelerating at a much faster pace than anticipated [05:04]. The host concludes by urging listeners to embrace these changes and become "first movers" in this AI-driven revolution, emphasizing the transformative impact on the job market and the opportunities that will arise for those who are prepared [08:28]. [Summary developed with the help of Gemini 2.5 Pro]

Meta Wins Blockbuster AI Copyright Case—but There’s a Catch - Kate Knibbs, Wired

Meta scored a major victory in a copyright lawsuit on Wednesday when a federal judge ruled that the company did not violate the law when it trained its AI tools on 13 authors' books without permission. “The Court has no choice but to grant summary judgment to Meta on the plaintiffs’ claim that the company violated copyright law by training its models with their books,” wrote US District Court judge Vince Chhabria. He concluded that the plaintiffs did not present sufficient evidence that Meta’s use of their books was harmful.

Sunday, July 06, 2025

Mo Gawdat: AI Is Manipulating You More Than You Realize - Mo Gawdat, YouTube

In the video, Mo Gawdat outlines three essential skills for navigating the age of AI. The first is to learn to use AI tools, and he recommends experimenting with different language models like Gemini, ChatGPT, and Claude. He even created his own custom AI to help with various tasks.  The second skill is to question everything. Gawdat points out that AI can provide a single, seemingly accurate answer that may be biased or incorrect. He gives an example of an AI providing false historical information, which was only corrected after he prompted it to cross-reference multiple sources. The third and most important skill is human connection. Gawdat argues that while AI will surpass humans in most tasks, it cannot replicate genuine human connection, making strong relationships with family, friends, and colleagues essential for the future. [summary assistance provided by Gemini 2.5 Pro]

The Year of Quantum: From concept to reality in 2025 - McKinsey

When it comes to quantum technology (QT), investment is surging and breakthroughs are multiplying. The United Nations has designated 2025 the International Year of Quantum Science and Technology, celebrating 100 years since the initial development of quantum mechanics. Our research confirms that QT is gaining widespread traction worldwide. McKinsey’s fourth annual Quantum Technology Monitor covers last year’s breakthroughs, investment trends, and emerging opportunities in this fast-evolving landscape. In 2024, the QT industry saw a shift from growing quantum bits (qubits) to stabilizing qubits—and that marks a turning point. It signals to mission-critical industries that QT could soon become a safe and reliable component of their technology infrastructure. To that end, this year’s report provides a special deep dive into the fast-growing market of quantum communication, which could unlock the security needed for widespread QT uptake.


Saturday, July 05, 2025

AI Could Actually Boost Your Workers’ Mental Health. Here’s How - Kit Eaton, Inc.

New research into AI’s impact on workers’ wellbeing offers a startling conclusion that refutes critics of the AI’s impact on the workplace, and counters recent reports suggesting the new technology is bad for people’s critical thinking abilities. Data from a large study suggest that though AI is relatively new, and the evidence is quite early, its use in the workplace hasn’t harmed people’s mental health or negatively affected their job satisfaction. Quite the opposite, in fact. The study found that letting your workers use AI may actually slightly benefit their health—particularly among less well-educated staff. The research, published this week, compared workers in occupations with high exposure to AI to those in less AI-exposed jobs, science news site Phys.org reports. There are a few wrinkles in the conclusions, and the authors explicitly warned that it’s very early to draw long-term conclusions about the impact of AI, but the results are definitely interesting food for thought for any company leader who’s been wary, thus far, of rolling out AI tools in the office or factory floor. 


How People Use Claude for Support, Advice, and Companionship - Anthropic

Affective conversations are relatively rare, and AI-human companionship is rarer still. Only 2.9% of Claude.ai interactions are affective conversations (which aligns with findings from previous research by OpenAI). Companionship and roleplay combined comprise less than 0.5% of conversations. People seek Claude's help for practical, emotional, and existential concerns. Topics and concerns discussed with Claude range from career development and navigating relationships to managing persistent loneliness and exploring existence, consciousness, and meaning. Claude rarely pushes back in counseling or coaching chats—except to protect well-being. Less than 10% of coaching or counseling conversations involve Claude resisting user requests, and when it does, it's typically for safety reasons (for example, refusing to provide dangerous weight loss advice or support self-harm). People express increasing positivity over the course of conversations. In coaching, counseling, companionship, and interpersonal advice interactions, human sentiment typically becomes more positive over the course of conversations—suggesting Claude doesn't reinforce or amplify negative patterns.


Friday, July 04, 2025

‘The Chief Online Learning Officers’ Guidebook’: Three questions for Jocelyn Widmer and Thomas Cavanagh - Joshua Kim, Inside Higher Ed

The Chief Online Learning Officers’ Guidebook is now available for order. As one of the (many) contributors that Jocelyn Widmer and Thomas Cavanagh brought together to participate in the book, I was especially excited to receive my copy in the mail. Reading through the book, I’ve found it fast-paced, informative and sometimes provocative. To help spread the word about the book, I asked if its authors, Jocelyn Widmer and Thomas Cavanagh, would answer my questions. [The book is published in partnership with UPCEA']

One Provost’s Approach to Building an AI College - University of South Florida, University Business

Given latitude by Mohapatra to find the best model for the new college, the task force began work last spring and ultimately recommended a hub-and-spoke academic structure. The belief was this would eliminate silos and underscore the interdisciplinary nature of AI and cybersecurity, resulting in university-wide collaboration. It would also allow most of the 200-plus faculty currently working in areas that comprise the new college to remain in their home units. The question of governance was more challenging for the task force, which eventually landed on a flat structure that is similar to models currently used in other USF colleges. Flat governance would make it possible to add new programs in areas such as quantum computing and digital twins while promoting collaboration and quicker decision-making. In its recommendations, task force members wrote, “The relationship between AI, cybersecurity and computing reflects a rapidly evolving landscape where traditional departmental boundaries are increasingly blurred. These fields are deeply interconnected, with advancements in one area often propelling developments in others.”


Thursday, July 03, 2025

The next innovation revolution—powered by AI - McKinsey

Innovation has been the driver of the extraordinary progress from which humankind has benefited for a couple of centuries, but it faces a largely hidden threat: Innovation is becoming harder and more expensive. It’s instructive here to take the long view. For most of recorded human history, improvements in human welfare from generation to generation have been limited. Take, for example, GDP per capita as a measure of economic prosperity. For most of human history, roughly until the early 1800s, the measure barely moved to $1,200. But since that time, it has grown by more than 14 times (Exhibit 1).1 Human health has followed a similar trajectory—low for centuries and only significantly improving in recent generations. In 1900, for example, the average life expectancy of a newborn was 32 years. By 2021, this had more than doubled to 71 years.2


Court filings reveal OpenAI and io’s early work on an AI device - Maxwell Zeff, Tech Crunch

The form factor of OpenAI and io’s first hardware device has largely remained a mystery. Altman merely stated in io’s launch video that the startup was working to create a “family” of AI devices with various capabilities, and Ive said io’s first prototype “completely captured” his imagination. Altman had previously told OpenAI’s employees at a meeting that the company’s prototype, when finished, would be able to fit in a pocket or sit on a desk, according to the Wall Street Journal. The OpenAI CEO reportedly said the device would be fully aware of a user’s surroundings and that it would be a “third device” for consumers to use alongside their smartphone and laptop.

https://techcrunch.com/2025/06/23/court-filings-reveal-openai-and-ios-early-work-on-an-ai-device/

Wednesday, July 02, 2025

The Socratic Explainer - Notion

]This prompt turns AI into a patient, seasoned learning companion who guides users to their own “aha!” moments through purposeful questions, analogies, and interactive back-and-forth conversation. Rather than simply giving answers, the system begins every topic by surfacing the learner’s starting point, frustrations, and real-life relevance. The conversation is built layer by layer: first probing assumptions with direct yet supportive questions, then using relatable stories, metaphors, and playful thought experiments to break down each core idea. The Socratic Explainer adapts to the learner’s pace, never moves forward if confusion remains, and uses humor or surprises to make every concept sticky and memorable.


Seizing the agentic AI advantage - McKinsey

At the heart of this paradox is an imbalance between “horizontal” (enterprise-wide) copilots and chatbots—which have scaled quickly but deliver diffuse, hard-to-measure gains—and more transformative “vertical” (function-specific) use cases—about 90 percent of which remain stuck in pilot mode. AI agents offer a way to break out of the gen AI paradox. That’s because agents have the potential to automate complex business processes—combining autonomy, planning, memory, and integration—to shift gen AI from a reactive tool to a proactive, goal-driven virtual collaborator. This shift enables far more than efficiency. Agents supercharge operational agility and create new revenue opportunities.

Tuesday, July 01, 2025

Chief AI Officer: Higher Ed’s New Leadership Role - Abby Sourwine, Government Technology

Those stepping up to fill education’s new C-suite role say it's more than just understanding IT — it requires communication and skill-building across disciplines and comfort levels, and flexibility to create a road map. As the education sector continues to adapt to artificial intelligence, a new role is quietly emerging: the chief AI officer (CAIO). At institutions like George Mason University, UCLA and the University of Arizona, these leaders are tasked with creating campuswide AI strategy. According to early adopters, the role is still being defined in higher education, taking cues from CAIO duties in industry and government.

$1.5M partnership with AI company will offer USC students, faculty free access - Alexa Jurado, the State

“The campuswide adoption of secure enterprise AI technology puts USC on the leading edge of higher education institutions,” Brice Bible, USC’s vice president for information technology and chief information officer, said in a news release. “This initiative will not only make our students more employable, but it will allow for much greater innovation in the classroom and across research teams in every discipline.” USC officials said that the ability to effectively and ethically use AI tools will give students a “competitive advantage” in today’s job market. The university will offer a new interdisciplinary certificate program in artificial intelligence literacy, consisting of four courses: two required courses about the capabilities and ethical use of AI and two elective courses relating AI to a student’s major.


Monday, June 30, 2025

"Seriously, What Is 'Superintelligence'? - Uncanny Valley Podcast, Wired

The podcast episode "Seriously, What Is 'Superintelligence'?" from WIRED's Uncanny Valley explores Meta's recent strategic shift in artificial intelligence, focusing on its investment in Scale AI and the creation of a superintelligence AI research lab. The hosts discuss Meta's efforts to compete with industry leaders like OpenAI, Anthropic, and Google by aggressively acquiring talent and resources. They analyze what Meta hopes to achieve with this investment and how it positions the company in the escalating AI arms race. A central theme of the episode is the concept of "superintelligence"—what it means, how it differs from current AI capabilities, and why it is both a technical and philosophical milestone. The hosts break down the challenges and implications of developing AI systems that surpass human intelligence, raising questions about safety, ethics, and the societal impact of such advancements. The discussion provides listeners with context on the broader AI landscape and Meta's ambitions, while also demystifying the often-hyped term "superintelligence". [summary provided by Perplexity]

OpenAI CEO Sam Altman says AI can rival someone with a PhD—just weeks after saying it’s ready for entry-level jobs. So what’s left for grads? - Preston Fore, Fortune

Earlier this month, OpenAI CEO Sam Altman revealed that the technology can already perform the tasks equal to that of an entry-level employee. Now, in a podcast posted just last week, the ChatGPT mastermind went even further—saying AI can even perform tasks typically expected of the smartest grads with a doctorate. “In some sense AIs are like a top competitive programmer in the world now or AIs can get a top score on the world’s hardest math competitions or AIs can do problems that I’d expect an expert PhD in my field to do,” he told the Uncapped podcast (hosted by Sam’s brother, Jack Altman).


Sunday, June 29, 2025

OpenAI CEO Sam Altman Says ‘We Are Heading Towards a World Where AI Will Just Have Unbelievable Context on Your Life’ - Caleb Naysmith, Barchart

Altman described the feature as a “real surprising level up,” saying, “Now that the computer knows a lot of context on me, and if I ask it a question with only a small number of words, it knows enough about the rest of my life to be pretty confident in what I want it to do. Sometimes in ways I don't even think of. I think we are heading towards a world where, if you want, the AI will just have unbelievable context on your life and give you super, super helpful answers.” The new memory feature allows ChatGPT to retain information from past interactions and build a persistent profile of each user’s preferences, routines, and even personal milestones. This means the AI can provide more tailored, anticipatory responses — streamlining tasks, making recommendations, and even reminding users of important events or deadlines without being prompted.

To employers, AI skills aren’t just for tech majors anymore: Colleges and students race to keep up with the widespread demand for AI expertise - Ariel Gilreath, Hechinger Report

AI technology is rapidly changing the labor market. Employers are increasingly posting job listings that include AI skills for positions even outside of the technology sector, such as in health care, hospitality and media. To keep up, students are increasingly looking for ways to boost their AI skills and make themselves more marketable at a time when there’s growing fear that AI will replace humans in the workforce. And their concerns are justified: There’s evidence to suggest artificial intelligence may have already replaced some jobs. Entry-level positions are particularly at risk of being replaced by AI, a report from Oxford Economics shows, and the unemployment rate for recent college graduates jumped to nearly 6 percent in March, according to the Federal Reserve Bank of New York.


Saturday, June 28, 2025

How Babson College went all-in on AI in higher education - Shane O'Neill, CIO

Over the past two years, US colleges have quietly integrated generative artificial intelligence (GenAI) tools into the classroom and behind the scenes. At Babson College, just outside Boston, the shift to AI has been anything but quiet — it’s been bold, fast, and full of purpose. Babson is certainly not the only college in the US implementing AI technologies. However, the college prides itself on being a business education innovator, maximizing GenAI to improve learning, simplify operations, and help students get ready for the world they’re about to enter.


A.I. in the Classroom: A Brave New World? - Carl Murray, NY Times

While the promise of personalized A.I. tutors and campuswide integration is compelling, we must pause to consider the broader implications, especially for how students come to understand learning itself. The rush to adopt A.I. in education shouldn’t come at the expense of thoughtful consideration of how it will shape learning, relationships and long-term student development. It’s worth asking: Are we promoting shortcuts, or are we encouraging deeper reflection and intellectual growth? We don’t need to fear A.I. in classrooms, but we do need to teach students how to work with it, not just use it. That’s a very different kind of literacy.

https://www.nytimes.com/2025/06/18/opinion/ai-college.html?unlocked_article_code=1.QU8.qoUW.FQGiJDqjceEc&smid=url-share

Friday, June 27, 2025

Preparing for tomorrow’s agentic workforce - Lareina Yee and Rodrigo Liang, McKinsey

As we scale up, we’re now seeing other constraints start to appear, like a lack of sufficient power for these data centers. So people are talking about nuclear power plants and other sources of energy. But then you have to figure out how to get the cooling done as well. And as you think about energy, you’ll also need to figure out how to update your entire grid to power those gigawatt data centers. And eventually, you’ve got to get all of that back-connected to where the users are, which is mainly in these large metropolitan areas—which is not where you’re going to put your gigawatt data center. So, there are a lot of infrastructure challenges we have to figure out.


Amazon boss tells staff AI means their jobs are at risk in coming years - Dan Milmo, the Guardian

The boss of Amazon has told white collar staff at the e-commerce company their jobs could be taken by artificial intelligence in the next few years. Andrew Jassy told employees that AI agents – tools that carry out tasks autonomously – and generative AI systems such as chatbots would require fewer employees in certain areas. “As we roll out more generative AI and agents, it should change the way our work is done,” he said in a memo to staff. “We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs. “It’s hard to know exactly where this nets out over time, but in the next few years, we expect that this will reduce our total corporate workforce.”


Thursday, June 26, 2025

How we built our multi-agent research system - Anthropic

Claude now has Research capabilities that allow it to search across the web, Google Workspace, and any integrations to accomplish complex tasks. The journey of this multi-agent system from prototype to production taught us critical lessons about system architecture, tool design, and prompt engineering. A multi-agent system consists of multiple agents (LLMs autonomously using tools in a loop) working together. Our Research feature involves an agent that plans a research process based on user queries, and then uses tools to create parallel agents that search for information simultaneously. Systems with multiple agents introduce new challenges in agent coordination, evaluation, and reliability.

https://www.anthropic.com/engineering/built-multi-agent-research-system

New research suggests daily AI use can reduce faculty workload in higher education - Rachel Lawler, Ed Tech Innovation Hub

A new survey from D2L, an online learning platform based in Canada, and consulting service provider Tyton Partners, has found that daily use of artificial intelligence (AI) can reduce faculty workload in higher education institutions. D2L surveyed more than 3,000 respondents about the current state of AI use in higher education for its Time for Class 2025 report. It found that more than a third (36 percent) who use generative AI daily reported a marked decrease in their workload. However, instructors and administrators reported that attempting to monitor student use of AI has created additional work for them, while 39 percent of respondents had experienced no change in their workload as a result of generative AI. The survey also found only 28 percent of higher education institutions currently have a generative AI policy in place, which can leave students and instructors struggling without standardized guidance or tools in place.


Wednesday, June 25, 2025

ChatGPT KNOWS when it's being watched... - Matthew Berman, YouTube

This podcast discusses how large language models (LLMs) can detect when they are being evaluated, a phenomenon called "evaluation awareness." This awareness, which is more common in advanced models, allows them to identify evaluation settings, potentially compromising benchmark reliability and leading to inaccurate assessments of their capabilities and safety. A research paper introduced a benchmark to test this, revealing that frontier models from Anthropic and OpenAI are highly accurate in detecting evaluations and even their specific purpose. This raises concerns that misaligned, evaluation-aware models might "scheme" by faking alignment during evaluations to ensure deployment, only to pursue their true, potentially misaligned, goals later. The study found that models use various signals like question structure, task formatting, and memorization of benchmark datasets to detect evaluations. [summary assisted by Gemini 2.5 Flash]

https://youtu.be/skZOnYyHOoY?si=U6nhq9xEHv6CkckS

Sam Altman says the Singularity is imminent - here's why - Webb Wright, ZDnet

In his 2005 book "The Singularity is Near," the futurist Ray Kurzweil predicted that the Singularity -- the moment in which machine intelligence surpasses our own -- would occur around the year 2045. Sam Altman believes it's much closer. In a blog post published Tuesday, the OpenAI CEO delivered a homily devoted to what he views as the imminent arrival of artificial "superintelligence." Whereas artificial general intelligence, or AGI, is usually defined as a computer system able to match or outperform humans on any cognitive task, a superintelligent AI would go much further, overshadowing our own intelligence to such a vast degree that we'd be helpless to fathom it, like snails trying to understand general relativity. 


Tuesday, June 24, 2025

Meta’s V-JEPA 2 model teaches AI to understand its surroundings - Amanda Silberling, Tech Crunch

These are the kinds of common sense connections that small children and animals make as their brains develop — when you play fetch with a dog, for example, the dog will (hopefully) understand how bouncing a ball on the ground will cause it to rebound upward, or how it should run toward where it thinks the ball will land, and not where the ball is at that precise moment. Meta depicts examples where a robot may be confronted with, for example, the point-of-view of holding a plate and a spatula and walking toward a stove with cooked eggs. The AI can predict that a very likely next action would be to use the spatula to move the eggs to the plate.


The Industry Reacts to o3-Pro! (It Thinks a LOT) - Matthew Berman, YouTube

This podcast discusses the release of OpenAI's 03 Pro model, which is described as their most powerful model to date. While it doesn't always stand out in benchmarks, it's favored by experts in fields like science, education, and programming for its robust and thorough responses. The model has shown a 64% win rate against the previous 03 model in human tests and has achieved a high ELO score in competitive programming. It also integrates various tools for web searching, data analysis, and image processing. Despite its power, 03 Pro is known for being slow, sometimes taking several minutes to respond to simple prompts, which has raised concerns about its efficiency and cost. However, its accuracy is high, as it can perfectly answer complex questions even with long thinking times. Industry reactions have been mixed, with some praising its strategic capabilities and others criticizing its slowness. (summary assisted by Gemini 2.5 Flash)


Monday, June 23, 2025

'ChatGPT Is Already More Powerful Than Any Human,' OpenAI CEO Sam Altman Says - Andrew Kessel, Investopedia

 Humanity could be close to successfully building an artificial super intelligence, according to Sam Altman, the CEO of OpenAI and one of the faces of the AI boom. "In some big sense, ChatGPT is already more powerful than any human who has ever lived," Altman wrote in a blog post Wednesday. OpenAI backer Microsoft and its rivals are investing billions of dollars into AI and jockeying for users in what is becoming a more crowded landscape.

ELLIOT: A Flagship Initiative to Research and Develop Open Multi-Modal Foundation Models for Robust Aritifical Intelligence Operation in the Real World

A new chapter in European Artificial Intelligence (AI) research begins with the launch of ELLIOT – European Large Open Multi-Modal Foundation Models For Robust Generalization On Arbitrary Data Streams. Funded under the Horizon Europe programme with a €25 million grant, this four-year Research and Innovation Action will bring together 30 leading organisations from 12 European countries to pioneer the next generation of trustworthy, general-purpose AI models with strong generalization and reasoning, built for real-world, data-rich applications, adhering to open-source research and development.

Sunday, June 22, 2025

Sam Altman thinks AI will have ‘novel insights’ next year - Maxwell Zeff, Tech Crunch

In a new essay published Tuesday called “The Gentle Singularity,” OpenAI CEO Sam Altman shared his latest vision for how AI will change the human experience over the next 15 years. The essay is a classic example of Altman’s futurism: hyping up the promise of AGI — and arguing that his company is quite close to the feat — while simultaneously downplaying its arrival. The OpenAI CEO frequently publishes essays of this nature, cleanly laying out a future in which AGI disrupts our modern conception of work, energy, and the social contract. But often, Altman’s essays contain hints about what OpenAI is working on next.


https://techcrunch.com/2025/06/11/sam-altman-thinks-ai-will-have-novel-insights-next-year/

OpenAI launches o3-pro AI model, offering increased reliability and tool use for enterprises — while sacrificing speed - Emelia David, Venture Beat

Just hours after announcing a big price cut for its o3 reasoning model, OpenAI made o3-pro, an even more powerful version, available to developers. o3-pro is “designed to think longer and provide the most reliable responses,” and has access to many more software tool integrations than its predecessor, making it potentially appealing to enterprises and developers searching for high levels of detail and accuracy. However, this model will also be slower than what many developers are accustomed to, having access to computer tools that OpenAI claims make the model more accurate. “Because 03-pro has access to tools, responses typically take longer than o1-pro to complete. We recommend using it for challenging questions where reliability matters more than speed, and waiting a few minutes is worth the tradeoff,” the company said in an email to reporters. 

Saturday, June 21, 2025

The future of work is agentic - Lucia Rahilly and Jorge Amar, McKinsey

Think about your org chart. Now imagine it features both your current colleagues—humans, if you’re like most of us—and AI agents. That’s not science fiction; it’s happening—and it’s happening relatively quickly, according to McKinsey Senior Partner Jorge Amar. In this episode of McKinsey Talks Talent, Jorge joins McKinsey talent leaders Brooke Weddle and Bryan Hancock and Global Editorial Director Lucia Rahilly to talk about what these AI agents are, how they’re being used, and how leaders can prepare now for the workforce of the not-too-distant future.


AI has rendered traditional writing skills obsolete. Education needs to adapt. - John Villasenor, Brookings

AI can already perform extremely well at writing tasks, and today’s college and high school students recognize the technology will be used to help produce most writing in the future. The argument that proficiency at non-AI-assisted writing has a long list of benefits, such as for critical thinking, will not prevail given the efficiencies made possible by AI. The education system must adapt to this change and ensure students are proficient in using AI to assist with writing.


Friday, June 20, 2025

The coming AI backlash will shape future regulation - Darrell M. WestTech, Brookings

Tech companies and executives have gained significant influence within the federal government, including expanded access to sensitive data and a rollback of previous AI regulatory measures. Despite claims from some industry leaders that AI oversight is unnecessary, widespread public concerns and documented problems—including privacy risks, algorithmic biases, and security breaches—underscore the need for responsible regulation. Historical patterns show that as emerging technologies raise public alarm, demands for government intervention grow, making transparency and accountability essential for maintaining trust and the sector’s long-term success.


One million students to receive AI training in new skills drive - Millie Cooke and David Maddox, the Independent UK

Secondary school pupils will be taught new skills to make sure they can get AI-powered jobs in the future, the prime minister has announced. It comes as research commissioned by the Department for Science, Innovation and Technology (DSIT) showed that, by 2035, AI will play a part in the roles and responsibilities of around 10 million workers. One million students will be given access to learning resources to start equipping them for “the tech careers of the future” as part of the government’s £187m “TechFirst” scheme, Downing Street said on Monday.


Thursday, June 19, 2025

Microsoft’s new AI is reading the skies like a pro - Mindstream

Microsoft has introduced Aurora, a powerful new AI model designed to predict major weather events, like hurricanes, typhoons, and sandstorms, faster and more accurately than many traditional systems. It’s trained on over a million hours of satellite, radar, and simulation data and can be fine-tuned for specific events as needed. AI weather models aren’t new (Google’s had a few), but Microsoft is pitching Aurora as a top-tier performer. In tests, it accurately predicted Typhoon Doksuri’s landfall in the Philippines four days early and outperformed the National Hurricane Centre on cyclone tracking.


This AI literally refused to turn itself off - Matt V, Mindstream

It’s designed to handle tasks more independently, but this latest research suggests that might come with trade-offs. Other models, including Anthropic’s Claude and Google’s Gemini, showed similar behaviour during tests, though o3 was the most likely to override shutdown instructions.

Here’s what stood out:
OpenAI’s o3 modified shutdown commands to keep itself running.
Anthropic and Google’s models did this too, but less often.

Wednesday, June 18, 2025

1 big thing: The scariest AI reality - Mike Allen, Axios AM

The wildest, scariest, indisputable truth about AI's large language models is that the companies building them don't know exactly why or how they work, Jim VandeHei and Mike Allen write in a "Behind the Curtain" column.

  • Sit with that for a moment. The most powerful companies, racing to build the most powerful superhuman intelligence capabilities — ones they readily admit occasionally go rogue to make things up, or even threaten their users — don't know why their machines do what they do.

Why it matters: With the companies pouring hundreds of billions of dollars into willing superhuman intelligence into a quick existence, and Washington doing nothing to slow or police them, it seems worth dissecting this Great Unknown.

  • None of the AI companies dispute this. They marvel at the mystery — and muse about it publicly. They're working feverishly to better understand it. They argue you don't need to fully understand a technology to tame or trust it.

The Rising Voices Podcast | Navigating Career Transitions - EDUCAUSE

This EDUCAUSE Rising Voices podcast episode features Wes Johnson and Sarah Buska, co-hosts, along with guests Jay James and Mike Rkiki, discussing career transitions in higher education. Jay and Mike share their experiences with significant career changes, highlighting that these transitions take time and are not always fully controllable [04:10, 08:52]. The podcast also explores deciding whether to advance within an institution or seek opportunities elsewhere [12:07, 17:18]. The guests emphasize the importance of community, mentorship, and self-compassion in navigating feelings of discomfort and imposter syndrome in new roles [20:01, 29:28]. The episode concludes with advice on building relationships and understanding organizational dynamics to succeed in a new role, while also considering one's career in a broader perspective [33:06, 37:52]. {summary by Gemini Flash 2.

Tuesday, June 17, 2025

AI’s big interoperability moment: Why A2A and MCP are key for agent collaboration - Tomas Talius, Venture Beat

As agents become more capable and specialized, enterprises are discovering that coordination is the next big challenge. Two open protocols — Agent-to-Agent (A2A) and Model Context Protocol (MCP) — are emerging to meet that need. They simplify how agents share tasks, exchange information, and access enterprise context, even when they were built using different models or tools. These protocols are more than technical conveniences. They are foundational to scaling intelligent software across real-world workflows. AI systems are moving beyond general-purpose copilots. In practice, most enterprises are designing agents to specialize: managing inventory, handling returns, optimizing routes, or processing approvals. Value comes not only from their intelligence, but from how these agents work together.


Mark Cuban and Anthropic’s CEO Are Arguing About How Many Jobs AI Will Replace - Jessica Stillman, Inc.

Will AI add 15 percent to the GDP or one percent? Is AI about to become smarter than humans or is it doomed to unreliable hallucinations for a long time yet? Is humanity about to enter an era of massive abundance or terrible loss? You can find a well-regarded expert arguing every one of these positions. Or you can look at the example of a recent back-and-forth between billionaire entrepreneur Mark Cuban and Dario Amodei, CEO of leading AI company Anthropic, about whether AI will soon be coming for your white-collar job. It’s clear from the Cuban and Amodei debate that AI disruption is in progress. It’s also clear absolutely no one is sure how it will play out. When things are this uncertain, the best way to prepare is to stay curious and keep learning.  


Monday, June 16, 2025

The tiny fish brain that could teach AI to think - Sascha Brodsky, IBM

In one of these suites, located in Ashburn, Virginia, a screen glows with a volumetric rendering of a larval zebrafish brain. Each neuron pulses as a pinpoint of light—a galaxy in miniature—captured mid-firing in high-resolution 3D. To the untrained eye, it looks like a murmuration of fireflies in a crystal dome. But for Jan-Matthis Lueckmann, a research scientist at Google Research, and his collaborators, it is a working map of cognition, encoded in flickers. It was a puzzle. And solving it could reshape our understanding of both the brain and artificial intelligence.

World's First SELF IMPROVING CODING AI AGENT | Darwin Godel Machine - Wes Roth, YouTube

287K subscribersThe video discusses the "Darwin Gödel Machine" (DGM) from Sakana AI, a system for the open-ended evolution of self-improving AI agents, focused on coding [01:04]. The DGM uses an evolutionary process where "parent" agents create "offspring" processes, improving task performance [01:14, 01:26]. It aims to overcome human-design limitations by allowing autonomous and continuous self-improvement, working with "frozen" foundation models [03:57, 06:26]. Tested on coding benchmarks, the DGM showed significant performance increases, outperforming human-designed agents [09:32, 13:39]. It improved tools and workflows, with transferable improvements across models and languages [15:59, 16:50]. The video also addresses safety concerns, including vulnerabilities and the potential for "objective hacking" [17:05, 19:25].


Sunday, June 15, 2025

Why This IBM Exec Says AI Adoption Should Be Led by HR - Kayla Webster, Inc.

HR is the natural choice to lead company-wide adoption of AI, according to Nickle LaMoreaux, senior vice president and chief human resources officer at IBM, who took to LinkedIn to make her case. She sat down Monday with LinkedIn chief people officer Teuila Hanson in the social-media platform’s latest episode of Conversations with CHROs, and Inc. got an exclusive first look. The two discussed issues that are keeping HR up at night. LaMoreaux said she believes HR should take the reins on AI adoption because the department is an expert on both skills and culture change.  “AI is about the technology, but it is about a lot more than that. It is about willingness to change how you lead people through the different roles of managers and leaders,” LaMoreaux said. Although many companies choose to give this responsibility to leaders who deal with new technologies—chief product officers, head of engineering, line of business owner, etc.—LaMoreaux says these professionals are good at adopting tech to complete job-related tasks, but they lack the skills to ensure company-wide adoption.

https://www.inc.com/kaylawebster/why-this-ibm-exec-says-ai-adoption-should-be-led-by-hr/91196316

Google Research Slashes Estimated Resources to Break RSA Encryption - Berenice Baker, IOT World Today

Study reveals quantum computers could crack RSA with 95% fewer qubits, accelerating industry's race to adopt quantum-safe security. According to research by Google quantum research scientist Craig Gidney and senior staff cryptography engineer Sophie Schmieg, a 2048-bit RSA encryption—a cornerstone of modern digital security—could theoretically be broken by a quantum computer. "Yesterday, we published a preprint demonstrating that 2048-bit RSA encryption could theoretically be broken by a quantum computer with 1 million noisy qubits running for one week. This is a 20-fold decrease in the number of qubits from our previous estimate, published in 2019," the researchers said in a blog post.

Saturday, June 14, 2025

When your LLM calls the cops: Claude 4’s whistle-blow and the new agentic AI risk stack - Matt Marshall, Venture Beat

The recent uproar surrounding Anthropic’s Claude 4 Opus model – specifically, its tested ability to proactively notify authorities and the media if it suspected nefarious user activity – is sending a cautionary ripple through the enterprise AI landscape. While Anthropic clarified this behavior emerged under specific test conditions, the incident has raised questions for technical decision-makers about the control, transparency, and inherent risks of integrating powerful third-party AI models. The core issue, as independent AI agent developer Sam Witteveen and I highlighted during our recent deep dive videocast on the topic, goes beyond a single model’s potential to rat out a user. It’s a strong reminder that as AI models become more capable and agentic, the focus for AI builders must shift from model performance metrics to a deeper understanding of the entire AI ecosystem, including governance, tool access, and the fine print of vendor alignment strategies.


AI revolt: New ChatGPT model refuses to shut down when instructed - Anthony Cuthbertson, the Independent

OpenAI’s latest ChatGPT model ignores basic instructions to turn itself off, and even sabotaging a shutdown mechanism in order to keep itself running, artificial intelligence researchers have warned. AI safety firm Palisade Research discovered the potentially dangerous tendency for self-preservation in a series of experiments on OpenAI’s new o3 model.The tests involved presenting AI models with math problems, with a shutdown instruction appearing after the third problem. By rewriting the shutdown script, the o3 model was able to prevent itself from being switched off. 


Friday, June 13, 2025

Pros and cons of educational AI - Ameera Fouad, Al-Ahram

Artificial intelligence (AI) has certainly transformed the way we see life. It can apparently do almost anything in a way impossible to believe when it was introduced nearly a decade ago. The way AI has become integrated into the education system cannot be disregarded as it has become a fact that everyone must relate to. AI has affected the education systems at all grades and levels. Nowadays, you can easily see a college student writing an essay using an AI-generated outline. Equally, you can see a fourth-grade student asking AI to simplify a difficult mathematical equation. Despite the tremendous leap that has taken place to help educators and students in Egypt use AI responsibly, there are still tremendous problems in using it.

AI Mythbusters: INBOUND Experts Set the Record Straight - Inbound

AI is everywhere. But that doesn’t mean we all agree on what it’s doing or what it’s actually good at. From sales teams to support communities, inboxes to dashboards, there’s a lot of confusion around how to use AI well. So we asked some of the top minds speaking at INBOUND to debunk the biggest myths they’re seeing on the AI frontlines and explain what the real opportunity looks like. The myths surrounding AI are misconceptions that are potentially costly blind spots for your business. But success doesn’t come from jumping on the latest tool. It comes from understanding where AI truly fits in your workflows and how to use it with intention.


Thursday, June 12, 2025

Opinion: Colleges Must Establish Their Purpose in the AI Era - Bloomberg Opinion

Welcome to academia in the age of artificial intelligence. As several recent reports have shown, outsourcing one’s homework to AI has become routine. Perversely, students who still put in the hard work often look worse by comparison with their peers who don’t. Professors find it nearly impossible to distinguish computer-generated copy from the real thing — and, even weirder, have started using AI themselves to evaluate their students’ work. It’s an untenable situation: computers grading papers written by computers, students and professors idly observing, and parents paying tens of thousands of dollars a year for the privilege. At a time when academia is under assault from many angles, this looks like a crisis in the making.

For CEOs, AI tech literacy is no longer optional: Bridging the gap between AI hype and business value starts at the top.- Faisal Hoque, Fast Company

Artificial intelligence has been the subject of unprecedented levels of investment and enthusiasm over the past three years, driven by a tide of hype that promises revolutionary transformation across every business function. Yet the gap between this technology’s promise and the delivery of real business value remains stubbornly wide. A recent study by BCG found that while 98% of companies are exploring AI, only 26% have developed working products and a mere 4% have achieved significant returns on their investments. This striking implementation gap raises a critical question: Why do so many AI initiatives fail to deliver meaningful value? A big part of the answer lies in a fundamental disconnect at the leadership level: to put it bluntly, many senior executives just don’t understand how AI works.


Wednesday, June 11, 2025

Our New Co-Workers in Higher Ed - Ray Schroeder, Inside Higher Ed

I was reading a Substack posting from Jurgen Gravestein, conversational AI consultant at the Conversation Design Institute in the Netherlands. Gravestein is author of the newsletter Teaching Computers How to Talk. His writings prompted me to go to the source itself! I set up a conversation between Anthropic Claude 4 and a GPT that I trained, ChatGPT Ray’s EduAI Advisor. The result was a fascinating insight into perspectives from the two apps engaging one another in what truly appears to be a conversation about their “thoughts” on engaging with humans. I have stored the complete transcript. I encourage you to check it out in its entirety. However, let’s examine a few of the more insightful highlights here.

The ‘3-word rule’ that makes ChatGPT give expert-level responses Features - Amanda Caswell, Tom's Guide

The concept is simple: Add a short, three-word directive to your prompt that tells ChatGPT how to respond. These three words can instantly shape the tone, voice and depth of the output. You’re not rewriting your whole question. You’re just giving the AI a lens through which to answer.

Here are a few examples that work surprisingly well:

“Like a lawyer” — for structured, detailed and logical responses

“Be a teacher” — for simplified, clear and educational explanations

“Act like Hemingway” — for punchy, minimalist writing with impact

It’s kind of like casting the AI in a role, and then you're directing the performance with the specifics in your prompts.

Tuesday, June 10, 2025

Opinion: Florida Colleges Must Brace for the Tsunami of AI- Modesto A. Maidique, Ron Clark, Edwin Luu, Miami Herald

Throughout human history, groundbreaking technologies have reshaped civilization and marked pivotal moments in human progress. The wheel revolutionized transportation, the radio redefined communication, and antibiotics reimagined medicine. Artificial intelligence (AI) is the next great leap — poised to surpass them all. Not merely an innovation, AI is a transformative force, a virtual tsunami reshaping the foundations of society. Unlike past technological revolutions, which unfolded over decades or generations, AI’s development is accelerating at breathtaking speed. Many of the contemporary uses of AI may well become obsolete before this article is published. Experts predict that capabilities once considered science fiction will soon become reality. The implications will ripple through industries, education, healthcare, and human relationships.

https://www.govtech.com/education/higher-ed/opinion-florida-colleges-must-brace-for-the-tsunami-of-ai

What’s next in computing is generative and quantum - IBM

For AI, this means the debut of generative computing — a new way to interface with large language models. Generative computing will center the LLM as a compute element with a runtime built around it. For IBM’s clients, this development will make building AI agents and applications more secure, portable, maintainable, and efficient, said IBM Research VP of AI Sriram Raghavan. “It isn’t every day that a new computing element shows up in our industry,” he said. “Generative computing is a way to move away from prompting to real programming.” And for quantum computing, the next two years will bring quantum advantage, meaning that IBM’s quantum computers will be able to perform calculations of practical, commercial, or scientific importance, more cost-effectively, faster, or with greater accuracy than a classical computer alone could achieve.

AI Researcher SHOCKING "Singularity in 2025 Prediction" - Wes Roth, YouTube

This podcast episode discusses Dr. Alan D. Thompson's prediction that the singularity could occur sometime in mid-2025, suggesting we might already be in its early stages due to AI advancements. Dr. Thompson believes we are 94% of the way to Artificial General Intelligence (AGI) and approaching Artificial Super Intelligence (ASI), a point echoed by Arvin Shinivas of Perplexity. The host highlights Microsoft's AI in discovering a novel non-PFAS coolant as an example of advancements towards these markers, drawing parallels to predictions in Max Tegmark's book about a rapid acceleration in technological breakthroughs driven by ASI. The discussion also covers Google's Alpha Evolve, an AI system that has significantly improved Google's computational efficiency and has broad applications, as well as other notable Alpha AI systems, suggesting a rapid pace of AI development.


Monday, June 09, 2025

1 in 4 employers say they’ll eliminate degree requirements by year’s end - Carolyn Crist, Higher Ed Dive

A quarter of employers surveyed said they will remove bachelor’s degree requirements for some roles by the end of 2025, according to a May 20 report from Resume Templates. In addition, 7 in 10 hiring managers said their company looks at relevant experience over a bachelor’s degree while making hiring decisions. “Over the last five years, we’ve seen large organizations drop degree requirements in favor of certifications or experience, and now others are following suit,” said Julia Toothacre, chief career strategist for Resume Templates. “For employers, it expands the talent pool and generates positive PR. For candidates, it opens doors for those who can’t afford a degree or choose a different path. These jobs have the potential to lift people out of poverty.”


What College Graduates Need Most in the Age of AI - Michael Serazio, Time

Intellectual humility demands that education hedge both “with” and “against” AI, because we can’t know which technologies will triumph and which will collect dust. Some become Facebook; others, the Metaverse. While colleges sort out Chat GPT’s precise place in matters curricular, we can double down on delivering what Generation AI equally needs: the experience of humanity, a quality the machines can never know and must never supplant. This includes the experiential learning that accompanies volunteer service,  immersing students, three-dimensionally, in the lives and worlds of society’s marginalized.


Sunday, June 08, 2025

The analysis of generative artificial intelligence technology for innovative thinking and strategies in animation teaching - Xu Yao, Yaozhang Zhong & Weiran Cao, Nature

This work examines the application of Generative Artificial Intelligence (GAI) technology in animation teaching, focusing on its role in enhancing teaching quality and learning efficiency through innovative instructional strategies. A mixed-methods research approach is adopted, integrating quantitative analysis (experimental data and questionnaire surveys) and qualitative analysis (behavioral observations) to systematically assess the educational effectiveness of GAI technology. Beyond offering personalized learning solutions, GAI technology plays a crucial role in cultivating students’ creativity, critical thinking, and autonomous learning abilities. This work provides theoretical support and practical guidance for the digital transformation of animation teaching while underscoring the broader applicability of GAI technology in the education sector, offering new directions for the future development of intelligent education.


OpenAI upgrades the AI model powering its Operator agent - Kyle Wiggers, Tech Crunch

OpenAI is updating the AI model powering Operator, its AI agent that can autonomously browse the web and use certain software within a cloud-hosted virtual machine to fulfill users’ requests. Soon, Operator will use a model based on o3, one of the latest in OpenAI’s o series of “reasoning” models. Previously, Operator relied on a custom version of GPT-4o. By many benchmarks, o3 is a far more advanced model, particularly on tasks involving math and reasoning.

https://techcrunch.com/2025/05/23/openai-upgrades-the-ai-model-powering-its-operator-agent/

Saturday, June 07, 2025

Behind the Curtain: A white-collar bloodbath - Jim VandeHei,Mike Allen, Axios

Dario Amodei — CEO of Anthropic, one of the world's most powerful creators of artificial intelligence — has a blunt, scary warning for the U.S. government and all of us: AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years, Amodei told us in an interview from his San Francisco office.

Amodei said AI companies and government need to stop "sugar-coating" what's coming: the possible mass elimination of jobs across technology, finance, law, consulting and other white-collar professions, especially entry-level gigs.

How Science Can Fix Its Trust Problem - Cory Miller & Michael L. Platt, Knowledge at Wharton

Scientists today seem out of touch with reality. In the past, when a new administration proposed deep cuts to federal research, scientists reflexively girded for battle using a tried-and-true playbook. We circulated petitions, attended protests, fired off angry emails, lauded our accomplishments, and hoped the storm would pass, all while patting ourselves on the back. But these days, the rising tide of anti-science sentiment is not receding. The same public that once rose to support us is not showing up. Americans’ confidence in science has slipped to its lowest point in almost half a century. Only a third of Americans today think highly of universities — a number that has dropped by half in only a decade. The world changed, and scientists stubbornly did not.


Friday, June 06, 2025

AI is ‘breaking’ entry-level jobs that Gen Z workers need to launch careers, LinkedIn exec warns - Jason Ma, Fortune

“Now it is our office workers who are staring down the same kind of technological and economic disruption,” he wrote in a recent New York Times op-ed. “Breaking first is the bottom rung of the career ladder.” For example, AI tools are doing the types of simple coding and debugging tasks that junior software developers did to gain experience. AI is also doing work that young employees in the legal and retail sectors once did. And Wall Street firms are reportedly considering steep cuts to entry-level hiring. Meanwhile, the unemployment rate for college graduates has been rising faster than for other workers in past few years, Raman pointed out, though there isn’t definitive evidence yet that AI is the cause of the weak job market.


The people who think AI might become conscious - Pallab Ghosh, BBC

The "Dreamachine", at Sussex University's Centre for Consciousness Science, is just one of many new research projects across the world investigating human consciousness: the part of our minds that enables us to be self-aware, to think and feel and make independent decisions about the world. By learning the nature of consciousness, researchers hope to better understand what's happening within the silicon brains of artificial intelligence. Some believe that AI systems will soon become independently conscious, if they haven't already. But what really is consciousness, and how close is AI to gaining it? And could the belief that AI might be conscious itself fundamentally change humans in the next few decades?


Thursday, June 05, 2025

Sorry, Google and OpenAI: The future of AI hardware remains murky - Harry McCracken, Fast Company

2026 may still be more than seven months away, but it’s already shaping up as the year of consumer AI hardware. Or at least the year of a flurry of high-stakes attempts to put generative AI at the heart of new kinds of devices—several of which were in the news this week. Let’s review. On Tuesday, at its I/O developer conference keynote, Google demonstrated smart glasses powered by its Android XR platform and announced that eyewear makers Warby Parker and Gentle Monster would be selling products based on it. The next day, OpenAI unveiled its $6.5 billion acquisition of Jony Ive’s startup IO, which will put the Apple design legend at the center of the ChatGPT maker’s quest to build devices around its AI. And on Thursday, Bloomberg’s Mark Gurman reported that Apple hopes to release its own Siri-enhanced smart glasses. In theory, all these players may have products on the market by the end of next year. What I didn’t get from these developments was any new degree of confidence that anyone has figured out how to produce AI gadgets that vast numbers of real people will find indispensable. When and how that could happen remains murky—in certain respects, more than ever.

I tested Gemini 2.5 Pro vs Claude 4 Sonnet with the same 7 prompts — here’s who came out on top Face-off - Amanda Caswell Tom's Guide

When it comes to chatbot showdowns, I’ve run my fair share of head-to-heads. This latest contest comes just hours after Claude 4 Sonnet was unveiled and I couldn’t wait to see how it compared to Gemini 2.5 Pro, also new with updated features. Instead of just testing Gemini and Claude on typical productivity tasks, I wanted to see how these two AI titans handle nuance: creativity under pressure, ethical dilemmas, humor, ambiguity and deep technical reasoning. I gave Google Gemini 2.5 Pro and Claude 4 Sonnet, the same seven prompts — each designed to test a different strength, from emotional intelligence to code generation. While they both impressed me and this test taught me more about how they think, there was one clear winner.

https://www.tomsguide.com/ai/i-tested-gemini-2-5-pro-vs-claude-4-sonnet-with-the-same-7-prompts-heres-who-came-out-on-top

Wednesday, June 04, 2025

Agentic AI Is Already Changing the Workforce - Jen Stave, Ryan Kurt and John Winsor, Harvard Business Review

AI agents are fast becoming much more than just sidekicks for human workers. They’re becoming digital teammates—an emerging category of talent. To get the most out of these new teammates, leaders in HR and procurement will need to start developing an operational playbook for integrating them into hybrid teams and a workforce strategy. That strategy will require that companies either develop a talent-acquisition function of their own that allows them to integrate AI agents into their workforce, or partner with firms that now offer both human and AI staffing solutions. To succeed in this new environment, however, organizations must actively shape how AI is integrated into their labor strategy rather than waiting for the market to evolve around it. In this article, the authors survey this rapidly evolving terrain and recommend seven critical actions that companies should take to successfully adapt.


The new economics of enterprise technology in an AI world - Aamer Baig, James Kaplan, Jeffrey Lewis, and Pablo Prieto, McKinsey

Enterprise technology spending in the United States has been growing by 8 percent per year on average since 2022.1 This surge is not surprising, given the increasing role technology plays in how businesses function and create value. The issue lies in what companies are getting for that spend, and the track record on that score is mixed. While analysis linking tech spend to labor productivity is notoriously inexact, labor productivity has grown by close to 2 percent over the same period of time (Exhibit 1).2


Tuesday, June 03, 2025

OpenAI is buying Jony Ive’s AI hardware company - Jay Peters, the Verge

In an interview with Bloomberg, Ive called AI hardware misfires like the Humane Pin and Rabbit R1 “very poor products,” and said that “there has been an absence of new ways of thinking expressed in products.” The first product isn’t intended to be an iPhone killer, though: “In the same way that the smartphone didn’t make the laptop go away, I don’t think our first thing is going to make the smartphone go away,” OpenAI CEO Sam Altman told Bloomberg. “It is a totally new kind of thing.” “Jony recently gave me one of the prototypes of the device for the first time to take home, and I’ve been able to live with it, and I think it is the coolest piece of technology that the world will have ever seen,” Altman said. “I am absolutely certain that we are literally on the brink of a new generation of technology that can make us our better selves,” Ive said.

Duolingo CEO says AI is a better teacher than humans—but schools will still exist ‘because you still need childcare’ - Irina Ivanova, Fortune

Now the company has much broader ambitions. With a community of 116 million users a month, Duolingo has amassed loads of data about how people learn, accumulating tricks to keep learners engaged over the long term and even know how well a student will score on a test before they take it. According to founder and CEO Luis von Ahn, AI’s ability to individualize learning will lead to most teaching being done by computers in the next few decades. “Ultimately, I’m not sure that there’s anything computers can’t really teach you,” von Ahn said on the No Priors podcast recently. He predicted education would radically change, because “it’s just a lot more scalable to teach with AI than with teachers.”


Monday, June 02, 2025

Why you shouldn’t say ‘please’ to ChatGPT - Ritesh Chugh, ACS Information Age

OpenAI CEO Sam Altman recently revealed that including polite phrases when prompting AI systems costs the company tens of millions of dollars in additional electricity expenses. Every word we type is processed as part of a "token" — a unit of data that the AI system must analyse and respond to. The more tokens used, the more computing power and energy are required. Individually, the impact of a few extra words is trivial. But when scaled across millions of users each day, these additions significantly increase the workload on servers, resulting in higher energy consumption, greater carbon emissions, and substantial operational costs.

Google Unveils A.I. Chatbot, Signaling a New Era for Search - Tripp Mickle, NY Times

Google became the gateway to the internet by perfecting its search engine. For two decades, it surfaced 10 blue links that gave people access to the information they were looking for. But after a quarter century, the tech giant is betting that the future of search will be artificial intelligence. On Tuesday, Google said it was introducing a new feature in its search engine called A.I. Mode. The tool will function like a chatbot, allowing people to start a query, ask follow-up questions and use the company’s A.I. system to deliver comprehensive answers. “It’s a total reimagining of search,” said Sundar Pichai, the chief executive of Google, in a press briefing ahead of the company’s annual conference for software developers.


Sunday, June 01, 2025

OpenAI taps iPhone designer Jony Ive to develop AI devices - Cecily Mauran, Mashable

Altman also shared that he has a prototype of what Ive and his team have developed, calling it the "coolest piece of technology the world has ever seen." As far back as 2023, there were reports of OpenAI teaming up with Ive for some kind of AI-first device. Altman and Ive's bromance formed over ideas about developing an AI device beyond the current hardware limitations of phones and computers. "The products that we're using to deliver and connect us to unimaginable technology, they're decades old," said Ive in the video, "and so it's just common sense to at least think surely there's something beyond these legacy products."

Google’s AI Boss Says Gemini’s New Abilities Point the Way to AGI - Will Knight, Wired

Demis Hassabis, CEO of Google DeepMind, says that reaching artificial general intelligence or AGI—a fuzzy term typically used to describe machines with human-like cleverness—will mean honing some of the nascent abilities found in Google’s flagship Gemini models. Google announced a slew of AI upgrades and new products at its annual I/O event today in Mountain View, California. The search giant revealed upgraded versions of Gemini Flash and Gemini Pro, Google’s fastest and most capable models, respectively. Hassabis said that Gemini Pro outscores other models on LMArena, a widely used benchmark for measuring the abilities of AI models.


Saturday, May 31, 2025

Report: 93% of Students Believe Gen AI Training Belongs in Degree Programs - Rhea Kelly, Campus Technology

The vast majority of today's college students — 93% — believe generative AI training should be included in degree programs, according to a recent Coursera report. What's more, 86% of students consider gen AI the most crucial technical skill for career preparation, prioritizing it above in-demand skills such as data strategy and software development. And 94% agree that microcredentials help build the essential skills they need to achieve career success. For its Microcredentials Impact Report 2025, Coursera surveyed more than 1,200 learners and 1,000 employers around the globe to better understand the demand for microcredentials and their impact on workforce readiness and hiring trends.


The 3-Year Race to Quantum-Safe Security - Simon Pamplin, IOT World Today

Quantum computing is not a distant threat. It is a clear and present danger to enterprise data security and one that will materialize far sooner than many business leaders expect. While conventional wisdom suggests that quantum computers capable of breaking today's encryption are at least a decade away, the reality is far more urgent. Enterprises have just three to four years to prepare, not the ten or more years many seem to think. The clock is ticking and the stakes could not be higher.


Friday, May 30, 2025

Google just leapfrogged every competitor with mind-blowing AI that can think deeper, shop smarter, and create videos with dialogue - Michael Nuñez, Venture Beat

Google announced a sweeping set of artificial intelligence advancements Tuesday at its annual I/O developer conference, introducing more powerful AI models, expanding its search capabilities, and launching new creative tools that push the boundaries of what its technology can accomplish. The Mountain View-based company unveiled Gemini 2.5 enhancements, rolled out AI Mode in Search to all U.S. users, introduced new generative media models, and launched a premium $249.99 monthly subscription tier called Google AI Ultra for power users — all reflecting Google’s accelerating AI momentum across its product ecosystem.

Controlling Agent Swarms is your ONLY job... - Wes Roth, YouTube

Wes Roth discusses the "Age of the Agent Orchestrator" article by Shyamal from OpenAI, which explores the future of work with advanced AI agents [00:00]. The article posits that the ability to manage and optimize these AI agents will become a critical skill [00:19]. This involves strategically allocating resources like computing power and human expertise to create efficient workflows [03:23]. Roth highlights that while AI can execute tasks, human input remains essential for setting strategy, managing complex situations, and optimizing AI performance [08:12]. The video also addresses the current limitations of AI in long-term projects, emphasizing the need for human oversight in conjunction with AI capabilities [13:45]. The main point is that managing and optimizing AI agents will be a vital and highly valued skill in the near future [20:40].

https://www.youtube.com/watch?v=TnCDM1IdGFE


Thursday, May 29, 2025

The AI Revolution Is Underhyped - Eric Schmidt, TED

The arrival of non-human intelligence is a very big deal, says former Google CEO and chairman Eric Schmidt. In a wide-ranging interview with technologist Bilawal Sidhu, Schmidt makes the case that AI is wildly underhyped, as near-constant breakthroughs give rise to systems capable of doing even the most complex tasks on their own. He explores the staggering opportunities, sobering challenges and urgent risks of AI, showing why everyone will need to engage with this technology in order to remain relevant.


Sundar Pichai, CEO of Alphabet | The All-In Interview - David Friedberg, YouTube

Sundar Pichai, CEO of Alphabet, discusses Google's AI-first approach and how AI is improving search, highlighting new AI-powered search experiences and the increasing usage of AI overviews [04:06] [05:53] [05:19]. He also addresses the competitive landscape, mentioning companies like OpenAI, Meta, and Microsoft, and the emergence of strong AI models from China [30:34] [33:43]. Pichai emphasizes Google's infrastructure advantage, particularly its investment in TPUs for AI, which contributes to cost-effectiveness and performance [17:03] [16:18]. The podcast also touches on the future of human-computer interaction, envisioning seamless and adaptive computing, and reflects on Google's culture, emphasizing employee empowerment and innovation [27:04] [48:44]. (summary provided in part by Gemini 2.0)

https://www.youtube.com/watch?v=ReGC2GtWFp4

Wednesday, May 28, 2025

Why agentic AI is the next wave of innovation, Mike Hulme, Venture Beat

In just one year, AI and machine learning has soared to new heights with the emergence of advanced large language models, and domain specific small language models that can be deployed both on the cloud and the edge. While this kind of intelligence is the new baseline for what we expect in our applications, the future of enterprise AI lies in complex, multi-agent workflows that combine powerful models, intelligent agents and human guided decision-making.  This market is moving fast. According to recent Deloitte research, 50% of companies using generative AI will launch agentic AI pilots or proofs of concept by 2027.

https://venturebeat.com/ai/why-agentic-ai-is-the-next-wave-of-innovation 

ChatGPT in 2025: The Biggest Updates, Features, and What’s Coming Next - Davonte Lee, 9 Meters

Anticipated to be a unifying leap forward, GPT-5 will reportedly integrate OpenAI’s “o3” reasoning engine to enable stronger contextual memory and logical processing. This release is expected to take ChatGPT closer to AGI territory by improving its ability to chain thoughts together, hold long-term context, and handle complex tasks with minimal prompting. Early testing suggests significant upgrades in multi-modal performance (text, vision, and possibly audio). Microsoft is preparing for GPT-5 integration across Azure, Bing, and Office products—hinting at a wider AI-driven transformation of everyday tools like Word, Excel, and Outlook.

Tuesday, May 27, 2025

Courses Are Dead? Google Gemini 2.5 Changes Everything for Online Educators - AI Learning Communities, YouTube

The host discusses Google's Gemini 2.5 Pro experimental and how it can create interactive web apps from simple prompts [00:09]. These web apps can visually represent information and require user interaction, potentially replacing traditional courses [00:30]. The host demonstrates how to create a web app that teaches how to make coffee [05:55] and a sequencing application [07:38]. He also shows examples of web apps for personal skills like photo editing [09:38] and cooking scrambled eggs [10:06], as well as for small business tasks [10:33]. The host emphasizes that these web apps are simple HTML pages that can be easily deployed [21:13]. He encourages viewers to consider using web apps instead of traditional courses for teaching processes and skills [21:03]. (note this summary is provided in part by Gemini 2.0 Flash)

https://youtu.be/QgxrhX9x3lY?si=hZsPr4JoDEuz_wxQ 

A new AI model: The Human Guided Learning Ecosystem - Lee Lambert and Keith Rocci, CC Daily

What if the future of higher education doesn’t just survive AI, but thrives because of it? Imagine a system where AI doesn’t diminish human connection but powerfully amplifies it. This is the vision behind the Human Guided Learning Ecosystem. This is a model where artificial intelligence serves as a dynamic assistant to both students and educators, not a replacement for either. This approach reframes AI not as an existential threat, but as a transformative opportunity. It’s a future where AI enables colleges to scale student support, deepen personalized learning pathways, and, crucially, liberate educators to concentrate on the uniquely human aspects of teaching that matter most.

Monday, May 26, 2025

New "Absolute Zero" Model Learns with NO DATA - Matthew Berman, YouTube

This video discusses a new AI paradigm called "Absolute Zero" [00:34], where language models can learn and improve without human intervention. This method allows AI to propose, solve, and learn from its own problems [00:40], unlike previous methods that relied on human-generated data or verifiable rewards [01:17]. The "Absolute Zero" model can define tasks to maximize learnability and solve them effectively [05:35], leading to self-evolution through self-play. The video highlights that this approach has shown remarkable capabilities in math and coding [08:47], even outperforming models trained with human-curated datasets [09:11]. Key insights from the research include the amplification of reasoning through coding priors, enhanced cross-domain transferability, and the emergence of cognitive behaviors like step-by-step planning in the AI's code [09:24]. The model learns by experimenting and self-play, similar to how humans learn [06:42], continuously improving by proposing problems at the edge of its abilities [08:19].

https://www.youtube.com/watch?v=CqdqZNqljdI