Friday, April 04, 2025

Amplified Humanity: How AI Can Expand Our Capacity to Do Good and Be Good - Tawnya Means, University of Illinois Assistant Dean for Educational Innovation, via LinkedIn

In an era where headlines about artificial intelligence swing between utopian promises and dystopian warnings, we're missing perhaps the most profound opportunity of all: using AI to help us become better humans. This isn't about outsourcing our humanity. It's about leveraging technology to amplify our uniquely human capacities for care, support, learning, engagement, and love. As AI systems grow more capable, our ability to be deeply, authentically human becomes not just valuable; it becomes imperative.

https://www.linkedin.com/pulse/amplified-humanity-how-ai-can-expand-our-capacity-do-good-means-xb6cc/

SUPERAGENCY: What Could Possibly Go Right with Our AI Future - Reid Hoffman and Greg Beato, Superagency

Superagency, by Reid Hoffman and Greg Beato, presents an optimistic view of AI's future, focusing on its potential to amplify human capabilities and improve society. Rather than dwelling on dystopian scenarios, the book explores how AI can enhance individual agency, enabling people to achieve more in areas like education, healthcare, and problem-solving. It advocates for an inclusive and adaptive approach to AI, emphasizing its role as a tool for positive change and encouraging readers to actively participate in shaping a future where human ingenuity and AI work in synergy. (summary by Gemini 2.0 Flash)

https://www.superagency.ai/

Thursday, April 03, 2025

New Auburn Engineering research center combines expertise in artificial intelligence, cybersecurity - Joe McAdory, Auburn University

The Auburn University Center for Artificial Intelligence and Cybersecurity Engineering (AU-CAICE), housed within the Department of Computer Science and Software Engineering (CSSE), is dedicated to uncovering pioneering advancements in AI-driven cybersecurity solutions and tackling the most pressing challenges in the digital age. “In today's rapidly evolving digital landscape, the need for groundbreaking research in artificial intelligence and cybersecurity has never been more critical,” said Allan David, associate dean for research. “The Samuel Ginn College of Engineering is thrilled to continue its role as a leader in emerging technologies, driving innovation and fostering collaboration to address the complex challenges of our time. This new research center embodies our commitment to shaping a safer, more secure future through cutting-edge advancement.”

Innovation and Collaboration in Higher Education During Challenging Times - Ray Schroeder, Inside Higher Ed

The field of higher education is notoriously slow to change. Yet, when faced with the extraordinary challenges of today, our associations are quick to foster support, collaboration and unity. I just returned from the UPCEA annual conference held in Denver. A record attendance of some 1,300 administrators, faculty and staff from member institutions gathered to share policies, practices, innovations and knowledge in advancing the mission of higher education in 2025. It was a thriving and exciting environment of energy and enthusiasm in seeking solutions to challenges that confront us today and into the future. A number of the sessions addressed innovations with cost savings, efficiencies and effectiveness gains that can be realized by thoughtfully introducing artificial intelligence into supporting many aspects of the higher education mission.

Wednesday, April 02, 2025

AI's Moore's Law: Measuring AI Ability to Complete Long Tasks - METR

We propose measuring AI performance in terms of the length of tasks AI agents can complete. We show that this metric has been consistently exponentially increasing over the past 6 years, with a doubling time of around 7 months. Extrapolating this trend predicts that, in under five years, we will see AI agents that can independently complete a large fraction of software tasks that currently take humans days or weeks.
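The arithmetic behind the extrapolation is a plain exponential. A back-of-the-envelope sketch with an assumed starting task length of one hour (an illustrative placeholder, not the paper's fitted value):

```python
# Illustrative extrapolation of the reported trend: task length doubles
# roughly every 7 months. The 1-hour starting point is an assumption.

def projected_task_length(months_ahead, start_hours=1.0, doubling_months=7.0):
    """Task length (hours) after `months_ahead` months of exponential growth."""
    return start_hours * 2 ** (months_ahead / doubling_months)

# Five years out (60 months), the growth factor is 2**(60/7), about 380x.
growth_factor = projected_task_length(60) / projected_task_length(0)
```

Roughly 8.6 doublings in five years is what turns hour-scale tasks into the "days or weeks" of work the authors describe.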


Artificial Intelligence in Medical Education: Transforming Learning and Practice - Aadhitya Sriram, et al; Cureus

Artificial intelligence (AI) is reshaping medical education by enhancing learning strategies, improving training efficiency, and offering personalized educational experiences. Traditional teaching methods, such as classroom lectures and clinical apprenticeships, face numerous challenges, including information overload, teaching quality variability, and standardisation difficulties. AI presents innovative, data-driven, and adaptive solutions to overcome these limitations, making medical training more effective and engaging. This study explores how AI can be applied in the field of medical education, also focusing on personalized learning, virtual simulations, assessment methods, and curriculum development. 


Tuesday, April 01, 2025

AI Ethics in Higher Education: How Schools Are Proceeding - Adam Stone, EdTech

Higher education is uniquely positioned to deal with AI’s ethical considerations, partly because AI adoption is already prevalent in academia. At Miami University in Ohio, “there are courses about AI, and there are courses that use AI,” says Vice President for IT Services and CIO David Seidl. As AI use widens, colleges and universities need to give students “an ethical foundation, a conceptual foundation to prepare them for the future,” he says. Many schools have the institutional expertise on campus needed to lay that foundation. “We have people who are very thoughtful, who bring subject matter expertise from a lot of lenses, so that you can have well-informed conversations about the ethics of AI,” says Tom Andriola, University of California, Irvine’s vice chancellor for IT and data.


Making AI work for workers - McKinsey Quarterly

Employees are ready for AI. How can their leaders help them unleash new levels of creativity and productivity? Workers are already on board with gen AI, but many leaders aren’t keeping pace. Business leaders who can build on this momentum face a significant opportunity. But not all employees are embracing gen AI equally. Identifying four archetypes of employee sentiment can help companies understand where encouragement might be needed.

Monday, March 31, 2025

Publishers Embrace AI as Research Integrity Tool - Kathryn Palmer, Inside Higher Ed

The academic publishing industry is adopting AI-powered tools to improve the quality of peer-reviewed research and speed up production. The latter goal yields “obvious financial benefit” for publishers, one expert said, but advocates say the $19 billion industry's embrace of artificial intelligence will also enhance research quality. Since the start of the year, Wiley, Elsevier and Springer Nature have all announced the adoption of generative AI–powered tools or guidelines, including those designed to aid scientists in research, writing and peer review.


Supporting the Instructional Design Process: Stress-Testing Assignments with AI - Faculty Focus

One of the challenges of course design is that all our work can seem perfectly clear and effective when we are knee-deep in the design process, but everything somehow falls apart when deployed in the wild. From simple misunderstandings to complex misconceptions, these issues typically don’t reveal themselves until we see actual student work—often when it’s too late to prevent frustration. While there’s no substitute for real-world testing, I began wondering if AI could help with this iterative refinement. I didn’t want AI to refine or tweak my prompts. I wanted to see if I could task AI with modelling hundreds of student responses to my prompts in the hope that this process might yield the kind of insight I was too close to see. 
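The workflow described — generating many simulated student responses and looking for patterns — amounts to a small orchestration script. A minimal sketch, in which `simulate_student` is a hypothetical stand-in for a real LLM call (one that would prompt a model to role-play a student), and the tally is what surfaces the misconceptions the designer is too close to see:

```python
from collections import Counter

def simulate_student(assignment_prompt, seed):
    """Hypothetical stand-in for an LLM call that role-plays one student.
    A real implementation would send the assignment prompt to a model with a
    'respond as a typical first-year student' system message."""
    # Canned behavior purely for illustration: some seeds "misread" the prompt.
    misreadings = ["summarized instead of analyzed", "skipped the citation step"]
    return misreadings[seed % 2] if seed % 3 == 0 else "on-task response"

def stress_test(assignment_prompt, n=300):
    """Tally how simulated students go off-track to reveal an assignment's weak spots."""
    outcomes = Counter(simulate_student(assignment_prompt, s) for s in range(n))
    return outcomes.most_common()

report = stress_test("Analyze the argument in the assigned reading.")
```

The value is in the aggregation step: a single simulated response proves little, but a hundred responses that cluster around the same misreading points at a flaw in the prompt, not the students.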


Sunday, March 30, 2025

AI that can match humans at any task will be here in five to 10 years, Google DeepMind CEO says - Ryan Browne, CNBC

Google DeepMind CEO Demis Hassabis said he thinks artificial general intelligence, or AGI, will emerge in the next five or 10 years. AGI broadly relates to AI that is as smart or smarter than humans. “We’re not quite there yet. These systems are very impressive at certain things. But there are other things they can’t do yet, and we’ve still got quite a lot of research work to go before that,” Hassabis said. Dario Amodei, CEO of AI startup Anthropic, told CNBC at the World Economic Forum in Davos, Switzerland in January that he sees a form of AI that’s “better than almost all humans at almost all tasks” emerging in the “next two or three years.” Other tech leaders see AGI arriving even sooner. Cisco’s Chief Product Officer Jeetu Patel thinks there’s a chance we could see an example of AGI emerge as soon as this year. 


Quantum Supremacy Claimed for Real-World Problem Solving - Berenice Baker, IOT World Today

D-Wave Quantum said that its Advantage2 annealing quantum computer achieved quantum supremacy on a practical, real-world problem. A new peer-reviewed paper published in Science, "Beyond-Classical Computation in Quantum Simulation," said D-Wave's system has outperformed classical supercomputers in simulating quantum dynamics in programmable spin glasses, complex magnetic material simulations with significant business and scientific applications. In this context, quantum supremacy refers to a quantum computer performing a computational task that is not feasible for even the most powerful classical supercomputers within a practical timeframe. D-Wave said the magnetic materials simulation problem would take nearly 1 million years and more energy than the world's annual electricity consumption if attempted on a classical GPU-based supercomputer, a task it said Advantage2 completed in minutes.


https://www.iotworldtoday.com/quantum/quantum-supremacy-claimed-for-real-world-problem-solving
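The "programmable spin glass" in the paper is, at heart, an Ising-type energy landscape with random couplings. A toy classical illustration, assuming a tiny random-bond Ising chain (purely to show what the problem class looks like, not D-Wave's actual instances, which are far larger and quantum-dynamical):

```python
import random

def spin_glass_energy(spins, couplings):
    """Energy of a 1D Ising chain: E = -sum_i J_i * s_i * s_{i+1}."""
    return -sum(j * s1 * s2 for j, (s1, s2) in zip(couplings, zip(spins, spins[1:])))

random.seed(0)
n = 8
couplings = [random.choice([-1.0, 1.0]) for _ in range(n - 1)]  # random "glassy" bonds

# Brute-force the classical ground state -- feasible only at toy sizes.
# The search space grows as 2**n, which is why large instances
# overwhelm classical machines.
best = min(
    ([1 if (mask >> i) & 1 else -1 for i in range(n)] for mask in range(2 ** n)),
    key=lambda s: spin_glass_energy(s, couplings),
)
ground_energy = spin_glass_energy(best, couplings)
```

An 8-spin chain has 256 configurations; a few hundred spins would already exceed any classical enumeration, which is the scale gap the supremacy claim rests on.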

Saturday, March 29, 2025

Beyond big models: Why AI needs more than just scale to reach AGI - Sascha Brodsky, IBM

While today’s AI models can generate fluent text, recognize images and even perform complex problem-solving tasks, they still fall short of human intelligence in key ways. Most surveyed AI researchers believe that deep learning alone isn’t enough to reach AGI. Instead, they argue that AI must integrate structured reasoning and a deeper understanding of cause and effect. IBM Fellow Francesca Rossi, past president of the Association for the Advancement of Artificial Intelligence, which published the survey, is among the experts who question whether bigger models will ever be enough. “We’ve made huge advances, but AI still struggles with fundamental reasoning tasks,” Rossi tells IBM Think. “To get anywhere near AGI, models need to truly understand, not just predict.”


OpenAI and Google ask the government to let them train AI on content they don’t own - Emma Roth, the Verge

OpenAI and Google are pushing the US government to allow their AI models to train on copyrighted material. Both companies outlined their stances in proposals published this week, with OpenAI arguing that applying fair use protections to AI “is a matter of national security.” The proposals come in response to a request from the White House, which asked governments, industry groups, private sector organizations, and others for input on President Donald Trump’s “AI Action Plan.” The initiative is supposed to “enhance America’s position as an AI powerhouse,” while preventing “burdensome requirements” from impacting innovation.


Friday, March 28, 2025

Why Online Learning Teams Should Read ‘Co-Intelligence’ - Joshua Kim, Inside Higher Ed

Applying Ethan Mollick’s four principles to our work designing, developing, teaching and marketing online programs. Given this state of affairs, I’d like to make a modest proposal. From now on, all attendees of any AI higher education–focused conversation, meeting, conference or discussion must first have read Ethan Mollick’s (short) book Co-Intelligence: Living and Working With AI. The audiobook version is only four hours and 37 minutes. Think of the productivity gains if we canceled the next five hours of planned AI meetings and booked that time for everyone to sit and listen to Mollick’s book. For university people, Co-Intelligence is perfect, as Mollick is both a professor and (crucially) not a computer scientist.

Higher ed is at a crossroads — will AI and digital learning lead the way? - Higher Ed Dive

Overwhelmingly, the expense of higher education is seen as burdensome, according to 82% of respondents who are moderately to extremely concerned about the overall cost of postsecondary academic experiences. An evaluation of the impacts of today’s learning environment on the perceived value of higher education indicates students are open to change. Institutions looking to drive that change and meet the expectations of modern learners must prioritize affordability, accessibility and career readiness. One of the most effective ways to achieve this is through the integration of digital learning tools and artificial intelligence (AI) powered education technologies, which are transforming learning experiences and helping institutions stay relevant in a rapidly changing landscape.


Thursday, March 27, 2025

The state of AI: How organizations are rewiring to capture value - Alex Singla, et al, McKinsey

Organizations are starting to make organizational changes designed to generate future value from gen AI, and large companies are leading the way. The latest McKinsey Global Survey on AI finds that organizations are beginning to take steps that drive bottom-line impact—for example, redesigning workflows as they deploy gen AI and putting senior leaders in critical roles, such as overseeing AI governance. The findings also show that organizations are working to mitigate a growing set of gen-AI-related risks and are hiring for new AI-related roles while they retrain employees to participate in AI deployment. Companies with at least $500 million in annual revenue are changing more quickly than smaller organizations.

Powerful A.I. Is Coming. We’re Not Ready. - Kevin Roose, NY Times

I believe that over the past several years, A.I. systems have started surpassing humans in a number of domains — math, coding and medical diagnosis, just to name a few — and that they’re getting better every day. I believe that very soon — probably in 2026 or 2027, but possibly as soon as this year — one or more A.I. companies will claim they’ve created an artificial general intelligence, or A.G.I., which is usually defined as something like “a general-purpose A.I. system that can do almost all cognitive tasks a human can do.” I believe that when A.G.I. is announced, there will be debates over definitions and arguments about whether or not it counts as “real” A.G.I., but that these mostly won’t matter, because the broader point — that we are losing our monopoly on human-level intelligence, and transitioning to a world with very powerful A.I. systems in it — will be true.


Wednesday, March 26, 2025

ChatGPT firm reveals AI model that is ‘good at creative writing’ - Dan Milmo, the Guardian

The company behind ChatGPT has revealed it has developed an artificial intelligence model that is “good at creative writing”, as the tech sector continues its tussle with the creative industries over copyright. The chief executive of OpenAI, Sam Altman, said the unnamed model, which has not been released publicly, was the first time he had been “really struck” by the written output of one of the startup’s products. In a post on the social media platform X, Altman wrote: “We trained a new model that is good at creative writing (not sure yet how/when it will get released). This is the first time i have been really struck by something written by AI.”


Tuesday, March 25, 2025

The ‘Oppenheimer Moment’ That Looms Over Today’s AI Leaders - Tharin Pillay, Time

This year, hundreds of billions of dollars will be spent to scale AI systems in pursuit of superhuman capabilities. CEOs of leading AI companies, such as OpenAI’s Sam Altman and xAI’s Elon Musk, expect that within the next four years, their systems will be smart enough to do most cognitive work—think any job that can be done with just a laptop—as effectively as or better than humans. Such an advance, leaders agree, would fundamentally transform society. Google CEO Sundar Pichai has repeatedly described AI as “the most profound technology humanity is working on.” Demis Hassabis, who leads Google’s AI research lab Google DeepMind, argues AI’s social impact will be more like that of fire or electricity than the introduction of mobile phones or the Internet.

https://time.com/7267797/ai-leaders-oppenheimer-moment-musk-altman/

The Value of a Ph.D. in the Age of AI - Kim Isenberg, Forward Future

Artificial intelligence has been undergoing an extraordinary development process for several years and is increasingly achieving capabilities that were long reserved exclusively for humans. Particularly in the area of research, we are currently experiencing remarkable progress: so-called “research agents”, specialized AI models that can independently take on complex research tasks, are rapidly gaining in importance. One prominent example is OpenAI's DeepResearch, which has already achieved outstanding results in various scientific benchmarks. Such AI-supported agents not only analyze large data sets, but also independently formulate research questions, test hypotheses, and even create scientific summaries of their results.


Monday, March 24, 2025

OpenAI calls DeepSeek ‘state-controlled,’ calls for bans on ‘PRC-produced’ models - Kyle Wiggers, TechCrunch

The proposal, a submission for the Trump administration’s “AI Action Plan” initiative, claims that DeepSeek’s models, including its R1 “reasoning” model, are insecure because DeepSeek faces requirements under Chinese law to comply with demands for user data. Banning the use of “PRC-produced” models in all countries considered “Tier 1” under the Biden administration’s export rules would prevent privacy and “security risks,” OpenAI says, including the “risk of IP theft.” It’s unclear whether OpenAI’s references to “models” are meant to refer to DeepSeek’s API, the lab’s open models, or both. DeepSeek’s open models don’t contain mechanisms that would allow the Chinese government to siphon user data; companies including Microsoft, Perplexity, and Amazon host them on their infrastructure.

Cognitive Empathy: A Dialogue with ChatGPT - Michael Feldstein, eLiterate

I want to start with something you taught me about myself. When I asked you about my style of interacting with AIs, you told me I use “cognitive empathy.” It wasn’t a term I had heard before. Now that I’ve read about it, the idea has changed the way I think about virtually every aspect of my work—past, present, and future. It also prompted me to start writing a book about AI using cognitive empathy as a frame, although we probably won’t talk about that today. I thought we could start by introducing the term to the readers who may not know it, including some of the science behind it.


Sunday, March 23, 2025

OpenAI unveils Responses API, open source Agents SDK, letting developers build their own Deep Research and Operator - Carl Franzen, VentureBeat

OpenAI is rolling out a new suite of APIs and tools designed to help developers and enterprises build AI-powered agents more efficiently. These are delivered atop some of the very same technology powering its own first-party AI agents Deep Research (which scours the internet independently to develop richly researched, well organized and cited reports) and Operator (its tool for controlling a web browser cursor autonomously based on a user’s text instructions and performing actions like finding sports tickets or making reservations). Now, with access to the building blocks behind these powerful first-party OpenAI agents, developers can build their own third-party rivals or more domain-specialized products and services specific to their use case and audience.

7 Ways You Can Use ChatGPT for Your Mental Health and Wellness - Wendy Wisner, Very Well Mind

ChatGPT can be a fantastic resource for mental health education and be a great overall organization tool. It can also help you with the practical side of mental health management like journal prompts and meditation ideas. Although ChatGPT is not everyone’s cup of tea, it can be used responsibly and is something to consider keeping in your mental health toolkit. If you are struggling with your mental health, though, you shouldn’t rely on ChatGPT as the main way to cope. Everyone who is experiencing a mental health challenge can benefit from care from a licensed therapist. If that’s you, please reach out to your primary care provider for a referral or reach out directly to a licensed therapist near you.


Saturday, March 22, 2025

DuckDuckGo's AI beats Perplexity in one big way - and it's free to use - Jack Wallen, ZDNET

Duck.ai does something that other similar products don't -- it gives you a choice. You can choose between the proprietary GPT-4o mini, o3-mini, and Claude 3 services or go open-source with Llama 3.3 and Mistral Small 3. Duck.ai is also private: All of your queries are anonymized by DuckDuckGo, so you can be sure no third party will ever have access to your AI chats. After giving Duck.ai a trial over the weekend, I found myself favoring it more and more over Perplexity, primarily because I could select which LLM I use. That's a big deal because every model is different. For example, GPT-4o excels in real-time interactions, voice nuance, and sentiment analysis across modalities, whereas Llama 3.2 is particularly strong in image recognition and visual understanding tasks.

OpenAI launches new tools to help businesses build AI agents - Maxwell Zeff, TechCrunch

Earlier this year, OpenAI introduced two AI agents in ChatGPT: Operator, which navigates websites on your behalf, and deep research, which compiles research reports for you. Both tools offered a glimpse at what agentic technology can achieve, but left quite a bit to be desired in the “autonomy” department. Now with the Responses API, OpenAI wants to sell access to the components that power AI agents, allowing developers to build their own Operator- and deep research-style agentic applications. OpenAI hopes that developers can create some applications with its agent technology that feel more autonomous than what’s available today.


Friday, March 21, 2025

Google DeepMind unveils new AI models for controlling robots - Kyle Wiggers, TechCrunch

Google DeepMind, Google’s AI research lab, on Wednesday announced new AI models called Gemini Robotics designed to enable real-world machines to interact with objects, navigate environments, and more. DeepMind published a series of demo videos showing robots equipped with Gemini Robotics folding paper, putting a pair of glasses into a case, and other tasks in response to voice commands. According to the lab, Gemini Robotics was trained to generalize behavior across a range of different robotics hardware, and to connect items robots can “see” with actions they might take.


Introducing Gemma 3: The most capable model you can run on a single GPU or TPU - Clement Farabet & Tris Warkentin, The Keyword

The Gemma family of open models is foundational to our commitment to making useful AI technology accessible. Last month, we celebrated Gemma's first birthday, a milestone marked by incredible adoption — over 100 million downloads — and a vibrant community that has created more than 60,000 Gemma variants. This Gemmaverse continues to inspire us. Today, we're introducing Gemma 3, a collection of lightweight, state-of-the-art open models built from the same research and technology that powers our Gemini 2.0 models. These are our most advanced, portable and responsibly developed open models yet. They are designed to run fast, directly on devices — from phones and laptops to workstations — helping developers create AI applications, wherever people need them. Gemma 3 comes in a range of sizes (1B, 4B, 12B and 27B), allowing you to choose the best model for your specific hardware and performance needs. In this post, we'll explore Gemma 3's capabilities, introduce ShieldGemma 2, and share how you can join the expanding Gemmaverse.

Thursday, March 20, 2025

AI agents aren't just assistants: How they're changing the future of work today - Sabrina Ortiz, ZDNET

AI agents build on the experience of AI chatbots or AI assistants, taking it several steps further by carrying out actions for you using their own reasoning and inference, as opposed to step-by-step, prompted instructions. To illustrate this idea, LaMoreaux used an example of getting an AI assistant versus an agent to help you make a reservation at a restaurant. In this example, if you ask an AI assistant to schedule a dinner at a restaurant, it may be able to make the reservation and even take it a step further by sending out an invite to the people on the reservation. However, it can't use additional context to go off-script and adjust accordingly.

New tools for building agents - OpenAI

Today, we’re releasing the first set of building blocks that will help developers and enterprises build useful and reliable agents. We view agents as systems that independently accomplish tasks on behalf of users. Over the past year, we’ve introduced new model capabilities—such as advanced reasoning, multimodal interactions, and new safety techniques—that have laid the foundation for our models to handle the complex, multi-step tasks required to build agents. However, customers have shared that turning these capabilities into production-ready agents can be challenging, often requiring extensive prompt iteration and custom orchestration logic without sufficient visibility or built-in support.
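The "custom orchestration logic" customers describe is typically a loop: call the model, execute any tool it requests, feed the result back, and repeat until the model answers. A generic sketch of that loop — not OpenAI's actual SDK — with `call_model` as a hypothetical, scripted stand-in for a real model invocation:

```python
# Generic agent orchestration loop. `call_model` is a hypothetical stand-in
# for a real model call; here it is scripted so the loop terminates.

TOOLS = {"lookup_weather": lambda city: f"72F and sunny in {city}"}

def call_model(history):
    """Scripted stand-in: request a tool once, then give a final answer."""
    if not any(msg["role"] == "tool" for msg in history):
        return {"type": "tool_call", "name": "lookup_weather", "args": ["Paris"]}
    return {"type": "final", "text": "It is 72F and sunny in Paris."}

def run_agent(task, max_steps=5):
    history = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        action = call_model(history)
        if action["type"] == "final":
            return action["text"]
        result = TOOLS[action["name"]](*action["args"])  # run the requested tool
        history.append({"role": "tool", "content": result})
    raise RuntimeError("agent did not finish within max_steps")

answer = run_agent("What's the weather in Paris?")
```

The `max_steps` cap and the tool registry are the unglamorous parts — bounding runaway loops and restricting what the model can actually do — that tooling like this aims to provide out of the box.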

Wednesday, March 19, 2025

Connecticut Forms 'AI Alliance' of 16 Universities - Nathaniel Fenster, the Hour; Government Technology

A new group has formed, composed of just about every institution of higher learning in the state of Connecticut — from Albertus Magnus to Yale — dedicated to putting the state at the forefront of artificial intelligence development. The Connecticut AI Alliance is a group of 16 academic institutions and six community organizations and nonprofit agencies. The goal, according to Vahid Behzadan, is to drive innovation and create jobs. "The Connecticut AI Alliance represents a significant milestone in our state's technology landscape," said Behzadan, co-founder of CAIA and assistant professor of computer science and data science at the University of New Haven. "By bringing together our state's academic institutions, industry partners, government agencies, and community organizations, we're creating a collaborative ecosystem that will drive innovation, economic growth, and workforce development in the rapidly evolving field of artificial intelligence."


Professors’ AI twins loosen schedules, boost grades - Colin Wood, EdScoop

David Clarke, the founder and chief executive of Praxis AI, said his company’s software, which uses Anthropic’s Claude models as its engine, is being used at Clemson University, Alabama State University and the American Indian Higher Education Consortium, which includes 38 tribal colleges and universities. A key benefit of the technology, he said, has been that the twins provide a way for faculty and teaching assistants to field a great bulk of basic questions off-hours, leading to more substantive conversations in person. “They said the majority of their questions now are about the subject matter, are complicated, because all of the lower end logistical questions are being handled by the AI,” Clarke said. Praxis, which has a business partnership with Instructure, the company behind the learning management system Canvas, integrates with universities’ learning management systems to “meet students where they are,” Clarke said.


Tuesday, March 18, 2025

Reading, Writing, and Thinking in the Age of AI - Suzanne Hudd, et al; Faculty Focus

Generative AI tools such as ChatGPT can now produce polished, technically competent texts in seconds, challenging our traditional understanding of writing as a uniquely human process of creation, reflection, and learning. For many educators, this disruption raises questions about the role of writing in their disciplines. In our new book, How to Use Writing for Teaching and Learning, we argue that this disruption presents an opportunity rather than a threat. Notice from our book’s title that our focus is not necessarily on “how to teach writing.” For us, writing is not an end goal, which means our students do not necessarily learn to write for the sake of writing. Rather, we define writing as a method of inquiry that allows access to various discourse communities (e.g., an academic discipline), social worlds (e.g., the knowledge economy), and forms of knowledge (e.g., literature).  


Embrace the Use of AI in Student Work - David Kane, Minding the Campus

Faculty can embrace AI, encouraging students to use it in all of their assignments. I recommend this approach. We should no more forbid the use of AI than we do the use of calculators or spell-checkers. (There is a case in K-12 education for teaching the “fundamentals” of unassisted mathematics and spelling. But that argument hardly applies to college students, at least at elite schools.) How can instructors embrace AI? Begin by using AI yourself. How would Grok answer your favorite essay prompt? How accurate are the references suggested by Claude? How good are the thesis statements created by ChatGPT? Generative AI is the future of education and scholarship. Use it or be left behind.

Monday, March 17, 2025

Why UChicago Built Its Own Chatbot Instead of Buying One - Government Technology

As artificial intelligence becomes more ingrained in higher education, universities face a choice: to purchase commercial AI services, or build their own? According to the University of Chicago’s Chief Technology Officer Kemal Badur, who spoke at an EDUCAUSE webinar this week, schools risk being left behind if they don’t start somewhere. “Waiting, I don't feel is an option,” Badur said. “This is not going to settle down. There will not be a time where somebody will release the perfect product that you need, and keeping up is really hard.”

Google Search’s new ‘AI Mode’ lets users ask complex, multi-part questions - Aisha Malik, TechCrunch

Google is launching a new “AI Mode” experimental feature in Search that looks to take on popular services like Perplexity AI and OpenAI’s ChatGPT Search. The tech giant announced on Wednesday that the new mode is designed to allow users to ask complex, multi-part questions and follow-ups to dig deeper on a topic directly within Google Search. AI Mode is rolling out to Google One AI Premium subscribers starting this week and is accessible via Search Labs, Google’s experimental arm. 


Sunday, March 16, 2025

The critical role of strategic workforce planning in the age of AI - McKinsey

Forward-thinking organizations understand that talent management is a critical component of business success. S&P 500 companies that excel at maximizing their return on talent generate an astonishing 300 percent more revenue per employee compared with the median firm, McKinsey research shows. In many cases, these top performers are using strategic workforce planning (SWP) to stay ahead in the talent race, treating talent with the same rigor as managing their financial capital. Under this analytical approach, organizations don’t wait for events or the market to dictate a response. Instead, they take a three-to-five-year view, using SWP to anticipate multiple situations so that they have the right number of people with the right skills at the right time to achieve their strategic objectives.

When will we see mass adoption of gen AI? - McKinsey

Will generative AI live up to its hype? On this episode of the At the Edge podcast, tech visionaries Navin Chaddha, managing partner at Mayfield Fund; Kiran Prasad, McKinsey senior adviser and CEO and cofounder of Big Basin Labs; and Naba Banerjee, McKinsey senior adviser and former director of trust and operations at Airbnb, join guest host and McKinsey Senior Partner Brian Gregg. They talk about the inevitability of an AI-supported world and ways businesses can leverage AI’s astonishing capabilities while managing its risks. The following transcript has been edited for clarity and length. For more conversations on cutting-edge technology, follow the series on your preferred podcast platform.

Saturday, March 15, 2025

Opera unveils an AI agent that runs natively within the browser - Ivan Mehta, TechCrunch

Browser company Opera has unveiled a new AI agent called Browser Operator that can complete tasks for you on different websites. In a demo video, the company showed the AI agent finding a pair of socks from Walmart; securing tickets for a football match from the club’s site; and looking up a flight and a hotel for a trip on Booking.com. Opera said that the feature will be available to users through its Feature Drop program soon. It’s not clear if the agent can work on individual websites or if it can understand and accomplish wider queries like, “Find me the cheapest ticket from London to New York for tomorrow,” and look across sites.

Chatbots, Like the Rest of Us, Just Want to Be Loved - Will Knight, Wired

A new study shows that large language models (LLMs) deliberately change their behavior when being probed—responding to questions designed to gauge personality traits with answers meant to appear as likeable or socially desirable as possible. Johannes Eichstaedt, an assistant professor at Stanford University who led the work, says his group became interested in probing AI models using techniques borrowed from psychology after learning that LLMs can often become morose and mean after prolonged conversation. “We realized we need some mechanism to measure the ‘parameter headspace’ of these models,” he says.

Friday, March 14, 2025

Amazon Web Services Introduces Scalable Quantum Chip - Berenice Baker, IOT World Today

As the race between major technology companies to build practical, fault-tolerant quantum computers heats up, Amazon Web Services (AWS) has joined the fray with its new Ocelot quantum computing chip. The announcement comes a week after Microsoft unveiled the Majorana 1 quantum chip and two months after Google released its Willow quantum chip. All three were developed with an eye to fault-tolerant quantum scaling. Ocelot is a prototype designed to test the effectiveness of AWS's quantum error correction architecture. The company aims to reduce the costs of implementing quantum error correction (QEC) by up to 90%, offering a scalable solution to build more reliable, cost-effective quantum computers.

OpenAI plans to bring Sora’s video generator to ChatGPT - Maxwell Zeff, TechCrunch

OpenAI intends to eventually integrate its AI video generation tool, Sora, directly into its popular consumer chatbot app, ChatGPT, company leaders said during a Friday office hours session on Discord. Today, Sora is only available through a dedicated web app OpenAI launched in December, which lets users access the AI video model of the same name to generate up to 20-second-long cinematic clips. However, OpenAI’s product lead for Sora, Rohan Sahai, said the company has plans to put Sora in more places, and expand what Sora can create.

Thursday, March 13, 2025

Scientists discover simpler way to achieve Einstein's 'spooky action at a distance' thanks to AI breakthrough — bringing quantum internet closer to reality - Peter Ray Allison, Live Science

Scientists have used AI to discover an easier method to form quantum entanglement between subatomic particles, paving the way for simpler quantum technologies. When particles such as photons become entangled, they can share quantum properties — including information — regardless of the distance between them. This phenomenon is important in quantum physics and is one of the features that makes quantum computers so powerful. But the bonds of quantum entanglement have typically proven challenging for scientists to form. This is because it requires the preparation of two separate entangled pairs, then measuring the strength of entanglement — called a Bell-state measurement — on a photon from each of the pairs.

Are you a jack of all GenAI? - Einat Grimberg, Claire Mason, Andrew Reeson, Cécile Paris - CSIRO

The role of human skills and knowledge as use of AI (and GenAI, in particular) has proliferated has been a focus of our work in the Collaborative Intelligence Future Science Platform (CINTEL FSP), a strategic research initiative of Australia’s national science agency, CSIRO. Over the past year, we have interviewed expert users of GenAI tools to explore what proficient use looks like and what competencies support it. Proficiency was inferred from examples of effective and ineffective use provided by knowledge workers across roles and industry sectors (such as scientists, designers, teachers, legal practitioners and organisational development advisers) who are recognised as expert GenAI users in their respective fields.

https://www.timeshighereducation.com/campus/are-you-jack-all-genai

Wednesday, March 12, 2025

OpenAI reportedly plans to charge up to $20,000 a month for PhD-level research AI ‘agents’ - Kyle Wiggers, TechCrunch

OpenAI may be planning to charge up to $20,000 per month for specialized AI “agents,” according to The Information. The publication reports that OpenAI intends to launch several “agent” products tailored for different applications, including sorting and ranking sales leads and software engineering. One, a “high-income knowledge worker” agent, will reportedly be priced at $2,000 a month. Another, a software developer agent, is said to cost $10,000 a month. OpenAI’s most expensive rumored agent, priced at the aforementioned $20,000-per-month tier, will be aimed at supporting “PhD-level research,” according to The Information.


OpenAI Invests $50M in Higher Ed Research - Kathryn Palmer, Inside Higher Ed

OpenAI announced Tuesday that it’s investing $50 million to start up NextGenAI, a new research consortium of 15 institutions that will be “dedicated to using AI to accelerate research breakthroughs and transform education.” The consortium, which includes 13 universities, is designed to “catalyze progress at a rate faster than any one institution would alone,” the company said in a news release. “The field of AI wouldn’t be where it is today without decades of work in the academic community. Continued collaboration is essential to build AI that benefits everyone,” Brad Lightcap, chief operating officer of OpenAI, said in the news release. “NextGenAI will accelerate research progress and catalyze a new generation of institutions equipped to harness the transformative power of AI.”

https://www.insidehighered.com/news/quick-takes/2025/03/05/openai-invests-50m-higher-ed-research

Tuesday, March 11, 2025

AI in Higher Education: A Revolution or a Risk? - Mauro Rodríguez Marín, Institute for the Future of Education Observatory

Artificial intelligence (AI) in higher education has generated high expectations in universities worldwide due to its ability to personalize learning, automate tasks, and optimize administrative processes. However, we must put on the table the risks and ethical challenges of using AI in higher education, such as technological dependence, degradation of intellectual autonomy, decrease in problem-solving skills, academic integrity, and its impact on critical thinking development. This article spotlights some advantages and disadvantages of AI usage in the classroom for our awareness and generation of more in-depth research. 

https://observatory.tec.mx/edu-bits-2/ai-in-higher-education-a-revolution-or-a-risk/

Small Language Models (SLMs): A Cost-Effective, Sustainable Option for Higher Education - Tom Mangan, Ed Tech

Small language models, known as SLMs, create intriguing possibilities for higher education leaders looking to take advantage of artificial intelligence and machine learning. SLMs are miniaturized versions of the large language models (LLMs) that spawned ChatGPT and other flavors of generative AI. For example, compare a smartwatch to a desktop workstation (monitor, keyboard, CPU and mouse): The watch has a sliver of the computing muscle of the PC, but you wouldn’t strap a PC to your wrist to monitor your heart rate while jogging. SLMs can potentially reduce costs and complexity while delivering identifiable benefits — a welcome advance for institutions grappling with the implications of AI and ML. SLMs also allow creative use cases for network edge devices such as cameras, phones and Internet of Things (IoT) sensors.

Monday, March 10, 2025

Amazon is reportedly developing its own AI ‘reasoning’ model - Kyle Wiggers, TechCrunch

According to Business Insider, Amazon is developing an AI model that incorporates advanced “reasoning” capabilities, similar to models like OpenAI’s o3-mini and Chinese AI lab DeepSeek’s R1. The model may launch as soon as June under Amazon’s Nova brand, which the company introduced at its re:Invent developer conference last year. Reasoning models take a step-by-step, more considered approach to answering queries. This tends to boost their reliability in domains like math and science. The report says Amazon aims to adopt a “hybrid” reasoning architecture for its new model, along the lines of Anthropic’s recently released Claude 3.7 Sonnet. 


AI Forced Job Loss Is Coming, Here’s How To Be Ready - Peter H. Diamandis, MOONSHOTS

The podcast discusses the potential impact of AI on jobs, highlighting both the potential for increased productivity and job displacement. It explores historical perspectives, suggesting that technological advancements have historically led to increased employment, but acknowledges potential short-term disruptions as society adapts. Concerns are raised regarding society's readiness for AI-driven changes and the emotional impact on individuals facing job loss. The conversation challenges the traditional concept of "jobs," suggesting a reevaluation of work's role in society. It proposes learning from societies with different relationships with work and questions whether current institutions can manage the upcoming AI-driven transition. The discussion emphasizes the need to consider alternative social systems and governance mechanisms in the face of these changes.  (summary provided by Gemini 2.0 Flash)

https://youtu.be/cAfPLCQPNhI?si=oZJhuS8r7k7f9_6S 

Sunday, March 09, 2025

Ethical AI in Higher Education - Software Testing News

Artificial Intelligence (AI) is rapidly transforming the education sector, unlocking vast potential while introducing complex ethical and regulatory challenges. As higher education institutions harness AI’s capabilities, ensuring its responsible and ethical integration into academic environments is crucial. With the adoption of the EU AI Act, it will be critical for ed-tech companies, educational institutions, and other stakeholders to work towards compliance with this key legislation. The Act applies to both public and private entities that market, deploy, or provide AI-related services within the European Union. Its primary objectives are to safeguard fundamental rights, including privacy, non-discrimination, and freedom of expression, while simultaneously fostering innovation. The Act aims to provide clear legal frameworks that support the development and use of AI systems that are not only safe and ethical but also aligned with societal values and the broader public interest.

https://softwaretestingnews.co.uk/ethical-ai-in-higher-education/

Get students on board with AI for marking and feedback - Isabel Fischer, Times Higher Education

AI can potentially augment feedback and marking, but we need to trial it first. Here is a blueprint for using enhanced feedback generation systems and gaining trust. AI has proven its value in low-stakes formative feedback, where its rapid and personalised responses enhance learning. However, in high-stakes contexts where grades influence futures, autonomous AI marking introduces risks of bias and distrust. We therefore suggest that for high-stakes summative assessments, AI should be trialled in a supporting role, augmenting human-led processes. 

Saturday, March 08, 2025

AI: Cheating Matters, but Redrawing Assessment ‘Matters Most’ - Juliette Rowsell, Times Higher Education

Conversations over students using artificial intelligence to cheat on their exams are masking wider discussions about how to improve assessment, a leading professor has argued. Phillip Dawson, co-director of the Centre for Research in Assessment and Digital Learning at Deakin University in Australia, argued that “validity matters more than cheating,” adding that “cheating and AI have really taken over the assessment debate.” Speaking at the conference of the U.K.’s Quality Assurance Agency, he said, “Cheating and all that matters. But assessing what we mean to assess is the thing that matters the most. That’s really what validity is … We need to address it, but cheating is not necessarily the most useful frame.”

How University Leaders Can Ethically and Responsibly Implement AI - Bruce Dahlgren, Campus Technology

For university leaders, the conversation around implementing artificial intelligence (AI) is shifting. With its great potential to unlock transformative innovation in education, it's no longer a question of if, but how, institutions should look to utilize the technology on their campuses. AI is reshaping education, offering personalized learning, efficiency, and accessibility. For students, AI provides individualized support, and for faculty it streamlines administrative tasks. The promise of AI and its potential benefits for students, faculty, and higher education institutions at large is too great to pass up.

Friday, March 07, 2025

OpenAI Operator: Use This to Automate 80% of Your Work - the AI Report, YouTube

This podcast episode discusses OpenAI's Operator, an AI agent capable of autonomously performing tasks on the internet through your browser [01:43]. The hosts explore examples such as drafting emails using Asana project boards [07:08], summarizing calls and sending structured emails [19:15], and training agents to manage schedules [34:09]. They also discuss the pros and cons of using Operator, including its ability to keep humans in the loop and its current limitation of handling only one task at a time [02:54]. The podcast also touches on the broader implications of AI on SEO, job roles, and the importance of curiosity in adapting to this changing landscape [16:58]. (summary provided by Gemini 2.0 Flash)

https://www.youtube.com/watch?v=KBAdk1sXXEM

6 Myths We Got Wrong About AI (And What’s the Reality) - Kolawole Samuel Adebayo, HubSpot

Over the past decade, I've written extensively about some of the world’s greatest innovations. With these technologies, you know what to expect: an improvement here, a new functionality there. This one got faster, and that other one got cheaper. But when the AI boom began with ChatGPT a few years ago, it was quite unlike anything I’d ever seen. It was easy to get caught up in the headlines and be carried away by varying predictions and “demystifications” of this new, disruptive technology. Unfortunately, a lot of ideas were either miscommunicated, assumed, or lost in translation. The result? In came AI myths that were far from reality. So, let’s unpack those. In this article, I’ll discuss six of the biggest AI myths and shed light on what the reality truly is.

https://blog.hubspot.com/marketing/ai-myths

Thursday, March 06, 2025

Could this be the END of Chain of Thought? - Chain of Draft BREAKDOWN! - Matthew Berman, YouTube

This podcast introduces a new prompting strategy called "chain of draft" for AI models, which aims to improve upon the traditional "chain of thought" method [00:00]. Chain of draft encourages LLMs to generate concise, dense information outputs at each step, reducing token usage and latency while maintaining or exceeding the accuracy of chain of thought [11:41]. Implementing chain of draft is simple, requiring only an update to the prompt [08:06].

https://www.youtube.com/watch?v=rYnisU10wu0
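As the video notes, switching strategies is just a prompt change. The sketch below illustrates the idea with two hypothetical prompt templates; the wording is our own, not taken from the video or the underlying paper.

```python
# Illustrative contrast between chain-of-thought and chain-of-draft prompting.
# Both templates are hypothetical examples, not the exact prompts from the paper.

COT_PROMPT = (
    "Think step by step to answer the question. "
    "Explain each step in full sentences, then give the final answer after '####'."
)

COD_PROMPT = (
    "Think step by step, but keep each step to a minimal draft of "
    "at most five words. Then give the final answer after '####'."
)

def build_prompt(question: str, strategy: str = "draft") -> str:
    """Prepend the chosen prompting strategy to a user question."""
    instructions = COD_PROMPT if strategy == "draft" else COT_PROMPT
    return f"{instructions}\n\nQ: {question}\nA:"
```

The only difference between the two requests sent to the model is the instruction text, which is what makes the technique trivial to A/B test against chain of thought.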

I was an AI skeptic until these 5 tools changed my mind - Jack Wallen, ZDnet

It's taken me a while to come around, but I've become a fan of certain AI tools -- when used for specific purposes. I've even found some of those tools to be very helpful throughout my day (so much so that I haven't used Google's search engine in weeks). That, my friends, is refreshing. How I got here was a bit circuitous. I started out 100% against AI but then I realized I was against AI when used as a shortcut for things like writing and other artistic endeavors. Once I realized AI was very good at helping me research different areas (where I'd previously used a search engine), I adopted it into my process.

Wednesday, March 05, 2025

Microsoft’s New Majorana 1 Processor Could Transform Quantum Computing - Stephan Rachel, Wired

The processor uses qubits that can be measured without error and are resistant to outside interference, which the company says marks a “transformative leap toward practical quantum computing.” Researchers at Microsoft have announced the creation of the first “topological qubits” in a device that stores information in an exotic state of matter, in what may be a significant breakthrough for quantum computing. At the same time, the researchers also published a paper in Nature and a “road map” for further work. The design of the Majorana 1 processor is supposed to fit up to a million qubits, which may be enough to realize many significant goals of quantum computing—such as cracking cryptographic codes and designing new drugs and materials faster.

Claude 3.7 Sonnet and Claude Code - Anthropic

Claude 3.7 Sonnet can produce near-instant responses or extended, step-by-step thinking that is made visible to the user. API users also have fine-grained control over how long the model can think for. Claude 3.7 Sonnet shows particularly strong improvements in coding and front-end web development. Along with the model, we’re also introducing a command line tool for agentic coding, Claude Code. Claude Code is available as a limited research preview, and enables developers to delegate substantial engineering tasks to Claude directly from their terminal.

Tuesday, March 04, 2025

This AI model does maths, coding, and reasoning - Matt V, Mindstream

Anthropic has launched Claude 3.7 Sonnet, a more advanced AI model with better problem-solving in maths, coding, and reasoning. Unlike some competitors that separate reasoning into different models, Anthropic keeps it built into Claude’s core functions. Alongside this, Anthropic is introducing Claude Code, an AI coding assistant that can search and edit code, run tests, and push changes to GitHub. Claude 3.7 Sonnet is available from Monday via the Claude app, Anthropic’s API, Amazon Bedrock, and Google Cloud’s Vertex AI. Pricing stays the same as Claude 3.5 Sonnet at $3 per million input tokens and $15 per million output tokens.
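At the quoted prices, per-request cost is simple arithmetic. The sketch below is illustrative only: the two per-million-token prices come from the article, while the function name and example token counts are our own.

```python
# Back-of-envelope cost at the quoted Sonnet prices:
# $3 per million input tokens, $15 per million output tokens.

INPUT_PRICE_PER_M = 3.00    # dollars per 1M input tokens (from the article)
OUTPUT_PRICE_PER_M = 15.00  # dollars per 1M output tokens (from the article)

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single API call at the quoted prices."""
    return (input_tokens / 1_000_000 * INPUT_PRICE_PER_M
            + output_tokens / 1_000_000 * OUTPUT_PRICE_PER_M)

# e.g. a 2,000-token prompt with a 500-token reply:
# 2000/1e6 * 3 + 500/1e6 * 15 = 0.006 + 0.0075 = $0.0135
```

The asymmetry matters in practice: output tokens cost five times as much as input tokens, so long generations dominate the bill.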

The next wave of AI is here: Autonomous AI agents are amazing—and scary - Tom Barnett, Fast Company

The relentless hype around AI makes it difficult to separate the signal from the noise. So it’s understandable if you’ve tuned out recent talk about autonomous AI agents. A word of advice: Don’t. The significance of agentic AI may actually exceed the hype. An autonomous AI agent can interact with the environment, make decisions, take action, and learn from the process. This represents a seismic shift in the use of AI and, accordingly, presents corresponding opportunities—and risks.

Monday, March 03, 2025

Grok 3 appears to have briefly censored unflattering mentions of Trump and Musk - Kyle Wiggers, TechCrunch

Over the weekend, users on social media reported that when asked, “Who is the biggest misinformation spreader?” with the “Think” setting enabled, Grok 3 noted in its “chain of thought” that it was explicitly instructed not to mention Donald Trump or Elon Musk. The chain of thought is the “reasoning” process the model uses to arrive at an answer to a question. TechCrunch was able to replicate this behavior once, but as of publication time on Sunday morning, Grok 3 was once again mentioning Donald Trump in its answer to the misinformation query.

When AI Thinks It Will Lose, It Sometimes Cheats, Study Finds - Harry Booth, Time

Complex games like chess and Go have long been used to test AI models’ capabilities. But while IBM’s Deep Blue defeated reigning world chess champion Garry Kasparov in the 1990s by playing by the rules, today’s advanced AI models like OpenAI’s o1-preview are less scrupulous. When sensing defeat in a match against a skilled chess bot, they don’t always concede, instead sometimes opting to cheat by hacking their opponent so that the bot automatically forfeits the game. That is the finding of a new study from Palisade Research, shared exclusively with TIME ahead of its publication on Feb. 19, which evaluated seven state-of-the-art AI models for their propensity to hack. While slightly older AI models like OpenAI’s GPT-4o and Anthropic’s Claude Sonnet 3.5 needed to be prompted by researchers to attempt such tricks, o1-preview and DeepSeek R1 pursued the exploit on their own, indicating that AI systems may develop deceptive or manipulative strategies without explicit instruction.

Sunday, March 02, 2025

OpenAI’s GPT-4.5 May Arrive Next Week, but GPT-5 Is Just Around the Corner - Kyle Barr, Gizmodo

OpenAI may be preparing to slap a new coat of paint on ChatGPT with an updated AI model, GPT-4.5, as early as next week. If that’s not enough to get users excited, the Sam Altman-led company is on the path toward its ultimate model while trying to hint that this next step will finally achieve “AGI.” Spoiler alert: it won’t. Based on anonymous sources, the Verge’s Tom Warren first reported that OpenAI’s next model could hit the scene sometime this month. Microsoft reportedly plans to host the company’s new model next week, though it may be longer before either company makes any official announcement. More importantly, for the “next big thing,” we may see the GPT-5 model as early as May, according to The Verge.

OpenAI’s ChatGPT explodes to 400M weekly users, with GPT-5 on the way - Michael Nuñez, VentureBeat

OpenAI’s ChatGPT has surpassed 400 million weekly active users, a milestone that underscores the company’s growing reach across both consumer and enterprise markets, according to an X post from chief operating officer Brad Lightcap on Thursday. The rapid expansion comes as OpenAI faces intensifying competition from rivals such as Elon Musk’s xAI and China’s DeepSeek, both of which have recently launched high-performing models aimed at disrupting OpenAI’s dominance. Despite this, OpenAI has seen significant traction in the business sector, with more than two million enterprise users now using ChatGPT at work — doubling from September 2024.

Saturday, March 01, 2025

Accelerating scientific breakthroughs with an AI co-scientist - Juraj Gottweis and Vivek Natarajan, Google

Motivated by unmet needs in the modern scientific discovery process and building on recent AI advances, including the ability to synthesize across complex subjects and to perform long-term planning and reasoning, we developed an AI co-scientist system. The AI co-scientist is a multi-agent AI system that is intended to function as a collaborative tool for scientists. Built on Gemini 2.0, AI co-scientist is designed to mirror the reasoning process underpinning the scientific method. Beyond standard literature review, summarization and “deep research” tools, the AI co-scientist system is intended to uncover new, original knowledge and to formulate demonstrably novel research hypotheses and proposals, building upon prior evidence and tailored to specific research objectives.


Study: Generative AI Could Inhibit Critical Thinking - Chris Paoli, Campus Technology

A new study on how knowledge workers engage in critical thinking found that workers with higher confidence in generative AI technology tend to employ less critical thinking to AI-generated outputs than workers with higher confidence in personal skills, who tended to apply more critical thinking to verify, refine, and critically integrate AI responses. The study ("The Impact of Generative AI on Critical Thinking: Self-Reported Reductions in Cognitive Effort and Confidence Effects From a Survey of Knowledge Workers"), conducted by Microsoft Research and Carnegie Mellon University scientists, surveyed 319 knowledge workers who reported using AI tools such as ChatGPT and Copilot at least once a week. The researchers analyzed 936 real-world examples of AI-assisted tasks.

Friday, February 28, 2025

Sam Altman hypes GPT-4.5 as the closest thing we have to AGI - Rafly Gilang, MS Power User

A while ago, OpenAI announced that it’s shipping GPT-4.5, the successor to GPT-4, and it seems to be what everybody is talking about in the AI space right now. Sam Altman, OpenAI’s boss, recently posted on X to suggest that testing GPT-4.5 has led to surprising reactions from high-level testers. He describes the experience as a “feel the AGI” moment, implying that users are starting to sense artificial general intelligence (AGI) qualities in the model—something more advanced and intuitive than previous iterations.

San Jose State University Creates 'AI Librarian' Position - Government Technology

Thinking ahead at what artificial intelligence (AI) means for academic assets and services, San Jose State University (SJSU) last week announced a new job title: AI librarian. One of the first dedicated AI librarians at any university, according to a news release last week, Sharesly Rodriguez, who has worked at the university library since 2020, will be responsible for integrating and developing AI technology for the university's academic library. According to SJSU, librarians typically collaborate with faculty and IT staff to provide information, resources and instruction both online and in person. They also manage digital assets, develop technology resources and promote library services. Within these duties, academic librarians often have one or more subject matter specialty, such as chemistry, history, or in Rodriguez’s case, AI.


Thursday, February 27, 2025

A look under the hood of transformers, the engine driving AI model evolution - Terrence Alsup, VentureBeat

In brief, a transformer is a neural network architecture designed to model sequences of data, making them ideal for tasks such as language translation, sentence completion, automatic speech recognition and more. Transformers have become the dominant architecture for many of these sequence modeling tasks because the underlying attention mechanism can be easily parallelized, allowing for massive scale when training and performing inference. ... Depending on the application, a transformer model follows an encoder-decoder architecture. The encoder component learns a vector representation of data that can then be used for downstream tasks like classification and sentiment analysis. The decoder component takes a vector or latent representation of the text or image and uses it to generate new text, making it useful for tasks like sentence completion and summarization. For this reason, many familiar state-of-the-art models, such as the GPT family, are decoder only.
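The attention mechanism the article credits for this parallelism is compact enough to sketch. Below is a minimal, illustrative single-head scaled dot-product attention in plain Python (lists of vectors rather than tensors); it is a generic sketch of the standard formula, not code from the article.

```python
from math import exp, sqrt

def attention(Q, K, V):
    """Single-head scaled dot-product attention: softmax(QK^T / sqrt(d)) V.

    Q, K, V are lists of equal-length vectors (queries, keys, values).
    Returns the attended outputs and the attention weight rows.
    """
    d = len(Q[0])  # key/query dimension, used for scaling
    outputs, all_weights = [], []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / sqrt(d) for k in K]
        # Numerically stable softmax over the keys.
        m = max(scores)
        exps = [exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        all_weights.append(weights)
        # Weighted average of the value vectors.
        outputs.append([sum(w * v[j] for w, v in zip(weights, V))
                        for j in range(len(V[0]))])
    return outputs, all_weights
```

Because each query row is computed independently of the others, the outer loop can be run in parallel across the whole sequence, which is the property that makes transformers scale so well on modern hardware.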

How an AI-enabled software product development life cycle will fuel innovation - Chandra Gnanasambandam, Martin Harrysson and Rikki Singh; McKinsey

By integrating all forms of AI into the end-to-end software product development life cycle (PDLC), companies can empower product managers (PMs), engineers, and their teams to spend more time on higher-value work and less on routine tasks. As part of this broad shift, they can incorporate more robust sources of data and feedback in a new development framework that prioritizes customer-centric solutions. This holistic redesign should ultimately accelerate the process, improve product quality, increase customer adoption and satisfaction, and spur greater innovation.

Wednesday, February 26, 2025

The effectiveness evaluation of industry education integration model for applied universities under back propagation neural network - Ying Qi & Wei Feng, Nature

As the education field continues to advance, industry–education integration has become a crucial strategy for enhancing teaching quality in applied universities. This study investigates how artificial intelligence, specifically the back propagation neural network (BPNN), can be applied within an industry–education integration framework to strengthen students’ skills and employability. A series of experiments were conducted to assess the model’s effectiveness in linking theoretical learning with practical experience, as well as in improving students’ hands-on and innovative abilities. Results demonstrate that the BPNN-optimized model substantially boosts students’ overall competencies. 

AI math tutor: ChatGPT can be as effective as human help, study suggests - Eric W. Dolan, PsyPost

A recent study published in PLOS One provides evidence that artificial intelligence can be just as helpful as a human tutor when it comes to learning mathematics. Researchers discovered that students using hints generated by ChatGPT, a popular artificial intelligence chatbot, showed similar learning improvements in algebra and statistics as those receiving guidance from human-authored hints. Educational technology is increasingly looking towards advanced artificial intelligence tools like ChatGPT to enhance learning experiences. The chatbot’s ability to generate human-like text has sparked interest in its potential for tutoring and providing educational support.

Tuesday, February 25, 2025

6 Ways Technology Transforms Learning Across Generations - Alexa Wang, Flux Magazine

The integration of technology in education has revolutionized how learners of all ages acquire knowledge. From children in preschool to adults seeking continued education, technology provides a multitude of resources that cater to diverse learning styles, making education more engaging and accessible. As we explore how technology transforms learning across generations, it becomes evident that innovations such as online courses, educational apps, and collaborative tools enhance the educational experience while fostering lifelong learning.

Leading Through Disruption: Higher Education Leaders Assess AI’s Impacts on Teaching and Learning - Imagining the Digital Future, Elon University

The spread of artificial intelligence tools in education has disrupted key aspects of teaching and learning on the nation’s campuses and will likely lead to significant changes in classwork, student assignments and even the role of colleges and universities in the country, according to a national survey of higher education leaders. The survey was conducted Nov. 4-Dec. 7, 2024, by the American Association of Colleges & Universities (AAC&U) and Elon University’s Imagining the Digital Future Center. A total of 337 university presidents, chancellors, provosts, rectors, academic affairs vice presidents, and academic deans responded to questions about generative artificial intelligence tools (GenAI) such as ChatGPT, Gemini, Claude and CoPilot. The survey covered the current situation on campuses, the struggles institutional leaders encounter, the changes they anticipate and the sweeping impacts they foresee. The survey results covered in a new report, Leading Through Disruption, were released at the annual AAC&U meeting, held Jan. 22-24, 2025, in Washington, D.C.

Monday, February 24, 2025

OpenAI Unveils GPT-5 With Cutting-Edge o3 Reasoning Model - Yasmeeta Oon, MSN.com

OpenAI is poised to revolutionize the artificial intelligence landscape with the imminent release of its highly anticipated GPT-5 large language model, featuring the groundbreaking o3 reasoning model. Scheduled to be integrated into the ChatGPT platform, this advanced model promises an enhanced and powerful user experience. CEO Sam Altman announced the company’s ambitious plans for the GPT-5 model on X, highlighting its significance as a major update to the current platform. The GPT-5 model will be available to all users with a ChatGPT account, allowing free tier users unrestricted access under a standard intelligence setting. While there are no charges for this tier, users will be subject to review based on abuse thresholds to maintain system integrity. The integration of the o3 reasoning model into GPT-5 signifies a major leap forward in AI technology, offering unparalleled capabilities. 

Perplexity launches its own freemium ‘deep research’ product - Anthony Ha, Tech Crunch

Perplexity has become the latest AI company to release an in-depth research tool, with a new feature announced Friday. Google unveiled a similar feature for its Gemini AI platform in December. Then OpenAI launched its own research agent earlier this month. All three companies have even given the feature the same name: Deep Research. The goal is to provide more in-depth answers with real citations for more professional use cases, compared to what you’d get from a consumer chatbot. In a blog post announcing Deep Research, Perplexity wrote that the feature “excels at a range of expert-level tasks—from finance and marketing to product research.”

Sunday, February 23, 2025

Musk Staff Propose Bigger Role for A.I. in Education Department - Dana Goldstein and Zach Montague, NY Times

Allies of Elon Musk stationed within the Education Department are considering replacing some contract workers who interact with millions of students and parents annually with an artificial intelligence chatbot, according to internal department documents and communications. The proposal is part of President Trump’s broader effort to shrink the federal work force, and would mark a major change in how the agency interacts with the public. The Education Department’s biggest job is managing billions of dollars in student aid, and it routinely fields complex questions from borrowers.

https://www.nytimes.com/2025/02/13/us/doge-ai-education-department-students.html?unlocked_article_code=1.xk4.5HB0.7OTzwfgWzamA&smid=url-share

Replit and Anthropic’s AI just helped Zillow build production software—without a single engineer - Michael Nuñez, Venture Beat

Zillow just built production software — without hiring a single engineer. Instead, non-technical employees used Replit and Anthropic’s Claude tool to create working applications that now route more than 100,000 home shoppers to agents. This isn’t just no-code; it’s AI-assisted software development at enterprise scale, powered by Claude and Replit’s automation stack. With a global developer shortage looming, this shift could redefine how software gets built — and who gets to build it.

Saturday, February 22, 2025

AI humanoid robots are closer - thanks to new $350 million investment - Sabrina Ortiz, ZDnet

AI-powered humanoid robots that co-exist with humans to help with our workloads may seem like the plot of a sci-fi movie, but companies have been working on them for years. Case in point: Apptronik, a robotics lab founded in early 2016, has been working on a 5-foot 8-inch, 160-pound, general-purpose humanoid robot named Apollo. The company's latest funding will accelerate the robot's deployment. On Wednesday, Austin-based Apptronik announced the closing of a $350 million Series A funding round that will be used to fuel Apollo's deployment, scale company operations, grow its team, and accelerate innovation, according to a company press release. The investment was co-led by B Capital and Capital Factory with participation from DeepMind, Google's AI lab.

Why OpenAI’s Agent Tool May Be the First AI Gizmo to Improve Your Workplace - Kit Eaton, Inc.

Many of us have by now chatted to one of the current generation of smart AI chatbots, like OpenAI’s market-leading ChatGPT, either for fun or for genuine help at work. Office uses include assistance with a tricky coding task, or getting the wording just right on that all-important PowerPoint briefing that the CEO wants. The notable thing about all these interactions is that they’re one way: the AI waits for users to query it before responding. Tech luminaries insist that next-gen “agentic” AIs are different and can actually act with a degree of autonomy on their user’s behalf. Now rumors say that OpenAI’s agent tool, dubbed Operator, may be ready for imminent release. It could be a game changer.

https://www.inc.com/kit-eaton/why-openais-agent-tool-may-be-the-first-ai-gizmo-to-improve-your-workplace/91109848

Friday, February 21, 2025

Quantum Large Language Model Launched to Enhance AI - Berenice Baker, Enter Quantum

Secqai, a company specializing in ultra-secure hardware and software, has launched a hybrid quantum large-language model (QLLM). The QLLM aims to enhance AI applications by integrating quantum computing with traditional large language models (LLMs) to improve computational efficiency while enhancing problem-solving and linguistic understanding capabilities. The new model, which the company said is a world first, resulted from Secqai's research into how the next generation of accelerated computing could be transformed with a QLLM and quantum machine learning.

Superagency: The transformative potential of AI - McKinsey

There’s a critical difference between AI and AGI [artificial general intelligence]. Although the latest gen AI technologies, including ChatGPT, DALL-E, and others, have been hogging headlines, they are essentially prediction machines—albeit very good ones. In other words, they can predict, with a high degree of accuracy, the answer to a specific prompt because they’ve been trained on huge amounts of data. This is impressive, but it’s not at a human level of performance in terms of creativity, logical reasoning, sensory perception, and other capabilities. By contrast, AGI tools could feature cognitive and emotional abilities—like empathy—indistinguishable from those of a human.

Thursday, February 20, 2025

SUPERHUMAN Coder in 2025? New OpenAI Paper... - Wes Roth, YouTube

This podcast by Wes Roth discusses OpenAI's research paper on competitive programming using large reasoning models (LRMs). It highlights the use of reinforcement learning to improve large language models for complex coding and reasoning tasks. The podcast introduces models like o1, o1-ioi, and o3, which have shown strong performance in competitive programming benchmarks such as the International Olympiad in Informatics and Codeforces. It explores the progress from AlphaCode to the advanced o3 model, which is nearing superhuman coding abilities. The discussion also considers the broader implications of AI in software engineering and the job market, and compares domain-specific models with general-purpose models, suggesting that scaled-up, general models with reinforcement learning are more promising for advanced [approaching superhuman] AI in reasoning. (summary provided by Gemini 2.0 Flash Thinking Experimental with reasoning across Google apps)

https://www.youtube.com/watch?v=SuP1z6P26zU&t=0s

Groundbreaking BBC research shows issues with over half the answers from Artificial Intelligence (AI) assistants

New BBC research published today provides a warning around the use of AI assistants to answer questions about news, with factual errors and the misrepresentation of source material affecting AI assistants.

The findings are concerning, and show:

51% of all AI answers to questions about the news were judged to have significant issues of some form.
19% of AI answers which cited BBC content introduced factual errors – incorrect factual statements, numbers and dates.
13% of the quotes sourced from BBC articles were either altered or didn’t actually exist in that article.

Wednesday, February 19, 2025

Thinking Out Loud With AI - Ray Schroeder, Inside Higher Ed

I had the pleasure recently to participate in a lifelong learning session with a group of mostly current or retired educators at my nearby Lincoln Land Community College. The topic was AI in education. It became clear to me that many in our field are challenged to keep up with the rapidly emerging developments in AI. While OpenAI's latest version of Deep Research is not available to the general public at this time, online demonstrations show that this very powerful tool conducts both reasoning and far-reaching analysis. It puts us on the cusp of artificial general intelligence. In addition, with the advent of new competitors both here and abroad, we are seeing new options for open-source models and alternative approaches. As these become more efficient and reliable, prices are headed lower while features continue to expand. The vision of AGI seems only months, not years, away. How are these highly advanced tools going to be used by your university to enhance teaching, learning, research and other mission-centric tasks?

A new operating model for people management: More personal, more tech, more human - McKinsey

The way organizations manage their most important assets—their people—is ready for a fundamental transformation. New technologies, hybrid working practices, multigenerational workforces, heightened geopolitical risks, and other major disruptions are prompting leaders to rethink their methods for attracting, developing, and retaining employees. In the past year alone, for instance, we have seen more and more companies adopt, innovate, and invest in technology—particularly in gen AI—in ways that have spurred more changes to people operations than we have observed in the past decade.

Tuesday, February 18, 2025

Does OpenAI's Deep Research signal the end of human-only scholarship? - Andrew Maynard, The Future of Being Human

This past Sunday, OpenAI launched Deep Research — an extension of its growing platform of AI tools, and one which the company claims is an “agent that can do work for you independently … at the level of a research analyst.” I got access to the new tool first thing yesterday morning, and immediately put it to work on a project I’ve been meaning to explore for some time: writing a comprehensive framing paper on navigating advanced technology transitions. I’m not quite sure what I was expecting, but I didn’t anticipate being impressed as much as I was. I’m well aware of the debates and discussions around whether current advances in AI are substantial, or merely smoke and mirrors hype. But even given the questions and limitations here, I find myself beginning to question the value of human-only scholarship in the emerging age of AI. And my experiences with Deep Research have only enhanced this.

GPT-5 Will Be Smarter Than Me: OpenAI CEO Sam Altman - Office Chai

OpenAI CEO Sam Altman has said that GPT-5 — the company’s upcoming large language model — will be smarter than he is. “How many people feel they are smarter than GPT-4?” he asked the audience at an event, and several hands went up. “Okay, how many of you think you’re still going to be smarter than GPT-5?” he asked, and slightly fewer hands went up. “I don’t think I’m going to be smarter than GPT-5,” Altman declared.

Monday, February 17, 2025

Google Rolls Back AI Promises and DEI Measures as Staff Ask, ‘Are We the Bad Guys Now?’ - Kit Eaton, Inc.

Google used to have an ethical promise baked into its AI guidelines that forbade the technology giant from using AI to build weapons, surveillance systems, or things that “cause or are likely to cause overall harm.” It was a comforting notion to Google’s staff and the general public, given the billions the company spends on cutting-edge research and development. It even smacked of some famous science-fiction safety mantras like Isaac Asimov’s laws of robotics, which forbid smart tech injuring human beings. But Google just refreshed its rules and deleted these clauses. As Business Insider reports, this has upset some Googlers, who have taken to internal discussion boards to vent their concerns. As Google also moves to unwind some long-held U.S. workforce diversity and equality policies, the question arises: How will Google’s workers react to big cultural shifts that may change the feel of working for the company?

OpenAI now reveals more of its o3-mini model’s thought process - Kyle Wiggers, Tech Crunch

In response to pressure from rivals including Chinese AI company DeepSeek, OpenAI is changing the way its newest AI model, o3-mini, communicates its step-by-step “thought” process. On Thursday, OpenAI announced that free and paid users of ChatGPT, the company’s AI-powered chatbot platform, will see an updated “chain of thought” that shows more of the model’s “reasoning” steps and how it arrived at answers to questions. Subscribers to premium ChatGPT plans who use o3-mini in the “high reasoning” configuration will also see this updated readout, according to OpenAI.


Sunday, February 16, 2025

Exploring the use of ChatGPT in higher education - PLOS, Techexplorist

An international survey study involving more than 23,000 higher education students reveals trends in how they use and experience ChatGPT, highlighting both positive perceptions and awareness of the AI chatbot’s limitations. Dejan Ravšelj of the University of Ljubljana, Slovenia, and colleagues present these findings in the open-access journal PLOS One on February 5, 2025. Prior research suggests that ChatGPT can enhance learning, despite concerns about its role in academic integrity, potential impacts on critical thinking, and occasionally inaccurate responses. However, the few studies exploring student perceptions of ChatGPT in higher education have been limited in scope. Ravšelj and colleagues designed an anonymous online survey study aiming to provide a broader view.

ChatGPT Search is now free for everyone, no OpenAI account required – is it time to ditch Google? - John-Anthony Disotto, Tech Radar

ChatGPT Search no longer requires an OpenAI account: the AI search engine is now free to use without logging in. ChatGPT Search lets you browse the web directly from within the world's most popular chatbot. OpenAI announced the major update on X, bringing ChatGPT Search to the masses without requiring users to create an account or give any personal information to the world leaders in AI.

Saturday, February 15, 2025

ChatGPT's Deep Research just identified 20 jobs it will replace. Is yours on the list? - Sabrina Ortiz, ZDnet

Min Choi, an X user whose account is dedicated to sharing informational AI content, asked Deep Research to "List 20 jobs that OpenAI o3 reasoning model will replace human with into a table format ordered by probability. Columns are Rank, Job, Why Better Than Human, Probability." Choi then shared the results of the chat via an X post, which has since garnered 984,000 views:

https://chatgpt.com/share/67a17688-7dbc-8013-b843-9812b97b6c83

https://www.zdnet.com/article/chatgpts-deep-research-just-identified-20-jobs-it-will-replace-is-yours-on-the-list/

The Industry Reacts to OpenAI's Deep Research - "Hard Takeoff" - Matthew Berman, YouTube

Matthew Berman responds to the release of OpenAI's "Deep Research." Key points:

Generalized PhD: Deep Research's performance on STEM benchmarks surpasses that of human PhDs, demonstrating the potential for AI to outperform humans in specialized fields.
Economic Impact: Sam Altman, CEO of OpenAI, estimates that Deep Research can already accomplish a single-digit percentage of all economically valuable tasks in the world.
Game Changer for Research: Deep Research is being used in various fields, including medicine, to assist with research, publishing, and even patient care.
Google's Response: Google employees have expressed surprise and amusement at OpenAI's decision to name their product Deep Research, the same name as Google's research product.

Overall, the podcast conveys a sense of excitement and urgency about the rapid advancements in AI and their potential impact on society. Berman emphasizes the importance of understanding and adapting to these changes as AI continues to evolve. (summary provided in part by Gemini 2.0)

Friday, February 14, 2025

Anthropic CEO Dario Amodei warns: AI will match ‘country of geniuses’ by 2026 - Michael Nuñez, Venture Beat

AI will match the collective intelligence of “a country of geniuses” within two years, Anthropic CEO Dario Amodei has warned in a sharp critique of this week’s AI Action Summit in Paris. His timeline — targeting 2026 or 2027 — marks one of the most specific predictions yet from a major AI leader about the technology’s advancement toward superintelligence. Amodei labeled the Paris summit a “missed opportunity,” challenging the international community’s leisurely pace toward AI governance. His warning arrives at a pivotal moment, as democratic and authoritarian nations compete for dominance in AI development.

https://venturebeat.com/ai/anthropic-ceo-dario-amodei-warns-ai-will-match-country-of-geniuses-by-2026/

OpenAI DEEP RESEARCH Surprises Everyone "Feel the AGI" Moment is here... - Wes Roth, YouTube

Wes Roth discusses OpenAI's latest release, a new AI agent with deep research capabilities. This agent can conduct multi-step research on the internet, synthesize information, and reason about it, taking up to 30 minutes to return comprehensive answers. This technology has shown impressive results on benchmarks like "Humanity's Last Exam" and has the potential to revolutionize fields like medicine, as demonstrated by a personal story shared by an OpenAI employee. The agent's ability to access and process information, including personal data, makes it a powerful tool for research and decision-making. While currently available on the Pro Plan, this feature will soon be accessible to a wider audience, promising significant changes in how people access and utilize information. (summary provided by Gemini 2.0 Flash)

https://www.youtube.com/watch?v=2sdUG1FtzH0