Tuesday, May 05, 2026

Musk and Altman’s bitter feud over OpenAI to be laid bare in court - the Guardian

The bitter rivalry between two of the tech world’s most powerful men arrives in court this week, as Elon Musk’s lawsuit against Sam Altman and OpenAI heads to trial in Oakland, California. The case is set to feature some of the biggest names in Silicon Valley, and its outcome could affect the course of the AI boom. Musk’s suit, filed in 2024, focuses on the formative years of OpenAI when he, Altman and others co-founded the artificial intelligence company as a nonprofit with a grand purpose. “OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return,” reads the company’s mission statement, published in late 2015.

College students are changing course in search of ‘AI-proof’ majors. But no one knows what they are - JOCELYN GECKER and LINLEY SANDERS, Associated Press

Two years ago, Josephine Timperman arrived at college with a plan. She declared a major in business analytics, figuring she’d learn niche skills that would stand out on a resume and help land a good job after college. But the rise of artificial intelligence has scrambled those calculations. The basic skills she was learning, in areas like statistical analysis and coding, can now easily be automated. “Everyone has a fear that entry-level jobs will be taken by AI,” said the 20-year-old at Miami University in Ohio. A few weeks ago, Timperman switched her major to marketing. Her new strategy is to use her undergraduate studies to build critical thinking and interpersonal skills — areas where humans still have an edge.


Monday, May 04, 2026

How AI is Reshaping the Future of Work - Stanford University Graduate School of Business

Artificial intelligence is reshaping how work happens. It is changing daily workflows, influencing how teams make decisions, and pushing leaders to rethink how organizations are structured. As with any major shift, the impact depends less on the technology itself and more on the conditions leaders create around it. As AI becomes more common across industries, the future of work will depend on leaders who can integrate these tools responsibly. AI introduces new capabilities, but leadership determines how they are applied. 

Penn State launches AI literacy course for employees - EdScoop

The AI Essentials training program is designed to provide “the knowledge, skills and ethical grounding” needed to use AI responsibly. “By organizing the course into modules focused on technical knowledge, ethical awareness, critical thinking and practical application, we are empowering students, faculty and staff to engage with AI as informed, responsible participants both within the University and beyond,” Executive Vice President and Provost Fotis Sotiropoulos said in the announcement. “By aligning our AI literacy programming with the release of a new enterprise service, we are positioning Penn State at the forefront of institutions embedding comprehensive AI literacy into the undergraduate experience and in preparing our community to lead thoughtfully in an evolving technological landscape. I want to thank the AI Coordinating Council for their ongoing leadership and the instructional designers who developed this curriculum, with the support of subject matter experts, for our community.”

Sunday, May 03, 2026

GPT 5.5: Autonomous Intelligence Breakthrough - AI Revolution

OpenAI’s GPT 5.5 marks a transition toward autonomous intelligence designed for complex, long-horizon tasks, focusing on functional output over simple capability upgrades [00:24]. A key technical highlight is its efficiency: the model matches the latency of its predecessor despite its larger scale and even assisted in optimizing its own inference infrastructure during training [01:11]. Its effectiveness is demonstrated in benchmarks like Terminal Bench 2.0 and OSWorld Verified, where it scored significantly higher than competitors in navigating real computer environments and managing command-line workflows [01:33]. 
In specialized applications, GPT 5.5 shows advanced reasoning by contributing to new mathematical proofs and accelerating complex genomic research [10:00]. Users emphasize its "conceptual clarity" in coding and its ability to stay persistent on long-running engineering projects without premature failure [07:19]. Although its API pricing is double that of the previous version, its ability to build functional, data-driven applications from single prompts suggests a high value for enterprise-level automation and scientific discovery [07:43]. (Gemini 3 Thinking provided assistance in the summary of this review)

Meta to Cut 10 Percent of Work Force - Mike Isaac, NY Times

Meta plans to cut 10 percent of its work force, or roughly 8,000 employees, and close another 6,000 open roles, according to an internal memo on Thursday, as the company spends heavily on developing artificial intelligence. Meta, which owns Facebook, Instagram and WhatsApp, employed more than 78,000 people at the end of 2025. Mark Zuckerberg, Meta’s chief executive, has said he expects much of the work being done in the technology industry to eventually be overtaken by A.I.-powered systems, including coding assistants that help engineers write software.


Saturday, May 02, 2026

Research cuts are now having a chilling effect on academia - Alcino Donadel, University Business

Some experts see early and dire consequences for the science and education research community. “We’ve been hearing about the cuts coming down, but this spring, you’re really starting to see the effects,” says Chenjerai Kumanyika, assistant professor at New York University and council member for the American Association of University Professors. In February, Congress passed a fiscal year 2026 spending package that rejected Trump’s proposed 40% cuts to the National Institutes of Health, the National Science Foundation, NASA and the Department of Energy. While the agencies saw most of their budgets restored, the Trump administration has stalled in releasing the funds. As of March 24, the NIH had awarded only 15% of its nearly $40 billion academic research budget to institutions, according to a report from the Association of American Medical Colleges.



College Students Are More Polarized Than Ever. Can AI Help? - Kathryn Palmer, Inside Higher Ed

Over the past few years, higher education institutions have adopted emerging artificial intelligence tools in an effort to enhance nearly every aspect of campus life—not just teaching and learning but also admissions, alumni networks, fundraising and advising. Now some are even experimenting with AI’s ability to advance one of the hottest trends on college campuses: fostering constructive dialogue among students, who are more divided over politics now than at any point in the past 40 years. To help bridge those divides, colleges are increasingly partnering with organizations aimed at promoting civil dialogue, including Braver Angels, BridgeUSA, the Institute for Citizens and Scholars, and the Constructive Dialogue Institute. And lately, AI is becoming part of the conversation.

Friday, May 01, 2026

This is the fastest-growing job for young workers, LinkedIn says - Mary Cunningham, CBS News

As the rise of artificial intelligence stirs anxiety over the technology taking people's jobs, AI is also opening pathways to new careers, according to LinkedIn. The fastest-growing job title for young workers on the networking platform is "AI engineer," a recent report from the company found. LinkedIn analyzed millions of member profiles to determine the number of entry-level workers hired over the last three years and the roles they were hired to fill. "It's measuring momentum for these job titles," said Kory Kantenga, the head of economics, Americas, at LinkedIn. "Companies are just gorging on AI talent."

US security agency is using Anthropic's Mythos despite blacklist, Axios reports - Reuters

The United States National Security Agency is using Anthropic's Mythos Preview AI tool despite the Pentagon hitting the company with a formal supply-chain risk designation, Axios reported on Sunday.
The Mythos Preview model was being used more widely within the department, Axios said, citing sources. Reuters could not immediately verify the report. Anthropic, the NSA and the Department of Defense did not immediately respond to requests for comment outside regular business hours. The NSA is part of the Defense Department.

Thursday, April 30, 2026

Feasibility of implementing a multicultural curriculum through artificial intelligence: perspectives of educational science experts - Huijuan Qin & Zijian Zhou, Nature

Across the revised thematic structure, feasibility was constructed through four interrelated domains: conditional pedagogical feasibility, epistemic and ethical risk, human mediation and institutional governance, and democratic co-construction of multicultural learning. Educational science experts regard AI-mediated multicultural curricula as possible but structurally fragile. Feasibility depends on value-led curriculum design, robust governance, and critical human mediation that aligns AI with the transformative ambitions of multicultural education. More specifically, feasibility depends on the alignment of explicit multicultural curricular intent, recognition of epistemic and ethical risk, strong institutional and pedagogical mediation, and students’ active participation in critical inquiry.

AI fears drive some young adults to grad school — ‘people shelter in higher education,’ expert says - Jessica Dickler, CNBC

Typically, enrollment in graduate school increases during recessions as workers seek to advance or to move to an industry with better career prospects or pay. Today, however, a growing share of survey respondents say they plan to go back to school within a year even though the economy is doing well. Experts say young adults are exploring this option largely because they are worried about their job prospects despite the healthy economy.

Wednesday, April 29, 2026

Is Your AI Ethical, Human-Centered and Pro-Social? - Ray Schroeder, Inside Higher Ed

Many of us use AI daily in our higher education work, yet we may not have assessed the ethical and human-centered nature of the tools we have selected and trained through our prompts. AI tools are no longer relatively simple search engines, driven by marketing metrics, that help us conduct our research. Rather, with AI we are using more sophisticated tools that conduct research and seek answers to our prompts while making decisions about source selection, context and semantic nuance that shape the values expressed in the results. Before we look at the default values and orientations inherent in some of the leading AI models, let me remind you that in crafting your prompt, you can encourage the tool to emphasize responses that address ethical considerations. Your prompt can direct the model to provide results that explore, highlight or emphasize pro-social or human-centered solutions and examples.

White House Directs Banks to Use Anthropic Mythos - Let's Data Science

The White House has encouraged major U.S. banks to test Anthropic's Claude Mythos model to identify security gaps. Several large banks have begun in-house evaluations after a Treasury and Federal Reserve meeting with Wall Street executives that emphasized using the model to uncover vulnerabilities. The administration frames the engagement as part of an ongoing AI security taskforce, and Anthropic has opened an early-access partner program for Claude Mythos alongside its Claude Managed Agents rollout. The guidance positions Claude Mythos explicitly for defensive cybersecurity work, including red teaming and proactive vulnerability discovery, and signals closer public-private coordination on AI risk remediation for critical financial infrastructure.