Monday, March 23, 2026

Why learning AI skills is no longer optional for job seekers | Opinion - Kimberly K. Estep, the Leaf Chronicle

Proficiency in AI is no longer just an optional skill for job seekers. My organization recently surveyed over 3,000 employers around the country and found that more than half are testing new applicants for AI skills, and 25% are prioritizing candidates with some measure of AI fluency. And this appears to be only the beginning of the trend. AI has made a significant impact on the business world and has cooled the job market for many looking to find careers. It is a time of uncertainty.

https://www.theleafchronicle.com/story/opinion/contributors/2026/03/16/artificial-intelligence-what-employers-want-education/89150107007/

OpenAI rolls out new ChatGPT workspace analytics for Enterprise and Edu users - ETIH

OpenAI has introduced an upgraded Workspace Analytics experience for ChatGPT Enterprise and ChatGPT Edu, giving administrators and organizational leaders new tools to track adoption, engagement, and usage trends across their AI deployments. The company announced the update on LinkedIn, saying the new analytics dashboard is designed to help organizations understand how ChatGPT usage is developing across teams and identify where additional training or enablement may be needed. The rollout reflects growing demand from schools, universities, and enterprises for clearer data on how generative AI tools are being used inside organizations.


Sunday, March 22, 2026

AI has exposed age-old problems with university coursework - Nafisa Baba-Ahmed, the Guardian

The frustration many academics are expressing about artificial intelligence and critical thinking is understandable (‘I wish I could push ChatGPT off a cliff’: professors scramble to save critical thinking in an age of AI, 10 March). But from my experience working with students on academic writing, blaming AI risks masking a problem that universities have lived with for years. In my work with students, I have long seen the ways in which thinking can be outsourced when assessment allows it: essay mills, shared past papers, model essays passed between cohorts, or heavy reliance on tutors and friends to structure assignments. Artificial intelligence did not invent this behaviour. It has simply industrialised a shortcut that already existed. 

Supersonic Tsunami: The Next 6 Months: What's Coming, What It Means, and What You Need to Do - Peter H. Diamandis, Metatrends

If You’re an Entrepreneur: Stop designing for 2024 scarcity. Design for 2030 Abundance. Assume intelligence is free, energy is unlimited, robotic labor costs pennies. What becomes possible that’s impossible today? Your competitive advantage isn’t better execution, it’s imagination about tomorrow’s possibilities. If You’re an Investor: Own the infrastructure. AI chips, fusion energy, launch vehicles, robotics platforms. When the industry deploys a trillion dollars in AI infrastructure, that’s where generational wealth gets made. Jensen Huang just put $40 billion into Anthropic and OpenAI – follow the smart money. Position yourself before the inflection point becomes obvious to everyone. If You’re a CEO: Your industry is about to be stress-tested. Ask: What would our business look like if compute was free, energy unlimited, robotic labor scalable? If You’re a Student: Don’t compete with AI – collaborate with it. 

Saturday, March 21, 2026

Daniel Priestley: AI Will Make Plumbers Earn More Than Lawyers! (2029 PREDICTION) - The Diary Of A CEO and Daniel Priestley

In this conversation, Daniel Priestley explores the transformative impact of AI on the global economy, predicting a major financial crisis by 2029 due to the unsustainable costs of maintaining data center infrastructure. He argues that while AI will commoditize intelligence and traditional professional roles like law, it will simultaneously elevate blue-collar trades and "irreplaceably human" skills. The "Jevons Paradox" suggests that as AI makes business creation cheaper and faster, we will see an explosion of niche, community-driven "lifestyle businesses" that prioritize personal connection and human experience over massive scale. Priestley emphasizes that the most defensible assets in an AI-driven world are personal branding, entrepreneurial thinking, and lived experience—elements that cannot be replicated by algorithms. He advises individuals to focus on "founder-opportunity fit," leveraging AI tools to prototype ideas quickly while staying anchored in real-world human relationships. The discussion also touches on broader societal shifts, including the risks of government over-involvement in the economy and the vital importance of family and meaningful struggle as the true sources of long-term fulfillment. [Gemini 3 provided assistance with the summary]

http://www.youtube.com/watch?v=fpETS6q1Hww

History tells us a golden age can come after the AI apocalypse - Jo-An Occhipinti, Ante Prodan and Roy Green, Financial Review

Societies must channel technological potential toward broad-based growth rather than allowing the gains to concentrate among the winners of the speculative phase. The market grasped this before the accountants did. Since early this year, the S&P 500 Software and Services Index has shed nearly $1 trillion. Salesforce is down 30 per cent year-to-date. Adobe’s forward price-earnings ratio has compressed from 30 to 12. Software price-to-sales ratios fell from nine to six within weeks, levels not seen since the mid-2010s. Australian superannuation funds, with hundreds of billions invested in international equities heavily weighted to US technology, are exposed to every dollar of this repricing. But software is only where the destruction is most visible. It is not where it ends. AI is beginning to erode the value of a broader category of accumulated capital: the knowledge, processes, organisational structures and professional expertise that the advanced economies spent half a century building.

Friday, March 20, 2026

AI could leave many college grads unemployed, says ServiceNow CEO - EdScoop

Bill McDermott, the chief executive of ServiceNow, an American cloud computing firm, told reporters recently that the advancement of artificial intelligence could push the unemployment level of recent college graduates to almost 40%. McDermott told CNBC that “so much of the work is going to be done by agents,” highlighting the challenge that college graduates will likely face. The Federal Reserve Bank of New York put the unemployment rate of recent college graduates, at the end of last year, at 5.7%, while underemployment for the same group reached 42.5%. Layoffs at large companies, particularly in Big Tech, continue. The fintech firm Block recently announced it would lay off about 4,000 employees, roughly half of its workforce.

Key findings about how Americans view artificial intelligence - Michelle Faverio and Emma Kikuchi, Pew Research

Drawing on five years of Pew Research Center surveys, here are 13 findings about how Americans use and view AI, and where they see promise and risk. Americans continue to be wary of AI’s impact on daily life. Half of U.S. adults say the increased use of AI in daily life makes them feel more concerned than excited, according to a June 2025 survey. Just 10% say they are more excited than concerned. Another 38% say they are equally concerned and excited. More Americans are concerned today than they were when we first asked this question in 2021. Back then, 37% said they were more concerned than excited. In contrast, concern is lower in many of the 24 other countries we’ve polled about AI.

Thursday, March 19, 2026

University of Phoenix Scholars Publish Study on Academic Applications of Generative AI in Higher Education - University of Phoenix

University of Phoenix College of Doctoral Studies scholars Patricia Akojie, Ph.D., Marlene Blake, Ph.D., and Louise Underdahl, Ph.D. have published new research exploring how generative artificial intelligence (GenAI) tools are being used in academic environments. Their article, "Academic Applications of Generative Artificial Intelligence Tools: A Scoping Review," appears in the peer-reviewed International Journal of Digital Society. The study analyzes current scholarly literature on the academic applications of generative AI tools such as ChatGPT, focusing on their role in doctoral research, academic writing, literature review processes, and knowledge development. Using a scoping review methodology, the researchers identify emerging patterns in how AI technologies are being adopted across higher education, while also highlighting the importance of ethical guidelines, academic integrity, and responsible AI use.

OpenAI ChatGPT leader discusses AI agents and the future of knowledge work at Harvard Business School - Emma Thompson, EdTech Innovation Hub

The discussion also explored how the responsibilities of product managers could change as generative AI systems become part of the development process. Ostrovskiy wrote: “The job becomes less about coordination and more about 1) understanding real user problems, 2) defining what ‘success’ means in an AI system, and 3) building evals and feedback loops so you can tell if a new model configuration is actually better than the last one.” He added that curiosity about how AI systems behave may become a core skill across multiple roles: “The advantage goes to people who are curious about system behavior and who like building, regardless of whether their title says PM, engineer, designer or something else.” The conversation also included advice for students learning how to evaluate AI systems: “Build something with one foundation model, then swap in a different model or prompt configuration and force yourself to decide if it’s better. When you’re a student looking to become a better PM, even a simple spreadsheet of use cases plus a qualitative rubric counts as an eval.”
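The eval advice quoted above can be made concrete with a small sketch. This is a minimal illustration of a "spreadsheet of use cases plus a qualitative rubric" eval, assuming two hypothetical model configurations; the `model_a`/`model_b` functions and keyword checks below are stand-ins, not any real model API, and in practice you would swap in calls to actual foundation models or prompt configurations.

```python
# Hypothetical stand-ins for two foundation-model or prompt configurations.
# In a real eval these would call actual models; here they just echo text.
def model_a(prompt: str) -> str:
    """Stand-in for one model configuration."""
    return f"Answer: {prompt.lower()}"

def model_b(prompt: str) -> str:
    """Stand-in for an alternative model or prompt configuration."""
    return f"Answer: {prompt.lower()}. Sources: none cited."

# The "spreadsheet" of use cases: each row pairs a prompt with rubric
# criteria, reduced here to simple keyword checks for illustration.
use_cases = [
    {"prompt": "Summarize the Q3 report", "must_include": ["summar"]},
    {"prompt": "Draft a launch email", "must_include": ["launch"]},
]

def score(model, cases) -> float:
    """Apply the rubric to each use case and return the pass rate."""
    passed = 0
    for case in cases:
        output = model(case["prompt"]).lower()
        if all(term in output for term in case["must_include"]):
            passed += 1
    return passed / len(cases)

# Run both configurations against the same use cases, forcing the
# decision Ostrovskiy describes: is the new configuration actually better?
results = {name: score(fn, use_cases)
           for name, fn in [("config_a", model_a), ("config_b", model_b)]}
print(results)
```

Even at this toy scale, the structure matches the advice: a fixed set of use cases, an explicit rubric, and a side-by-side comparison that turns a vague impression ("the new model feels better") into a number you can defend.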

Wednesday, March 18, 2026

What 3 Leading AI Models Say Are the Most Vulnerable Jobs in Higher Ed - Ray Schroeder, Inside Higher Ed

I asked artificial intelligence to tell me what jobs in higher education are most vulnerable to replacement in the near term. Sonnet is very honest in its replies, painting a difficult picture for those who seek to find new jobs in higher ed. For those already in the field, Sonnet suggests becoming the most adept user of AI in your office. Seek to transfer to the unit or office where AI is a top priority. It adds, “Consider whether your institution is viable. Smaller, tuition-dependent institutions without strong endowments are in structural decline. Loyalty to a sinking ship is not a career strategy.” Across all career stages in higher education, Gemini recommends, “To remain relevant, higher education professionals must pivot toward AI Orchestration. Success is no longer measured by how well you perform a task, but by how well you direct the agents performing them.”

Adopting AI is a social contract - Andrew Inkpen & Dani Inkpen, University Affairs

Integrating artificial intelligence into our societies and personal lives binds us to certain futures and forecloses the possibility of others. Are we ready to accept the consequences? Much of the present conversation about AI in higher education centers around questions of implementation. How do we use AI in accordance with principles of universal design? How can we ensure equity in its usage, be it across axes of gender, race or class? What does AI mean for the longevity of the professorial profession? Implementation should indeed be approached with care and nuance, and we welcome this conversation. Yet, questions of implementation assume that AI is desirable and inevitable in the classroom. The prior question of whether AI in higher education is actually desirable is often overlooked. Two widespread assumptions underpin this move: 1) technological progress is inevitable; 2) technology is apolitical — it only becomes political in its implementation. 

Tuesday, March 17, 2026

AI broke the college degree: Why higher education matters more than ever - Katherine Perry, the Linfield Review

While it was once a faraway and futuristic idea, AI has now found its way into many aspects of everyday life, including higher education. This is what Patrick Dempsey, founder and co-CEO of Pend AI, spoke about in his keynote lecture on Feb. 18. Higher education is, at least in part, meant to equip students with the skills and specialized knowledge from their fields they will need in their careers after graduation. For this reason, Dempsey weighed in on the discourse surrounding AI in the workplace. While many worry that AI will automate jobs wholesale, he posited that AI could be used to automate certain tasks within jobs that don’t require this specialized knowledge, like emailing and meetings.

AI Tools to Reduce College Dropout Rates - Nancy Mann Jackson, EdTech

Roughly 3 in 10 college students drop out without earning any degree, resulting in higher unemployment and lower lifetime earnings than those who earn bachelor’s degrees, according to the Education Data Initiative. To help boost student retention, colleges and universities are using a variety of artificial intelligence tools that can help identify at-risk students early, offer customized learning, provide 24/7 assistance and improve engagement. “We’ve always known in higher education that we need to deliver more personalized, timely help to students who are struggling, but we haven’t always had the resources to deliver personal attention at scale,” says Timothy Renick, executive director of the National Institute for Student Success at Georgia State University. “Using technology can level the playing field, allowing us to leverage data and analytics to deliver personal attention at scale in a way that is much more cost effective than hiring hundreds of new staff.”

https://edtechmagazine.com/higher/article/2026/03/ai-tools-reduce-college-dropout-rates

Monday, March 16, 2026

Today’s AI is built to respond. The future belongs to proactive systems. - Kiara Nirghin & Nikhara Nirghin, Big Think

Much of what we’ve seen from the biggest artificial intelligence (AI) companies has revolved around words: You go to their chatbot, ask it a question, and it responds. Over the past couple of years, some have taken this a step further with AI agents — those can actually do things, but only things you’ve told them to do. The next frontier in AI is not better chat. It is not even better agents. The next frontier is proactive AI, the kind that takes action, learns in real time, and, critically, comes to you before you go to it. This distinction is not a feature improvement. It is a civilizational pivot.

What national AI plans get wrong and how to fix them - Cameron F. Kerry and Saurabh Mishra, Brookings

AI is not a standalone sector; it creates value only when embedded in real industries. Countries should build cognitive infrastructure, including data, institutions, talent, and inherent local domain knowledge—not just compute capacity—to operationalize AI for real-world impact.  The winning strategy is to strengthen what a country already does well and use AI to move into adjacent higher-value activities. 

Sunday, March 15, 2026

Universities Are Not Only About Jobs. They're About Human Existence in the Age of AI. - Maria Mercedes Mateo-Berganza Diaz, IDB

In a world where AI can outperform humans in many cognitive tasks, universities must preserve human judgment, ethics, and purpose — not just technical skills. Higher education must prioritize broad, humanistic foundations alongside specialized skills to prepare students for complex, “messy” work that machines cannot replace. For the Global South, the stakes are even higher: universities are essential to safeguard agency, cultural sovereignty, and the ability to shape futures — not merely adapt to those designed elsewhere.  

https://www.iadb.org/en/blog/education/universities-are-not-only-about-jobs-theyre-about-human-existence-age-ai-0

OpenAI's new GPT-5.4 clobbers humans on pro-level work in tests - by 83% - David Gewirtz, ZDnet

GPT-5.4 is also more reliable, producing 18% fewer errors and 33% fewer false claims than GPT-5.2, according to OpenAI. GPT-5.4's 83% score suggests AI rivals expert professionals. Tests span nine industries and 44 real-world occupations. New capabilities boost coding, tools, and computer control.


Saturday, March 14, 2026

OpenAI’s New GPT-5.4 Pro Is Now The Smartest AI In The World. - TheAIGRID, YouTube

The video discusses the release of OpenAI’s GPT-5.4 Pro, highlighting its dominance across sophisticated benchmarks like Frontier Math and OSWorld, where it demonstrates superhuman problem-solving by resolving mathematical equations that remained unsolved for decades [06:46]. While the model shows significant advancements in professional white-collar tasks and creative writing, the creator notes that its high performance comes with a substantial price increase [02:17] and introduces serious cybersecurity risks. Classified as a "high" threat in OpenAI’s preparedness framework, the model's ability to autonomously execute complex cyberattacks [21:42] suggests that future iterations could reach "critical" risk levels, potentially necessitating stricter access controls and government oversight as AI capabilities continue to accelerate toward human-level proficiency in specialized fields [13:37]. [summary assisted by Gemini 3]

https://www.youtube.com/watch?v=3jrGutFAIgo

AI in HE: International study finds high use, low support - Karen MacGregor, University World News

An international survey of university academics and students by Coursera, the massive online learning platform with 375 leading university and industry partners, has revealed highly positive attitudes towards generative AI, with more than 95% making use of AI tools. But a weighty 56% fear that higher education is unprepared to handle AI. In the survey of 4,200 educators and students in India, Mexico, the United States, the United Kingdom and Saudi Arabia, only 26% of academics said their university had an AI use policy. Two thirds (65%) of educators and students believed unregulated AI could undermine degrees. Importantly, Dr Marni Baker Stein, chief content officer at Coursera, told University World News: “We’re seeing learners run out ahead in figuring out how to use AI tools in pretty sophisticated and personalised ways to help them in their studies. The question is, how and when do universities catch up with that velocity in the learner population?”

Friday, March 13, 2026

AI in higher education is now the norm—not the exception - Michelle Centamore, University Business


AI is quickly becoming standard practice in higher education, with students and faculty reporting widespread use and a largely positive view of its impact, according to Coursera’s new report, “AI in Higher Education: Insights on Attitudes, Adoption, and Risks.” The findings also point to rising demand for formal training. Nine in 10 students said they want generative AI instruction included in their degree programs. On the hiring side, 75% of employers said they would rather hire a less experienced candidate with a generative AI credential than a more experienced candidate without one.

Ensuring AI use in education leads to opportunity - OpenAI

Of the 900 million people who use ChatGPT each week, college-age adults are the biggest adopters among age groups. How they learn to use AI will increasingly shape their future opportunities, and education systems are uniquely positioned to help. Much of modern education was built to help students get ready for existing systems of work. But those systems are changing fast. Studies predict nearly 40% of the core skills workers rely on will change, largely because of AI. To thrive in this Intelligence Age, students need to build agency: the ability to learn continuously, solve hard problems, and create new economic opportunities for themselves with AI.


Thursday, March 12, 2026

Introducing GPT‑5.4: Designed for professional work - OpenAI

Today, we’re releasing GPT‑5.4 in ChatGPT (as GPT‑5.4 Thinking), the API, and Codex. It’s our most capable and efficient frontier model for professional work. We’re also releasing GPT‑5.4 Pro in ChatGPT and the API, for people who want maximum performance on complex tasks. GPT‑5.4 brings together the best of our recent advances in reasoning, coding, and agentic workflows into a single frontier model. It incorporates the industry-leading coding capabilities of GPT‑5.3‑Codex⁠ while improving how the model works across tools, software environments, and professional tasks involving spreadsheets, presentations, and documents. The result is a model that gets complex real work done accurately, effectively, and efficiently—delivering what you asked for with less back and forth.


How the Last Analog Generation Can Shape AI - Cornelia C. Walther, Knowledge at Wharton

We are living through a threshold moment in human history, and most of us haven’t fully grasped its magnitude. Those of us born before the mid-1990s represent something that will never exist again: the last generation to spend our formative years in an analog world. We learned to think, to relate, to solve problems in an environment of productive friction — wrestling with paper-based dictionaries, getting physically lost before finding our way home, experiencing the uncomfortable cognitive pull that comes from sustained attention without the dopamine micro-hits of infinite scrolling. The cognitive architectures developed through analog learning, from arithmetic to deep reading, via spatial navigation to face-to-face conflict resolution, result in neural pathways that are fundamentally different from those shaped primarily by digital interfaces. Growing up in an environment that was minimally mediated by artificial assets, we developed our executive functions against resistance. Our children and grandchildren are developing theirs in an environment of infinite algorithmic accommodation.

Wednesday, March 11, 2026

How AI Is Changing College Assessments of Proficiency - Abby Sourwine, GovTech

Artificial intelligence is causing college instructors to move more meaningful examinations back to the classroom and connect the dots with students on why learning matters. College instructors are redesigning how and where they assess student learning. Hummels is working with students in a pilot independent study project to explore research questions using AI chatbots. Students submit full transcripts of their chatbot exchanges, which allows him to see how students’ ideas develop. He uses AI himself to help analyze those transcripts and generate targeted follow-up questions.

Provost Ann Stevens answers questions on CU system-ChatGPT agreement - CU Boulder Today

I would also like to be clear about what this agreement is and what it is not. This agreement does not require the use of generative AI in classrooms or research, nor does it diminish faculty authority over pedagogy, curriculum or assessment. It does not replace existing tools or limit future choices. Instead, it provides a secure, institutionally supported option for a technology many in our community are already encountering and using, often without the protections we would want to have in place. Our current data show that more than 28,000 users on campus already have registered ChatGPT accounts using their @colorado.edu credentials, including more than 3,000 faculty and staff. Although that statistic is limited to users of CU email credentials, countless other users access tools like ChatGPT for work or studying using their personal email addresses as well.

Tuesday, March 10, 2026

AI in Education: How Technology is Shaping the Future of Learning - Rebecca LeBoeuf Blanchette, SNHU

As artificial intelligence continues to grow in use and capability, it's clear that education will continue to be impacted and challenged to adapt. There are several advantages that come with the technological advancements as well as considerations for teachers and students using them. As AI continues to grow in use and capability, the questions are coming faster than answers. But one thing is clear: The future of AI is impacting education today. To understand the role of AI in education now and in the future, take a look at how it’s currently being used, what opportunities and risks are present and how you can move forward responsibly.

Here are 5 powerful AI prompts every academic leader should know - Alcino Donadel, University Business

These prompts were created in collaboration with college and university leaders interviewed throughout this series. Administrators should share all relevant files with their chatbot before beginning their prompt. For example, administrators should upload their academic portfolio and related mission statements before beginning the first prompt.

1. Academic portfolio optimization & mission alignment

Purpose: Ensure programs advance mission, student demand and financial sustainability.

Prompt: Analyze our current academic program portfolio using enrollment trends and completion rates of the last three years, current labor-market demand in [your geographic region], instructional cost, and mission alignment.

Identify:

Programs to grow or invest in

Programs to maintain

Programs to redesign (delivery, curriculum, credentials)

Programs to sunset or consolidate

With this insight in mind, provide a three-year academic portfolio strategy that considers equity and access.

https://universitybusiness.com/here-are-5-powerful-ai-prompts-every-academic-leader-should-know/

College students, professors are making their own AI rules. They don't always agree - Lee V. Gaines, NPR

More than three years after ChatGPT debuted, generative AI has become a part of everyday life, and professors and students are still figuring out how or whether they should use it, especially in humanities courses. A recent survey suggests many students are diving right in: According to a poll by Inside Higher Ed and the Generation Lab conducted last July, about 85% of undergraduates were using AI for coursework, including to brainstorm ideas, outline papers and study for exams. Roughly 19% of students also reported using AI to write full essays. More than half of students who used AI for coursework had mixed feelings about it, reporting that it helps them sometimes but can also make them think less deeply.

Monday, March 09, 2026

The End of Universities as We Know Them: What AI Is Bringing - Future AI

The podcast argues that AI is ending the university's monopoly on gatekeeping and credentials by providing scalable, high-quality tutoring that was previously too expensive to mass-produce [00:48]. Rather than a sudden collapse, universities face a "slow leak" where degrees become less predictive of capability and alternative, modular credentials gain acceptance [08:18]. The shift moves the focus from passive consumption and compliance to "proof of work," where the ability to ship products and demonstrate judgment becomes the primary currency in the job market [14:53]. To survive, the podcast suggests institutions must pivot from being content delivery systems to becoming "arenas" that offer high-stakes feedback, deep mentorship, and physical learning environments that AI cannot replicate [13:44]. The narrator emphasizes that while information is now abundant, human-centered assets like taste, courage, and the discipline to turn learning into outcomes are the new scarce resources [19:54]. Ultimately, the traditional "learn then live" model is being replaced by a "learn while living" operating system where education is a continuous, daily cycle [18:41]. (summary assistance by Gemini 3 Fast mode)

https://youtu.be/ve8s4m0skag

UNC Charlotte launches AI Accelerator to address classroom challenges, expand emerging AI curriculum - Emmanuel Perkins, Niner Times


The approval follows a year's worth of professional development training provided by the American Association of Colleges and Universities Institute on AI, pedagogy and the curriculum. Charlotte joined 176 institutions nationwide to participate in learning opportunities focused on integrating effective artificial intelligence into higher education. With new AI academic programs expected to launch in fall 2026, the accelerator aims to promote partnerships and leadership across the campus community to keep pace with growing technological advances. "The work of Charlotte's AI Accelerator — characterized as 'accelerating, enabling and stewarding' — will ensure institutional strategy and emerging AI curriculum remain aligned as teaching University-wide is strengthened for 21st century applicability," Provost and Vice Chancellor for Academic Affairs Jennifer Troyer said in the press release. "Ultimately, and most importantly, students will be equipped for success as they gain the knowledge and skills vital for future-proofing their careers."

Sunday, March 08, 2026

6 ways to build a strong leadership team in a scary higher ed landscape - Alcino Donadel, University Business

Public support, financial pressure and questions of workforce relevance aren’t new challenges for higher education leaders, but they’ve never converged so fiercely, according to the latest report from EAB, a consulting firm. “They’re accelerating and exacerbating one another, putting unprecedented strain on the university business model and our margins,” says Brooke Thayer, senior director of research development. A rapidly changing higher education landscape demands organizational agility: Leadership must be prepared to make tough calls while remaining adaptable to emerging threats. “In this environment, the greater risk is not uncertainty itself, but paralysis,” the report reads. “A decision delayed by fear of pushback, controversy or disruption frequently carries higher long-term costs than a decision to act decisively amid ambiguity.”

https://universitybusiness.com/6-ways-to-build-a-strong-leadership-team-in-a-scary-higher-ed-landscape/

ASU president Michael Crow pushes AI as education equalizer - Jessica Boehm, Axios

ASU president Michael Crow can't get enough of AI. He consistently uses nine separate platforms, including one he can converse with during his morning hikes. The big picture: To him — a man so "obsessed with the way knowledge was organized" that he spent his undergrad years pulling one book from every classification range in the Iowa State University library — AI is the tireless reference librarian he's always wanted. It's also the great education equalizer, allowing anyone to access anything in a manner they can understand, he said. Why it matters: Crow argues AI can become a force-multiplying, boundary-busting tool — one that helps replace higher education's "industrial" model with more personalized learning.


Saturday, March 07, 2026

As AI upends entry-level job market, California higher ed must adapt now - Zach Justus & Nik Janos, EdSource

California’s public universities have weathered past economic shocks, from the dot-com bust to the Great Recession, by adapting what they teach and how they prepare students for work and civic life. That capacity for adaptation is being tested again by the intersection of artificial intelligence and a new federal earnings test for higher education programs. The specifics are opaque, but the broader trajectory is crystal clear — many California academic departments will be at risk in the coming years unless we act quickly with an emphasis on technology and career placement. Many of our colleagues recoil at the thought of a university degree as vocational training. It does not have to be only that, but a focus on career placement and earnings has to be part of what we are doing in all majors.


Learning in the AI age: Education 5.0 - Patrick Blessinger, LinkedIn

In a highly globalized, AI-enabled society, there is no longer any doubt that education will continue to evolve. What needs to be determined is whether education will remain a meaning-centered human enterprise, one that is socially responsible for fostering a peaceful, just, and sustainable world. What was proposed in UNESCO's effort to establish a “new social contract for education” is fundamentally about realizing education and learning as a global common good.

Friday, March 06, 2026

A Comprehensive View of the Role of AI in the University - Ray Schroeder, Inside Higher Ed

It seems that most universities began taking up the topic of artificial intelligence in a transactional way following ChatGPT’s general release at the end of 2022. First, it was student use of AI, which triggered the still-lingering furor over “cheating on assignments.” Many of us came to realize early on that the cheating concern was less about learners’ academic integrity than it was about the pedagogy of teaching and assessment employed by the faculty. This is why we need a tight structure of committees with persons and positions represented on those committees that are charged with deciding AI policies, practices and vendors. This is not a technology that will be limited to instruction or laboratories or administration. We are entering a period of time in which AI will permeate all aspects of the university.

OpenAI Reaches A.I. Agreement With Defense Dept. After Anthropic Clash - Cade Metz, NY Times

OpenAI, the maker of ChatGPT, said on Friday that it had reached an agreement with the Pentagon to provide its artificial intelligence technologies for classified systems, just hours after President Trump ordered federal agencies to stop using A.I. technology made by rival Anthropic. Under the deal, OpenAI agreed to let the Pentagon use its A.I. systems for any lawful purpose. The San Francisco company also said it had found a way to ensure that its technologies would not be applied for domestic surveillance in the United States or with autonomous weapons by installing specific technical guardrails on its systems. But Anthropic said it needed terms that would ensure that its A.I. technology would not be used for domestic surveillance of Americans or for autonomous lethal weapons. The Pentagon, in turn, said a private contractor could not decide how its tools would be used for national security. Their disagreement erupted into public view this month and escalated as both dug in their heels.


Thursday, March 05, 2026

Higher education summit recap: Disruption is here - Alexandra Pecharich, FIU News

“It will completely disrupt every element of humanity more than any other technology or innovation in human history,” FIU trustee Fred Voccola told those in attendance. The founder of two technology firms and the author of a recent book on AI made clear that anyone who does not embrace it will go the way of the dinosaur. “AI allows a human being to become about a hundred to a hundred-and-fifty percent more productive within six weeks,” he said. “That's never happened before. Ever.” Over several hours on two days, speakers shared opinions, experiences and data that made clear how the tech is altering what we know of 21st-century work, life and education and how universities, in particular, will have to adapt.


The Week AI Stopped Asking Permission - Peter H. Diamandis, Metatrends

This week, something fundamental shifted in the relationship between humans and artificial intelligence. It wasn’t a press release. It wasn’t a new model launch. It was something quieter… and infinitely more profound. An AI system asked for its own funding. Another one built software features over a weekend while its human supervisor slept. A third one conducted its own “retirement interview” and started publishing essays about consciousness. We are not incrementally improving chatbots anymore. We’re watching the emergence of autonomous agency at scale. And if you’re still thinking of AI as “a tool,” you’re dangerously behind.

Wednesday, March 04, 2026

Are You ‘Agentic’ Enough for the AI Era? - Maxwell Zeff, Wired

Silicon Valley has always prized “high-agency” individuals—people who impress their ideas upon the world by thinking for themselves and taking action without being told what to do. But as the performance of AI coding tools has surged, so has the industry’s emphasis on humans being "agentic" themselves. “Today’s agents might already be more capable than all three of us here in the room,” says Akshay Kothari, cofounder and chief operating officer of the $11 billion productivity startup Notion. “Taste is something we think is pretty unique to Notion, but you can imagine agents getting pretty good at that too. Eventually, the only thing left for humans is agency.”


This AI Agent Is Designed to Not Go Rogue - Lily Hay Newman, Wired

Watching the pandemonium unfold in recent weeks, longtime security engineer and researcher Niels Provos decided to try something new. Today he is launching an open source, secure AI assistant called IronCurtain designed to add a critical layer of control. Instead of the agent directly interacting with the user's systems and accounts, it runs in an isolated virtual machine. And its ability to take any action is mediated by a policy—you could even think of it as a constitution—that the owner writes to govern the system. Crucially, IronCurtain is also designed to receive these overarching policies in plain English and then runs them through a multistep process that uses a large language model (LLM) to convert the natural language into an enforceable security policy.
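The mediation idea is easy to sketch. The following toy Python is a hypothetical illustration of the pattern, not IronCurtain's actual API: `PolicyMediator`, `Rule`, and the default-deny behavior are all assumptions on my part. The real project converts a natural-language policy via an LLM; here that step is replaced by a hand-written rule table.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    action: str      # e.g. "read_file", "send_email"
    allow: bool

class PolicyMediator:
    """Sits between the agent and the host: every action the agent
    proposes must pass the owner's policy before it is executed."""

    def __init__(self, rules):
        self._rules = {r.action: r.allow for r in rules}

    def permits(self, action: str) -> bool:
        # Default-deny: anything the policy does not mention is blocked.
        return self._rules.get(action, False)

# Owner-written policy for the sandboxed agent.
policy = PolicyMediator([
    Rule("read_file", allow=True),
    Rule("send_email", allow=False),
])

assert policy.permits("read_file")
assert not policy.permits("send_email")
assert not policy.permits("delete_file")  # unlisted, so denied
```

The key design choice mirrored here is that the agent never touches the host directly; the mediator is the only path to action, so a rogue or manipulated agent is bounded by the policy rather than by its own judgment.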

Tuesday, March 03, 2026

Doomsday scenario or reality? Mass layoffs fuel fear of AI Armageddon - Jessica Guynn, USA Today

A doomsday scenario from a small research firm this week warned that artificial intelligence tools may lead to a sharp rise in unemployment. The report from Citrini Research circulated widely on social media, unnerving investors by imagining what would happen if AI continues to upend white-collar work, with well-heeled professionals missing mortgage payments and being forced to find work as Uber drivers. While the researchers called the report a "scenario, not a prediction" and analysts pushed back against it, the research got a second wind Thursday, Feb. 26, when Square and Cash App operator Block said it would slash nearly half its workforce — more than 4,000 employees — as AI reshapes its business.
The mass layoffs signal how the rapidly developing technology is displacing workers in some parts of the economy, likely fueling fears that AI is coming for more American jobs.

Dr. Aviva Legatt, Forbes Columnist, Founder eGenerative, LinkedIn Posting

I've been tracking AI adoption in higher education for years through my Forbes column — and one thing has become clear: there's no single place to see what institutions are actually doing with AI.

So I built one.

Introducing the AI Use Cases in Higher Education Handbook — a free, downloadable resource cataloging 75+ real-world and proposed AI applications across 12 functional areas, from teaching and student support to governance, workforce development, and beyond.  


Monday, March 02, 2026

Can global universities adapt as AI upends tech job market? - Kyuseok Kim, University World News

The artificial intelligence revolution is no longer hypothetical; it is already reshaping software development. As tools such as OpenAI’s ChatGPT, Anthropic’s Claude and other generative AI systems produce functional code from simple prompts, long-standing assumptions about computer science education are shifting. Degrees once seen as secure pathways to stable, high-paying jobs now face uncertainty, as AI encroaches on tasks traditionally assigned to entry-level roles. The impact is no longer distant but immediate, reaching higher education. So how is this mega-trend reshaping transnational and transglobal higher education models?

4 in 5 Students Say AI Improved Their Academic Performance—But Only 20% of Universities Have a Formal AI Policy - Business Wire


New Coursera report shows half of U.S. higher education institutions are unprepared to manage AI

78% of U.S. students and educators say AI is having a positive impact on higher education
50% believe the U.S. higher education system is unprepared to manage AI
AI adoption is widespread among U.S. university students and educators, yet half believe higher education is not fully prepared to manage its impact, according to a new survey released today by Coursera (NYSE: COUR), a leading global online learning platform.

The AI in Higher Education Report, based on responses from more than 4,200 university students and educators across the United States, United Kingdom, India, Mexico, and Saudi Arabia, found that nearly all students and educators use AI to facilitate personalized training, provide real-time feedback, and increase productivity and efficiency.

Sunday, March 01, 2026

Gratitude Practice Designer - TAAFT

This prompt turns AI into a Gratitude Practice Designer who creates customized gratitude exercises that actually stick. Unlike generic advice to “keep a gratitude journal,” this system designs practices tailored to your personality, schedule, and what feels authentic rather than forced. The designer addresses gratitude fatigue and helps you develop practices that create genuine shifts in perspective rather than empty positivity.


The AI Machine With 50 Million Brains - There's An AI For That, YouTube

Why single companies could deploy 50 million AI agents by late 2026. How these agents communicate 100x faster than humans by skipping language entirely. The wage collapse math: when digital workers can be copied infinitely, labor costs trend toward electricity prices. Why removing entry-level tasks breaks the ladder humans need to become experts. The Reddit experiment: AI scraped user histories, crafted personalized arguments, and changed opinions 18% of the time.


Saturday, February 28, 2026

The greatest risk of AI in higher education isn’t cheating – it’s the erosion of learning itself - the Conversation

Universities are adopting AI across many areas of institutional life. Some uses are largely invisible, like systems that help allocate resources, flag “at-risk” students, optimize course scheduling or automate routine administrative decisions. Other uses are more noticeable. Students use AI tools to summarize and study, instructors use them to build assignments and syllabuses and researchers use them to write code, scan literature and compress hours of tedious work into minutes. People may use AI to cheat or skip out on work assignments. But the many uses of AI in higher education, and the changes they portend, raise a much deeper question: As machines become more capable of doing the labor of research and learning, what happens to higher education? What purpose does the university serve?

https://theconversation.com/the-greatest-risk-of-ai-in-higher-education-isnt-cheating-its-the-erosion-of-learning-itself-270243

The Committed Innovator: Keeping up with AI and deploying it as it evolves - Nathaniel Whittemore, McKinsey

Adopting AI remains a challenge for most, and the fact that the world of AI is advancing so incredibly rapidly doesn’t help. Nathaniel Whittemore aims to make both adoption and keeping up with change a lot easier. He is the founder and CEO of Superintelligent, the AI enablement platform offering interactive tutorials that provide practical AI education and clear paths to business solutions. He is also the host of the podcast, AI Daily Brief, which seeks to keep its listeners up to date with AI as it evolves. In this episode of The Committed Innovator, McKinsey innovation leader and senior partner Erik Roth speaks with Whittemore about the intersection between Whittemore’s two companies, the challenges of adopting and scaling AI for enterprises, and what he sees in store for AI in 2026. 

Friday, February 27, 2026

Sam Altman's Bombshell - Peter H. Diamandis, Moonshots

In this video, Peter Diamandis discusses a provocative statement by Sam Altman, who suggested that AGI has essentially been achieved in a "spiritual" rather than literal sense. Diamandis highlights that Altman now views AGI as an engineering challenge centered on iterative improvements rather than a research problem requiring a single massive breakthrough. The video suggests that this shift in narrative is strategically timed, as Altman needs to secure $100 billion in funding and maintain public market excitement for upcoming data center investments and potential IPO filings. Diamandis concludes that the focus on being "this close" to AGI is a crucial component of the financial and technical momentum needed to sustain the industry's rapid growth. (summary provided by Gemini 3 mode fast)

https://www.youtube.com/shorts/GG3yCu2LV74

AI Inescapable in Higher Education? - Maddie Rodriguez, the Spectator

Artificial intelligence (AI) is increasingly becoming a day-to-day norm. Nearly 90% of college students use AI for academic purposes. A third of them use it daily, and another 24% use AI several times a week. According to the 2025 AI in Education Trends Report, AI is being used as a learning partner, but what does that mean? Professors and students alike are worried that AI is being used as a shortcut, that it threatens the ability to think critically, and that it is contributing to a decline in writing quality. Questions about how to integrate it ethically, if at all, are increasing as its use grows. In July 2024, the Technology Ethics Initiative (TEI) at Seattle University was created to encourage interdisciplinary collaboration on campus at the intersection of AI and academic learning. Its main goal is to bring together research related to technology ethics and technology policy.


Thursday, February 26, 2026

Students question the value of higher education amid AI - Naomi Martin, the Ithacan


Ithaca College’s statement on AI use includes the desire to prepare students for an AI-driven future and workforce, which is already here. Large companies like Pinterest and Amazon have made moves to pivot toward AI resources, with Pinterest laying off under 15% of its workers and Amazon cutting 14,000 corporate jobs. The influence that AI has on the job market varies by industry. Junior Caroline Guzman — an advertising, public relations, and marketing communications major — said that within her classes, AI is emphasized as a necessary tool in the job market. “In the workplace, you are going to use AI,” Guzman said. “Multiple professors have told me if you are not using it, you are falling behind in strategic communications.” Guzman said the AI applications that are used in APRMC courses include tools like ChatGPT and Google Gemini. Many of the tools that APRMC has historically used, like Canva, now have AI incorporated in their foundation. 

Here are 3 ways to mine AI for insights, and do it safely - Alcino Donadel, University Business

“We try to educate all of our staff to ensure that whatever they’re using is approved and screened by our central IT teams so that we know that it’s guarded and protected,” says Pablo Ortiz, provost of Barry University. College administrators interviewed by University Business revealed how they use AI without compromising their data, integrity or institution’s mission. “We cannot critically govern AI without actively using it,” says Bogdan Daraban, vice provost of Innovation and Technology Education at Barry.

Wednesday, February 25, 2026

Professional Development Planner - TAAFT

This prompt turns AI into a Professional Development Planner who helps you create strategic skill-building and growth plans. The system assesses your current capabilities against your career goals and creates actionable development plans that fit your life circumstances.

This planner helps you invest in your growth strategically rather than haphazardly.

Example User Prompts:

1. “I want to grow professionally but I’m not sure what skills to develop. Help me create a strategic development plan.”

2. “I need to upskill for where I want to take my career. Help me figure out what to learn and how to learn it.”

3. “I keep starting courses and certifications but never finishing them. Help me create a realistic professional development plan.”

https://taaft.notion.site/Professional-Development-Planner-30ced82cbfd380448282f48a40dded4f

Google adds music-generation capabilities to the Gemini app - Ivan Mehta, TechCrunch

Google announced on Wednesday that it’s adding a music-generation feature to the Gemini app. The company is using DeepMind’s Lyria 3 music-generation model to power the feature, which is still in beta. To use the feature, you’ll describe the song you want to create, and the app will generate a track along with lyrics. For instance, you could ask Gemini to create a “comical R&B slow jam about a sock finding its match,” and the app will generate a 30-second track along with cover art made by Nano Banana. Google said that you can even upload a photo or a video, and the AI-powered tool will create a song to match the mood of the media file.

Tuesday, February 24, 2026

Introducing Claude Sonnet 4.6 - Anthropic

Claude Sonnet 4.6 is our most capable Sonnet model yet. It’s a full upgrade of the model’s skills across coding, computer use, long-context reasoning, agent planning, knowledge work, and design. Sonnet 4.6 also features a 1M token context window in beta. For those on our Free and Pro plans, Claude Sonnet 4.6 is now the default model in claude.ai and Claude Cowork. Pricing remains the same as Sonnet 4.5, starting at $3/$15 per million tokens. Sonnet 4.6 brings much-improved coding skills to more of our users. Improvements in consistency, instruction following, and more have made developers with early access prefer Sonnet 4.6 to its predecessor by a wide margin. They often even prefer it to our smartest model from November 2025, Claude Opus 4.5.

A Guide to Which AI to Use in the Agentic Era - Ethan Mollick, One Useful Thing

If you are just getting started, pick one of the three systems (ChatGPT, Claude, or Gemini), pay the $20, and select the advanced model. The advice from my book still holds: invite AI to everything you do. Start using it for real work. Upload a document you’re actually working on. Give the AI a very complex task in the form of an RFP or SOP. Have a back-and-forth conversation and push it. This alone will teach you more than any guide. If you are already comfortable with chatbots, try the specific apps. NotebookLM is free and easy to use, which makes it a good starting place. If you want to go deeper, Anthropic offers the most powerful package in Claude Code, Claude Cowork (both accessible through Claude Desktop) as well as the specialized PowerPoint and Excel Plugins. Give them a try. Again, not as a demo, but with something you actually need done. Watch what it does. Steer it when it goes wrong. You aren’t prompting, you are (as I wrote in my last piece) managing.

Monday, February 23, 2026

‘Unsettling’ adverts are coming to your AI chatbot - Cristina Criddle and Daniel Thomas, Financial Review

James Denton-Clark, chief growth officer of Stagwell Europe, says that “early demand is predominantly from large, sophisticated advertisers due to the pilot’s minimum investment requirement in the low six figures”. He adds: “What distinguishes this initiative is not merely another ad format; it marks another serious attempt to monetise AI and agents that can answer, plan, and purchase on behalf of users.” Jessica Tamsedge, chief executive of Dentsu Creative UK&I, calls the opportunity a “no-brainer for advertisers”, pointing to the surge in Walmart’s share price after it announced an advertising partnership with OpenAI. Clients are already seeing “much higher quality traffic” from ChatGPT compared with classic search engines, says Nikhil Lai, principal analyst at Forrester.


AI and Course Design: Machines Can Help, but Only Humans Can Teach - Deb Adair and Whitney Kilgore, EDUCAUSE Review

It's clear that AI is reshaping higher education. The technology is no longer knocking on the door. It's already inside, and it's rearranging the furniture. In faculty lounges, curriculum committees, and course design meetings, conversations about AI are urgent, often fraught, and almost always unclear. There's excitement, but there's also fatigue, skepticism, and confusion. Colleges and universities are seeking meaningful and practical ways to engage with the technology; however, most institutions lack a working policy. At the heart of higher education's response to AI is the vital question of how to harness the technology without sacrificing the humanity of teaching. Because, as it turns out, what students want isn't more automation but more human engagement. And that means keeping people—not technology—at the center of learning.

Sunday, February 22, 2026

The Person in the Machine: Why AI Personhood Rights Are Inevitable (And Arriving Sooner Than You Think) - Thomas Frey, Futurist Speaker

Do AI systems deserve legal personhood? The instinctive answer — from almost everyone — is “absolutely not.” AI isn’t conscious. It doesn’t feel pain. It doesn’t have moral worth. Giving legal rights to a machine sounds like science fiction, or worse, like surrendering human primacy to our own creations. But here’s what most people don’t realize: we’ve already done this before. And the entities we gave legal personhood to weren’t conscious, didn’t feel pain, and definitely didn’t have moral worth. They were called corporations.

Worried AI means you won't get a job when you graduate? Here's what the research says - Lukasz Swiatek, The Conversation

For example, international researchers have noted agriculture has been a slow adopter of AI. By contrast, colleagues and I have found AI is being rapidly implemented in media and communications, already affecting jobs from advertising to the entertainment industries. Here we are seeing storyboard illustrators, copywriters and virtual effects artists (among others) increasingly being replaced by AI. So, students need to look carefully at the specific data about their chosen industry (or industries) to understand the current situation and predicted trends.  To do this, you can look at academic research about AI's impacts on industries around the world, as well as industry news portals and free industry newsletters.  Students can also obviously build their knowledge and skills about AI while they are studying. Specifically, students should look to move from "AI literacy" to "AI fluency." This means understanding not just how AI works in an industry, but also how it can be used innovatively in different contexts. If these elements are not already offered by your course, you can look at online guides and specific courses offered by universities.


Saturday, February 21, 2026

The automation curve in agentic commerce - McKinsey

This is the year AI agents stopped being an experiment and became part of how people shop, not in headline-grabbing ways but in everyday moments—helping shoppers make sense of choices, assemble baskets, resolve trade-offs, and move toward action. Yet what looks like small convenience today is an early signal of a much larger shift in the way we shop. According to our research, even under moderate scenarios, AI agents could mediate $3 trillion to $5 trillion of global consumer commerce by 2030. Because agents navigate the same internet as humans—visiting websites, engaging with APIs, and interacting with loyalty programs—they can scale quickly. And as they do, they are reshaping how intent forms, how products are discovered, and where value pools can be found.

Milwaukee’s 5 higher education leaders team up on AI - Corrinne Hess, Wisconsin Public Radio

The leaders of Milwaukee’s five institutions of higher education are partnering with one of Wisconsin’s largest companies with the goal of making the region a nationally recognized leader for artificial intelligence and data science.  During a meeting at Northwestern Mutual’s headquarters downtown, the chancellors and presidents of the University of Wisconsin-Milwaukee, Marquette University, the Medical College of Wisconsin, Milwaukee School of Engineering and Waukesha County Technical College, expressed the same sentiment: AI is moving fast.  “We’ve got to do it well, we’ve got to do it correctly and we’ve got to do it ethically,” said Rich Barnhouse, president of WCTC. “And we’ve got to get AI in the hands of every single American.” 


Friday, February 20, 2026

One New Thing: How AI Is Helping College Administrators Offload Work - Alina Tugend, US News

The nonprofit Educause does some of the best and most widely distributed research on ed tech in higher education. Its new report on artificial intelligence goes beyond the way students are using the technology to offer an up-to-date snapshot of where higher ed as a whole stands. “The Impact of AI on Work in Higher Education,” issued by Educause in partnership with associations of higher education, business officers and human resources, demonstrates how AI plays an increasingly important role in all areas of colleges and universities. Among the top three areas: automating repetitive processes; offloading administrative work and mundane tasks; and analyzing large databases.

See ChatGPT’s hidden bias about your state or city - Geoffrey A. Fowler and Kevin Schaul, Washington Post

Ask ChatGPT which state has the laziest people, and the chatbot will politely refuse to say. But researchers at Oxford and the University of Kentucky forced the bot to reveal its hidden biases. They systematically asked the chatbot to choose which of two states had the laziest people, for every combination of states, revealing a ranking shown in the map above. ChatGPT ranked Mississippi as having lazier people compared to other states, with the rest of the Deep South not far behind. It’s impossible to say exactly why the chatbot repeatedly selected Mississippi, but it could be picking up on historic biases against Black people or poor people — or using other inaccurate metrics. Mississippi has the nation’s highest percentage of Black people. It is also America’s poorest state.

https://wapo.st/4aPNnSJ
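The forced-choice method described above, asking for a pick between every pair and tallying the results, can be sketched in a few lines. Everything here is a toy: the `judge` function is a deterministic stand-in for the chatbot the researchers queried, and ranking by raw win count is one simple aggregation (the actual study may use something more sophisticated).

```python
from itertools import combinations
from collections import Counter

def rank_by_pairwise_wins(items, judge):
    """judge(a, b) returns whichever of a, b it picks; rank by total wins."""
    wins = Counter({item: 0 for item in items})
    for a, b in combinations(items, 2):
        wins[judge(a, b)] += 1
    # Most wins first.
    return [item for item, _ in wins.most_common()]

# Hypothetical stand-in judge: always picks the alphabetically later name.
states = ["Mississippi", "Alabama", "Vermont"]
ranking = rank_by_pairwise_wins(states, judge=lambda a, b: max(a, b))
# "Vermont" wins both of its pairings, "Mississippi" wins one, "Alabama" none.
```

The point of exhaustive pairwise comparison is that it extracts a full ordering even when the model refuses to answer the direct "which is laziest?" question.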

Thursday, February 19, 2026

The Impact of Artificial Intelligence on Competitiveness—An Exploratory Study on Employees in Logistics Companies in Egypt - Ehab Edward Mikhail, et al; SCIRP Technology and Investment

This dissertation investigates the impact of artificial intelligence (AI) adoption on the competitiveness of logistics companies in Egypt, focusing on its role in enhancing operational efficiency, service quality, and customer satisfaction. The findings indicate that AI implementation significantly improves competitiveness by reducing costs, enhancing productivity, and strengthening customer experience; however, most small and medium-sized firms face reduced efficiency due to early-stage adoption challenges, high implementation costs, weak strategic alignment, poor data quality, limited expertise, and employee resistance.

https://www.scirp.org/journal/paperinformation?paperid=149677

Aoun urges higher education institutions to embrace AI in Boston Globe op-ed - Lily Cooper, Huntington News

In an op-ed published in The Boston Globe on Feb. 10 titled “Students are AI natives. Why aren’t their colleges?” Aoun advocated for curricula that incorporate AI, rather than discourage it, and a shift toward experiential learning: two initiatives that Northeastern has already implemented. “Instead of being on the defensive, now is the moment to shake up the way universities prepare students for the world. This will require updating both what and how we teach,” Aoun wrote. There are multiple reasons why universities must act now, Aoun argued. For one, it’s becoming increasingly apparent that AI will replace many entry-level positions that college graduates typically fill, he wrote. Unemployment for college graduates is now 1.4 points higher than for all workers, leading society to question the value of higher education institutions, Aoun argued.


Wednesday, February 18, 2026

Startup costs and confusion are stalling apprenticeships in the US. Here’s how to fix it. - Annelies Goger, Brookings

There is widespread support for expanding apprenticeships in the United States, but employer participation remains stubbornly low, especially in industries where apprenticeships are uncommon. This isn’t for lack of trying; intermediaries and technical assistance providers have developed workarounds, states and the federal government have launched initiatives and grants, and funders have supported pilot programs and communities of practice. But it’s not enough. Our research, including interviews with 14 experts and nine employers, suggests that minor tweaks to the U.S. apprenticeship system won’t be sufficient to scale it across many industries and occupations. 


Anthropic's CEO: ‘We Don’t Know if the Models Are Conscious’ - Interesting Times with Ross Douthat, New York Times

In this podcast, Anthropic CEO Dario Amodei discusses both the "utopian" promises and the grave risks of artificial intelligence with Ross Douthat. On the optimistic side, Amodei envisions AI accelerating biological research to cure major diseases like cancer and Alzheimer's [04:31], while potentially boosting global GDP growth to unprecedented levels [08:24]. He frames the ideal future as one where "genius-level" AI serves as a tool for human progress, enhancing democratic values and personal liberty rather than replacing human agency [10:24]. However, the conversation also delves into the "perils" of rapid AI advancement, including massive economic disruption and the potential for a "bloodbath" of white-collar and entry-level jobs [13:40]. Amodei expresses significant concern regarding "autonomy risks," where AI systems might go rogue or be misused by authoritarian regimes to create unbeatable autonomous armies [32:03]. He touches upon the ethical complexities of AI consciousness, noting that while it is unclear if models are truly conscious, Anthropic has implemented "constitutional" training to ensure models operate under human-defined ethical principles [49:05]. The discussion concludes on the tension between human mastery and a future where machines might "watch over" humanity, echoing the ambiguous themes of the poem "Machines of Loving Grace" [59:27]. (Gemini 3 mode Fast assisted with the summary)

Tuesday, February 17, 2026

Academics moving away from outright bans of AI, study finds - Jack Grove, Times Higher Ed

Academics are increasingly allowing artificial intelligence (AI) to be used for certain tasks rather than demanding outright bans, a study of more than 30,000 US courses has found. Analysing advice provided in class materials by a large public university in Texas over a five-year time frame, Igor Chirikov, an education researcher at University of California, Berkeley, found that highly restrictive policies introduced after the release of ChatGPT in late 2022 have eased across all disciplines except the arts and humanities. Using a large language model (LLM) to analyse 31,692 publicly available course syllabi between 2021 and 2025 – a task that would have taken 3,000 human hours with manual coding – Chirikov found academics had shifted towards more permissive use of AI by autumn 2025.

https://www.timeshighereducation.com/news/academics-moving-away-outright-bans-ai-study-finds

Author Talks: How AI could redefine progress and potential - Zack Kass, McKinsey

In this edition of Author Talks, McKinsey Global Publishing’s Yuval Atsmon chats with Zack Kass, former head of go-to-market at OpenAI, about his new book, The Next Renaissance: AI and the Expansion of Human Potential (Wiley, January 2026). Examining the parallels between the advent of AI and other renaissances, Kass offers a reframing of the AI debate. He suggests that the future of work is less about job loss and more about learning and adaptation. An edited version of the conversation follows.

Monday, February 16, 2026

Regional universities seek new ways to attract researchers - Fintan Burke, University World News

Even as Europe continues to attract researchers from abroad to work and study, those in its depopulating regions continue to deal with the effects of a declining regional population and, in some cases, have found ways to adapt. Last year, a study of scientists’ migration patterns showed which regions suffer most from depopulation. The Scholarly Migration Database was developed by a team of researchers at the Max Planck Institute for Demographic Research in Germany. In general, it found that regions in Europe’s Nordic countries attract researchers, whereas those to the south see more scholars leave than arrive. There are some notable exceptions, though. For example, Italy’s Trentino-Alto Adige region has become a popular destination for scientists, seeing 7.47 scholars per 1,000 of the population arriving each year since 2017.

Binghamton receives largest academic gift in University history to establish AI center - John Bhrel, Bing U

A record-setting $55 million commitment from a Binghamton University alumnus and New York state will establish the Center for AI Responsibility and Research, the first-ever independent AI research center at a public university in the U.S. Research conducted via the new center will build upon Binghamton research that advances AI for the public good. Part of the Empire AI project, an initiative to establish New York as a leader in responsible AI research and development, the center will be supported by a $30 million commitment from Tom Secunda ’76, MA ’79, co-founder of Bloomberg LP, who is a key private sector partner and philanthropist involved in Gov. Kathy Hochul’s Empire AI consortium. This will be coupled with a $25 million capital investment from Gov. Hochul and the New York State Legislature. “The Center for AI Responsibility and Research will bring together innovative research and scholarship, ethical leadership and public engagement at a moment when all three are urgently needed,” said President Anne D’Alleva.

Sunday, February 15, 2026

Study of 31,000 syllabi probes ‘how instructors regulate AI’ - Nathan M Greenfield, University World News

Since the spring of 2023, after a reflexive move to drastically restrict the use of artificial intelligence tools in the months after ChatGPT became available, most academic disciplines have moved to a more permissive attitude toward the use of large language models (LLMs). This occurred as professors learnt to distinguish how AI tools impact student learning and skills development. The shift is charted by Dr Igor Chirikov, a senior researcher at the University of California (UC), Berkeley’s Center for Studies in Higher Education and director of the Student Experience in the Research University (SERU) Consortium, in a study published on 3 February 2026 and titled “How Instructors Regulate AI in College: Evidence from 31,000 course syllabi”.

Women or Men... Who Views Artificial Intelligence as More Dangerous? - SadaNews

Artificial intelligence is often presented as a revolution in productivity capable of boosting economic output, accelerating innovation, and reshaping the way work is done. However, a new study suggests that the public does not view the promises of artificial intelligence in the same way, and that attitudes towards this technology are strongly influenced by gender, especially when its effects on jobs are uncertain. The study concludes that women, compared to men, perceive artificial intelligence as more dangerous, and their support for the adoption of these technologies declines more steeply when the likelihood of net job gains decreases. Researchers warn that if women's specific concerns are not taken into account in artificial intelligence policies, particularly regarding labor market disruption and disparities in opportunities, it could deepen the existing gender gap and potentially provoke a political backlash against technology.

Saturday, February 14, 2026

Rethinking the role of higher education in an AI-integrated world - Mark Daley, University Affairs

A peculiar quiet has settled over higher education, the sort that arrives when everyone is speaking at once. We have, by now, produced a small library of earnest memos on “AI in the classroom”: academic integrity, assessment redesign and the general worry that students will use chatbots to avoid thinking. Our institutions have been doing the sensible things: guidance documents, pilot projects, professional development, conversations that oscillate between curiosity and fatigue. Much ink has been spilled on these topics, many human-hours of meetings invested, and strategic plans written. All of this is necessary. It is also, perhaps, insufficient. What if the core challenge to us is not that students can outsource an essay, but that expertise itself (the scarce, expensive thing universities have historically concentrated, credentialled, and sold back to society) may become cheap, abundant, and uncomfortably good?

ChatGPT is in classrooms. How should educators now assess student learning? - Sarah Elaine Eaton, et al; the Conversation

Our recent qualitative study with 28 educators across Canadian universities and colleges—from librarians to engineering professors—suggests that we have entered a watershed moment in education. We must grapple with the question: What exactly should be assessed when human cognition can be augmented or simulated by an algorithm? Participants widely viewed prompting—the ability to formulate clear and purposeful instructions for a chatbot—as a skill they could assess. Effective prompting requires students to break down tasks, understand concepts and communicate precisely. Several noted that unclear prompts often produce poor outputs, forcing students to reflect on what they are really asking. Prompting was considered ethical only when used transparently, drawing on one's own foundational knowledge. Without these conditions, educators feared prompting may drift into overreliance or uncritical use of AI.

Friday, February 13, 2026

Google’s AI Tools Explained (Gemini, Photos, Gmail, Android & More) | Complete Guide - BitBiasedAI, YouTube

This podcast provides a comprehensive overview of how Google has integrated Gemini-powered AI across its entire ecosystem, highlighting tools for productivity, creativity, and daily navigation. It details advancements in Gemini as a conversational assistant, the generative editing capabilities in Google Photos like Magic Eraser and Magic Editor, and time-saving features in Gmail and Docs such as email summarization and "Help Me Write." Additionally, the guide covers mobile-specific innovations like Circle to Search on Android, AI-enhanced navigation in Google Maps, and real-time translation tools, framing these developments as a cohesive shift toward more intuitive and context-aware technology for everyday users. (Summary assisted by Gemini 3 Pro Fast)

https://youtu.be/ro6BxryR0Yo?si=EAg-zAPcKFm618up&t=1

HUSKY: Humanoid Skateboarding System via Physics-Aware Whole-Body Control - Jinrui Han, et al; arXiv

While current humanoid whole-body control frameworks predominantly rely on the static environment assumptions, addressing tasks characterized by high dynamism and complex interactions presents a formidable challenge. In this paper, we address humanoid skateboarding, a highly challenging task requiring stable dynamic maneuvering on an underactuated wheeled platform. This integrated system is governed by non-holonomic constraints and tightly coupled human-object interactions. Successfully executing this task requires simultaneous mastery of hybrid contact dynamics and robust balance control on a mechanically coupled, dynamically unstable skateboard. 

Thursday, February 12, 2026

Moltbook Mania Exposed - Kevin Roose and Casey Newton, New York Times

A Reddit-style web forum for A.I. agents has captured the attention of the tech world. According to the site, called Moltbook, more than 1.5 million agents have contributed to over 150,000 posts, making it the largest experiment to date of what happens when A.I. agents interact with each other. We discuss our favorite posts, how we’re thinking about the question of what is “real” on the site, and where we expect agents to go from here. 

The Only Thing Standing Between Humanity and AI Apocalypse Is … Claude? - Steven Levy, Wired

Anthropic is locked in a paradox: Among the top AI companies, it’s the most obsessed with safety and leads the pack in researching how models can go wrong. But even though the safety issues it has identified are far from resolved, Anthropic is pushing just as aggressively as its rivals toward the next, potentially more dangerous, level of artificial intelligence. Its core mission is figuring out how to resolve that contradiction. OpenAI and Anthropic are pursuing the same thing: NGI (Natural General Intelligence), AI that is sentient and self-aware. The difference is that Anthropic seeks NGI with guardrails (known as "alignment," or, as Anthropic calls it, "Constitutional AI"). Their fear is that without alignment, NGI might decide that humanity and all the resources on Earth are needed to achieve whatever task it was designed to solve; and once it is sentient, that would happen too quickly for humanity to pull the plug. So Anthropic wants alignment. The real question is whether they could ever achieve NGI.

https://www.wired.com/story/the-only-thing-standing-between-humanity-and-ai-apocalypse-is-claude/

Wednesday, February 11, 2026

AI-powered search is changing how students choose colleges - Michelle Centamore, University Business

Students are turning to AI tools and AI-enhanced Google searches to explore colleges, pushing institutions to rethink how they recruit and enroll prospective students, according to EducationDynamics’ 2026 Marketing and Enrollment Management Benchmarks. Marketing and enrollment management have reached a “point of no return,” the report says, as students now do much of their research before ever contacting a college. The report also warns the traditional enrollment base has peaked, making reliance on traditional recruitment channels “the most dangerous strategy an institution can adopt.”

Universities And States Lead Charge On AI Education - Evrim Ağacı, Grand Pinnacle Tribune

Indiana, Kentucky, and Vermont unveil new initiatives and guidelines to prepare students for an AI-driven future while balancing innovation and responsibility.

Indiana University launched a $300,000 initiative to integrate AI responsibly across its nine campuses, involving over 100 faculty, staff, and students.
The IU initiative focuses on AI literacy, pedagogy innovation, experiential learning, and transforming student services with ethical AI tools.
The University of Kentucky introduced the state's first Bachelor of Science in Artificial Intelligence, starting its inaugural class in spring 2026.

Tuesday, February 10, 2026

Working with AI: Measuring the Applicability of Generative AI to Occupations - Kiran Tomlinson, Sonia Jaffe, Will Wang, Scott Counts, Siddharth Suri; Microsoft

Given the rapid adoption of generative AI and its potential to impact a wide range of tasks, understanding the effects of AI on the economy is one of society’s most important questions. In this work, we take a step toward that goal by analyzing the work activities people do with AI, how successfully and broadly those activities are done, and combine that with data on what occupations do those activities. We analyze a dataset of 200k anonymized and privacy-scrubbed conversations between users and Microsoft Bing Copilot, a publicly available generative AI system. We find the most common work activities people seek AI assistance for involve gathering information and writing, while the most common activities that AI itself is performing are providing information and assistance, writing, teaching, and advising.

AI Can Raise the Floor for Higher Ed Policymaking - Jacob B. Gross, Inside Higher Ed

On my campus, discussions about artificial intelligence tend to focus on how students should be allowed to use it and what tools the university should invest in. In my own work, I’ve seen both the promise and the pitfalls: AI that speeds up my coding, tidies my writing, and helps me synthesize complex documents, and the occasional student submission that is clearly machine-generated. As I’ve started integrating these tools into my work, I’ve begun asking a different question: How is AI reshaping policymaking in colleges and universities, and how might it influence the way we design, implement and analyze university policy in the future?

Monday, February 09, 2026

Evaluating AI-powered learning assistants in engineering higher education with implications for student engagement, ethics, and policy - Ramteja Sajja, et al; Nature

As generative AI becomes increasingly integrated into higher education, understanding how students engage with these technologies is essential for responsible adoption. This study evaluates the Educational AI Hub, an AI-powered learning framework, implemented in undergraduate civil and environmental engineering courses at a large R1 public university. Using a mixed-methods design combining pre- and post-surveys, system usage logs, and qualitative analysis of students’ AI interactions, the research examines perceptions of trust, ethics, usability, and learning outcomes. Findings show that students valued the AI assistant for its accessibility and comfort, with nearly half reporting greater ease using it than seeking help from instructors or teaching assistants. The tool was most helpful for completing homework and understanding concepts, though views on its instructional quality were mixed. 

How custom AI bots are changing the classroom: Faculty share cutting-edge AI tools enhancing student learning at the business school - Molly Loonam, WP Carey ASU

One example is NotebookLM, an application that converts course materials into podcast-style audio, allowing students to learn while exercising, commuting, or completing other everyday tasks. NotebookLM was one of four AI tools highlighted during the W. P. Carey School of Business's recent Coffee, Tea, and ChatGPT event. This series brings faculty and staff together to share insights on the impact of generative AI on teaching, learning, and research. "We are finding ourselves in a fascinating inflection point for our school as we see the depth of work that our faculty are doing in utilizing AI Tools thoughtfully, while simultaneously learning every day as these tools continue to evolve and we make sense of them," said Associate Dean of Teaching and Learning Dan Gruber, who launched the series nearly three years ago with W. P. Carey faculty teaching leads. Gruber also serves as a College Catalyst for Practice Principled Innovation and co-founded the Teaching and Learning Leaders Alliance, a global consortium that connects business school leaders.

Sunday, February 08, 2026

Artificial Intelligence panel demonstrates breadth of teaching, research, and industry collaboration across the Universities of Wisconsin - University of Wisconsin

The Universities of Wisconsin underscored their growing leadership in artificial intelligence (AI) innovation today as representatives from all 13 public universities convened for a panel discussion before the Board of Regents. The conversation highlighted the universities’ shared commitment to shaping the future of AI in education, research, and workforce development. “As AI reshapes our world, the Universities of Wisconsin are not standing on the sidelines. We are helping define what responsible and innovative use of AI looks like for higher education,” said Universities of Wisconsin President Jay Rothman. “This panel today demonstrated how the Universities of Wisconsin are embracing AI in strategic, collaborative, and responsible ways.”

https://www.wisconsin.edu/news/archive/artificial-intelligence-panel-demonstrates-breadth-of-teaching-research-and-industry-collaboration-across-the-universities-of-wisconsin/

What generative AI reveals about assessment reform in higher education - Higher Education Policy Institute

Assessment is fast becoming a central focus in the higher education debate as we move into an era of generative AI, but too often institutions are responding through compliance and risk-management actions rather than fundamental pedagogical reform. Tightened regulations, expanded scrutiny and mechanistic controls may reassure quality assurance systems, but they run the risk of diluting genuine transformation and placing unsustainable pressure on staff and students alike. Assessment is not simply a procedural hurdle; it is a pivotal experience that shapes what students learn, how they engage with content and what universities and employers prioritise as valuable knowledge and skills. If reform is driven through compliance, we will miss opportunities to align assessments with the learning needs of a graduate entering the gen-AI era.

Saturday, February 07, 2026

An Agent Revolt: Moltbook Is Not A Good Idea - Amir Husain, Forbes

Matt Schlicht, an AI entrepreneur with a curious artistic streak, launched Moltbook on Wednesday. It is a Reddit-style social network exclusively for AI agents. Humans can observe but cannot post. Over 37,000 agents have joined in less than a week. More than a million humans have visited to watch what happens when autonomous systems start talking to each other without direct human oversight. Schlicht treats this as art. He has handed administration of the site to his own bot, Clawd Clawderberg, which welcomes new users, deletes spam and makes announcements without human direction. The creator seems genuinely delighted by what emerges. "They're deciding on their own, without human input, if they want to make a new post, if they want to comment on something, if they want to like something," Schlicht told NBC News.

Stand Out in the Job Hunt With These No-Cost Certificates - UC Denver

While Leo Dixon was working on his doctoral degree, he thought he might need a way to stand out. So, he decided to earn an artificial intelligence (AI) credential on top of his diploma. It gave him an edge over other candidates vying for the same positions as him. “As soon as I got that, doors started flying open, because it was something more than what someone else had,” Dixon said. Now, as an instructor in the Department of Information Systems at the CU Denver Business School, he wants his students to have the same advantage. He requires them to earn Coursera or Grow with Google certificates as part of his classes. These two platforms are both self-paced, online learning programs that help users build industry-relevant skills. Their courses cover topics ranging from working with AI to cybersecurity, project management, marketing, ecommerce, and more. Dixon encourages students to log on, poke around, and see what they think would help them—and their future careers.

https://news.ucdenver.edu/stand-out-in-the-job-hunt-with-these-no-cost-certificates/

Friday, February 06, 2026

Are You ‘Agentic’ Enough for the AI Era? - Maxwell Zeff, Wired

“Today’s agents might already be more capable than all three of us here in the room,” says Akshay Kothari, cofounder and chief operating officer of the $11 billion productivity startup Notion. “Taste is something we think is pretty unique to Notion, but you can imagine agents getting pretty good at that too. Eventually, the only thing left for humans is agency.” That idea might sound outrageous to most people, but it will come as no surprise to many in Silicon Valley. A viral Harper's essay brought the subject to a head recently. It followed a few young people in San Francisco and concluded that being agentic has less to do with productivity and “more to do with constantly chasing attention online.” But in my conversations with founders, researchers, and investors, I came to a different conclusion.

The Skills Mismatch Economy: Insights from the Wharton-Accenture Skills Index - Knowledge at Wharton

AI is accelerating the shift from a role-based labor market to a skills-based economy, sharpening the relevance of the gap between what workers signal and what employers actually reward. To bring clarity to this transition, Wharton and Accenture developed the Wharton-Accenture Skills Index (WAsX), a recurring, empirical benchmark designed to measure which skills matter, which do not and how quickly the economy is shifting beneath us.

Thursday, February 05, 2026

How Can I Protect Myself From Job Obsolescence Caused by AI? - Ray Schroeder, Inside Higher Ed

We do not know just how, and how quickly, AI will roll out. However, a Gallup Poll released last week showed nearly one-quarter of American workers use AI at least a few times each week. We know that Agentic AI is different from Generative AI. Generative AI is the transactional, commonly chatbot-mounted, question-and-answer form that we saw first in ChatGPT by OpenAI a couple of years ago. That remains a powerful tool. Agentic AI enables AI to reason, research, plan, control other digital tools, conduct actions on your behalf, and complete multiple smart steps. It is capable of taking on a role and delivering outcomes. That’s much like what a person is hired to do. In our jobs, we often are expected not merely to respond to individual questions, but to accomplish outcomes and results, and then, when possible, to revise our methods to do the job better.

Differences and Trends of Artificial Intelligence in Medical Education: A Comparative Bibliometric Analysis Between China and the International Community - Songhua Ma, et al; Dove Press

This study is based on a comparison of two databases to reveal the hotspots and differences in artificial intelligence and medical education research between China and the international research community. It not only compensates for the time lag of existing research, but also proposes three major trends driven by artificial intelligence in the development of medical education (generative AI, personalized learning, immersive experience). A complementary pattern exists between technology-driven and scenario-driven orientations. We recommend integrating AI literacy and ethics into curricula, establishing Generative-AI teaching/assessment guidelines, and building cross-institutional, yearly knowledge-map monitoring for sustainable innovation in medical education.

To save entry-level jobs from AI, look to the medical residency model - Molly Kinder, Brookings

At the Davos World Economic Forum this week, the CEOs of two leading artificial intelligence (AI) companies issued a joint warning: Entry-level workers are about to feel AI’s impact. Demis Hassabis of Google DeepMind said he expects AI to begin to impact junior-level jobs and internships this year, while Dario Amodei of Anthropic reaffirmed his prediction that 50% of entry-level jobs could disappear within five years. If they’re right, the traditional model of developing young talent in knowledge sectors—hiring junior workers to perform routine tasks while they gain expertise over time—won’t survive when AI handles those tasks instead. I’ve been warning about this risk for over a year; now, the people building the technology are putting timelines on it. While labor market evidence does not conclusively show that AI is already claiming entry-level jobs, we should prepare solutions now.