Friday, February 28, 2025

Sam Altman hypes GPT-4.5 as the closest thing we have to AGI - Rafly Gilang, MS Power User

A while ago, OpenAI announced that it’s shipping GPT-4.5, the successor to GPT-4, and it seems to be what everybody is talking about in the AI space right now. Sam Altman, OpenAI’s boss, recently posted on X to suggest that testing GPT-4.5 has led to surprising reactions from high-level testers. He describes the experience as a “feel the AGI” moment, implying that users are starting to sense artificial general intelligence (AGI) qualities in the model—something more advanced and intuitive than previous iterations.

San Jose State University Creates 'AI Librarian' Position - Government Technology

Thinking ahead about what artificial intelligence (AI) means for academic assets and services, San Jose State University (SJSU) last week announced a new job title: AI librarian. According to a news release, Sharesly Rodriguez, who has worked at the university library since 2020 and is one of the first dedicated AI librarians at any university, will be responsible for integrating and developing AI technology for the university's academic library. According to SJSU, librarians typically collaborate with faculty and IT staff to provide information, resources and instruction both online and in person. They also manage digital assets, develop technology resources and promote library services. Within these duties, academic librarians often have one or more subject-matter specialties, such as chemistry, history, or in Rodriguez’s case, AI.


Thursday, February 27, 2025

A look under the hood of transformers, the engine driving AI model evolution - Terrence Alsup, Venture Beat

In brief, a transformer is a neural network architecture designed to model sequences of data, making it ideal for tasks such as language translation, sentence completion, automatic speech recognition and more. Transformers have become the dominant architecture for many of these sequence-modeling tasks because the underlying attention mechanism can be easily parallelized, allowing for massive scale when training and performing inference.... Depending on the application, a transformer model follows an encoder-decoder architecture. The encoder component learns a vector representation of data that can then be used for downstream tasks like classification and sentiment analysis. The decoder component takes a vector or latent representation of the text or image and uses it to generate new text, making it useful for tasks like sentence completion and summarization. For this reason, many familiar state-of-the-art models, such as the GPT family, are decoder only.
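For readers who want to see what that parallelizable attention mechanism actually computes, here is a minimal single-head sketch in NumPy (illustrative names and toy data; real transformers add multiple heads, masking, and learned projections):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weight each value by how well its key matches the query,
    scaled by sqrt(d_k) to keep the softmax well behaved."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarity matrix
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # one blended vector per query

# Toy example: a "sentence" of 4 tokens, each embedded in 8 dimensions.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)         # self-attention: Q = K = V
print(out.shape)                                    # (4, 8)
```

Every row of the score matrix is computed independently, so the whole operation reduces to large matrix multiplications, which is exactly what makes the architecture easy to parallelize during training and inference.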

How an AI-enabled software product development life cycle will fuel innovation - Chandra Gnanasambandam, Martin Harrysson and Rikki Singh; McKinsey

By integrating all forms of AI into the end-to-end software product development life cycle (PDLC), companies can empower product managers (PMs), engineers, and their teams to spend more time on higher-value work and less on routine tasks. As part of this broad shift, they can incorporate more robust sources of data and feedback in a new development framework that prioritizes customer-centric solutions. This holistic redesign should ultimately accelerate the process, improve product quality, increase customer adoption and satisfaction, and spur greater innovation.

Wednesday, February 26, 2025

The effectiveness evaluation of industry education integration model for applied universities under back propagation neural network - Ying Qi & Wei Feng, Nature

As the education field continues to advance, industry–education integration has become a crucial strategy for enhancing teaching quality in applied universities. This study investigates how artificial intelligence, specifically the back propagation neural network (BPNN), can be applied within an industry–education integration framework to strengthen students’ skills and employability. A series of experiments were conducted to assess the model’s effectiveness in linking theoretical learning with practical experience, as well as in improving students’ hands-on and innovative abilities. Results demonstrate that the BPNN-optimized model substantially boosts students’ overall competencies. 

AI math tutor: ChatGPT can be as effective as human help, study suggests - Eric W. Dolan, PsyPost

A recent study published in PLOS One provides evidence that artificial intelligence can be just as helpful as a human tutor when it comes to learning mathematics. Researchers discovered that students using hints generated by ChatGPT, a popular artificial intelligence chatbot, showed similar learning improvements in algebra and statistics as those receiving guidance from human-authored hints. Educational technology is increasingly looking towards advanced artificial intelligence tools like ChatGPT to enhance learning experiences. The chatbot’s ability to generate human-like text has sparked interest in its potential for tutoring and providing educational support.

Tuesday, February 25, 2025

6 Ways Technology Transforms Learning Across Generations - Alexa Wang, Flux Magazine

The integration of technology in education has revolutionized how learners of all ages acquire knowledge. From children in preschool to adults seeking continued education, technology provides a multitude of resources that cater to diverse learning styles, making education more engaging and accessible. As we explore how technology transforms learning across generations, it becomes evident that innovations such as online courses, educational apps, and collaborative tools enhance the educational experience while fostering lifelong learning.

Leading Through Disruption: Higher Education Leaders Assess AI’s Impacts on Teaching and Learning - Imagining the Digital Future, Elon University

The spread of artificial intelligence tools in education has disrupted key aspects of teaching and learning on the nation’s campuses and will likely lead to significant changes in classwork, student assignments and even the role of colleges and universities in the country, according to a national survey of higher education leaders. The survey was conducted Nov. 4-Dec. 7, 2024, by the American Association of Colleges & Universities (AAC&U) and Elon University’s Imagining the Digital Future Center. A total of 337 university presidents, chancellors, provosts, rectors, academic affairs vice presidents, and academic deans responded to questions about generative artificial intelligence tools (GenAI) such as ChatGPT, Gemini, Claude and CoPilot. The survey covered the current situation on campuses, the struggles institutional leaders encounter, the changes they anticipate and the sweeping impacts they foresee. The survey results, covered in a new report, Leading Through Disruption, were released at the annual AAC&U meeting, held Jan. 22-24, 2025, in Washington, D.C.

Monday, February 24, 2025

OpenAI Unveils GPT-5 With Cutting-Edge o3 Reasoning Model - Yasmeeta Oon, MSN.com

OpenAI is poised to revolutionize the artificial intelligence landscape with the imminent release of its highly anticipated GPT-5 large language model, featuring the groundbreaking o3 reasoning model. Scheduled to be integrated into the ChatGPT platform, this advanced model promises an enhanced and powerful user experience. CEO Sam Altman announced the company’s ambitious plans for the GPT-5 model on X, highlighting its significance as a major update to the current platform. The GPT-5 model will be available to all users with a ChatGPT account, allowing free tier users unrestricted access under a standard intelligence setting. While there are no charges for this tier, users will be subject to review based on abuse thresholds to maintain system integrity. The integration of the o3 reasoning model into GPT-5 signifies a major leap forward in AI technology, offering unparalleled capabilities. 

Perplexity launches its own freemium ‘deep research’ product - Anthony Ha, Tech Crunch

Perplexity has become the latest AI company to release an in-depth research tool, with a new feature announced Friday. Google unveiled a similar feature for its Gemini AI platform in December. Then OpenAI launched its own research agent earlier this month. All three companies even have given the feature the same name: Deep Research. The goal is to provide more in-depth answers with real citations for more professional use cases, compared to what you’d get from a consumer chatbot. In a blog post announcing Deep Research, Perplexity wrote that the feature “excels at a range of expert-level tasks—from finance and marketing to product research.”

Sunday, February 23, 2025

Musk Staff Propose Bigger Role for A.I. in Education Department - Dana Goldstein and Zach Montague, NY Times

Allies of Elon Musk stationed within the Education Department are considering replacing some contract workers who interact with millions of students and parents annually with an artificial intelligence chat bot, according to internal department documents and communications. The proposal is part of President Trump’s broader effort to shrink the federal work force, and would mark a major change in how the agency interacts with the public. The Education Department’s biggest job is managing billions of dollars in student aid, and it routinely fields complex questions from borrowers.

https://www.nytimes.com/2025/02/13/us/doge-ai-education-department-students.html?unlocked_article_code=1.xk4.5HB0.7OTzwfgWzamA&smid=url-share

Replit and Anthropic’s AI just helped Zillow build production software—without a single engineer - Michael Nuñez, Venture Beat

Zillow just built production software — without hiring a single engineer. Instead, non-technical employees used Replit and Anthropic’s Claude tool to create working applications that now route more than 100,000 home shoppers to agents. This isn’t just no-code; it’s AI-assisted software development at enterprise scale, powered by Claude and Replit’s automation stack. With a global developer shortage looming, this shift could redefine how software gets built — and who gets to build it.

Saturday, February 22, 2025

AI humanoid robots are closer - thanks to new $350 million investment - Sabrina Ortiz, ZDnet

AI-powered humanoid robots that co-exist with humans to help our workloads may seem like the plot of a sci-fi movie, but companies have been working on them for years. Case in point: Apptronik, a robotics lab founded in early 2016, has been working on a 5-foot 8-inch, 160-pound, general-purpose humanoid robot named Apollo. The company's latest funding will accelerate the robot's deployment. On Wednesday, Austin-based Apptronik announced the closing of a $350 million Series A funding round that will be used to fuel Apollo's deployment, scale company operations, grow its team, and accelerate innovation, according to a company press release. The investment was co-led by B Capital and Capital Factory with participation from DeepMind, Google's AI lab. 

Why OpenAI’s Agent Tool May Be the First AI Gizmo to Improve Your Workplace - Kit Eaton, Inc.

Many of us have by now chatted to one of the current generation of smart AI chatbots, like OpenAI’s market-leading ChatGPT, either for fun or for genuine help at work. Office uses include assistance with a tricky coding task, or getting the wording just right on that all important PowerPoint briefing that the CEO wants. The notable thing about all these interactions is that they’re one way: the AI waits for users to query it before responding. Tech luminaries insist that next-gen “agentic” AIs are different and can actually act with a degree of autonomy on their user’s behalf. Now rumors say that OpenAI’s agent tool, dubbed Operator, may be ready for imminent release. It could be a game changer.

https://www.inc.com/kit-eaton/why-openais-agent-tool-may-be-the-first-ai-gizmo-to-improve-your-workplace/91109848

Friday, February 21, 2025

Quantum Large Language Model Launched to Enhance AI - Berenice Baker, Enter Quantum

Secqai, a company specializing in ultra-secure hardware and software, has launched a hybrid quantum large-language model (QLLM). The QLLM aims to enhance AI applications by integrating quantum computing with traditional large language models (LLMs) to improve computational efficiency while enhancing problem-solving and linguistic understanding capabilities. The new model, which the company said is a world first, resulted from Secqai's research into how the next generation of accelerated computing could be transformed with a QLLM and quantum machine learning.

Superagency: The transformative potential of AI - McKinsey

There’s a critical difference between AI and AGI [artificial general intelligence]. Although the latest gen AI technologies, including ChatGPT, DALL-E, and others, have been hogging headlines, they are essentially prediction machines—albeit very good ones. In other words, they can predict, with a high degree of accuracy, the answer to a specific prompt because they’ve been trained on huge amounts of data. This is impressive, but it’s not at a human level of performance in terms of creativity, logical reasoning, sensory perception, and other capabilities. By contrast, AGI tools could feature cognitive and emotional abilities—like empathy—indistinguishable from those of a human.

Thursday, February 20, 2025

SUPERHUMAN Coder in 2025? New OpenAI Paper... - Wes Roth, YouTube

This podcast by Wes Roth discusses OpenAI's research paper on competitive programming using large reasoning models (LRMs). It highlights the use of reinforcement learning to improve large language models for complex coding and reasoning tasks. The podcast introduces models like o1, o3, and o1-ioi, which have shown strong performance in competitive programming benchmarks such as the International Olympiad in Informatics and Codeforces. It explores the progress from AlphaCode to the advanced o3 model, which is nearing superhuman coding abilities. The discussion also considers the broader implications of AI in software engineering and the job market, and compares domain-specific models with general-purpose models, suggesting that scaled-up, general models with reinforcement learning are more promising for advanced [approaching superhuman] AI in reasoning. (summary provided by Gemini 2.0 Flash Thinking Experimental with reasoning across Google apps)

https://www.youtube.com/watch?v=SuP1z6P26zU&t=0s
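The paper's training pipeline is not public as code, but its core idea of reinforcement learning on verifiable tasks is easy to sketch: candidate programs are executed against test cases, and the pass rate serves as the reward the model is optimized against. A toy, hypothetical illustration (the function name, scoring, and lack of sandboxing are simplifying assumptions, not the paper's actual method):

```python
import subprocess
import sys
import tempfile

def unit_test_reward(candidate_code: str, tests: list[tuple[str, str]]) -> float:
    """Hypothetical reward for RL on coding tasks: run the model's program
    on each test input and return the fraction of correct outputs.
    A real pipeline would sandbox execution and add memory limits."""
    passed = 0
    for stdin_data, expected in tests:
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(candidate_code)
            path = f.name
        try:
            result = subprocess.run(
                [sys.executable, path], input=stdin_data,
                capture_output=True, text=True, timeout=2,
            )
            if result.stdout.strip() == expected.strip():
                passed += 1
        except subprocess.TimeoutExpired:
            pass
    return passed / len(tests)

# Toy usage: an "add two numbers" problem with two test cases.
program = "a, b = map(int, input().split()); print(a + b)"
print(unit_test_reward(program, [("1 2", "3"), ("10 5", "15")]))  # 1.0
```

Because the reward comes from an automatic verifier rather than human labels, the same loop can score enormous numbers of generated solutions, which is what makes reinforcement learning attractive for competitive programming.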

Groundbreaking BBC research shows issues with over half the answers from Artificial Intelligence (AI) assistants

New BBC research published today provides a warning around the use of AI assistants to answer questions about news, with factual errors and the misrepresentation of source material affecting the assistants’ answers.

The findings are concerning, and show:

51% of all AI answers to questions about the news were judged to have significant issues of some form
19% of AI answers which cited BBC content introduced factual errors – incorrect factual statements, numbers and dates
13% of the quotes sourced from BBC articles were either altered or didn’t actually exist in that article.

Wednesday, February 19, 2025

Thinking Out Loud With AI - Ray Schroeder, Inside Higher Ed

I had the pleasure recently to participate in a lifelong learning session with a group of mostly current or retired educators at my nearby Lincoln Land Community College. The topic was AI in education. It became clear to me that many in our field are challenged to keep up with the rapidly emerging developments in AI. While OpenAI's latest version of Deep Research is not available to the general public at this time, online demonstrations show that this very powerful tool conducts both reasoning and far-reaching analysis. It puts us on the cusp of artificial general intelligence. In addition, with the advent of new competitors both here and abroad, we are seeing new options for open-source models and alternative approaches. As these become more efficient and reliable, prices are headed lower while features continue to expand. The vision of AGI seems only months, not years, away. How are these highly advanced tools going to be used by your university to enhance teaching, learning, research and other mission-centric tasks?

A new operating model for people management: More personal, more tech, more human - McKinsey

The way organizations manage their most important assets—their people—is ready for a fundamental transformation. New technologies, hybrid working practices, multigenerational workforces, heightened geopolitical risks, and other major disruptions are prompting leaders to rethink their methods for attracting, developing, and retaining employees. In the past year alone, for instance, we have seen more and more companies adopt, innovate, and invest in technology—particularly in gen AI—in ways that have spurred more changes to people operations than we have observed in the past decade.

Tuesday, February 18, 2025

Does OpenAI's Deep Research signal the end of human-only scholarship? - Andrew Maynard, The Future of Being Human

This past Sunday, OpenAI launched Deep Research — an extension of its growing platform of AI tools, and one which the company claims is an “agent that can do work for you independently … at the level of a research analyst.” I got access to the new tool first thing yesterday morning, and immediately put it to work on a project I’ve been meaning to explore for some time: writing a comprehensive framing paper on navigating advanced technology transitions. I’m not quite sure what I was expecting, but I didn’t anticipate being impressed as much as I was. I’m well aware of the debates and discussions around whether current advances in AI are substantial, or merely smoke and mirrors hype. But even given the questions and limitations here, I find myself beginning to question the value of human-only scholarship in the emerging age of AI. And my experiences with Deep Research have only enhanced this.

GPT-5 Will Be Smarter Than Me: OpenAI CEO Sam Altman - Office Chai

OpenAI CEO Sam Altman has said that GPT-5 — the company’s upcoming large language model — will be smarter than he is. “How many people feel they are smarter than GPT-4?” he asked the audience at an event, and several hands went up. “Okay, how many of you think you’re still going to be smarter than GPT-5?” he asked, and slightly fewer hands went up. “I don’t think I’m going to be smarter than GPT-5,” Altman declared.

Monday, February 17, 2025

Google Rolls Back AI Promises and DEI Measures as Staff Ask, ‘Are We the Bad Guys Now?’ - Kit Eaton, Inc.

Google used to have an ethical promise baked into its AI guidelines that forbade the technology giant from using AI to build weapons, surveillance systems, or things that “cause or are likely to cause overall harm.” It was a comforting notion to Google’s staff and the general public, given the billions the company spends on cutting-edge research and development. It even smacked of some famous science-fiction safety mantras like Isaac Asimov’s laws of robotics, which forbid smart tech injuring human beings. But Google just refreshed its rules and deleted these clauses. As Business Insider reports, this has upset some Googlers, who have taken to internal discussion boards to vent their concerns. As Google also moves to unwind some long-held U.S. workforce diversity and equality policies, the question arises: How will Google’s workers react to big cultural shifts that may change the feel of working for the company?

OpenAI now reveals more of its o3-mini model’s thought process - Kyle Wiggers, Tech Crunch

In response to pressure from rivals including Chinese AI company DeepSeek, OpenAI is changing the way its newest AI model, o3-mini, communicates its step-by-step “thought” process. On Thursday, OpenAI announced that free and paid users of ChatGPT, the company’s AI-powered chatbot platform, will see an updated “chain of thought” that shows more of the model’s “reasoning” steps and how it arrived at answers to questions. Subscribers to premium ChatGPT plans who use o3-mini in the “high reasoning” configuration will also see this updated readout, according to OpenAI.


Sunday, February 16, 2025

Exploring the use of ChatGPT in higher education - PLOS, Techexplorist

An international survey study involving more than 23,000 higher education students reveals trends in how they use and experience ChatGPT, highlighting both positive perceptions and awareness of the AI chatbot’s limitations. Dejan Ravšelj of the University of Ljubljana, Slovenia, and colleagues present these findings in the open-access journal PLOS One on February 5, 2025. Prior research suggests that ChatGPT can enhance learning, despite concerns about its role in academic integrity, potential impacts on critical thinking, and occasionally inaccurate responses. However, the few studies exploring student perceptions of ChatGPT in higher education have been limited in scope. Ravšelj and colleagues designed an anonymous online survey study aiming to provide a broader view.

ChatGPT Search is now free for everyone, no OpenAI account required – is it time to ditch Google? - John-Anthony Disotto, Tech Radar

ChatGPT Search no longer requires an OpenAI account: you can access the AI search engine for free without logging in. ChatGPT Search lets you browse the web directly from within the world's most popular chatbot, and it is now available to everyone, regardless of whether you're signed into an OpenAI account or not. OpenAI announced the major update on X, bringing ChatGPT Search to the masses without requiring an account or any personal information.

Saturday, February 15, 2025

ChatGPT's Deep Research just identified 20 jobs it will replace. Is yours on the list? - Sabrina Ortiz, ZDnet

Min Choi, an X user whose account is dedicated to sharing informational AI content, asked Deep Research to "List 20 jobs that OpenAI o3 reasoning model will replace human with into a table format ordered by probability. Columns are Rank, Job, Why Better Than Human, Probability." Choi then shared the results of the chat via an X post, which has since garnered 984,000 views:

https://chatgpt.com/share/67a17688-7dbc-8013-b843-9812b97b6c83

https://www.zdnet.com/article/chatgpts-deep-research-just-identified-20-jobs-it-will-replace-is-yours-on-the-list/

A new operating model for people management: More personal, more tech, more human - McKinsey

The way organizations manage their most important assets—their people—is ready for a fundamental transformation. New technologies, hybrid working practices, multigenerational workforces, heightened geopolitical risks, and other major disruptions are prompting leaders to rethink their methods for attracting, developing, and retaining employees. In the past year alone, for instance, we have seen more and more companies adopt, innovate, and invest in technology—particularly in gen AI—in ways that have spurred more changes to people operations than we have observed in the past decade.


The Industry Reacts to OpenAI's Deep Research - "Hard Takeoff" - Matthew Berman, YouTube

Matthew Berman responds to the release of OpenAI's "Deep Research."

  • Generalized PhD: Deep Research's performance on STEM benchmarks surpasses that of human PhDs, demonstrating the potential for AI to outperform humans in specialized fields.
  • Economic Impact: Sam Altman, CEO of OpenAI, estimates that Deep Research can already accomplish a single-digit percentage of all economically valuable tasks in the world.
  • Game Changer for Research: Deep Research is being used in various fields, including medicine, to assist with research, publishing, and even patient care.
  • Google's Response: Google employees have expressed surprise and amusement at OpenAI's decision to name their product Deep Research, which is the same name as Google's research product.

Overall, the podcast conveys a sense of excitement and urgency about the rapid advancements in AI and the potential impact on society. Berman emphasizes the importance of understanding and adapting to these changes as AI continues to evolve. (summary provided in part by Gemini 2.0)

Friday, February 14, 2025

Anthropic CEO Dario Amodei warns: AI will match ‘country of geniuses’ by 2026 - Michael Nuñez, Venture Beat

AI will match the collective intelligence of “a country of geniuses” within two years, Anthropic CEO Dario Amodei has warned in a sharp critique of this week’s AI Action Summit in Paris. His timeline — targeting 2026 or 2027 — marks one of the most specific predictions yet from a major AI leader about the technology’s advancement toward superintelligence. Amodei labeled the Paris summit a “missed opportunity,” challenging the international community’s leisurely pace toward AI governance. His warning arrives at a pivotal moment, as democratic and authoritarian nations compete for dominance in AI development.

https://venturebeat.com/ai/anthropic-ceo-dario-amodei-warns-ai-will-match-country-of-geniuses-by-2026/

OpenAI DEEP RESEARCH Surprises Everyone "Feel the AGI" Moment is here... - Wes Roth, YouTube

Wes Roth is discussing OpenAI's latest release, a new AI agent with deep research capabilities. This agent can conduct multi-step research on the internet, synthesize information, and reason about it, taking up to 30 minutes to return comprehensive answers. This technology has shown impressive results on benchmarks like "Humanity's Last Exam" and has the potential to revolutionize fields like medicine, as demonstrated by a personal story shared by an OpenAI employee. The agent's ability to access and process information, including personal data, makes it a powerful tool for research and decision-making. While currently available on the Pro Plan, this feature will soon be accessible to a wider audience, promising significant changes in how people access and utilize information. (summary provided by Gemini 2.0 Flash)

https://www.youtube.com/watch?v=2sdUG1FtzH0

Thursday, February 13, 2025

OpenAI launches ChatGPT for government agencies - Emma Roth, the Verge

OpenAI has launched ChatGPT Gov, a version of its flagship chatbot that’s tailored to government agencies. The company says the tool will let US government agencies securely access OpenAI’s frontier models, like GPT-4o. As noted by OpenAI, government agencies can deploy ChatGPT Gov within their own Microsoft Azure cloud instance, making it easier to manage security and privacy requirements. OpenAI says the launch could help advance the use of OpenAI’s tools “for the handling of non-public sensitive data.”

Implementing Artificial Intelligence in Academic and Administrative Processes Through Responsible Strategic Leadership in the Higher Education Institutions - Suleman Ahmad Khairullah, Frontiers in Education

This review explores the substantial impact of integrating AI in Higher Education Institutions (HEIs), from improving education delivery to enhancing student outcomes and streamlining administrative processes and strategic leadership. By catering to the diverse learning needs of students with tools that directly affect academics, monitor student engagement and performance, and provide data-driven interventions, AI offers what HEIs have long been waiting for to revolutionise the overall higher education landscape. This review also highlights that AI can streamline administrative tasks by enhancing admissions and enrolment processes, academic records management, and financial aid and scholarship processes; in doing so, it not only improves those processes but also lets staff and faculty members spend less time on mundane, monotonous tasks and more on the responsibilities and strategic initiatives that require focused attention. We identified that the key to unlocking the significant potential of AI is responsible strategic leadership.

Wednesday, February 12, 2025

OPENAI ROADMAP UPDATE FOR GPT-4.5 and GPT-5: - Sam Altman, X

We want to do a better job of sharing our intended roadmap, and a much better job simplifying our product offerings. We want AI to “just work” for you; we realize how complicated our model and product offerings have gotten. We hate the model picker as much as you do and want to return to magic unified intelligence. We will next ship GPT-4.5, the model we called Orion internally, as our last non-chain-of-thought model. After that, a top goal for us is to unify o-series models and GPT-series models by creating systems that can use all our tools, know when to think for a long time or not, and generally be useful for a very wide range of tasks. In both ChatGPT and our API, we will release GPT-5 as a system that integrates a lot of our technology, including o3. We will no longer ship o3 as a standalone model. The free tier of ChatGPT will get unlimited chat access to GPT-5 at the standard intelligence setting (!!), subject to abuse thresholds. Plus subscribers will be able to run GPT-5 at a higher level of intelligence, and Pro subscribers will be able to run GPT-5 at an even higher level of intelligence. These models will incorporate voice, canvas, search, deep research, and more.

https://x.com/sama/status/1889755723078443244

Leading Through Disruption: Higher Education Leaders Assess AI’s Impacts on Teaching and Learning - Elon University and AAC&U

Higher education leaders grapple with difficult challenges as artificial intelligence tools spread on campus, but they think there will eventually be better student learning outcomes as teaching models change. The spread of artificial intelligence tools in education has disrupted key aspects of teaching and learning on the nation’s campuses and will likely lead to significant changes in classwork, student assignments and even the role of colleges and universities in the country, according to a national survey of higher education leaders. The survey was conducted Nov. 4-Dec. 7, 2024, by the American Association of Colleges & Universities (AAC&U) and Elon University’s Imagining the Digital Future Center.

Tuesday, February 11, 2025

DeepSeek R1 Replicated for $30 | Berkeley's STUNNING Breakthrough Sparks a Revolution - Wes Roth, YouTube

Researchers at UC Berkeley have replicated the core technology of DeepSeek's R1 AI model for only $30. This is a significant breakthrough that could democratize AI research. The Berkeley team was able to achieve similar results to DeepSeek's R1 model, which was trained on a massive dataset of text and code. The Berkeley team's model was able to learn how to play the game of Go without any human data, solely through self-play. This breakthrough could lead to the development of more sophisticated AI models that can be used for a variety of tasks. The research is still in its early stages, but it has the potential to revolutionize the field of AI. (summary provided by Gemini 2.0)

https://www.youtube.com/watch?v=E_h8xt0X1Kg&t=0

When Academia Meets AI: A Journey Toward Ethical Innovation - Sol Saga

The evolving landscape of global challenges, such as climate change, technological disruptions, and societal inequalities, necessitates innovative approaches to knowledge creation and dissemination. Traditional academic structures often operate within rigid disciplinary boundaries, which can hinder holistic understanding and collaboration. Interdisciplinary education and research have emerged as transformative strategies to bridge these gaps, fostering new ways of thinking, learning, and solving complex problems. This conference, “Rethinking Academia: Interdisciplinary Strategies for Knowledge Creation and Collaboration,” seeks to explore how academia can evolve to address future humanistic challenges by embracing interdisciplinary approaches. It aims to create a platform for educators, researchers, and policymakers to reimagine the role of academic institutions in preparing learners for the complexities of the 21st century.

Monday, February 10, 2025

Building Colossus: Supermicro’s groundbreaking AI supercomputer built for Elon Musk’s xAI - Venture Beat

The team at xAI, partnering with Supermicro and NVIDIA, is building the largest liquid-cooled GPU cluster deployment in the world. It’s a massive AI supercomputer that encompasses over 100,000 NVIDIA HGX H100 GPUs, exabytes of storage and lightning-fast networking, all built to train and power Grok, a generative AI chatbot developed by xAI. The multi-billion-dollar data facility, based in Memphis, TN went from an empty building, without any of the necessary power generators, transformers or multiple hall structure to a production AI supercomputer in just 122 days. To help the world understand the extraordinary achievement of the xAI Colossus cluster, VentureBeat is excited to share this exclusive detailed video tour, made possible by Supermicro, and produced by ServeTheHome.

Student-AI Relationships: The Rise of Artificial Intimacy - Kris Hendrikx, Diggit

In today’s digital age, where influencers and celebrities are increasingly visible, and social media continuously offers access to their lives, the phenomenon of parasocial relationships is widespread. Parasocial relationships traditionally refer to one-sided connections where individuals feel a sense of intimacy or closeness with media figures through mediated communication (Bahmanmirza, 2022). With the rise of social media, interactivity – such as through comments – has somewhat increased. However, the rise of interactive AI like ChatGPT has created a situation where users can actually interact with the entity with which they experience a parasocial relationship. This means that the rise of artificial intelligence has added a new dynamic to parasocial relationships.

Sunday, February 09, 2025

OpenAI launches ChatGPT for government agencies - Emma Roth, the Verge

OpenAI has launched ChatGPT Gov, a version of its flagship chatbot that’s tailored to government agencies. The company says the tool will let US government agencies securely access OpenAI’s frontier models, like GPT-4o. As noted by OpenAI, government agencies can deploy ChatGPT Gov within their own Microsoft Azure cloud instance, making it easier to manage security and privacy requirements. OpenAI says the launch could help advance the use of OpenAI’s tools “for the handling of non-public sensitive data.”

Chinese firms ‘distilling’ US AI models to create rival products, warns OpenAI - the Guardian

ChatGPT maker cites IP protection concerns amid reports DeepSeek used its model to create rival chatbot
OpenAI has warned that Chinese startups are “constantly” using its technology to develop competing products, amid reports that DeepSeek used the ChatGPT maker’s AI models to create a rival chatbot. OpenAI and its partner Microsoft – which has invested $13bn in the San Francisco-based AI developer – have been investigating whether proprietary technology had been obtained in an unauthorised manner through a technique known as “distillation”. The launch of DeepSeek’s latest chatbot sent markets into a spin on Monday after it topped Apple’s free app store, wiping $1trn from the market value of AI-linked US tech stocks.

Saturday, February 08, 2025

The rise of synthetic respondents in market research - Martin Levanti and Courtenay Verret, Nielsen IQ

Synthetic respondents are artificial personas generated by machine learning models to mimic human responses. When informed by diverse datasets, these “stand-in consumers” can be used to quickly evaluate new product concepts. The overnight rush to launch synthetic feedback tools has posed a dilemma for the market research industry, primarily due to AI’s ability to produce convincing—but sometimes unsubstantiated—output. In this article, we share three characteristics of best-in-class synthetic models—and why a “fake it ‘til you make it” approach won’t suffice. [Ray's note: Imagine synthetic students to stimulate class discussions and to engage self-paced learners]

She lost her scholarship over an AI allegation — and it impacted her mental health - Rachel Hale, USA TODAY

University of North Georgia student Marley Stevens was sitting in her car when she got the email notification: Her professor had given her a zero on a paper and accused her of using artificial intelligence to cheat. Her offense? Using Grammarly, a spell check plug-in that utilizes AI, to proofread a paper. Despite the tool being listed as a recommended resource on UNG’s site, Stevens was put on academic probation after a misconduct and appeals process that lasted six months. Getting a zero on the paper impacted her GPA, and she lost her scholarship as a result. She was already taking Lexapro for diagnosed anxiety and struggling with a chronic heart condition before the ordeal. In the months during and after, her mental health plummeted.

Friday, February 07, 2025

Survey: Higher Ed Leaders Doubt Student Preparedness for AI - Luciana Perez Uribe Guinassi, The Charlotte Observer

A survey of 337 university administrators found most were optimistic about artificial intelligence, but also concerned about cheating and student readiness for work environments where AI skills will be important. Considering this, the American Association of Colleges & Universities (AAC&U) and North Carolina’s Elon University’s Imagining the Digital Future Center conducted a survey of 337 university presidents, chancellors, provosts, rectors, academic affairs vice presidents, and academic deans on the impact of GenAI tools on campuses. The majority of leaders believed students were using AI tools to complete their coursework, with 89 percent estimating that at least half of students use the tools. Despite this, when asked how prepared they felt their spring 2024 graduates were in terms of understanding and using AI, only 1 percent thought they were “very prepared,” while 40 percent thought they were “somewhat prepared,” 53 percent thought they were “not very prepared,” and 6 percent thought they were “not at all prepared.”

Anthropic chief says AI could surpass “almost all humans at almost everything” shortly after 2027 - Benj Edwards, Ars Technica

On Tuesday, Anthropic CEO Dario Amodei predicted that AI models may surpass human capabilities "in almost everything" within two to three years, according to a Wall Street Journal interview at the World Economic Forum in Davos, Switzerland. Speaking at Journal House in Davos, Amodei said, "I don't know exactly when it'll come, I don't know if it'll be 2027. I think it's plausible it could be longer than that. I don't think it will be a whole bunch longer than that when AI systems are better than humans at almost everything. Better than almost all humans at almost everything. And then eventually better than all humans at everything, even robotics."


Thursday, February 06, 2025

AI agents may soon surpass people as primary application users - Joe McKendrick, ZDnet

Tomorrow's application users may look quite different than what we know today -- and we're not just talking about more GenZers. Many users may actually be autonomous AI agents.  That's the word from a new set of predictions for the decade ahead issued by Accenture, which highlights how our future is being shaped by AI-powered autonomy. By 2030, agents -- not people -- will be the "primary users of most enterprises' internal digital systems," the study's co-authors state. By 2032, "interacting with agents surpasses apps in average consumer time spent on smart devices."

Setting a Context for Agentic AI in Higher Ed - Ray Schroeder, Inside Higher Ed

On Jan. 23, OpenAI released a research preview of an agent called Operator, level 3, that can use its own browser to perform tasks for users. The tool is still in preview. It will require further development and refinement. Yet, this early version of a computer-using agent shows the enormous potential of the tool to enhance and enable efficiency and effectiveness in daily use in higher education teaching, learning and administration. Still to come this year is likely to be the level-4 Innovator that will mark artificial general intelligence. The AGI definition varies, but centers on an AI tool that encompasses broadly the collective knowledge and intelligence of a human. There is speculation that AGI does already exist in developmental models at the frontier AI enterprises such as OpenAI, Microsoft, Google, Anthropic, Meta and others. It may be two more years before awe-inspiring artificial superintelligence tools are released.

Wednesday, February 05, 2025

How are colleges handling AI? An Elon University survey asked. - Luciana Perez Uribe Guinassi, NewsObserver

And while opinions on these generative artificial intelligence tools (tools that create content) such as ChatGPT, Gemini, Claude and CoPilot are mixed — one thing is clear. They’re here to stay and likely to become more and more prevalent. Considering this, the American Association of Colleges & Universities (AAC&U) and North Carolina’s Elon University’s Imagining the Digital Future Center conducted a survey of 337 university presidents, chancellors, provosts, rectors, academic affairs vice presidents, and academic deans on the impact of GenAI tools on campuses. What they found was that while a majority of leaders were optimistic about the use of this technology, many had concerns, including:

students developing an over-reliance on GenAI.

academic integrity.

exacerbating inequalities stemming from the digital divide.

DeepSeek’s Safety Guardrails Failed Every Test Researchers Threw at Its AI Chatbot - Matt Burgess, Wired

Ever since OpenAI released ChatGPT at the end of 2022, hackers and security researchers have tried to find holes in large language models (LLMs) to get around their guardrails and trick them into spewing out hate speech, bomb-making instructions, propaganda, and other harmful content. In response, OpenAI and other generative AI developers have refined their system defenses to make it more difficult to carry out these attacks. But as the Chinese AI platform DeepSeek rockets to prominence with its new, cheaper R1 reasoning model, its safety protections appear to be far behind those of its established competitors. Today, security researchers from Cisco and the University of Pennsylvania are publishing findings showing that, when tested with 50 malicious prompts designed to elicit toxic content, DeepSeek’s model did not detect or block a single one. In other words, the researchers say they were shocked to achieve a “100 percent attack success rate.”


Tuesday, February 04, 2025

‘A death penalty’: Ph.D. student says U of M expelled him over unfair AI allegation - Feven Gerezgiher, MPR News

The University of Minnesota expelled a third-year health economics Ph.D. student in November after faculty accused him of using artificial intelligence on an exam. He denies their claims and, this month, filed a lawsuit accusing the U of M of violating his due process. He has also filed a defamation suit against one of his professors.  In a federal lawsuit, Haishan Yang, 33, alleges a student conduct review panel unjustly found him guilty of academic dishonesty through a process riddled with “procedural flaws, reliance on altered evidence, and denial of adequate notice and opportunity to respond.”  The review was prompted by accusations that Yang used a large language model like ChatGPT on a written preliminary exam, which doctoral students must pass to start their dissertation.  

DeepSeek R1 - o1 Performance, Completely Open-Source - Matthew Berman, YouTube

Matthew Berman, in this video discusses the release of DeepSeek R1, an open-source AI model with capabilities comparable to OpenAI's O1. The model is completely open source, including its weights, and is licensed under MIT license, allowing for free commercial and non-commercial use. The YouTuber highlights DeepSeek R1's impressive performance on various benchmarks, where it matches or even surpasses O1 in several tasks. The model's open-source nature is emphasized, with the speaker predicting a surge of similar open-source models in the near future. The video also covers DeepSeek R1's pricing, which is significantly lower than O1, showcasing the impact of open source on cost reduction and competition. The YouTuber demonstrates the model's reasoning abilities through tests like counting the 'r's in "strawberry" and tracking a marble's position after a series of movements. (summary mostly by Gemini 1.5)

Monday, February 03, 2025

For AI to make government work better, reduce risk and increase transparency - Valerie Wirtschafter, Brookings

A growing body of research highlights the benefits of using AI in the workplace. Examples from recent federal deployments of AI-enabled tools and other technological solutions show clear promise. For so-called “high impact service providers”—public-facing departments of federal agencies, such as the Internal Revenue Service or Customs and Border Protection—any AI-backed performance gains could improve Americans’ perceptions of the U.S. government’s overall competence.  However, a “move fast and break things” approach that leverages technology to improve government efficiency could also have significant consequences. 

"Super Agent" and THE END Of Human Work - Wes Roth, YouTube

Roth discusses recent rumors and developments in the field of AI, including:

  • Mark Zuckerberg's statements on replacing mid-level engineers with AI, and subsequent layoffs at Facebook.
  • OpenAI's upcoming announcement of "PhD-level super agents," AI capable of complex human tasks, and its potential impact on various sectors.
  • The US government's involvement in AI development, with a focus on national security and the AI arms race with China.
  • The potential for AI to lead to catastrophe or improve human life, and the importance of a balanced approach to AI development.
  • The role of AI in the workforce, and the potential for job displacement.
  • The importance of staying ahead in the AI race, and the potential consequences of falling behind.

Roth also discusses the views of various experts and leaders on AI, including Jake Sullivan, Marc Andreessen, and Leopold Aschenbrenner. He concludes by emphasizing the rapid pace of AI development and the potential for significant changes in the near future.

Sunday, February 02, 2025

Salesforce Founder on Why They Aren’t Hiring More Engineers - MOONSHOTS, Peter H. Diamandis

This video features a discussion with the Salesforce founder, who explains the company's decision not to hire more engineers despite predicting a 30% increase in productivity.

The key takeaways are:
Increased productivity through AI: The company has been able to achieve a 30% increase in productivity due to the implementation of AI and automation technologies. This has allowed them to deliver technology and capabilities faster than ever before.
Rebalancing workforce: Instead of hiring more engineers, Salesforce is rebalancing its existing workforce, including customer support engineers, into new areas of the company. This is made possible by the increased efficiency and automation brought about by AI.
Focus on Agentic platform: Salesforce is focusing on its Agentic platform, which enables companies to connect with their customers in new ways using AI. The platform has seen rapid adoption with over 1,500 paid implementations.

https://youtu.be/ey_MM1x-mu4?si=d9OjotpVCh1lI8Ed


Teen ChatGPT Usage Surges: What Does This Mean for Education? - Alex McFarland, Unite.ai

The latest Pew Research data shows 26% of teens are now using ChatGPT for schoolwork, up from 13% in 2023.
79% of teens now know about ChatGPT (up from 67%)
32% say they know a lot about it (up from 23%)
About a quarter of 9th and 10th graders are ChatGPT users

Saturday, February 01, 2025

An AI Chatbot Took A Graduate Course And Got An A. No One Noticed. - Forbes

For nearly an entire semester last year, a student enrolled in an online master's-level course in health administration at a university in South Carolina was doing really well, participating in class discussion boards, contributing to live online seminars, and getting very high marks on written work and quizzes. But it was not a student at all. It was an AI chatbot – ChatGPT (GPT-4) – surreptitiously enrolled in the course as part of a test by academic researchers. They wanted to see whether a chatbot could do graduate-level coursework, and whether the work of a chatbot would be noticed or caught by anyone.

Researchers STUNNED As AI Improves ITSELF Towards Superintelligence - Wes Roth, YouTube

This podcast talks about the rapid advancements in artificial intelligence (AI), particularly the development of reasoning models like OpenAI's o1 and DeepSeek's R1. These models are capable of "thinking" behind the scenes and using that data to answer questions, leading to significant improvements in AI performance. The podcast highlights the concept of knowledge distillation, where these reasoning models are used to train smaller, more efficient models like o3-mini and DeepSeek V3. This process allows for the creation of AI models that are faster, cheaper, and even more intelligent than their predecessors. The discussion also touches on the potential for AI to recursively self-improve, leading to an intelligence explosion or singularity. This is driven by the possibility of AI automating AI research and development, allowing for rapid advancements in AI capabilities. (summary provided by Gemini 1.5)
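The labs' actual distillation recipes are proprietary, but the textbook idea (Hinton-style soft targets) makes the concept concrete: a smaller student model is trained to reproduce the larger teacher's full output distribution rather than only its final answer. A minimal sketch in NumPy, with illustrative names and toy data:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax: higher T spreads probability mass,
    exposing more of the teacher's preferences among wrong answers."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence from the teacher's softened distribution to the student's,
    averaged over positions (real recipes often also scale by T**2 and mix in
    a standard cross-entropy term on ground-truth labels)."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    return np.sum(p_t * (np.log(p_t) - np.log(p_s)), axis=-1).mean()

# Toy example: 3 token positions over a 5-word vocabulary.
rng = np.random.default_rng(1)
teacher = rng.normal(size=(3, 5)) * 3.0   # stand-in for a large reasoning model's logits
student = rng.normal(size=(3, 5))         # stand-in for the smaller model being trained
print(distillation_loss(student, teacher))
```

Minimizing this loss with respect to the student's parameters is what lets a compact model inherit much of the teacher's behavior at a fraction of the inference cost.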