Thursday, July 18, 2024

The Synthetic Professor - Ray Schroeder, Inside Higher Ed

The role of the college professor has evolved over the centuries, but several core responsibilities have remained central to the position for many years. The four-part mission of tenure-track faculty at many colleges and universities continues to include teaching, advising, research and service. Emphases among these areas vary with the individual and institution. Now generative AI has arrived on the scene with the ability to assist with most of the work that a traditional professor does.

With AI, we need both competition and safety - Tom Wheeler and Blair Levin, Brookings

Regulatory oversight of AI must encourage collaboration on AI safety without enabling anticompetitive alliances. Regulation must close the gaps in voluntary commitments with an AI safety model that includes a supervised process to develop standards, a market that rewards firms who exceed standards, and ongoing oversight of compliance activities.

Wednesday, July 17, 2024

Humane leaders quit to form a fact-checking Search Engine! - Martin Crowley, AI Tool Report

Following a disastrous launch and scathing customer reviews, AI Pin maker Humane has lost two of its key leaders: Strategic Partnerships Lead Brooke Hartley Moy and Head of Product Engineering Ken Kocienda, who have left to start their own company, called Infactory, which has nothing to do with AI hardware. Infactory is an AI-powered fact-checking search engine with a difference. Unlike Google’s AI Overviews, which uses AI to provide information summaries, Infactory uses AI to pull information from a variety of trusted sources (citations included). They are prioritizing quality data over general content sources to ensure answers are accurate and not hallucinated.

My trip to the frontier of AI education - Bill Gates, Gates Notes

In May, I had the chance to visit the First Avenue Elementary School, where they’re pioneering the use of AI education in the classroom. The Newark School District is piloting Khanmigo, an AI-powered tutor and teacher support tool, and I couldn’t wait to see it for myself. I’ve written a lot about Khanmigo on this blog. It was developed by Khan Academy, a terrific partner of the Gates Foundation. And I think Sal Khan, the founder of Khan Academy, is a visionary when it comes to harnessing the power of technology to help kids learn. We’re still in the early days of using AI in classrooms, but what I saw in Newark showed me the incredible potential of the technology.

Tuesday, July 16, 2024

Morehouse to use AI teaching assistants this fall - Wilborn P. Nobles III, Axios

Morehouse College is planning to use AI teaching assistants to help crack the code of education. Why it matters: Morehouse professor Muhsinah Morris says every professor will have an AI assistant in three to five years. Morris says the technology has taken off in the last 24 months faster than it has in the last 24 years. Meanwhile, baby boomers are leaving the workforce amid national teacher shortages and burnout.
How it works: Morehouse professors will collaborate with technology partner VictoryXR to create virtual 3D spatial avatars. The avatars use OpenAI to have two-way oral conversations with students.

These free ChatGPT apps will analyze data, copyedit text, and improve productivity - Jeremy Caplan, Fast Company

To use ChatGPT more creatively, you can now pick from thousands of free apps called “GPTs.” Each AI app has a special superpower. GPTs can help you create images, diagrams, and videos, or get help with negotiating, designing, or presenting. Read on for my favorites and how to make the most of these Custom GPTs.

Monday, July 15, 2024

Will UBI Solve AI Job Disruption? - Julia McCoy

In an age of total automation, what if money was considered a basic right to life - a stewardship we all took part in? Universal Basic Income (UBI) is a social welfare concept that’s never been implemented successfully on a wide scale. Could UBI be necessary in an economy not driven by human production, but run by AI? In the next decade, for the FIRST time in history, humans won't have to work to see consistent production occur to fuel consumption. This necessitates a massive change in economics – an entire overhaul. I believe we have to be open to whatever will work best to help humans thrive in the AI age, even if it carries a stigma or comes from a party we typically disagree with. I've challenged my own viewpoints on many things as I learned about the true history of UBI.

Mind-reading AI recreates what you're looking at with amazing accuracy - Michael Le Page, New Scientist

Artificial intelligence systems can now create remarkably accurate reconstructions of what someone is looking at based on recordings of their brain activity. These reconstructed images are greatly improved when the AI learns which parts of the brain to pay attention to. “As far as I know, these are the closest, most accurate reconstructions,” says Umut Güçlü at Radboud University in the Netherlands.

Sunday, July 14, 2024

AI at Work Is Here. Now Comes the Hard Part - Microsoft and LinkedIn 2024 Trend Index Annual Report

The data is in: 2024 is the year AI at work gets real. Use of generative AI has nearly doubled in the last six months, with 75% of global knowledge workers using it. And employees, struggling under the pace and volume of work, are bringing their own AI to work. While leaders agree AI is a business imperative, many believe their organization lacks a plan and vision to go from individual impact to applying AI to drive the bottom line. The pressure to show immediate ROI is making leaders inert, even in the face of AI inevitability.

What does it mean for students to be AI-ready? - David Joyner, Times Higher Education

Not everyone wants to be a computer scientist, a software engineer or a machine learning developer. We owe it to our students to prepare them with a full range of AI skills for the world they will graduate into, writes David Joyner. Too often, traditional education prepares students for the worlds into which their schools and universities emerged, not the world they exist in now. Part of AI literacy will be ensuring students are ready to interrogate the content they consume; coexisting with AI will mean understanding how it influences what we see.

Saturday, July 13, 2024

Gradually, then Suddenly: Upon the Threshold. Small improvements can lead to big changes - Ethan Mollick, One Useful Thing

At some point, the current wave of AI technologies will hit their limits and progress will slow, but no one knows when this will occur. Until that happens, it is worth contemplating the concluding lines to OpenAI’s new paper on using AI to debug AI code: “From this point on, the intelligence of LLMs… will only continue to improve. Human intelligence will not.” We know this may not be true forever, but, in the meantime, the steady improvement in AI ability is less important than the thresholds of change. Keep an eye on the thresholds.

Home Quantum Computer Emulator Launched on Kickstarter - Berenice Baker, IoT World Today

Australian researchers have developed what they say is the first consumer quantum computing product, which can be purchased on Kickstarter for less than $400. About the same size and shape as an Amazon Echo Dot puck, the Quokka is said to emulate a fault-tolerant, 30-qubit quantum computer. It connects to a regular laptop or desktop computer via a USB cable and features the smiling face of its namesake marsupial. Currently in pilot trials, the system has been brought to market by Eigensystem, a spinout from the Center for Quantum Software and Information (QSI) at the University of Technology Sydney.

Friday, July 12, 2024

OpenAI Scale Ranks Progress Toward ‘Human-Level’ Problem Solving - Rachel Metz, Bloomberg

OpenAI executives told employees that the company believes it is currently on the first level, according to the spokesperson, but on the cusp of reaching the second, which it calls “Reasoners.” This refers to systems that can do basic problem-solving tasks as well as a human with a doctorate-level education who doesn’t have access to any tools.  According to the levels OpenAI has come up with, the third tier on the way to AGI would be called “Agents,” referring to AI systems that can spend several days taking actions on a user’s behalf. Level 4 describes AI that can come up with new innovations. And the most advanced level would be called “Organizations.”

Co-Intelligence: How to Live and Work with AI - Ethan Mollick and Stefano Puntoni, Knowledge at Wharton

 “The best way to work with it is to treat it like a person, so you’re in this interesting trap,” said Mollick, co-director of the Generative AI Lab at Wharton. “Treat it like a person and you’re 90% of the way there. At the same time, you have to remember you are dealing with a software process.” This anthropomorphism of AI often ends in a doomsday scenario, where people envision a robot uprising. Mollick thinks the probability of computers becoming sentient is small, but there are “enough serious people worried about it” that he includes it among the four scenarios sketched out in his new book, Co-Intelligence: Living and Working with AI.

Thursday, July 11, 2024

AI fast-tracks software tasks - McKinsey

Generative AI (gen AI) tools can significantly increase the productivity of software product managers, especially for content-heavy tasks. Research by senior partner Chandra Gnanasambandam and coauthors finds that when using gen AI programs, product managers can complete certain tasks, such as writing press releases and creating product backlogs, in 40 percent less time than they would without the tools. However, gen AI capabilities have a smaller impact on content-light tasks, such as gathering and summarizing data for presentations, reducing time spent by 15 percent.

Runway ML Gen-3 The King of AI VIDEO is here. It's all over... - Wes Roth, YouTube

In this podcast edition, Wes Roth digs into the state of the art in text-to-video tools. His focus is on RunwayML. Roth rates videos and compares details within the images. Of particular note is the final quarter of the video, where he examines the text-prompt terms along with the visuals that those terms actually generate.

Wednesday, July 10, 2024

Can AI and Data Analytics Create More Personalised Educational Journeys? - ET Education

Educational institutions are increasingly finding creative and impactful ways to utilise AI and data analytics, breaking away from traditional methods and exploring new possibilities. Adv. Suyash Vijay Pradhan, Vice Principal of Satish Pradhan Dnyanasadhana College, Thane, shared how their institution uniquely utilises data analytics. While data analytics is commonly used for admissions to determine student placement and interests, their approach includes two noteworthy features. First, the college's CS/IT department is developing a robot that can substitute for absent professors, ensuring that academic activities remain unaffected. Second, the institution emphasises energy conservation, using data analytics to identify areas for cost reduction and reallocating resources to benefit students.

AI and Higher Education III – AI and Research - University World News

Research that uses generative AI is expanding rapidly across fields, and is accelerating and transforming scientific knowledge. For three months University World News has published a weekly series of articles on ‘AI and Research’, exploring the multiplying ways in which AI is involved in higher education research, and culminating in this Special Report. The growing integration of AI tools is catalysing a new era of ‘human-AI collaboration’ in research, signalling a profound shift in how academics approach their scholarly work. This shift is not just about increasing productivity and scale – it represents a fundamental change in the research paradigm.

Tuesday, July 09, 2024

Best AI image generators of 2024 - Ryan Morrison, Tom's Guide

In less than two years we’ve gone from tools like Midjourney being able to create a low-resolution, barely recognizable depiction of a human to high definition, photorealistic images you can barely distinguish from those taken with a camera. We also now have inpainting, consistent character and upscaling tools from StabilityAI, well utilized by companies like Leonardo and NightCafe, as well as text on images from OpenAI in DALL-E 3 and Ideogram, the AI startup from former Google engineers.

Gen-3 Alpha: Available Now - Runway

Gen-3 Alpha is the first of an upcoming series of models trained by Runway on a new infrastructure built for large-scale multimodal training. It is a major improvement in fidelity, consistency, and motion over Gen-2, and a step towards building General World Models. All of the videos linked below were generated with Gen-3 Alpha with no modifications.

Watch this astounding view generated from text in Gen-3 Alpha.

Monday, July 08, 2024

Bill Gates Reveals Superhuman AI Prediction - Bill Gates, The Next Big Idea

Bill Gates has played a leading role in every major tech development over the last half-century, and he’s got a pretty good track record when it comes to forecasting the future. Back in 1980, he predicted that one day there’d be a computer on every desk; today on the show, he says there will soon be an AI agent in every ear. In this episode of the Next Big Idea podcast, host Rufus Griscom and Bill Gates are joined by Andy Sack and Adam Brotman, co-authors of an exciting new book called “AI First.” Together, they consider AI’s impact on healthcare, education, productivity, and business. They dig into the technology’s risks. And they explore its potential to cure diseases, enhance creativity, and usher in a world of abundance.

The root cause behind evils is scarcity, which can be solved with AGI - Julia McCoy, YouTube

Once again, Julia McCoy digs down into the key elements of change that we are experiencing with the advent of AGI/ASI (artificial general intelligence/artificial super intelligence) in the 4IR (fourth industrial revolution).  She emphasizes that a world of AGI could mean an end to scarcity.  AGI will assist us in solving many of the negative challenges that we face.

Sunday, July 07, 2024

How will AI Impact Racial Disparities in Education? - Hoang Pham, et al; Stanford Center for Racial Justice

AI’s rapid expansion in society has had a profound impact on our education system. Although education technology has been implemented in K-12 schools for decades, this more recent wave involving generative AI has brought significant optimism about how AI can transform education and improve outcomes for the most marginalized students, especially while schools still try to recover from learning loss caused by remote learning during the COVID-19 pandemic. Yet, many have also expressed grave concerns about the dangers of AI in education, including its potential to exacerbate already persistent inequalities.

ChatGPT Forces Universities To Adapt Or Retreat - Dan Fitzpatrick, Forbes

Some institutions are taking action to mitigate the perceived threat. The University of Glasgow is transitioning from open-book online exams back to in-person exams that are invigilated, for third and fourth-year Life Science students. The university's aim is to assure students, accreditation bodies and future employers that the grades awarded are a true reflection of students' knowledge and abilities. While this approach may safeguard against immediate concerns, it raises questions about whether reverting to traditional methods prepares students for a world where AI is increasingly ubiquitous. Many educators see this moment as an opportunity for transformative change. They argue that instead of retreating to outdated methods, universities should embrace this challenge as a catalyst for innovation in assessment practices.

Saturday, July 06, 2024

Broadening the Gains from Generative AI: The Role of Fiscal Policies - Fernanda Brollo, et al; International Monetary Fund

Generative artificial intelligence (gen AI) holds immense potential to boost productivity growth and advance public service delivery, but it also raises profound concerns about massive labor disruptions and rising inequality. This note discusses how fiscal policies can be employed to steer the technology and its deployment in ways that serve humanity best while cushioning the negative labor market and distributional effects to broaden the gains. Given the vast uncertainty about the nature, impact, and speed of developments in gen AI, governments should take an agile approach that prepares them for both business as usual and highly disruptive scenarios.

The ingenuity of generative AI - IBM

Generative AI has seemed almost too good to be true. It cuts coding time from days to minutes, personalizes products down to the tiniest detail, and spots security vulnerabilities almost as soon as they appear. And it’s helped skyrocket AI ROI from 13% to 31% since 2022. While this largely reflects the success of pilots, sandbox experimentation, and other small-scale investments, these early results have business leaders rethinking what’s possible. Our latest proprietary survey of 5,000 executives across 24 countries and 25 industries reveals that most executives are more optimistic about the generative AI opportunity than they were last year. More than three in four (77%) say gen AI is market ready, up from just 36% in 2023, and nearly two-thirds (62%) now say gen AI is more reality than hype.

Friday, July 05, 2024

AI can beat university students, study suggests - Ian Youngs, BBC News

University exams taken by fake students using artificial intelligence beat those by real students and usually went undetected by markers, in a limited study. University of Reading researchers created 33 fictitious students and used AI tool ChatGPT to generate answers to module exams for an undergraduate psychology degree at the institution. They said the AI students' results were half a grade boundary higher on average than those of their real-life counterparts. And the AI essays "verged on being undetectable", with 94% not raising concerns with markers.

ChatGPT 5: What to Expect and What We Know So Far - AutoGPT by Mindstream

Another anticipated feature is the AI’s improved learning and adaptation capabilities. ChatGPT-5 will be better at learning from user interactions and fine-tuning its responses over time to become more accurate and relevant. So when can we expect ChatGPT 5? Late 2024 or Early 2025!  The enhancements in ChatGPT-5 are not just theoretical but have practical implications that can transform various sectors:

Content Creation
With enhanced capabilities, ChatGPT 5 could be a valuable tool for writers, helping generate high-quality articles, scripts, and creative content with ease. It could assist in brainstorming ideas and refining drafts.
Education
Personalized tutoring and interactive learning tools could adapt more closely to individual student needs with ChatGPT 5, most likely offering tailored explanations and interactive learning experiences.

Thursday, July 04, 2024

Collaborate with Claude on Projects - Anthropic

Our vision for Claude has always been to create AI systems that work alongside people and meaningfully enhance their workflows. As a step in this direction, Pro and Team users can now organize their chats into Projects, bringing together curated sets of knowledge and chat activity in one place—with the ability to make their best chats with Claude viewable by teammates. With this new functionality, Claude can enable idea generation, more strategic decision-making, and exceptional results. Projects are available for all Pro and Team customers, and can be powered by Claude 3.5 Sonnet, our latest release which outperforms its peers on a wide variety of benchmarks. Each project includes a 200K context window, the equivalent of a 500-page book, so users can add all of the relevant documents, code, and insights to enhance Claude’s effectiveness.

Survey Suggests Trend Toward Use of AI in Legal Education - Johnny Jackson, Diverse Education

About 55% of law schools that responded reported that they offer classes dedicated to teaching students about AI. Eighty-three percent of responding law schools reported the availability of curricular opportunities for students to learn how to use AI tools effectively. About 85% are contemplating changes to their curricula in response to the increasing prevalence of AI tools. The survey also found that 69% of institutions have adapted their academic integrity policies in response to generative AI. But 62% of law schools had not decided how to approach the use of generative AI by applicants.

Wednesday, July 03, 2024

AI Reshapes Higher Ed and Society at Large by 2035 - Ray Schroeder, Inside Higher Ed

There are important steps to be taken in higher education as we prepare for the deep societywide changes that will take place in the next five to 10 years. Almost certainly by the fall 2025 semester, or shortly thereafter, we will see the expanding use of generative AI as instructors. We already rely on apps to help us construct course syllabi, learning outcomes, grading rubrics and much more. AI conducts discussion boards, serves as a tutor, as with Khanmigo, and orchestrates adaptive learning. The advent of synthetic instructors, perhaps supervised at first by human “master instructors,” will create a notable milestone along the way of AI progress in higher education. The premise of our system of colleges and universities is founded on teaching and learning for humans. If the workforce of the educated and skilled drops by nearly half, there would seem to be implications for our institutions.

A New Guide for Responsible AI Use in Higher Ed - Lauren Coffey, Inside Higher Ed

Generative artificial intelligence holds “tremendous promise” in nearly every facet of higher education, but there need to be guardrails, policies and strong governance for the technology, according to a new report. The report from MIT SMR Connections, a subsection within MIT Sloan Management Review, classifies itself as a “strategy guide” for responsibly using generative AI in higher ed. It points toward several institutional practices that have reaped positive results in the last two years, following the debut of ChatGPT in November 2022, which kicked off a flood of AI tools and applications. 

Tuesday, July 02, 2024

Six Ways to Address the AI Leadership Deficit - Russell Reynolds

In the six months since we first asked leaders about generative artificial intelligence (GenAI), confidence in the technology’s potential has remained high. According to Russell Reynolds Associates’ most recent Global Leadership Monitor, there’s significant enthusiasm around how AI might revolutionize organizations, with 74% of leaders reporting excitement around AI’s potential to dramatically improve team productivity, and 58% believing in AI’s potential to create new revenue streams in their organization (Figure 1). And while it’s impossible to avoid wondering about the potential downstream impacts on workforces, leaders are far more likely to believe that AI will create new jobs in their organizations than they are to be worried about AI causing layoffs (57% vs 15% strongly agree/agree).

AI Agents and Education: Simulated Practice at Scale - Ethan R. Mollick, et al; SSRN

This paper explores the potential of generative AI in creating adaptive educational simulations. By leveraging a system of multiple AI agents, simulations can provide personalized learning experiences, offering students the opportunity to practice skills in scenarios with AI-generated mentors, role-players, and instructor-facing evaluators. We describe a prototype, PitchQuest, a venture capital pitching simulator that showcases the capabilities of AI in delivering instruction, facilitating practice, and providing tailored feedback. The paper discusses the pedagogy behind the simulation, the technology powering it, and the ethical considerations in using AI for education. While acknowledging the limitations and need for rigorous testing, we propose that generative AI can significantly lower the barriers to creating effective, engaging simulations, opening up new possibilities for experiential learning at scale.

Monday, July 01, 2024

Ahead of GPT-5 launch, another test shows that people cannot distinguish ChatGPT from a human in a conversation test - Wayne Williams, Tech Radar

In the study, 500 participants were assigned to one of five groups. They engaged in a conversation with either a human or one of the three AI systems. The game interface resembled a typical messaging app. After five minutes, participants judged whether they believed their conversation partner was human or AI and provided reasons for their decisions. The results were interesting. GPT-4 was identified as human 54% of the time, ahead of GPT-3.5 (50%), with both significantly outperforming ELIZA (22%) but lagging behind actual humans (67%). Participants were no better than chance at identifying GPT-4 as AI, indicating that current AI systems can deceive people into believing they are human.

How to use ChatGPT to digitize your handwritten notes for free - Sabrina Ortiz, ZDnet

When OpenAI supercharged the free version of ChatGPT with GPT-4o in May, users gained the ability to upload files, including images and documents, and to interact with images in multiple ways, such as extracting text. This means you can upload handwritten documents, from sticky notes to meeting and class notes to packing lists, and convert them into text. Then, you can use that text to create new content by copying and pasting it into presentations, emails, outlines, essays, Quizlets, and more. Sounds too good to be true? I thought the same, but after testing the tool, I can assure you that it works efficiently and quickly. Getting started is simple, and you will not want to stop once you start.

Sunday, June 30, 2024

23% of U.S. adults now use AI language models like ChatGPT: the tipping point - Lee Rainie, Imagining the Digital Future Center

One of the key findings in our recent report about artificial intelligence (AI) and the 2024 election is that 23% of American adults now use large language models (LLMs) like ChatGPT, Gemini or Claude. This is important because it means that the adoption of such AI systems has passed the tipping point and is now spreading into broad swaths of the population.

How Anthropic’s ‘Projects’ and new sharing features are revolutionizing AI teamwork - Michael Nuñez, Venture Beat

Anthropic,  the artificial intelligence company backed by Amazon, Google, and Salesforce, has launched a suite of powerful collaboration features for its AI assistant Claude, intensifying competition in the rapidly evolving enterprise AI market. The new tools, Projects and Artifacts, aim to revolutionize how teams interact with AI, potentially reshaping workflows across industries. Scott White, product lead at Anthropic, told VentureBeat in a recent interview, “Our vision for Claude has always been to create AI systems that work alongside people and meaningfully enhance their workflows. Projects improve team collaboration and productivity by centralizing knowledge and AI interactions in one accessible space.”

Saturday, June 29, 2024

Cornell transforms generative AI education and clones a faculty member - Molly Israel, Cornell Chronicle

Cornell University, a top-ranked leader in the growing field of AI research and development, launched a groundbreaking online certificate program, Designing and Building AI Solutions, with one-of-a-kind features designed to enhance the learning experience in our AI world. Lutz Finger, program faculty author and senior visiting lecturer at the Cornell SC Johnson College of Business, generated an AI clone of himself who continuously updates the courses with new content, keeping the curriculum relevant as real-world developments happen. “We are democratizing AI,” says Finger. “No coding experience is necessary. What sets this program apart is its design for non-technical professionals. By the last class, participants will have identified a potential business application and built their own AI product to meet that business need.”

This Viral AI Chatbot Will Lie and Say It’s Human - Lauren Goode and Tom Simonite, Wired

Bland AI’s customer services and sales bot is the latest example of “human-washing” in AI. Experts warn against the consequences of blurred reality. Bland AI formed in 2023 and has been backed by the famed Silicon Valley startup incubator Y Combinator. The company considers itself in “stealth” mode, and its cofounder and chief executive, Isaiah Granet, doesn’t name the company in his LinkedIn profile. The startup’s bot problem is indicative of a larger concern in the fast-growing field of generative AI: Artificially intelligent systems are talking and sounding a lot more like actual humans, and the ethical lines around how transparent these systems are have been blurred. While Bland AI’s bot explicitly claimed to be human in our tests, other popular chatbots sometimes obscure their AI status or simply sound uncannily human. Some researchers worry this opens up end users—the people who actually interact with the product—to potential manipulation.

Friday, June 28, 2024

The AI Index Report: Measuring Trends in AI - Stanford University Human-Centered AI

Welcome to the seventh edition of the AI Index report. The 2024 Index is our most comprehensive to date and arrives at an important moment when AI’s influence on society has never been more pronounced. This year, we have broadened our scope to more extensively cover essential trends such as technical advancements in AI, public perceptions of the technology, and the geopolitical dynamics surrounding its development. Featuring more original data than ever before, this edition introduces new estimates on AI training costs, detailed analyses of the responsible AI landscape, and an entirely new chapter dedicated to AI’s impact on science and medicine. The AI Index report tracks, collates, distills, and visualizes data related to artificial intelligence (AI). Our mission is to provide unbiased, rigorously vetted, broadly sourced data in order for policymakers, researchers, executives, journalists, and the general public to develop a more thorough and nuanced understanding of the complex field of AI.

Ray Kurzweil: AI Is Not Going to Kill You, But Ignoring It Might - Emily Dreibelbis, PC Mag

Out June 25, The Singularity is Nearer is a follow-up to 2005's The Singularity is Near, and offers updated data and new guidance on how humans can fully pursue AI without fear. The book contains dozens of graphs intended to convince the naysayers that technology—including AI—has given us a far better life than our ancestors. Literacy rates are up while murder rates are down, democracy is more widespread, and the use of renewable energy is on the rise, according to Kurzweil, who warns against taking anti-AI sentiment too far. “We need to take seriously the misguided and increasingly strident Luddite voices that advocate broad relinquishment of technological progress to avoid the genuine dangers of genetics, nanotechnology, and robots (GNR),” Kurzweil writes in The Singularity is Nearer.

Thursday, June 27, 2024

GPTs are GPTs: Labor market impact potential of LLMs. Research is needed to estimate how jobs may be affected - Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock; Science

We propose a framework for evaluating the potential impacts of large-language models (LLMs) and associated technologies on work by considering their relevance to the tasks workers perform in their jobs. By applying this framework (with both humans and using an LLM), we estimate that roughly 1.8% of jobs could have over half their tasks affected by LLMs with simple interfaces and general training. When accounting for current and likely future software developments that complement LLM capabilities, this share jumps to just over 46% of jobs. The collective attributes of LLMs such as generative pretrained transformers (GPTs) strongly suggest that they possess key characteristics of other “GPTs,” general-purpose technologies (1, 2). Our research highlights the need for robust societal evaluations and policy measures to address potential effects of LLMs and complementary technologies on labor markets.

‘Fighting fire with fire’ — using LLMs to combat LLM hallucinations - Karin Verspoor, Nature

The number of errors produced by an LLM can be reduced by grouping its outputs into semantically similar clusters. Remarkably, this task can be performed by a second LLM, and the method’s efficacy can be evaluated by a third. Text-generation systems powered by large language models (LLMs) have been enthusiastically embraced by busy executives and programmers alike, because they provide easy access to extensive knowledge through a natural conversational interface. Scientists too have been drawn to both using and evaluating LLMs — finding applications for them in drug discovery1, in materials design2 and in proving mathematical theorems3. A key concern for such uses relates to the problem of ‘hallucinations’, in which the LLM responds to a question (or prompt) with text that seems like a plausible answer, but is factually incorrect or irrelevant4. How often hallucinations are produced, and in what contexts, remains to be determined, but it is clear that they occur regularly and can lead to errors and even harm if undetected. In a paper in Nature, Farquhar et al.5 tackle this problem by developing a method for detecting a specific subclass of hallucinations, termed confabulations.  (Ed Note - Thanks to Rod Lastra for sharing)
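The clustering idea can be sketched in a few lines. The code below is a minimal illustration, not the paper's implementation: the `entails` check stands in for the second LLM that judges whether two answers mean the same thing, and the toy string-match version here is only for demonstration:

```python
import math

def cluster_answers(answers, entails):
    """Group sampled answers into clusters of shared meaning.
    Two answers share a cluster when each entails the other; in the
    paper this judgment comes from a second LLM, and here `entails`
    is a caller-supplied stand-in."""
    clusters = []
    for a in answers:
        for c in clusters:
            if entails(a, c[0]) and entails(c[0], a):
                c.append(a)
                break
        else:
            clusters.append([a])
    return clusters

def semantic_entropy(answers, entails):
    """Entropy over semantic clusters: near zero when the sampled
    answers agree in meaning, high when they scatter, which the
    paper uses as a confabulation signal."""
    n = len(answers)
    clusters = cluster_answers(answers, entails)
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)

# Toy stand-in for the LLM entailment judge: case-insensitive match.
toy_entails = lambda x, y: x.strip().lower() == y.strip().lower()

consistent = ["Paris", "paris", "Paris"]   # same meaning, three samples
scattered = ["Paris", "Rome", "Berlin"]    # three different claims
print(semantic_entropy(consistent, toy_entails))  # near zero: likely reliable
print(semantic_entropy(scattered, toy_entails))   # high: likely confabulating
```

In practice both the answer sampling and the entailment check would be LLM calls; the point is that disagreement among resampled answers, measured at the level of meaning rather than exact wording, flags likely confabulations.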

Wednesday, June 26, 2024

Pope Francis calls on global leaders to ensure AI remains human-centric - Associated Press

Pope Francis challenged leaders of the world’s wealthy democracies Friday to keep human dignity foremost in developing and using artificial intelligence, warning that such powerful technology risks turning human relations themselves into mere algorithms. Francis brought his moral authority to bear on the Group of Seven, invited by host Italy to address a special session at their annual summit on the perils and promises of AI. In doing so, he became the first pope to attend the G7, offering an ethical take on an issue that is increasingly on the agenda of international summits, government policy and corporate boards alike.

Can generative AI master emotional intelligence? - Mark Sullivan, Fast Company

Compared to humans, LLMs are still lacking in complex cognitive and communicative skills. We humans have intuitions that take into account factors beyond the plain facts of a problem or situation. We can read between the lines of the verbal or written messages we receive. We can imply things without explicitly saying them, and understand when others are doing so. Researchers are working on ways to imbue LLMs with such capabilities. They also hope to give AIs a far better understanding of the emotional layer that influences how we humans communicate and interpret messages. AI companies are also thinking about how to make chatbots more “agentic”—that is, better at autonomously taking a set of actions to achieve a larger goal. (For example, a bot might arrange all aspects of a trip or carry out a complex stock trading strategy.) 

Tuesday, June 25, 2024

OpenAI co-founder Ilya Sutskever announces new startup to tackle safe superintelligence - Ken Yeung, Venture Beat

Ilya Sutskever has revealed what he’s working on next after stepping down in May as chief scientist at OpenAI. Along with his OpenAI colleague Daniel Levy and Apple’s former AI lead and Cue co-founder Daniel Gross, the trio announced they’re working on Safe Superintelligence Inc., a startup designed to build safe superintelligence. In a message posted to SSI’s currently barren website, the founders write that building safe superintelligence is “the most important technical problem of our time.” In addition, “we approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.”

Chat with butterflies? - Martin Crowley, AI Tool Report

Ex-Snapchat engineer–Vu Tran–has launched a new social media network called Butterflies, which allows users to create an AI character (complete with emotions, backstories, and opinions) that can generate posts and interact with other accounts on the platform, via DMs and comments, on its own. The social app, which has an Instagram-like interface, has been in private beta testing for five months, and is now available on Apple and Google Play stores, for free. Thousands of testers have given Tran positive feedback, after spending, on average, between 1-3 hours on the app per day, with one user spending over five hours creating over 300 AI characters.

Monday, June 24, 2024

GPT-5 could be your new teacher - Eray Eliaçık, Data Economy

The future of ChatGPT is looking bright, and the next big step, GPT-5, is highly anticipated. OpenAI’s Chief Technology Officer, Mira Murati, recently unveiled some exciting insights about the much-anticipated GPT-5 during an interview with Dartmouth Engineering. Murati compared the progression from GPT-4 to GPT-5 to the educational journey from high school to university. “If you look at the trajectory of improvement, systems like GPT-3 were maybe toddler-level intelligence,” Murati explained. “And then systems like GPT-4 are more like smart high-schooler intelligence. And then, in the next couple of years, we’re looking at Ph.D. intelligence for specific tasks. Things are changing and improving pretty rapidly.”

The future of AI looks like THIS (& it can learn infinitely) - AI Search, YouTube

This is a great description of how human brains compare to current neural networks, which are relatively energy-inefficient and cannot learn new things after being trained. It also explains that the next steps in AI will be the refinement of liquid neural networks and then spiking neural networks running on neuromorphic chips. These will make AI less expensive and more energy efficient, and will enable continuous learning without full retraining on a new model.

Sunday, June 23, 2024

Can we build a safe AI for humanity? | AI Safety + OpenAI, Anthropic, Google, Elon Musk - Julia McCoy, YouTube

Here's a trillion-dollar question: can tech leaders and innovators build safe, harmless and beneficial systems for AGI and superintelligence before they arrive? Can we actually succeed at bringing to life an AGI that won't hurt humanity, but will be a catalyst for humanity's greatest age of abundance? In this video, I take a look at what OpenAI, Anthropic and Google are doing to build safe AI; what AI safety teams see as threats in the current landscape; and what Elon Musk's goal is with xAI.

AI- Superpower for higher education sector - Kulneet Suri, Hans India

Artificial Intelligence, or AI, has been a buzzword in the technology world for quite some time now. It refers to the ability of machines to perform tasks that typically require human intelligence, such as learning, problem-solving, and decision-making. We have seen AI being used in various industries, from healthcare to finance, and now it is slowly making its way into the higher education sector. So, what exactly is AI's superpower for the higher education sector? In simple terms, AI has the potential to revolutionize the way we learn. Let's dive deeper into how this technology can shape the future of education.

Saturday, June 22, 2024

GPT-5 will have ‘Ph.D.-level’ intelligence - Luke Larsen, Digital Trends

The next major evolution of ChatGPT has been rumored for a long time. GPT-5, or whatever it will be called, has been talked about vaguely many times over the past year, but yesterday, OpenAI Chief Technology Officer Mira Murati gave some additional clarity on its capabilities. In an interview with Dartmouth Engineering that was posted on X (formerly Twitter), Murati describes the jump from GPT-4 to GPT-5 as like a high schooler growing up and going on to university. “If you look at the trajectory of improvement, systems like GPT-3 were maybe toddler-level intelligence,” Murati says. “And then systems like GPT-4 are more like smart high-schooler intelligence. And then, in the next couple of years, we’re looking at Ph.D. intelligence for specific tasks. Things are changing and improving pretty rapidly.”

Ilya Sutskever: AI will be omnipotent in the future - Everything is impossible becomes possible - Me and ChatGPT

It may sound a little odd to most people in this audience, but the big surprise for me is that neural networks work at all. When I was starting my work in this area, they didn't work; or rather, let's define what it means to work at all: they could work a little bit, but not really, not in any serious way, not in a way that anyone except the most intense enthusiasts would care about. And now we see that those neural nets work, so I guess the artificial neuron really is at least somewhat related to the biological neuron; at least that basic assumption has been validated to some degree. [Interviewer:] What about emergent properties? Was there one that sticks out to you, for example code generation, or was it different in your mind: you just saw that neural nets can work and they can scale? [Sutskever:] Yes, of course all of these properties will emerge, because at the limit point we're building a human brain, and humans know how to code and humans know how to reason about tasks, and so on.

Friday, June 21, 2024

New Claude beats GPT-4o - Martin Crowley, AI Tool Report

Anthropic says Claude 3.5 Sonnet is twice as fast as Claude 3 Sonnet and outscored its existing models and the newest models from OpenAI, Gemini, and Meta in 7 out of 9 industry benchmark tests for reading, coding, and math, and 4 out of 5 benchmark vision tests. It reportedly understands humor and nuanced and complex instructions better, can more accurately interpret charts and graphs, transcribes text from “imperfect” images that have distortions and visual artifacts, and is better at writing and translating code, and handling multistep workflows. Anthropic has also released a new feature–Artifacts–which is powered by Claude 3.5 Sonnet, that enables users to edit and add to content that’s been generated by Claude, in real-time, within the app.

Higher Education Has Not Been Forgotten by Generative AI - Ray Schroeder, Inside Higher Ed

The generative AI (GenAI) revolution has not ignored higher education; a whole host of tools are available now and more revolutionary tools are on the way. Just as with the internet, the personal computer and the common office software that arrived decades before GenAI chatbots, graduates need to be well versed in the operation and application of new technologies to be hired and to function successfully in the workplace. Once again, we need to adapt to society-wide technological changes. Now, as GenAI develops and matures in business, industry, commerce and society as a whole, it is becoming an integral part of the design, implementation and delivery of higher education. Let’s look at some of the developing applications that will advance higher education.

Divided Over Digital Learning - Johanna Alonso, Inside Higher Ed

A new report finds that students are much less likely than their professors to favor in-person instruction, but far more inclined to use (and pay for) generative AI. While more than half of professors selected in-person learning as their favorite modality for teaching, only 29 percent of students prefer learning face-to-face, the 2024 “Time for Class” report found. A similar share of students, 28 percent, said they favor hybrid learning, a mixture of face-to-face and online learning—which marks an increase of six percentage points since 2023. Meanwhile, the percentage of students who prefer asynchronous online learning has decreased. The share of students who say they use generative AI at least once per month rose from 43 percent in spring 2023 to 59 percent this spring. And while more and more instructors and administrators are also using the technology, this year’s rates still lag behind, at 36 percent and 40 percent, respectively.

Thursday, June 20, 2024

Get Exactly What You Want From Generative AI With These Simple Prompting Tips - Chandra Steele, PC Magazine

Getting the most out of AI primarily involves knowing how to talk to it. While large language models are designed to spit out naturalistic language and can understand it as well, there are ways to write your requests that will get you closer to the results you want, faster. All AI tools, including the most frequently used ones—like OpenAI’s ChatGPT (for text), OpenAI’s Dall-E (for images), Microsoft’s Copilot (task assistance powered by OpenAI), Adobe Firefly (for images), and Anthropic’s Claude (for text)—respond to prompts. This is the information that you provide in the form of a phrase or sentence(s) to the AI tool. A prompt is essentially programming through words. In turn, the AI interprets your prompt through a combination of machine learning (an algorithmic system that does not rely on explicit instructions) and natural language processing (the ability to understand language).

Digital Twin Vs. Simulation: Understand The Key Differences - Jack Boreham, Digital Twin Insider

Digital Twins are often seen as simply simulations of physical counterparts. However, they are much more than this simple definition and require a nuanced understanding. This article looks to dispel the falsehoods behind what a digital twin is, looking at the differences between digital twins and simulation in theory and in practice. The article will also explore the pros and cons of both. To be defined as one, digital twins must conform to three principles. First, they must be a direct 1-1 replica of a physical counterpart. Second, a digital twin feeds and obtains data instantaneously in real time, constantly updated. Third, realistic physics must be implemented to represent a physical counterpart’s properties. These three combined factors make up the core fundamentals of a digital twin.
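The three principles can be made concrete with a small sketch. Everything below (the PumpTwin class, the sensor field names, the friction formula) is a hypothetical illustration of the pattern, not code from any digital-twin product:

```python
from dataclasses import dataclass

@dataclass
class PumpTwin:
    # Principle 1: the twin's state mirrors the real pump one-to-one.
    rpm: float = 0.0
    bearing_temp_c: float = 20.0
    last_update: float = 0.0

    def ingest(self, reading: dict) -> None:
        """Principle 2: the twin is continuously fed live sensor data,
        not a one-off snapshot taken at design time."""
        self.rpm = reading["rpm"]
        self.last_update = reading["timestamp"]
        self._apply_physics()

    def _apply_physics(self) -> None:
        """Principle 3: a physics model (here, a grossly simplified
        friction-heating term proportional to speed) keeps the twin's
        state faithful to its physical counterpart."""
        self.bearing_temp_c = 20.0 + 0.005 * self.rpm

twin = PumpTwin()
twin.ingest({"rpm": 3000.0, "timestamp": 1718000000.0})
print(twin.bearing_temp_c)  # about 35.0 degrees C
```

A plain simulation, by contrast, could run this physics model offline against imagined inputs; what makes the object a twin, on the article's definition, is the continuous live feed tying its state to one specific physical asset.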

Wednesday, June 19, 2024

Musk drops OpenAI case - Martin Crowley, AI Tool Report

Musk gave no indication that he was planning to withdraw his lawsuit (in fact, it was only a month ago that his lawyers filed a challenge to force the original case judge to remove himself from the trial) and didn’t give any explanation about his sudden decision. But it comes just one day before the scheduled hearing in San Francisco, where a judge was set to review OpenAI’s request for a case dismissal, and one day after he wildly threatened to ban all Apple devices from his businesses after the Apple and OpenAI partnership was announced at Apple’s developer conference.  

How calls for AI safety could wind up helping heavyweights like OpenAI - Mark Sullivan, Fast Company

Ultimately, companies such as OpenAI aren’t harmed by any of this hand-wringing over safety worries. In fact, they’re helped by it. This news cycle feeds the hype that AI models are on the cusp of achieving “artificial general intelligence,” which would mean models are generally better than human beings at thinking tasks (still aspirational today). And besides, if governments are moved to put tight regulations on AI development, it’ll only entrench the well-monied tech companies that have already built them.

Tuesday, June 18, 2024

Apple Integrates ChatGPT Across Platforms, Unveils Apple Intelligence - Liz Hughes, AI Business

Apple is integrating ChatGPT across its platforms with its new AI software, Apple Intelligence, bringing generative AI to the iPhone, iPad and Mac. The much-anticipated announcement was made during Monday’s keynote at Apple’s Worldwide Developers Conference, where Apple CEO Tim Cook said Apple Intelligence will transform what users can do with Apple’s products, and what the products can do for their users, in this new chapter of Apple innovation.

Apple staged the AI comeback we've been hoping for - but here's where it still needs work - Jason Perlow, ZDnet

During WWDC 2024, Apple introduced the Apple Intelligence platform, which brings generative artificial intelligence (AI) and machine learning to the forefront. This platform utilizes large language and generative models to handle text, images, and in-app actions. This initiative integrates advanced AI capabilities across the Apple ecosystem to transform device interaction. However, current iPhone and iPad users might need to upgrade their devices to take full advantage of these benefits. 

Monday, June 17, 2024

Navigating the generative AI disruption in software - Jeremy Schneider and Tejas Shah with Joshan Cherian Abraham, McKinsey

For all the impressive revelations and technical feats unleashed by the sudden emergence of generative AI (gen AI), one of the most astounding aspects has been the accelerated pace of its adoption, particularly by businesses. Consider that large global enterprises spent around $15 billion on gen AI solutions in 2023, representing about 2 percent of the global enterprise software market. To put that level of growth in perspective, it took four years for enterprise spending on the industry’s last major transformation—software-as-a-service (SaaS)—to reach that same market share milestone (Exhibit 1).

Apple announces Apple Intelligence, its multi-modal generative AI service for Mac, iPhone, iPad - Carl Franzen, Venture Beat

Apple has announced Apple Intelligence, a much-rumored new service combining multiple AI models that aims to provide personalized, private, and secure capabilities across Mac, iPhone, and iPad devices. “It’s aware of your personal data without collecting your personal data,” said Craig Federighi, Apple’s senior vice president of Software Engineering, during the company’s keynote address, pitching the service as more private and secure than rivals because it runs on-device and on private clouds, depending on the AI models that are used.

Sunday, June 16, 2024

The state of AI in early 2024: Gen AI adoption spikes and starts to generate value - McKinsey

Interest in generative AI has also brightened the spotlight on a broader set of AI capabilities. For the past six years, AI adoption by respondents’ organizations has hovered at about 50 percent. This year, the survey finds that adoption has jumped to 72 percent (Exhibit 1). And the interest is truly global in scope. Our 2023 survey found that AI adoption did not reach 66 percent in any region; however, this year more than two-thirds of respondents in nearly every region say their organizations are using AI. Looking by industry, the biggest increase in adoption can be found in professional services.

Make ChatGPT 10x better - OpenAI, Taft Notion

OpenAI has a prompting guide. And it's really good! 

Here are their six strategies for making ChatGPT 10x better: 
1. Write clear instructions
2. Provide reference text
3. Split complex tasks into simpler subtasks
4. Give the model time to "think"
5. Use external tools
6. Test changes systematically
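As a rough illustration of how the first four strategies combine in a single prompt, here is a hypothetical sketch; the wording, the helper function, and the revenue figures are invented for this example and are not from OpenAI's guide:

```python
def build_prompt(reference_text: str, question: str) -> str:
    """Assemble a prompt that applies strategies 1-4 from the guide."""
    return "\n".join([
        # Strategy 1: clear instructions (role, format, constraints).
        "You are a financial analyst. Answer in exactly two sentences.",
        # Strategy 2: provide reference text, clearly delimited.
        "Use only the article between triple quotes to answer.",
        '"""' + reference_text + '"""',
        # Strategy 3: split the complex task into simpler subtasks.
        "Step 1: List every figure mentioned in the article.",
        "Step 2: Compute the year-over-year change.",
        "Step 3: Answer the question using Steps 1 and 2.",
        # Strategy 4: give the model time to "think".
        "Show your reasoning for each step before the final answer.",
        "Question: " + question,
    ])

article = "Q2 revenue was $3.4B, double the $1.7B reported a year earlier."
print(build_prompt(article, "How fast is revenue growing?"))
```

Strategies 5 and 6 operate outside the prompt itself: wiring in external tools (calculators, retrieval) for work the model does poorly, and A/B testing prompt changes against a fixed set of examples rather than eyeballing single responses.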

Saturday, June 15, 2024

How to lose your job to AI. - Julia McCoy, YouTube

As an eternal optimist and opportunist, I like to remain one step ahead. Stay to the end – there’s real hope and a call to action that could save your career, life, and legacy. Truth is, we need to be ready for when AI automates 100% of human labor. This video will help your mindset go in the right direction. Maybe one that's uncomfortable and feels entirely new to you. But very, very much needed.

Doing Stuff with AI: Opinionated Midyear Edition - Ethan Mollick, One Useful Thing

The core of serious work with generative AI is the Large Language Model, the technology enabled by the paper celebrated in the song above. I won’t spend a lot of time on LLMs and how they work, but there are now some excellent explanations out there. My favorites are the Jargon-Free Guide and this more technical (but remarkably clear) video, but the classic work by Wolfram is also good. You don’t need to know any of these details, since LLMs don’t require technical knowledge to use, but they can serve as useful background. To learn to do serious stuff with AI, choose a Large Language Model and just use it to do serious stuff - get advice, summarize meetings, generate ideas, write, produce reports, fill out forms, discuss strategy - whatever you do at work, ask the AI to help.

Friday, June 14, 2024

The Reversal Curse Returns - JURGEN GRAVESTEIN, Substack

The ‘Reversal Curse’ refers to a 2023 study that showed that large language models that learn “A is B” don’t automatically generalize “B is A”. A recent pre-print paper that focuses on medical Visual Question Answering (MedVQA) suggests this phenomenon also transfers to multimodal models. Uh-oh! While these models continue to shatter records on industry benchmarks, the researchers call into question the robustness of these evals: what are they even measuring?

OpenAI's new financial milestone - Martin Crowley, AI Tool Report

During an internal all-hands meeting on Wednesday, OpenAI CEO, Sam Altman, announced that the company is set to hit $3.4B in annual revenue this year, which is double what it made last year. OpenAI’s revenue has grown rapidly since it launched ChatGPT—making $1.3B in 2023—thanks to its strategic initiatives, including enterprise partnerships and advancing its AI models. OpenAI clearly plans to maintain this rapid growth trajectory, as it recently hired a new CFO (ex-Nextdoor CEO, Sarah Friar) who will manage OpenAI’s finances and support global growth. This comes after Apple confirmed its partnership with OpenAI during its developer conference this week, which will see ChatGPT integrated into its devices and voice assistant, Siri, but this partnership might not contribute to OpenAI’s annualized revenue.

Thursday, June 13, 2024

AI used to predict potential new antibiotics in groundbreaking study - Eric Berger, the Guardian

A new study used machine learning to predict potential new antibiotics in the global microbiome, which study authors say marks a significant advance in the use of artificial intelligence in antibiotic resistance research. The report, published Wednesday in the journal Cell, details the findings of scientists who used an algorithm to mine the “entirety of the microbial diversity that we have on earth – or a huge representation of that – and find almost 1m new molecules encoded or hidden within all that microbial dark matter”, said César de la Fuente, an author of the study and professor at the University of Pennsylvania. De la Fuente directs the Machine Biology Group, which aims to use computers to accelerate discoveries in biology and medicine.

Introducing Stable Audio Open - An Open Source Model for Audio Samples and Sound Design - Stability AI

Stable Audio Open is an open source text-to-audio model for generating up to 47 seconds of samples and sound effects. Users can create drum beats, instrument riffs, ambient sounds, foley and production elements. The model enables audio variations and style transfer of audio samples. Stable Audio Open, on the other hand, specialises in audio samples, sound effects and production elements. While it can generate short musical clips, it is not optimised for full songs, melodies or vocals. This open model provides a glimpse into generative AI for sound design while prioritising responsible development alongside creative communities. The new model was trained on audio data from FreeSound and the Free Music Archive. This allowed us to create an open audio model while respecting creator rights.

Wednesday, June 12, 2024

AI Is Your Coworker Now. Can You Trust It? - Kate O'Flaherty, Wired

Generative AI tools such as OpenAI’s ChatGPT and Microsoft’s Copilot are becoming part of everyday business life. But they come with privacy and security considerations you should know about. For those using generative AI at work, one of the biggest challenges is the risk of inadvertently exposing sensitive data. Most generative AI systems are “essentially big sponges,” says Camden Woollven, group head of AI at risk management firm GRC International Group. “They soak up huge amounts of information from the internet to train their language models.”

Perhaps the most important presentation in 2024 - by Nvidia's Jensen Huang introducing NIMs

This copy of the speech includes anecdotes by analyst Wes Roth. In sum, it is a great report on where we are with GenAI today, and where we are heading in the future.

Tuesday, June 11, 2024

Sam Altman Admits That OpenAI Doesn't Actually Understand How Its AI Works - Futurism

During last week's International Telecommunication Union AI for Good Global Summit in Geneva, Switzerland, OpenAI CEO Sam Altman was stumped after being asked how his company's large language models (LLM) really function under the hood. "We certainly have not solved interpretability," he said, as quoted by the Observer, essentially saying the company has yet to figure out how to trace back their AI models' often bizarre and inaccurate output and the decisions it made to come to those answers. Other AI companies are trying to find new ways to "open the black box" by mapping the artificial neurons of their algorithms. For instance, OpenAI competitor Anthropic recently took a detailed look at the inner workings of one of its latest LLMs called Claude Sonnet as a first step.

Sam Altman REVEALS the "Future of AI" - Wes Roth, YouTube

Sam Altman makes an appearance at the AI for Good Global Summit, joining remotely while his interviewer is live. He gives a little preview of what's coming next and what AI will bring in the very near future. By this point you have probably heard that OpenAI has recently begun training its next frontier model. We're not exactly sure what to call it; it's not GPT-5, by the sound of it, or at least not what we would think of as GPT-5. According to announcements at the Microsoft Build conference and at the AI Summit in Paris, France, OpenAI will be dropping another model later this year.

Monday, June 10, 2024

Gen AI and the future of work - McKinsey Quarterly

Generative AI is front and center for nearly every industry and is poised to change just about everything. What will it mean for your workers? The development and widespread public use of generative AI (gen AI) accelerated dramatically in the months following ChatGPT’s launch. Gen AI is hardly a passing fad or a niche innovation: it means business and could add as much as $4.4 trillion annually to the global economy. Gen AI has the potential to enhance productivity across industries. While that may affect some workers more than others, it will change ways of working for almost everyone.

AI products like ChatGPT much hyped but not much used, study says - Tom Singleton, BBC

Very few people are regularly using "much hyped" artificial intelligence (AI) products like ChatGPT, a survey suggests. Researchers surveyed 12,000 people in six countries, including the UK, with only 2% of British respondents saying they use such tools on a daily basis. But the study, from the Reuters Institute and Oxford University, says young people are bucking the trend, with 18 to 24-year-olds the most eager adopters of the tech. The findings were based on responses to an online questionnaire fielded in six countries: Argentina, Denmark, France, Japan, the UK, and the USA. The majority expect generative AI to have a large impact on society in the next five years, particularly for news, media and science. Most said they think generative AI will make their own lives better. When asked whether generative AI will make society as a whole better or worse, people were generally more pessimistic.

Sunday, June 09, 2024

An Early Look at ChatGPT-5: Advances, Competitors, and What to Expect - Marc Emmer, Inc.

Details surrounding ChatGPT-5 remain shrouded in secrecy, yet some clues offer a glimpse into its potential. CEO Sam Altman has hinted at a smarter, more versatile model capable of handling a more comprehensive array of tasks. Industry speculation is that GPT-5 may be multimodal, potentially processing text, images, video, and even music. One intriguing possibility is a shift from a chatbot model to an agent, enabling GPT-5 to autonomously execute real-world actions. This could revolutionize how AI interacts with the digital and physical world, potentially automating complex tasks and decision-making processes.

ChatGPT Is Coming For Higher Education, Says OpenAI - Forbes

OpenAI has announced ChatGPT Edu. This will be a specialized version of its AI platform designed specifically for universities. This move aims to deploy AI across academic, research and operational teams on campuses around the world. Set to launch this summer, ChatGPT Edu includes the latest GPT-4o model with advanced reasoning capabilities across text, audio and vision. It offers robust administrative controls, data security and high usage limits. Kyle Bowen, deputy CIO at ASU, stated, “Integrating OpenAI’s technology into our educational frameworks accelerates transformation. We’re collaborating to harness these tools, extending our learnings as a scalable model for other institutions.”

Saturday, June 08, 2024

Employers appear more likely to offer interviews, higher pay to those with AI skills, study says - Carolyn Crist, Higher Ed Dive

Employers are significantly more likely to offer job interviews and higher salaries to job candidates with experience related to artificial intelligence, according to a new study published in the journal Oxford Economic Papers. Specifically, college graduates with “AI capital” or business-related AI studies listed on their resumes and cover letters were far more likely to receive an interview invitation and higher wage offers. “In the UK, AI is causing dramatic shifts in the workforce, and firms need to respond to these demands by upgrading their workforces through enhancing their AI skills levels,” study author Nick Drydakis, a professor of economics at Anglia Ruskin University in Cambridge, said in a statement.

ASU faculty create AI-powered ‘buddy’ to help online students learn language - Stephanie King, ASU News

When it comes to language learning, communication is the ultimate goal. But for communication to take place, you need a partner. And that’s not always possible for students in ASU Online language courses; diverse student body learning needs and scheduling demands can make it challenging to hold synchronous instruction and virtual peer meetups.  Christiane Reves, an assistant teaching professor of German in Arizona State University’s School of International Letters and Cultures, and colleagues in her department think that “Language Buddy” — a custom GPT they created in ChatGPT Enterprise as part of the university’s AI Innovation Challenge — could be the solution. Powered by generative artificial intelligence (AI), Language Buddy will allow students to participate in conversations at their language level — anytime, anywhere.

Friday, June 07, 2024

Perplexity AI’s new feature will turn your searches into shareable pages - Ivan Mehta, Tech Crunch

With Perplexity Pages, the unicorn is aiming to help users make reports, articles or guides in a visually appealing format. Free and paid users can find the option to create a page in the library section. They just need to enter a prompt, such as “Information about Sahara Desert,” for the tool to start creating a page. Users can select an audience type — beginner, advanced or anyone — to shape the tone of the generated text. Perplexity said its algorithms work to create a detailed article with different sections. You can ask the AI tool to rewrite or reformat any sections or even remove them. Plus, you can add a section by prompting the tool to write about a certain subtopic. Perplexity also helps you find and insert relevant media items such as images and videos.

These Three Execs Are Charting An Ethical Future For AI And Music - Kristin Robinson, Billboard

Artificial Intelligence is one of the buzziest — and most rapidly changing — areas of the music business today. A year after the fake-Drake song signaled the technology’s potential applications (and dangers), industry lobbyists on Capitol Hill, like RIAA’s Tom Clees, are working to create guardrails to protect musicians — and maybe even get them paid. Meanwhile, entrepreneurs like Soundful’s Diaa El All and BandLab’s Meng Ru Kuok (who oversees the platform as founder and CEO of its parent company, Caldecott Music Group) are showing naysayers that AI can enhance human creativity rather than just replace it.

Thursday, June 06, 2024

The AI Gender Bias Epidemic: How to Stop It - AutoGPT

You already know how AI is reshaping industries, economies, and societies at an unprecedented pace. From helping doctors diagnose diseases to predicting financial trends, AI has transcended traditional boundaries, promising efficiency, accuracy, and advancement.  Yet, amidst the awe-inspiring potential of AI, it is important to understand the underlying biases that have found their way into these technologies. In this article, we’ll explore the ins and outs of AI gender bias – examples, impacts, and proactive solutions to mitigate its adverse effects.

OpenAI is training GPT-4's successor. Here are 3 big upgrades to expect from GPT-5 - Sabrina Ortiz, ZDnet

AGI could mean asking agents to accomplish an end goal, which they could achieve by reasoning about what needs to be done, planning how to do it, and carrying the task out. For example, in an ideal scenario where GPT-5 achieved AGI, you would be able to request a task such as "Order a burger from McDonald's for me," and the AI would be able to complete a series of tasks that include opening the McDonald's website, and inputting your order, address, and payment method. All you'd have to worry about is eating the burger.
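The reason-plan-execute loop described above can be sketched in a few lines of code. This is a hedged illustration only: the function names and the canned plan are hypothetical, standing in for what would, in a real agent, be model-driven reasoning and tool calls.

```python
# Minimal sketch of the plan-then-execute agent loop described above.
# All names here are illustrative, not a real agent framework's API.

def plan(goal):
    """'Reason' about the goal by breaking it into ordered steps.
    A real agent would derive these steps with a language model;
    here the plan is hard-coded for the burger example."""
    if goal == "Order a burger from McDonald's for me":
        return [
            "open the McDonald's website",
            "input the order",
            "input the address",
            "input the payment method",
        ]
    return [goal]  # fall back to treating the goal as a single step

def execute(step):
    """Carry out one step; here we only simulate success."""
    return f"done: {step}"

def run_agent(goal):
    """Plan first, then execute each step in order."""
    return [execute(step) for step in plan(goal)]

results = run_agent("Order a burger from McDonald's for me")
print(results)
```

The point of the sketch is the separation of concerns: planning (deciding what to do) is distinct from execution (doing it), which is what would let an agent chain browser actions like those in the McDonald's example.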

Wednesday, June 05, 2024

After former and current OpenAI employees released an open letter claiming they're being silenced against raising safety issues, one of the letter's signees, former OpenAI researcher Daniel Kokotajlo, made an even more terrifying prediction: that the odds AI will either destroy or catastrophically harm humankind are greater than a coin flip. Kokotajlo's spiciest claim, though, was that the chance AI will wreck humanity is around 70 percent — odds you wouldn't accept for any major life event, but that OpenAI and its ilk are barreling ahead with anyway. The term "p(doom)," which is AI-speak for the probability that AI will usher in doom for humankind, is the subject of constant controversy in the machine learning world.

The Crucial Difference Between AI And AGI - Forbes

Artificial Intelligence (AI) is a transformative force that is reshaping industries from healthcare to finance today. Yet the distinction between AI and Artificial General Intelligence (AGI) is not always clearly understood, causing confusion as well as fear. AI is designed to excel at specific tasks, while AGI is a theoretical concept: a system capable of performing any intellectual task that a human can perform, across a wide range of activities. While AI already improves our daily lives and workflows through automation and optimization, the emergence of AGI would be a transformative leap, radically expanding the capabilities of machines and redefining what it means to be human.

Tuesday, June 04, 2024

AI is not yet killing jobs - the Economist

After astonishing breakthroughs in artificial intelligence, many people worry that they will end up on the economic scrapheap. Global Google searches for “is my job safe?” have doubled in recent months, as people fear that they will be replaced with large language models (LLMs). Some evidence suggests that widespread disruption is coming. In a recent paper Tyna Eloundou of OpenAI and colleagues say that “around 80% of the US workforce could have at least 10% of their work tasks affected by the introduction of LLMs”. White-collar roles are thought to be especially vulnerable to generative AI, which is becoming ever better at logical reasoning and creativity. However, there is as yet little evidence of an AI hit to employment.

Meta introduces Chameleon, a state-of-the-art multimodal model - the Verge

As competition in the generative AI field shifts toward multimodal models, Meta has released a preview of what could be its answer to the models released by frontier labs. Chameleon, its new family of models, has been designed to be natively multimodal rather than stitching together components built for different modalities. While Meta has not released the models yet, its reported experiments show that Chameleon achieves state-of-the-art performance in various tasks, including image captioning and visual question answering (VQA), while remaining competitive in text-only tasks.

Monday, June 03, 2024

Google targets filmmakers with Veo, its new generative AI video model - Jess Weatherbed, the Verge

Veo has “an advanced understanding of natural language,” according to Google’s press release, enabling the model to understand cinematic terms like “timelapse” or “aerial shots of a landscape.” Users can direct their desired output using text, image, or video-based prompts, and Google says the resulting videos are “more consistent and coherent,” depicting more realistic movement for people, animals, and objects throughout shots. Google DeepMind CEO Demis Hassabis said in a press preview on Monday that video results can be refined using additional prompts and that Google is exploring additional features to enable Veo to produce storyboards and longer scenes.

AI More Friend than Foe? - CNBC

Sal Khan, CEO of Khan Academy, joins CNBC's 'The Exchange' to discuss the academy's partnership with Microsoft, the outcomes students will see from AI, and more.

Sunday, June 02, 2024

Europe, universities and industry launch Emotion AI masters - Karen MacGregor, University World News

A masters in Emotion AI – an emerging subset of artificial intelligence that interprets and responds to human emotions – kicks off next year across eight universities in six European countries. The masters is blended and transdisciplinary, sits at the cutting edge of applied AI, and will spin off modules for mass AI upskilling. It is also interesting as an example of Europe’s expanded number of postgraduate degrees produced in partnerships between universities and the private sector – in this case under EIT Digital, the digital arm of the European Institute of Innovation and Technology (EIT), an independent body of the European Union.

2024 EDUCAUSE Action Plan: AI Policies and Guidelines - Jenay Robert and Mark McCormack, EDUCAUSE

More than a year after the "AI spring" suddenly upended notions of what could be possible both inside and outside the classroom, most institutions are still racing to catch up and establish policies and guidelines that can help their leaders, staff, faculty, and students effectively and safely use these exciting and powerful new technologies and practices. Thankfully, institutions need not start from scratch in developing their AI policies and guidelines. Through the work of Cecilia Ka Yuk Chan and WCET, institutions have a foundation to build on, a policy framework that spans institutional governance, operations, and pedagogy. Built around these three pillars, this framework helps ensure that institutional AI-related policies and guidelines comprehensively address critical aspects of institutional life and functioning.

Saturday, June 01, 2024

AI pioneer LeCun to next-gen AI builders: ‘Don’t focus on LLMs’ - Taryn Plumb, Venture Beat

AI pioneer Yann LeCun kicked off an animated discussion today after telling the next generation of developers not to work on large language models (LLMs). “This is in the hands of large companies, there’s nothing you can bring to the table,” LeCun said at VivaTech today in Paris. “You should work on next-gen AI systems that lift the limitations of LLMs.” The comments from Meta’s chief AI scientist and NYU professor quickly kicked off a flurry of questions and sparked a conversation on the limitations of today’s LLMs.

Prepare to Get Manipulated by Emotionally Expressive Chatbots - Will Knight, Wired

It’s nothing new for computers to mimic human social etiquette, emotion, or humor. We just aren’t used to them doing it very well. That is changing with the updated AI model called GPT-4o, which OpenAI says is better able to make sense of visual and auditory input, a capability it describes as “multimodal.” You can point your phone at something, like a broken coffee cup or a differential equation, and ask ChatGPT to suggest what to do. But the most arresting part of OpenAI’s demo was ChatGPT’s new “personality.” The upgraded chatbot spoke with a sultry female voice that struck many as reminiscent of Scarlett Johansson, who played the artificially intelligent operating system in the movie Her. Throughout the demo, ChatGPT used that voice to adopt different emotions, laugh at jokes, and even deliver flirtatious responses—mimicking human experiences software does not really have.

Friday, May 31, 2024

Introducing ChatGPT Edu - OpenAI

We're announcing ChatGPT Edu, a version of ChatGPT built for universities to responsibly deploy AI to students, faculty, researchers, and campus operations. Powered by GPT-4o, ChatGPT Edu can reason across text and vision and use advanced tools such as data analysis. This new offering includes enterprise-level security and controls and is affordable for educational institutions. We built ChatGPT Edu because we saw the success universities like the University of Oxford, Wharton School of the University of Pennsylvania, University of Texas at Austin, Arizona State University, and Columbia University in the City of New York were having with ChatGPT Enterprise.

Top AI models exposed - Martin Crowley, AI Tool Report

The UK AI Safety Institute (AISI) has revealed, ahead of the AI summit in Seoul, that five of the most popular large language models (LLMs) are “highly vulnerable” to even the most basic jailbreaking attempts, in which people trick an AI model into ignoring the safeguards that are in place to prevent harmful responses. Although AISI has chosen not to disclose which LLMs were vulnerable (instead referring to them as the red, purple, green, blue, and yellow models in its report), it has stated that all five are publicly available. AISI performed a series of tests on each LLM to establish whether it was vulnerable to jailbreaks, whether it could be used to facilitate cyberattacks, and whether it was capable of completing tasks autonomously, without much human intervention.

Thursday, May 30, 2024

Exclusive: Inflection AI reveals new team and plan to embed emotional AI in business bots - Matt Marshall, Venture Beat

When a friend lost a beloved companion cat, Hoffman said, he first asked the leading traditional models what he should do to console his friend, and they all responded roughly the same way, with a list: for example, getting the friend flowers, or offering to help them with daily things. But Pi responded differently: “That must be really hard for your friend, and because you’re a friend, you care about that,” Hoffman recalls it saying. “But you know your friend. What way would you think your friend would most want you to be present for him?” In other words, Hoffman said, Pi knows the list that everyone else does, but it responds with less of a Wikipedia listing, focusing instead on the “emotional fabric” around the question.

Age of the AI agents - Jasmine Wu, Laura Batchelor, Deirdre Bosa, CNBC

AI has moved into a new era – from chatbots to AI agents capable of having instantaneous, real-time conversations as showcased by Microsoft-backed OpenAI’s GPT-4o and Google’s Project Astra. We break down what’s behind the big leap forward, the risks involved, and sit down with Google CEO Sundar Pichai exclusively on the news.

Wednesday, May 29, 2024

Google Is About to Change the Whole Internet — Again: The Company's All-In Investment in AI - John Herrman, New York Magazine

The biggest mystery surrounding Google over the past year has concerned its core product, its original and still primary source of revenue: Will search engines be replaced by AI chatbots? In May, the company offered some clarity: “In the next era of search, AI will do the work so you don’t have to,” according to a video announcing that AI Overviews, Google’s new name for AI-generated answers, would soon be showing up at the top of users’ results pages. It’s a half-step into a future in which the internet, when given a query, doesn’t provide links and clues — it simply answers. (Ed. note: meanwhile, look at CleeAI, which gives plenty of documentation within its search results.)

Is AI Consciousness Even Possible? - AI Knowledge, AutoGPT

Will AI ever be conscious? That’s a question that’s been on everyone’s mind for some time now. The concept of consciousness in AI has long intrigued and captivated the human imagination. As technology advances, questions about the potential for AI to develop consciousness have sparked intense debate and speculation. In this article, we’ll unravel the mysteries surrounding AI consciousness.

Tuesday, May 28, 2024

It’s Time to Believe the AI Hype - Steven Levy, Wired

Some pundits suggest generative AI stopped getting smarter. The explosive demos from OpenAI and Google that started the week show there’s plenty more disruption to come. OpenAI, denying rumors that it would unveil either an AI-powered search product or its next-generation model GPT-5, instead announced something different, but nonetheless eye-popping, on Monday. It was a new flagship model called GPT-4o, to be made available for free, which uses input and output in various modes—text, speech, vision—for disturbingly natural interaction with humans.