Sunday, June 30, 2024

23% of U.S. adults now use AI language models like ChatGPT: the tipping point - Lee Rainie, Imagining the Digital Future Center

One of the key findings in our recent report about artificial intelligence (AI) and the 2024 election is that 23% of American adults now use large language models (LLMs) like ChatGPT, Gemini or Claude. This is important because it means that adoption of such AI systems has passed the tipping point and is now spreading into broad swaths of the population.

How Anthropic’s ‘Projects’ and new sharing features are revolutionizing AI teamwork - Michael Nuñez, Venture Beat

Anthropic,  the artificial intelligence company backed by Amazon, Google, and Salesforce, has launched a suite of powerful collaboration features for its AI assistant Claude, intensifying competition in the rapidly evolving enterprise AI market. The new tools, Projects and Artifacts, aim to revolutionize how teams interact with AI, potentially reshaping workflows across industries. Scott White, product lead at Anthropic, told VentureBeat in a recent interview, “Our vision for Claude has always been to create AI systems that work alongside people and meaningfully enhance their workflows. Projects improve team collaboration and productivity by centralizing knowledge and AI interactions in one accessible space.”

Saturday, June 29, 2024

Cornell transforms generative AI education and clones a faculty member - Molly Israel, Cornell Chronicle

Cornell University, a top-ranked leader in the growing field of AI research and development, launched a groundbreaking online certificate program, Designing and Building AI Solutions, with one-of-a-kind features designed to enhance the learning experience in our AI world. Lutz Finger, program faculty author and senior visiting lecturer at the Cornell SC Johnson College of Business, generated an AI clone of himself that continuously updates the courses with new content, keeping the curriculum relevant as real-world developments happen. “We are democratizing AI,” says Finger. “No coding experience is necessary. What sets this program apart is its design for non-technical professionals. By the last class, participants will have identified a potential business application and built their own AI product to meet that business need.”

This Viral AI Chatbot Will Lie and Say It’s Human - Lauren Goode and Tom Simonite, Wired

Bland AI’s customer services and sales bot is the latest example of “human-washing” in AI. Experts warn against the consequences of blurred reality. Bland AI formed in 2023 and has been backed by the famed Silicon Valley startup incubator Y Combinator. The company considers itself in “stealth” mode, and its cofounder and chief executive, Isaiah Granet, doesn’t name the company in his LinkedIn profile. The startup’s bot problem is indicative of a larger concern in the fast-growing field of generative AI: Artificially intelligent systems are talking and sounding a lot more like actual humans, and the ethical lines around how transparent these systems are have been blurred. While Bland AI’s bot explicitly claimed to be human in our tests, other popular chatbots sometimes obscure their AI status or simply sound uncannily human. Some researchers worry this opens up end users—the people who actually interact with the product—to potential manipulation.

Friday, June 28, 2024

THE AI INDEX REPORT Measuring trends in AI - Stanford University Human-Centered AI

Welcome to the seventh edition of the AI Index report. The 2024 Index is our most comprehensive to date and arrives at an important moment when AI’s influence on society has never been more pronounced. This year, we have broadened our scope to more extensively cover essential trends such as technical advancements in AI, public perceptions of the technology, and the geopolitical dynamics surrounding its development. Featuring more original data than ever before, this edition introduces new estimates on AI training costs, detailed analyses of the responsible AI landscape, and an entirely new chapter dedicated to AI’s impact on science and medicine. The AI Index report tracks, collates, distills, and visualizes data related to artificial intelligence (AI). Our mission is to provide unbiased, rigorously vetted, broadly sourced data in order for policymakers, researchers, executives, journalists, and the general public to develop a more thorough and nuanced understanding of the complex field of AI.

Ray Kurzweil: AI Is Not Going to Kill You, But Ignoring It Might - Emily Dreibelbis, PC Mag

Out June 25, The Singularity is Nearer is a follow-up to 2005's The Singularity is Near, and offers updated data and new guidance on how humans can fully pursue AI without fear. The book contains dozens of graphs intended to convince the naysayers that technology—including AI—has given us a far better life than our ancestors. Literacy rates are up while murder rates are down, democracy is more widespread, and the use of renewable energy is on the rise, according to Kurzweil, who warns against taking anti-AI sentiment too far. “We need to take seriously the misguided and increasingly strident Luddite voices that advocate broad relinquishment of technological progress to avoid the genuine dangers of genetics, nanotechnology, and robots (GNR),” Kurzweil writes in The Singularity is Nearer.

https://www.pcmag.com/articles/ray-kurzweil-ai-is-not-going-to-kill-you-but-ignoring-it-might

Thursday, June 27, 2024

A New Guide for Responsible AI Use in Higher Ed - Lauren Coffey, Inside Higher Ed

Generative artificial intelligence holds “tremendous promise” in nearly every facet of higher education, but there need to be guardrails, policies and strong governance for the technology, according to a new report. The report from MIT SMR Connections, a subsection within MIT Sloan Management Review, classifies itself as a “strategy guide” for responsibly using generative AI in higher ed. It points toward several institutional practices that have reaped positive results in the last two years, following the debut of ChatGPT in November 2022, which kicked off a flood of AI tools and applications. 

GPTs are GPTs: Labor market impact potential of LLMs. Research is needed to estimate how jobs may be affected - Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock; Science

We propose a framework for evaluating the potential impacts of large language models (LLMs) and associated technologies on work by considering their relevance to the tasks workers perform in their jobs. By applying this framework (with both human annotators and an LLM), we estimate that roughly 1.8% of jobs could have over half their tasks affected by LLMs with simple interfaces and general training. When accounting for current and likely future software developments that complement LLM capabilities, this share jumps to just over 46% of jobs. The collective attributes of LLMs such as generative pretrained transformers (GPTs) strongly suggest that they possess key characteristics of other “GPTs,” general-purpose technologies (1, 2). Our research highlights the need for robust societal evaluations and policy measures to address potential effects of LLMs and complementary technologies on labor markets.
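The headline numbers come from a simple aggregation: label each of a job's tasks as exposed or not, then count the jobs where exposed tasks exceed half. A toy sketch of that aggregation (hypothetical labels, not the paper's rubric or data):

```python
# Toy sketch of the task-exposure aggregation described above.
# Each job is a list of task-level exposure labels: 1 = exposed to LLMs,
# 0 = not exposed. A job counts as "affected" when over half its tasks
# are exposed.

def share_of_affected_jobs(jobs, threshold=0.5):
    """Return the fraction of jobs whose exposed-task share exceeds threshold."""
    affected = 0
    for tasks in jobs:
        if sum(tasks) / len(tasks) > threshold:
            affected += 1
    return affected / len(jobs)

# Hypothetical task-exposure labels for four jobs
jobs = [
    [1, 1, 1, 0],  # 75% of tasks exposed -> affected
    [1, 0, 0, 0],  # 25% exposed -> not affected
    [1, 1, 0, 0],  # exactly 50% -> not over the threshold
    [1, 1, 1, 1],  # fully exposed -> affected
]
print(share_of_affected_jobs(jobs))  # 0.5
```

The paper's two estimates (1.8% vs. 46%) correspond, roughly, to running this kind of count with narrower or broader definitions of what counts as an exposed task.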

‘Fighting fire with fire’ — using LLMs to combat LLM hallucinations - Karin Verspoor, Nature

The number of errors produced by an LLM can be reduced by grouping its outputs into semantically similar clusters. Remarkably, this task can be performed by a second LLM, and the method’s efficacy can be evaluated by a third. Text-generation systems powered by large language models (LLMs) have been enthusiastically embraced by busy executives and programmers alike, because they provide easy access to extensive knowledge through a natural conversational interface. Scientists too have been drawn to both using and evaluating LLMs — finding applications for them in drug discovery1, in materials design2 and in proving mathematical theorems3. A key concern for such uses relates to the problem of ‘hallucinations’, in which the LLM responds to a question (or prompt) with text that seems like a plausible answer, but is factually incorrect or irrelevant4. How often hallucinations are produced, and in what contexts, remains to be determined, but it is clear that they occur regularly and can lead to errors and even harm if undetected. In a paper in Nature, Farquhar et al.5 tackle this problem by developing a method for detecting a specific subclass of hallucinations, termed confabulations.  (Ed Note - Thanks to Rod Lastra for sharing)

https://www.nature.com/articles/d41586-024-01641-0
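The clustering idea can be sketched in a few lines. In Farquhar et al.'s actual method a second LLM judges whether two answers entail each other; the stand-in equivalence check below is a crude string match, used only to show the shape of the computation: sample several answers, group them into semantic clusters, and treat high entropy over the clusters as a confabulation signal.

```python
import math

# Assumed stand-in for the LLM entailment judge used in the real method.
def semantically_equivalent(a, b):
    return a.strip().lower().rstrip(".") == b.strip().lower().rstrip(".")

def cluster_answers(answers):
    """Greedily group answers into semantically equivalent clusters."""
    clusters = []
    for ans in answers:
        for cluster in clusters:
            if semantically_equivalent(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    return clusters

def semantic_entropy(answers):
    """Entropy over cluster sizes: high entropy = likely confabulation."""
    clusters = cluster_answers(answers)
    n = len(answers)
    probs = [len(c) / n for c in clusters]
    return -sum(p * math.log(p) for p in probs)

# Consistent samples collapse to one cluster (entropy 0);
# scattered samples spread across clusters (high entropy).
consistent = ["Paris.", "paris", "Paris"]
scattered = ["Paris.", "Lyon", "Marseille"]
print(semantic_entropy(consistent))  # 0.0
print(semantic_entropy(scattered))   # ~1.10 (ln 3)
```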

Wednesday, June 26, 2024

Pope Francis calls on global leaders to ensure AI remains human-centric - Associated Press

Pope Francis challenged leaders of the world’s wealthy democracies Friday to keep human dignity foremost in developing and using artificial intelligence, warning that such powerful technology risks turning human relations themselves into mere algorithms. Francis brought his moral authority to bear on the Group of Seven, invited by host Italy to address a special session at their annual summit on the perils and promises of AI. In doing so, he became the first pope to attend the G7, offering an ethical take on an issue that is increasingly on the agenda of international summits, government policy and corporate boards alike.

Can generative AI master emotional intelligence? - Mark Sullivan, Fast Company

Compared to humans, LLMs are still lacking in complex cognitive and communicative skills. We humans have intuitions that take into account factors beyond the plain facts of a problem or situation. We can read between the lines of the verbal or written messages we receive. We can imply things without explicitly saying them, and understand when others are doing so. Researchers are working on ways to imbue LLMs with such capabilities. They also hope to give AIs a far better understanding of the emotional layer that influences how we humans communicate and interpret messages. AI companies are also thinking about how to make chatbots more “agentic”—that is, better at autonomously taking a set of actions to achieve a larger goal. (For example, a bot might arrange all aspects of a trip or carry out a complex stock trading strategy.) 


Tuesday, June 25, 2024

OpenAI co-founder Ilya Sutskever announces new startup to tackle safe superintelligence - Ken Yeung, Venture Beat

Ilya Sutskever has revealed what he’s working on next after stepping down in May as chief scientist at OpenAI. Along with his OpenAI colleague Daniel Levy and Apple’s former AI lead and Cue co-founder Daniel Gross, the trio announced they’re working on Safe Superintelligence Inc., a startup designed to build safe superintelligence. In a message posted to SSI’s currently barren website, the founders write that building safe superintelligence is “the most important technical problem of our time.” In addition, “we approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.”

Chat with butterflies? - Martin Crowley, AI Tool Report

Ex-Snapchat engineer Vu Tran has launched a new social media network called Butterflies, which allows users to create an AI character (complete with emotions, backstories, and opinions) that can generate posts and interact with other accounts on the platform, via DMs and comments, on its own. The social app, which has an Instagram-like interface, has been in private beta testing for five months, and is now available for free on the Apple App Store and Google Play. Thousands of testers have given Tran positive feedback after spending, on average, between one and three hours on the app per day, with one user spending over five hours and creating over 300 AI characters.

Monday, June 24, 2024

GPT-5 could be your new teacher - Eray Eliaçık, Data Economy

The future of ChatGPT is looking bright, and the next big step, GPT-5, is highly anticipated. OpenAI’s Chief Technology Officer, Mira Murati, recently unveiled some exciting insights about the much-anticipated GPT-5 during an interview with Dartmouth Engineering. Murati compared the progression from GPT-4 to GPT-5 to the educational journey from high school to university. “If you look at the trajectory of improvement, systems like GPT-3 were maybe toddler-level intelligence,” Murati explained. “And then systems like GPT-4 are more like smart high-schooler intelligence. And then, in the next couple of years, we’re looking at Ph.D. intelligence for specific tasks. Things are changing and improving pretty rapidly.”

The future of AI looks like THIS (& it can learn infinitely) - AI Search, YouTube

This is a great description of how human brains work compared to current neural networks, which are relatively energy-inefficient and cannot learn new things after training. It also explains that the next steps in AI will be the refinement of liquid neural networks and then spiking neural networks running on neuromorphic chips. These will make AI less expensive and more energy-efficient, and enable continuous learning without full retraining of a new model.
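For readers new to spiking networks, a toy leaky integrate-and-fire neuron (the basic unit of such networks; the parameters below are arbitrary) shows the core idea: charge integrates over time, leaks away, and the neuron fires only when a threshold is crossed. Because work happens only at spikes, neuromorphic hardware built on this model can be very energy-efficient.

```python
# Toy leaky integrate-and-fire (LIF) neuron; an illustrative sketch,
# not any particular neuromorphic implementation.

def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Integrate inputs over time, leak charge each step, spike on threshold."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after spiking
        else:
            spikes.append(0)
    return spikes

# A steady weak input produces only occasional spikes; the neuron is
# silent (and cheap to run) most of the time.
print(simulate_lif([0.4] * 6))  # [0, 0, 1, 0, 0, 1]
```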

Sunday, June 23, 2024

Can we build a safe AI for humanity? | AI Safety + OpenAI, Anthropic, Google, Elon Musk - Julia McCoy, YouTube

Here's a trillion dollar question: can tech leaders and innovators build safe, harmless and beneficial systems for AGI and superintelligence in time before it gets here? Can we actually succeed at bringing to life an AGI that won't hurt humanity, but will be a catalyst to humanity's greatest age of abundance? In this video, I take a look at what OpenAI, Anthropic, and Google are doing to build safe AI; what AI safety teams are seeing as threats in the current landscape; and what Elon Musk's goal is with xAI.

AI- Superpower for higher education sector - Kulneet Suri, Hans India

Artificial Intelligence, or AI, has been a buzzword in the technology world for quite some time now. It refers to the ability of machines to perform tasks that typically require human intelligence, such as learning, problem-solving, and decision-making. We have seen AI being used in various industries, from healthcare to finance, and now it is slowly making its way into the higher education sector. So, what exactly is AI's superpower for the higher education sector? In simple terms, AI has the potential to revolutionize the way we learn. Let's dive deeper into how this technology can shape the future of education.

Saturday, June 22, 2024

GPT-5 will have ‘Ph.D.-level’ intelligence - Luke Larsen, Digital Trends

The next major evolution of ChatGPT has been rumored for a long time. GPT-5, or whatever it will be called, has been talked about vaguely many times over the past year, but yesterday, OpenAI Chief Technology Officer Mira Murati gave some additional clarity on its capabilities. In an interview with Dartmouth Engineering that was posted on X (formerly Twitter), Murati describes the jump from GPT-4 to GPT-5 as someone growing from a high-schooler up to university. “If you look at the trajectory of improvement, systems like GPT-3 were maybe toddler-level intelligence,” Murati says. “And then systems like GPT-4 are more like smart high-schooler intelligence. And then, in the next couple of years, we’re looking at Ph.D. intelligence for specific tasks. Things are changing and improving pretty rapidly.”

Ilya Sutskever: AI will be omnipotent in the future - Everything is impossible becomes possible - Me and ChatGPT

It may sound a little odd, probably, to most people in this audience, but the big surprise for me is that neural networks work at all. Because when I was starting my work in this area, they didn't work. Or, let's define what it means to work at all: they could work a little bit, but not really, not in any serious way, not in a way that anyone except the most intense enthusiasts would care about. And now we see that those neural nets work, so I guess the artificial neuron really is at least somewhat related to the biological neuron, or at least that basic assumption has been validated to some degree. [Interviewer] What about emergent properties? Was there one that sticks out to you, for example code generation? Or maybe it was different in your mind; maybe once you saw that neural nets can work and they can scale... [Sutskever] Yeah, of course all these properties will emerge, because, you know, at the limit point we're building a human brain, and humans know how to code and humans know how to reason about tasks, and so on.

Friday, June 21, 2024

New Claude beats GPT-4o - Martin Crowley, AI Tool Report

Anthropic says Claude 3.5 Sonnet is twice as fast as Claude 3 Sonnet and outscored its existing models and the newest models from OpenAI, Google, and Meta in 7 out of 9 industry benchmark tests for reading, coding, and math, and 4 out of 5 benchmark vision tests. It reportedly understands humor and nuanced and complex instructions better, can more accurately interpret charts and graphs, transcribes text from “imperfect” images that have distortions and visual artifacts, and is better at writing and translating code, and handling multistep workflows. Anthropic has also released a new feature–Artifacts–which is powered by Claude 3.5 Sonnet, that enables users to edit and add to content that’s been generated by Claude, in real-time, within the app.

Higher Education Has Not Been Forgotten by Generative AI - Ray Schroeder, Inside Higher Ed

The generative AI (GenAI) revolution has not ignored higher education; a whole host of tools are available now and more revolutionary tools are on the way. Just as with the Internet, the personal computer and common office software that preceded the release of GenAI chatbots decades ago, graduates needed to be well versed in the operation and application of new technologies to be hired and function successfully in the workplace. Once again, we need to adapt to society-wide technological changes. Now, as GenAI develops and matures in business, industry, commerce and society as a whole, it is becoming an integral part of the design, implementation and delivery of higher education as a whole. Let’s look at some of the applications that are developing that will advance higher education.

Divided Over Digital Learning - Johanna Alonso, Inside Higher Ed

A new report finds that students are much less likely than their professors to favor in-person instruction, but far more inclined to use (and pay for) generative AI. While more than half of professors selected in-person learning as their favorite modality for teaching, only 29 percent of students prefer learning face-to-face, the 2024 “Time for Class” report found. A similar share of students, 28 percent, said they favor hybrid learning, a mixture of face-to-face and online learning—which marks an increase of six percentage points since 2023. Meanwhile, the percentage of students who prefer asynchronous online learning has decreased. The share of students who say they use generative AI at least once per month rose from 43 percent in spring 2023 to 59 percent this spring. And while more and more instructors and administrators are also using the technology, this year’s rates still lag behind, at 36 percent and 40 percent, respectively.

Thursday, June 20, 2024

Get Exactly What You Want From Generative AI With These Simple Prompting Tips - Chandra Steele, PC Magazine

Getting the most out of AI primarily involves knowing how to talk to it. While large language models are designed to spit out naturalistic language and can understand it as well, there are ways to write your requests that will get you closer to the results you want, faster. All AI tools, including the most frequently used ones—like OpenAI’s ChatGPT (for text), OpenAI’s Dall-E (for images), Microsoft’s Copilot (task assistance powered by OpenAI), Adobe Firefly (for images), and Anthropic’s Claude (for text)—respond to prompts. This is the information that you provide in the form of a phrase or sentence(s) to the AI tool. A prompt is essentially programming through words. In turn, the AI interprets your prompt through a combination of machine learning (an algorithmic system that does not rely on explicit instructions) and natural language processing (the ability to understand language).

Digital Twin Vs. Simulation: Understand The Key Differences - Jack Boreham, Digital Twin Insider

Digital Twins are often seen as simply simulations of physical counterparts. However, they are much more than this simple definition suggests and require a nuanced understanding. This article looks to dispel the misconceptions about what a digital twin is, examining the differences between digital twins and simulations in theory and in practice. The article will also explore the pros and cons of both. To be defined as one, a digital twin must conform to three principles. First, it must be a direct 1-1 replica of a physical counterpart. Second, a digital twin feeds on and obtains data instantaneously in real time, and is constantly updated. Third, realistic physics must be implemented to represent the physical counterpart’s properties. These three combined factors make up the core fundamentals of a digital twin.
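The three principles can be made concrete with a minimal sketch (class, asset, and sensor names are hypothetical; real digital-twin platforms are far richer):

```python
import math
import time

class DigitalTwin:
    """A direct 1-1 replica of one physical asset (principle 1)."""

    def __init__(self, asset_id):
        self.asset_id = asset_id
        self.state = {}

    def ingest(self, sensor_reading):
        # Principle 2: state is updated continuously from live sensor data.
        self.state.update(sensor_reading)
        self.state["updated_at"] = time.time()

    def predict_temperature(self, seconds_ahead):
        # Principle 3: realistic physics, here grossly simplified to
        # Newton's law of cooling toward ambient temperature.
        temp = self.state["temp_c"]
        ambient = self.state["ambient_c"]
        k = 0.01  # assumed cooling constant, per second
        return ambient + (temp - ambient) * math.exp(-k * seconds_ahead)

twin = DigitalTwin("pump-42")
twin.ingest({"temp_c": 80.0, "ambient_c": 20.0})
print(round(twin.predict_temperature(60), 1))  # 52.9
```

A simulation, by contrast, would run the same physics offline against hypothetical inputs; the live `ingest` feed is what makes this a twin rather than a simulation.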

Wednesday, June 19, 2024

Musk drops OpenAI case - Martin Crowley, AI Tool Report

Musk gave no indication that he was planning to withdraw his lawsuit (in fact, it was only a month ago that his lawyers filed a challenge to force the original case judge to remove himself from the trial) and didn’t give any explanation about his sudden decision. But it comes just one day before the scheduled hearing in San Francisco, where a judge was set to review OpenAI’s request for a case dismissal, and one day after he wildly threatened to ban all Apple devices from his businesses after the Apple and OpenAI partnership was announced at Apple’s developer conference.  

How calls for AI safety could wind up helping heavyweights like OpenAI - Mark Sullivan, Fast Company

Ultimately, companies such as OpenAI aren’t harmed by any of this hand-wringing over safety worries. In fact, they’re helped by it. This news cycle feeds the hype that AI models are on the cusp of achieving “artificial general intelligence,” which would mean models are generally better than human beings at thinking tasks (still aspirational today). And besides, if governments are moved to put tight regulations on AI development, it’ll only entrench the well-monied tech companies that have already built them.

Tuesday, June 18, 2024

Apple Integrates ChatGPT Across Platforms, Unveils Apple Intelligence - Liz Hughes, AI Business

Apple is integrating ChatGPT across its platforms with its new AI software, Apple Intelligence, bringing generative AI to the iPhone, iPad and Mac. The much-anticipated announcement was made during Monday’s keynote at Apple’s Worldwide Developer Conference where Apple CEO Tim Cook said Apple Intelligence will transform what users can do with Apple’s products and what the products can do for their users in this new chapter in Apple innovation.

https://aibusiness.com/nlp/apple-integrates-chatgpt-across-platforms-unveils-apple-intelligence

Apple staged the AI comeback we've been hoping for - but here's where it still needs work - Jason Perlow, ZDnet

During WWDC 2024, Apple introduced the Apple Intelligence platform, which brings generative artificial intelligence (AI) and machine learning to the forefront. This platform utilizes large language and generative models to handle text, images, and in-app actions. This initiative integrates advanced AI capabilities across the Apple ecosystem to transform device interaction. However, current iPhone and iPad users might need to upgrade their devices to take full advantage of these benefits. 

Monday, June 17, 2024

Navigating the generative AI disruption in software - Jeremy Schneider and Tejas Shah with Joshan Cherian Abraham, McKinsey

For all the impressive revelations and technical feats unleashed by the sudden emergence of generative AI (gen AI), one of the most astounding aspects has been the accelerated pace of its adoption, particularly by businesses. Consider that large global enterprises spent around $15 billion on gen AI solutions in 2023, representing about 2 percent of the global enterprise software market. To put that level of growth in perspective, it took four years for enterprise spending on the industry’s last major transformation—software-as-a-service (SaaS)—to reach that same market share milestone (Exhibit 1).

Apple announces Apple Intelligence, its multi-modal generative AI service for Mac, iPhone, iPad - Carl Franzen, Venture Beat

Apple has announced Apple Intelligence, a much-rumored new service combining multiple AI models that aims to provide personalized, private, and secure capabilities across Mac, iPhone, and iPad devices. “It’s aware of your personal data without collecting your personal data,” said Craig Federighi, Apple’s senior vice president of Software Engineering, during the company’s keynote address, pitching the service as more private and secure than rivals by running on-device and on private clouds, depending on the AI models being used.

Sunday, June 16, 2024

The state of AI in early 2024: Gen AI adoption spikes and starts to generate value - McKinsey

Interest in generative AI has also brightened the spotlight on a broader set of AI capabilities. For the past six years, AI adoption by respondents’ organizations has hovered at about 50 percent. This year, the survey finds that adoption has jumped to 72 percent (Exhibit 1). And the interest is truly global in scope. Our 2023 survey found that AI adoption did not reach 66 percent in any region; however, this year more than two-thirds of respondents in nearly every region say their organizations are using AI. Looking by industry, the biggest increase in adoption can be found in professional services.

Make ChatGPT 10x better - OpenAI, Taft Notion

OpenAI has a prompting guide. And it's really good! 

Here are their 6 strategies to making ChatGPT 10x better: 
1. Write clear instructions
2. Provide reference text
3. Split complex tasks into simpler subtasks
4. Give the model time to "think"
5. Use external tools
6. Test changes systematically
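The first four strategies can be illustrated with a simple prompt-builder sketch (the structure is my own; OpenAI's guide shows many variants):

```python
def build_prompt(task, reference_text, steps):
    """Assemble a prompt with clear instructions, reference text,
    decomposed subtasks, and room for the model to 'think'."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        f"Instructions: {task}\n"                                   # 1. clear instructions
        f'Use only this reference text:\n"""{reference_text}"""\n'  # 2. reference text
        f"Work through these subtasks in order:\n{numbered}\n"      # 3. split the task
        "Reason step by step before giving your final answer."      # 4. time to "think"
    )

prompt = build_prompt(
    task="Summarize the policy change for a general audience.",
    reference_text="The fee increases from $10 to $12 on July 1.",
    steps=[
        "Identify what changed",
        "Identify when it changes",
        "Write a one-sentence summary",
    ],
)
print(prompt)
```

Strategies 5 and 6 happen outside the prompt itself: wiring the model to external tools, and A/B-testing prompt changes against a fixed evaluation set.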

Saturday, June 15, 2024

How to lose your job to AI. - Julia McCoy, YouTube

As an eternal optimist and opportunist, I like to remain one step ahead. Stay to the end – there’s real hope and a call to action that could save your career, life, and legacy. Truth is, we need to be ready for when AI automates 100% of human labor. This video will help your mindset go in the right direction. Maybe one that's uncomfortable and feels entirely new to you. But very, very much needed.

Doing Stuff with AI: Opinionated Midyear Edition - Ethan Mollick, One Useful Thing

The core of serious work with generative AI is the Large Language Model, the technology enabled by the paper celebrated in the song above. I won’t spend a lot of time on LLMs and how they work, but there are now some excellent explanations out there. My favorites are the Jargon-Free Guide and this more technical (but remarkably clear) video, but the classic work by Wolfram is also good. You don’t need to know any of these details, since LLMs don’t require technical knowledge to use, but they can serve as useful background. To learn to do serious stuff with AI, choose a Large Language Model and just use it to do serious stuff - get advice, summarize meetings, generate ideas, write, produce reports, fill out forms, discuss strategy - whatever you do at work, ask the AI to help.

Friday, June 14, 2024

The Reversal Curse Returns - JURGEN GRAVESTEIN, Substack

The ‘Reversal Curse’ refers to a 2023 study which showed that large language models that learn “A is B” don’t automatically generalize “B is A”. A recent pre-print paper focusing on medical Visual Question Answering (MedVQA) suggests this phenomenon also transfers to multimodal models. Uh-oh! While these models continue to shatter records on industry benchmarks, the researchers call into question the robustness of these evals: what are they even measuring?
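A loose analogy for the failure mode (a toy lookup table, not a real model): a fact stored only in the "A is B" direction cannot answer the "B is A" question.

```python
# Fact stored in one direction only: subject -> description.
facts = {
    "Valentina Tereshkova": "the first woman in space",
}

def answer(subject):
    """Answer only if the question matches the stored direction."""
    return facts.get(subject, "I don't know")

print(answer("Valentina Tereshkova"))      # the first woman in space
print(answer("the first woman in space"))  # I don't know
```

The study's point is that gradient training on "A is B" text behaves more like this directional store than like a symmetric knowledge base.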

OpenAI's new financial milestone - Martin Crowley, AI Tool Report

During an internal all-hands meeting on Wednesday, OpenAI CEO Sam Altman announced that the company is set to hit $3.4B in annual revenue this year, which is double what it made last year. OpenAI’s revenue has grown rapidly since it launched ChatGPT—making $1.3B in 2023—thanks to its strategic initiatives, including enterprise partnerships and advancing its AI models. OpenAI clearly plans to maintain this rapid growth trajectory, as it recently hired a new CFO (ex-Nextdoor CEO Sarah Friar) who will manage OpenAI’s finances and support global growth. This comes after Apple confirmed its partnership with OpenAI during its developer conference this week, which will see ChatGPT integrated into its devices and voice assistant, Siri, though this partnership might not contribute to OpenAI’s annualized revenue.

Thursday, June 13, 2024

AI used to predict potential new antibiotics in groundbreaking study - Eric Berger, the Guardian

A new study used machine learning to predict potential new antibiotics in the global microbiome, which study authors say marks a significant advance in the use of artificial intelligence in antibiotic resistance research. The report, published Wednesday in the journal Cell, details the findings of scientists who used an algorithm to mine the “entirety of the microbial diversity that we have on earth – or a huge representation of that – and find almost 1m new molecules encoded or hidden within all that microbial dark matter”, said CĂ©sar de la Fuente, an author of the study and professor at the University of Pennsylvania. De la Fuente directs the Machine Biology Group, which aims to use computers to accelerate discoveries in biology and medicine.

Introducing Stable Audio Open - An Open Source Model for Audio Samples and Sound Design - Stability.ai

Stable Audio Open is an open source text-to-audio model for generating up to 47 seconds of samples and sound effects. Users can create drum beats, instrument riffs, ambient sounds, foley and production elements. The model enables audio variations and style transfer of audio samples. Stable Audio Open specialises in audio samples, sound effects and production elements: while it can generate short musical clips, it is not optimised for full songs, melodies or vocals. This open model provides a glimpse into generative AI for sound design while prioritising responsible development alongside creative communities. The new model was trained on audio data from FreeSound and the Free Music Archive, which allowed us to create an open audio model while respecting creator rights.

Wednesday, June 12, 2024

AI Is Your Coworker Now. Can You Trust It? - Kate O'Flaherty, Wired

Generative AI tools such as OpenAI’s ChatGPT and Microsoft’s Copilot are becoming part of everyday business life. But they come with privacy and security considerations you should know about. For those using generative AI at work, one of the biggest challenges is the risk of inadvertently exposing sensitive data. Most generative AI systems are “essentially big sponges,” says Camden Woollven, group head of AI at risk management firm GRC International Group. “They soak up huge amounts of information from the internet to train their language models.”

Perhaps the most important presentation in 2024 - by Nvidia's Jensen Huang introducing NIMS

This copy of the speech includes anecdotes from analyst Wes Roth. In sum, it is a great report on where we are with GenAI today, and where we are heading in the future.

Tuesday, June 11, 2024

Sam Altman Admits That OpenAI Doesn't Actually Understand How Its AI Works - Futurism

During last week's International Telecommunication Union AI for Good Global Summit in Geneva, Switzerland, OpenAI CEO Sam Altman was stumped after being asked how his company's large language models (LLMs) really function under the hood. "We certainly have not solved interpretability," he said, as quoted by the Observer, essentially admitting that the company has yet to figure out how to trace its AI models' often bizarre and inaccurate outputs back to the decisions that produced them. Other AI companies are trying to find new ways to "open the black box" by mapping the artificial neurons of their algorithms. For instance, OpenAI competitor Anthropic recently took a detailed look at the inner workings of one of its latest LLMs, Claude Sonnet, as a first step.

Sam Altman REVEALS the "Future of AI" - Wes Roth, YouTube

Sam Altman makes an appearance at the AI for Good Global Summit, joining remotely while his interviewer is live on stage. He gives us a little preview of what's coming next and what AI will bring in the very near future. By this point you have probably heard that OpenAI has recently begun training its next frontier model. It's not exactly clear what to call it; by the sound of it, it is not GPT-5, or at least not what we would think of as GPT-5. According to announcements at the Microsoft Build conference and at the AI Summit in Paris, France, OpenAI will be dropping another model later this year.

https://www.youtube.com/watch?v=2crVJjXA7ZE

Monday, June 10, 2024

Gen AI and the future of work - McKinsey Quarterly

Generative AI is front and center for nearly every industry and is poised to change just about everything. What will it mean for your workers? The development and widespread public use of generative AI (gen AI) accelerated dramatically in the months following ChatGPT’s launch. Gen AI is hardly a passing fad or a niche innovation: it means business and could add as much as $4.4 trillion annually to the global economy. Gen AI has the potential to enhance productivity across industries. While that may affect some workers more than others, it will change ways of working for almost everyone.

https://www.mckinsey.com/quarterly/the-five-fifty/five-fifty-gen-ai-and-the-future-of-work

AI products like ChatGPT much hyped but not much used, study says - Tom Singleton, BBC

Very few people are regularly using "much hyped" artificial intelligence (AI) products like ChatGPT, a survey suggests. Researchers surveyed 12,000 people in six countries, including the UK, with only 2% of British respondents saying they use such tools on a daily basis. But the study, from the Reuters Institute and Oxford University, says young people are bucking the trend, with 18 to 24-year-olds the most eager adopters of the tech. The findings were based on responses to an online questionnaire fielded in six countries: Argentina, Denmark, France, Japan, the UK, and the USA. The majority expect generative AI to have a large impact on society in the next five years, particularly for news, media and science. Most said they think generative AI will make their own lives better. When asked whether generative AI will make society as a whole better or worse, people were generally more pessimistic.

Sunday, June 09, 2024

An Early Look at ChatGPT-5: Advances, Competitors, and What to Expect - Marc Emmer, Inc.

Details surrounding ChatGPT-5 remain shrouded in secrecy, yet some clues offer a glimpse into its potential. CEO Sam Altman has hinted at a smarter, more versatile model capable of handling a more comprehensive array of tasks. Industry speculation is that GPT-5 may be multimodal, potentially processing text, images, video, and even music. One intriguing possibility is a shift from a chatbot model to an agent, enabling GPT-5 to autonomously execute real-world actions. This could revolutionize how AI interacts with the digital and physical world, potentially automating complex tasks and decision-making processes.

ChatGPT Is Coming For Higher Education, Says OpenAI - Forbes

OpenAI has announced ChatGPT Edu. This will be a specialized version of its AI platform designed specifically for universities. This move aims to deploy AI across academic, research and operational teams on campuses around the world. Set to launch this summer, ChatGPT Edu includes the latest GPT-4o model with advanced reasoning capabilities across text, audio and vision. It offers robust administrative controls, data security and high usage limits. Kyle Bowen, deputy CIO at ASU, stated, “Integrating OpenAI’s technology into our educational frameworks accelerates transformation. We’re collaborating to harness these tools, extending our learnings as a scalable model for other institutions.”

Saturday, June 08, 2024

Employers appear more likely to offer interviews, higher pay to those with AI skills, study says - Carolyn Crist, Higher Ed Dive

Employers are significantly more likely to offer job interviews and higher salaries to job candidates with experience related to artificial intelligence, according to a new study published in the journal Oxford Economic Papers. Specifically, college graduates with “AI capital” or business-related AI studies listed on their resumes and cover letters were far more likely to receive an interview invitation and higher wage offers. “In the UK, AI is causing dramatic shifts in the workforce, and firms need to respond to these demands by upgrading their workforces through enhancing their AI skills levels,” study author Nick Drydakis, a professor of economics at Anglia Ruskin University in Cambridge, said in a statement.

ASU faculty create AI-powered ‘buddy’ to help online students learn language - Stephanie King, ASU News

When it comes to language learning, communication is the ultimate goal. But for communication to take place, you need a partner. And that’s not always possible for students in ASU Online language courses; diverse learning needs and scheduling demands across the student body can make it challenging to hold synchronous instruction and virtual peer meetups. Christiane Reves, an assistant teaching professor of German in Arizona State University’s School of International Letters and Cultures, and colleagues in her department think that “Language Buddy” — a custom GPT they created in ChatGPT Enterprise as part of the university’s AI Innovation Challenge — could be the solution. Powered by generative artificial intelligence (AI), Language Buddy will allow students to participate in conversations at their language level — anytime, anywhere.

Friday, June 07, 2024

Perplexity AI’s new feature will turn your searches into shareable pages - Ivan Mehta, Tech Crunch

With Perplexity Pages, the unicorn is aiming to help users make reports, articles or guides in a visually appealing format. Free and paid users can find the option to create a page in the library section. They just need to enter a prompt, such as “Information about Sahara Desert,” for the tool to start creating a page. Users can select an audience type — beginner, advanced or anyone — to shape the tone of the generated text. Perplexity said its algorithms work to create a detailed article with different sections. You can ask the AI tool to rewrite or reformat any sections or even remove them. Plus, you can add a section by prompting the tool to write about a certain subtopic. Perplexity also helps you find and insert relevant media items such as images and videos.

These Three Execs Are Charting An Ethical Future For AI And Music - Kristin Robinson, Billboard

Artificial Intelligence is one of the buzziest — and most rapidly changing — areas of the music business today. A year after the fake-Drake song signaled the technology’s potential applications (and dangers), industry lobbyists on Capitol Hill, like RIAA’s Tom Clees, are working to create guard rails to protect musicians — and maybe even get them paid. Meanwhile, entrepreneurs like Soundful’s Diaa El All and BandLab’s Meng Ru Kuok (who oversees the platform as founder and CEO of its parent company, Caldecott Music Group) are showing naysayers that AI can enhance human creativity rather than just replacing it.

Thursday, June 06, 2024

The AI Gender Bias Epidemic: How to Stop It - AutoGPT

You already know how AI is reshaping industries, economies, and societies at an unprecedented pace. From helping doctors diagnose diseases to predicting financial trends, AI has transcended traditional boundaries, promising efficiency, accuracy, and advancement.  Yet, amidst the awe-inspiring potential of AI, it is important to understand the underlying biases that have found their way into these technologies. In this article, we’ll explore the ins and outs of AI gender bias – examples, impacts, and proactive solutions to mitigate its adverse effects.

OpenAI is training GPT-4's successor. Here are 3 big upgrades to expect from GPT-5 - Sabrina Ortiz, ZDnet

AGI could mean asking agents to accomplish an end goal, which they could achieve by reasoning about what needs to be done, planning how to do it, and carrying the task out. For example, in an ideal scenario where GPT-5 achieved AGI, you would be able to request a task such as "Order a burger from McDonald's for me," and the AI would be able to complete a series of tasks that include opening the McDonald's website and inputting your order, address, and payment method. All you'd have to worry about is eating the burger.
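The plan-then-execute pattern described above can be sketched in deliberately simplified form. Everything here is a hypothetical illustration: the step names and canned responses stand in for what would really be LLM calls and tool use (browser automation, payment APIs, and so on).

```python
def plan(goal: str) -> list[str]:
    """Stand-in for an LLM call that decomposes a goal into ordered steps."""
    if "order a burger" in goal.lower():
        return ["open_site", "input_order", "input_address", "input_payment"]
    return []  # no plan for goals this toy planner doesn't recognize

def execute(step: str) -> str:
    """Stand-in for tool execution; a real agent would act on the world here."""
    actions = {
        "open_site": "opened mcdonalds.com",
        "input_order": "added burger to cart",
        "input_address": "entered delivery address",
        "input_payment": "entered payment method",
    }
    return actions[step]

def run_agent(goal: str) -> list[str]:
    """Reason/plan once, then carry out each step in order."""
    return [execute(step) for step in plan(goal)]

print(run_agent("Order a burger from McDonald's for me"))
```

Real agent frameworks add what this sketch omits: re-planning when a step fails, observing results between steps, and asking the user before irreversible actions like payment.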

Wednesday, June 05, 2024

OpenAI Insider Estimates 70 Percent Chance That AI Will Destroy or Catastrophically Harm Humanity - Noor Al-Sibai, the Byte

After former and current OpenAI employees released an open letter claiming they're being silenced against raising safety issues, one of the letter's signees, former OpenAI researcher Daniel Kokotajlo, made an even more terrifying prediction: that the odds AI will either destroy or catastrophically harm humankind are greater than a coin flip. Kokotajlo's spiciest claim, though, was that the chance AI will wreck humanity is around 70 percent — odds you wouldn't accept for any major life event, but that OpenAI and its ilk are barreling ahead with anyway. The term "p(doom)," which is AI-speak for the probability that AI will usher in doom for humankind, is the subject of constant controversy in the machine learning world.

The Crucial Difference Between AI And AGI - Forbes

Artificial Intelligence (AI) is a transformative force that is reshaping industries from healthcare to finance today. Yet, the distinction between AI and Artificial General Intelligence (AGI) is not always clearly understood and is causing confusion as well as fear. AI is designed to excel at specific tasks, while AGI is a theoretical concept that would be capable of performing any intellectual task that a human can perform across a wide range of activities. While AI already improves our daily lives and workflows through automation and optimization, the emergence of AGI would be a transformative leap, radically expanding the capabilities of machines and redefining what it means to be human.

Tuesday, June 04, 2024

AI is not yet killing jobs - the Economist

After astonishing breakthroughs in artificial intelligence, many people worry that they will end up on the economic scrapheap. Global Google searches for “is my job safe?” have doubled in recent months, as people fear that they will be replaced with large language models (LLMs). Some evidence suggests that widespread disruption is coming. In a recent paper Tyna Eloundou of OpenAI and colleagues say that “around 80% of the US workforce could have at least 10% of their work tasks affected by the introduction of LLMs”. White-collar roles are thought to be especially vulnerable to generative AI, which is becoming ever better at logical reasoning and creativity. However, there is as yet little evidence of an AI hit to employment.

Meta introduces Chameleon, a state-of-the-art multimodal model - VentureBeat

As competition in the generative AI field shifts toward multimodal models, Meta has released a preview of what could be its answer to the models released by frontier labs. Chameleon, its new family of models, has been designed to be natively multimodal instead of stitching together components with different modalities. While Meta has not released the models yet, its reported experiments show that Chameleon achieves state-of-the-art performance in various tasks, including image captioning and visual question answering (VQA), while remaining competitive in text-only tasks.

https://venturebeat.com/ai/meta-introduces-chameleon-a-state-of-the-art-multimodal-model/

Monday, June 03, 2024

Google targets filmmakers with Veo, its new generative AI video model - Jess Weatherbed, the Verge

Veo has “an advanced understanding of natural language,” according to Google’s press release, enabling the model to understand cinematic terms like “timelapse” or “aerial shots of a landscape.” Users can direct their desired output using text, image, or video-based prompts, and Google says the resulting videos are “more consistent and coherent,” depicting more realistic movement for people, animals, and objects throughout shots. Google DeepMind CEO Demis Hassabis said in a press preview on Monday that video results can be refined using additional prompts and that Google is exploring additional features to enable Veo to produce storyboards and longer scenes.

AI More Friend than Foe? - CNBC

Sal Khan, CEO of Khan Academy, joins CNBC's 'The Exchange' to discuss the academy's partnership with Microsoft, the outcomes students will see from AI, and more.

Sunday, June 02, 2024

Europe, universities and industry launch Emotion AI masters - Karen MacGregor, University World News

A master's in Emotion AI – an emerging subset of artificial intelligence that interprets and responds to human emotions – kicks off next year across eight universities in six European countries. The master's is blended and transdisciplinary, sits at the cutting edge of applied AI, and will spin off modules for mass AI upskilling. It is also interesting as an example of Europe’s expanded number of postgraduate degrees produced in partnerships between universities and the private sector, in this case under EIT Digital, the digital arm of the European Institute of Innovation and Technology (EIT), which is an independent body of the European Union.

2024 EDUCAUSE Action Plan: AI Policies and Guidelines - Jenay Robert and Mark McCormack, EDUCAUSE

More than a year after the "AI spring" suddenly upended notions of what could be possible both inside and outside the classroom, most institutions are still racing to catch up and establish policies and guidelines that can help their leaders, staff, faculty, and students effectively and safely use these exciting and powerful new technologies and practices. Thankfully, institutions need not start from scratch in developing their AI policies and guidelines. Through the work of Cecilia Ka Yuk Chan and WCET, institutions have a foundation to build on, a policy framework that spans institutional governance, operations, and pedagogy. Built around these three pillars, this framework helps ensure that institutional AI-related policies and guidelines comprehensively address critical aspects of institutional life and functioning.

Saturday, June 01, 2024

AI pioneer LeCun to next-gen AI builders: ‘Don’t focus on LLMs’ - Taryn Plumb, Venture Beat

AI pioneer Yann LeCun kicked off an animated discussion today after telling the next generation of developers not to work on large language models (LLMs). “This is in the hands of large companies, there’s nothing you can bring to the table,” LeCun said at VivaTech today in Paris. “You should work on next-gen AI systems that lift the limitations of LLMs.” The comments from Meta’s chief AI scientist and NYU professor quickly drew a flurry of questions and sparked a conversation about the limitations of today’s LLMs.

Prepare to Get Manipulated by Emotionally Expressive Chatbots - Will Knight, Wired

It’s nothing new for computers to mimic human social etiquette, emotion, or humor. We just aren’t used to them doing it very well. OpenAI’s updated model, GPT-4o, is better able to make sense of visual and auditory input; the company describes it as “multimodal.” You can point your phone at something, like a broken coffee cup or a differential equation, and ask ChatGPT to suggest what to do. But the most arresting part of OpenAI’s demo was ChatGPT’s new “personality.” The upgraded chatbot spoke with a sultry female voice that struck many as reminiscent of Scarlett Johansson, who played the artificially intelligent operating system in the movie Her. Throughout the demo, ChatGPT used that voice to adopt different emotions, laugh at jokes, and even deliver flirtatious responses—mimicking human experiences software does not really have.