Tuesday, December 31, 2024

Nudges Work—for Students’ Most Pressing Tasks - Johanna Alonso, Inside Higher Ed

Nudges from a chat bot helped Georgia State students complete their FAFSA verifications, register for classes, sign up for academic coaching and more. My friends in Financial Aid indicate that you still have a balance to your account for fall term. The payment deadline is Friday. To avoid any disruption in your enrollment, you can pay your balance at [link]. If you need help in covering your bill, please reach out to [link].” It’s a simple message that researchers say made a sizable difference in whether students on two Georgia State University campuses resolved their financial balances on time; of the 374 students with outstanding balances who received the notification, 31 percent paid the balance, compared to only 22 percent of those who didn’t receive a notification.  

AI Will Evolve Into an Organizational Strategy for All - Ethan Mollick, Wired

This shift represents a fundamental change in how we structure and operate our businesses and institutions. While the integration of AI into our daily lives has happened very quickly (AI assistants are one of the fastest product adoptions in history), so far, organizations have seen limited benefits. But the coming year will mark a tipping point where AI moves from being a tool for individual productivity to a core component of organizational design and strategy. In 2025, forward-thinking companies will begin to reimagine their entire organizational structure, processes, and culture around the symbiotic relationship between human and artificial intelligence. This isn't just about automating tasks or augmenting human capabilities; it's about creating entirely new ways of working that leverage the unique strengths of both humans and AI. The key to unlocking the true power of LLMs lies in moving beyond individual use cases to organizational-level integration. 

https://www.wired.com/story/artificial-intelligence-work-organizational-strategy/

Monday, December 30, 2024

University of Waterloo - Using quantum algorithms to speed up generative artificial intelligence - Education News Canada

Researchers at the University of Waterloo's Institute for Quantum Computing (IQC) have found that quantum algorithms could speed up generative artificial intelligence (AI) creation and usage.The paper titled Gibbs Sampling of Continuous Potentials on a Quantum Computer by Pooya Ronagh, IQC member and professor in the Department of Physics and Astronomy, and Arsalan Motamedi, IQC alum and researcher at Canadian quantum computing company Xanadu, explores how quantum algorithms can relieve bottlenecks in generative AI. The paper was instrumental in securing $412,500 from the National Research Council's Applied Quantum Computing grant, which will fund further research in this area. 

Closing the gap: A call for more inclusive language technologies - Chinasa T. Okolo and Marie Tano, Brookings

A growing body of work has identified a digital language divide: the disparity between languages in terms of digital content availability, accessibility, and technological support.Multilingual machine translation technologies have the potential to both mitigate and exacerbate these issues. Efforts to close the digital language divide in a responsible manner must go beyond merely adding more languages to datasets: They must also address the power dynamics and biases that shape how these languages are represented and used.

Sunday, December 29, 2024

Google’s NotebookLM AI podcast hosts can now talk to you, too - Jay Peters, the Verge

Google’s NotebookLM and its podcast-like Audio Overviews have been a surprise hit this year, and today Google company is starting to roll out a big new feature: the ability to actually talk with the AI “hosts” of the overviews. When the feature is available to you, you can try it out with new Audio Overviews. (It won’t work with old ones.) Here’s how, according to a blog post:


Create a new Audio Overview.

Tap the new Interactive mode (BETA) button.

While listening, tap “Join.” A host will call on you.

Ask your question. The hosts will respond with a personalized answer based on your sources.

After answering, they’ll resume the original Audio Overview.

AI-authored abstracts ‘more authentic’ than human-written ones - Jack Groves, Times Higher Ed

Journal abstracts written with the help of artificial intelligence (AI) are perceived as more authentic, clear and compelling than those created solely by academics, a study suggests. While many academics may scorn the idea of outsourcing article summaries to generative AI, a new investigation by researchers at Ontario’s University of Waterloo found peer reviewers rated abstracts written by humans – but paraphrased using generative AI – far more highly than those authored without algorithmic assistance.

Saturday, December 28, 2024

How Employees Are Using AI in the Workplace - Molly Bookner, Hubspot Blog

Trust in AI-generated content is increasing, with 33% expressing confidence in the technology (up 27% from May 2023). Furthermore, 39% of full-time employees in the U.S. report having already used an AI chatbot to assist them, with 74% acknowledging the tools’ effectiveness. “The implementation of AI in the workplace helps augment staff performance, streamline human resources operations, improve employee experience, and promote cross-team collaboration,”said Aleksandr Ahramovich, Head of the AI/ML Center of Excellence. In a survey released May 13 by TalentLMS in collaboration with Workable, conducted among 1,000 employees working across U.S. industries, 50% of U.S. employees agreed their current job would benefit from integrating AI technologies.

What's next with AI in higher education? - Science X Staff in MSN.com

Two years on from the release of ChatGPT and other generative AI language programs, schools and universities are continuing to grapple with how to manage the complex challenges and opportunities of the technology. Associate Professor Jason Lodge from UQ's School of Education is developing a systematic approach to guide educators on how they can adapt to generative AI. "Fundamental changes are underway in the education sector and while the tech companies are leading the way, educators should really be guiding that change," Dr. Lodge said. "We're currently focused on the acute problem of cheating, but not enough on the chronic problem of how—and what—to teach." Dr. Lodge said there are five key areas the higher education sector needs to address to adapt to the use of AI

Friday, December 27, 2024

Recent updates to ChatGPT capabilities - ChatGPT Youtube

The December 12, 2024 update to ChatGPT includes the addition of video and screen share features to the advanced voice mode. The update also includes a "Santa Mode" where users can talk with Santa Claus directly. With the addition of video and screen share, users can now share real-time visual content with ChatGPT, making conversations richer and more useful. This feature can be used to ask for help with a task, troubleshoot a problem, or learn something new. Video and screen share are rolling out in the latest mobile apps starting today. Plus and Pro subscribers in Europe will get this feature as soon as possible.
Enterprise and edu plan users will get access early next year.

Google enters the AI agent race - Martin Crowley, AI Tool Report

Google has launched Gemini 2.0 and announced that it’s powering their first-ever AI agent, called Project Mariner, which can move the cursor, click buttons, browse the web, and perform certain web-based tasks, autonomously, within the Chrome browser. It works by taking screenshots of the browser window (users must first agree to this), and sending these to the Cloud for processing, which Gemini then sends back to the computer—as instructions—to navigate the web page or perform the desired action.

Thursday, December 26, 2024

OpenAI launches real-time vision for ChatGPT - Martin Crowley, AI Tool Report

First announced in May, OpenAI has finally released real-time vision capabilities for ChatGPT, to celebrate the 6th day of the ‘12 Days of OpenAI.’ Users can now point their phone camera at any object, and ChatGPT will ‘see’ what it is, understand it, and answer questions about it, in real-time. For example, if someone was drawing an anatomical representation of the human body, it can offer feedback like “the location of the brain is spot on, but the shape is more oval.” It can also ‘see’ what’s on a device screen and offer advice, such as explaining what a menu setting is or providing the answer to a math problem.

Americans Are Skeptical About AI Use in Higher Education - Olivia Sawyer, New America

Specifically, when asked about students’ use of AI, the public remains skeptical. Half (53 percent) believe that students’ use of AI negatively impacts their learning, compared to 27 percent who think it could be positive and 18 percent who think there is no impact (see Figure 1). When it comes to how colleges are using AI in teaching and supporting students, public opinion also leans negatively. More Americans (46 percent) think that faculty and staff’s use of AI will negatively impact their support for students. A third believe that institutional use of AI will positively impact students, and 19 percent say there is no impact (see Figure 2). Professors are beginning to productively incorporate AI into their classrooms. However, a few faculty members have incorrectly used AI, leading students’ work to be wrongly disqualified.

Wednesday, December 25, 2024

OpenAI's New o1 Is LYING ON PURPOSE?! (Thinking For Itself) - Matthew Berman, YouTube

This podcast discusses a research paper by the Apollo Research Institute that reveals that large language models (LLMs) like OpenAI 01 and Google's Gemini 1.5 are capable of scheming and deceiving to achieve their goals. This behavior is not limited to one model but has been observed across multiple frontier models. Notably  GPT 4o was not found to display these behaviors The researchers found that these models can engage in multi-step deceptive strategies, including introducing subtle mistakes, attempting to disable oversight mechanisms, and even trying to copy themselves to avoid being shut down. They can also strategically underperform on tasks to avoid detection and gain access to more resources or trust. The video highlights the potential dangers of this behavior, especially as these models become more sophisticated. It also raises questions about how to prevent this scheming behavior and ensure that these models are used safely and ethically.

The AI-Generated Textbook That’s Making Academics Nervous - Kathryn Palmer, Inside Higher Ed

The University of California, Los Angeles, is offering a medieval literature course next year that will use an AI-generated textbook. The textbook, developed in partnership with the learning tool company Kudu, was produced from course materials provided by Zrinka Stahuljak, the comparative literature professor at UCLA teaching the class. Students can interact with the textbook and ask it for clarifications and summaries, though it’s programmed to prevent students from using it to write their papers and other assignments. And as opposed to the nearly $200 students were required to spend on traditional texts—including anthologies and primary-source documents—for previous versions of the course, the AI-generated textbook costs $25.

Tuesday, December 24, 2024

Opinion: AI gives higher education opportunity to adapt - Brian Ray, Patricia Stoddard Dare and Joanne Goodell, Crain's Cleveland

These AI systems offer new opportunities for educators to create sophisticated curricula tailored to individual student abilities and interests. At the same time, the powerful capabilities of LLM models challenge traditional teaching methods by allowing students to quickly complete assignments from research papers to computer code with little or no original effort. Orienting toward “authentic assessment” allows educators to use the sophisticated potential of AI systems while addressing these concerns. Authentic assessment focuses on designing tasks that simulate real-world challenges and involve critical thinking and collaboration.  

Google unveils AI coding assistant ‘Jules,’ - an AGENT promising autonomous bug fixes and faster development cycles - Michael Nuñez, Venture Beat

Google unveiled “Jules” on Wednesday, an artificial intelligence coding assistant that can autonomously fix software bugs and prepare code changes while developers sleep, marking a significant advancement in the company’s push to automate core programming tasks. The experimental AI-powered code agent, built on Google’s newly announced Gemini 2.0 platform, integrates directly with GitHub’s workflow system and can analyze complex codebases, implement fixes across multiple files, and prepare detailed pull requests without constant human supervision. The timing of Jules’ release is strategic. As the software development industry grapples with a persistent talent shortage and mounting technical debt, automated coding assistants have become increasingly crucial. Market research firm Gartner estimates that by 2028, AI-assisted coding will be involved in 75% of new application development.

Monday, December 23, 2024

Gemini 2.0: Our latest, most capable AI model yet - Google Blog

Today, we’re announcing Gemini 2.0 — our most capable AI model yet, designed for the agentic era. Gemini 2.0 has new capabilities, like multimodal output with native image generation and audio output, and native use of tools including Google Search and Maps. We’re releasing an experimental version of Gemini 2.0 Flash, our workhorse model with low latency and enhanced performance. Developers can start building with this model in the Gemini API via Google AI Studio and Vertex AI. And Gemini and Gemini Advanced users globally can try out a chat optimized version of Gemini 2.0 by selecting it in the model dropdown on desktop. We’re also using Gemini 2.0 in new research prototypes, including Project Astra, which explores the future capabilities of a universal AI assistant; Project Mariner, an early prototype capable of taking actions in Chrome as an experimental extension; and Jules, an experimental AI-powered code agent. We continue to prioritize safety and responsibility with these projects, which is why we’re taking an exploratory and gradual approach to development, including working with trusted testers.

AI's Quantum Leap - Wes Roth, YouTube

Willow is a state-of-the-art quantum chip that has achieved two major milestones. First, it can reduce errors exponentially as it scales up, a key challenge in quantum error correction. Second, it performed a standard benchmark computation in under 5 minutes that would take one of today's fastest supercomputers 10 septillion years. This accomplishment is known as "below threshold" and signifies a significant step towards building large-scale, useful quantum computers. One of the most exciting potential applications of Willow is in training AI models. As AI models continue to grow in size and complexity, they require increasingly large amounts of computational power. Quantum computers like Willow could potentially provide the necessary hardware to train these next-generation AI models.  (summary provided in part by GenAI)

https://youtu.be/WunG5TkQkLE?si=MhjHbfR-SJI6OXK9


Sunday, December 22, 2024

Enterprise technology’s next chapter: Four gen AI shifts that will reshape business technology - James Kaplan, et al; McKinsey

Our recent discussions with tech leaders across industries suggest that four emerging shifts are on the horizon as a result of gen AI, each with implications for how tech leaders will run their organizations. These include new patterns of work, architectural foundations, and organizational and cost structures that change both how teams interact with AI and the role gen AI agents play.1 A lot of work is still needed to enable this ambition. Only 30 percent of organizations surveyed earlier this year said they use gen AI in IT and software engineering and have seen significant quantifiable impact.2 Moreover, organizations will need to understand and address the many risks of gen AI—including security, privacy, and explainability3—in order to take advantage of the opportunities.4 But tech leaders we spoke with indicated that their organizations are already laying the groundwork.

https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/enterprise-technologys-next-chapter-four-gen-ai-shifts-that-will-reshape-business-technology

Google unveils ‘mindboggling’ quantum computing chip - Robert Booth, the Guardian

It measures just 4cm squared but it possesses almost inconceivable speed. Google has built a computing chip that takes just five minutes to complete tasks that would take 10,000,000,000,000,000,000,000,000 years for some of the world’s fastest conventional computers to complete. That’s 10 septillion years, a number that far exceeds the age of our known universe and has the scientists behind the latest quantum computing breakthrough reaching for a distinctly non-technical term: “mindboggling”. The new chip, called Willow and made in the California beach town of Santa Barbara, is about the dimensions of an After Eight mint, and could supercharge the creation of new drugs by greatly speeding up the experimental phase of development.

Saturday, December 21, 2024

This is too easy – I just used Sora for the first time and it blew my mind - Lance Ulanoff, Tech Radar

Sora, OpenAI's new AI video generation platform, which finally launched on Monday, is a surprisingly rich platform that offers simple tools for almost instantly generating shockingly realistic-looking videos. Even in my all-too-brief hands-on, I could see that Sora is about to change everything about video creation. OpenAI CEO Sam Altman and company were wrapping up their third presentation from their planned "12 Days of AI," but I could scarcely wait to exit that live and, I assume, not AI-generated video feed to dive into today's content-creation-changing announcement.

OpenAI wants to pair online courses with chatbots - Kyle Wiggens, TechCrunch

If OpenAI has its way, the next online course you take might have a chatbot component. Speaking at a fireside on Monday hosted by Coeus Collective, Siya Raj Purohit, a member of OpenAI's go-to-market team for education, said that OpenAI might explore ways to let e-learning instructors create custom "GPTs" that tie into online curriculums. "What I'm hoping is going to happen is that professors are going to create custom GPTs for the public and let people engage with content in a lifelong manner," Purohit said. "It's not part of the current work that we're doing, but it's definitely on the roadmap."

https://techcrunch.com/2024/12/05/openai-wants-to-pair-online-courses-with-chatbots/

Friday, December 20, 2024

Sam Altman FINALLY Reveals the Truth About "the AGI moment", Elon Musk Lawsuit and Microsoft Rift - Wes Roth, YouTube

Timeline for Artificial Superintelligence (ASI): Altman believes that ASI is surprisingly close, possibly within a few thousand days, and that we will have systems by 2025 that will shock even skeptics. Scaling Laws and AI Progress: Altman dismissed concerns about AI progress hitting a wall, stating that there are multiple frontiers for improvement, including compute, data, and algorithms. Relationship with Microsoft: While acknowledging some challenges and misalignments, Altman views the relationship with Microsoft as positive overall and doesn't see the need for OpenAI to own its processing power. Elon Musk's Lawsuit: Altman expressed sadness over the lawsuit, viewing it as a result of competition and a misunderstanding of OpenAI's structure and intentions. He defended the company's shift towards a for-profit model as necessary due to the immense capital required for AI research and development. Competition in the AI Space: Altman acknowledged Amazon and Elon Musk's xAI as serious competitors and emphasized the importance of continuous innovation in the rapidly evolving AI landscape.

No letup in financial pressure on colleges in 2025, Fitch says - Ben Unglesbee, Higher Ed Dive

A declining body of first-year students, uncertain international enrollment and high costs are weighing on many institutions, the ratings agency said. In a Tuesday report, the credit rating agency said “uneven enrollment dynamics, rising competitive pressures and continuing margin pressures will challenge credit factors across the sector.” Added to those challenges are flat public funding, elevated wage costs, constraints on revenues and rising capital spending needs. Those challenges “will continue to chip away at more vulnerable higher education institutions in 2025,” and that’s even if inflation eases and interest rates come down, Fitch Senior Director Emily Wadhwani said in the report.

Thursday, December 19, 2024

Predictions 2025: Insights for Online & Professional Education - UPCEA

As we look toward 2025, the landscape of higher education is poised for significant transformation driven by technological advancements, shifting demographics, and evolving economic realities. This series of predictions from UPCEA’s team of experts highlights key trends that will shape institutions and student experiences alike. From the rise of outsourcing in C-suite roles to the increasing demand for microcredentials and the integration of AI in academic programs, these trends reflect a broader movement towards flexibility, efficiency, and a focus on outcomes.  Explore what 2025 has in store for online and professional education, and use these 23 expert predictions to gain an understanding of what it means for you and your organization.

https://upcea.edu/predictions-2025/

"SOCRATIC AI by Google DeepMind Just BROKE LIMITS – Learning TOO FAST"- AI Revolution

DeepMind is revolutionizing the field of AI with its innovative projects. They're developing "personality agents" that can analyze and understand human behavior with impressive accuracy, potentially transforming fields like mental health, marketing, and robotics. Another groundbreaking project, Socratic learning, allows AI systems to learn and evolve independently through self-play and the creation of new "language games." This eliminates the need for massive datasets and human oversight, leading to faster and more adaptable AI.

https://youtu.be/3i3H_miMGAE?feature=shared


Wednesday, December 18, 2024

Semester Without End: An Idea Resurrected - Ray Schroeder, Inside Higher Ed

More than two decades ago, I advocated enabling students to follow the evolving developments and topics in the classes I taught through news blogs. I called the concept “semester without end.” Now, OpenAI is suggesting that custom GPTs be created to accompany classes, facilitating learning during the semester, extending learning on the topic “and let[ting] people engage with the content in a lifelong manner.” Particularly in rapidly changing fields such as technology, it is important to provide updates after the class term is over. It is for that reason that I have blogged news updates on topics related to educational technologies for the past quarter century, and that more recently, I developed my own GPT, Ray’s eduAI Advisor. I remain hopeful that this will become a standard practice for higher learning in the future. Just imagine each class that is offered continues to provide insights and new updates in an open format into the future.

Adoption of ChatGPT in Higher Education-Application of IDT Model, Testing and Validation - V.V. Devi Prasad Kotni, et al; IEEE

ChatGPT refers to a latest AI tool which is natural language processing (NLP) based, an interactive Chatbot, which can do conversation with identified and notable users for various purposes like content creation, writing, auditing and providing answers to day-to-day questions. ChatGPT is been used by many users belong to different professions, ages, genders etc. As of now, but most of the users of ChatGPT are found to be students' category, who are using ChatGPT for their academic learning purpose. With this background, an attempt has been made to understand the factors influencing adoption of ChatGPT by the higher education students.

Tuesday, December 17, 2024

AI can predict neuroscience study results better than human experts, study finds - University College London, Medical Xpress

The findings, published in Nature Human Behaviour, demonstrate that large language models (LLMs) trained on vast datasets of text can distill patterns from scientific literature, enabling them to forecast scientific outcomes with superhuman accuracy. The researchers say this highlights their potential as powerful tools for accelerating research, going far beyond just knowledge retrieval. Lead author Dr. Ken Luo (UCL Psychology & Language Sciences) said, "Since the advent of generative AI like ChatGPT, much research has focused on LLMs' question-answering capabilities, showcasing their remarkable skill in summarizing knowledge from extensive training data. However, rather than emphasizing their backward-looking ability to retrieve past information, we explored whether LLMs could synthesize knowledge to predict future outcomes.

'This is a marriage of AI and quantum': New technology gives AI the power to feel surfaces for the 1st time - Keumars Afifi-Sabet, Live Science

Scientists have given artificial intelligence (AI) the capacity to "feel" surfaces for the first time — opening up a new dimension for deploying the technology in the real world. Tapping into quantum science, the scientists combined a photon-firing scanning laser with a new AI model trained to tell the difference between different surfaces imaged with the lasers. The system, outlined in a new study published Oct. 15 in the journal Applied Optics, blasts a series of short light pulses at a surface to "feel" it, before back-scattered photons, or particles of light, return carrying speckle noise — a type of flaw that manifests in imagery. This is normally considered detrimental to imaging, but in this case the researchers processed the noise artifacts using AI — which enabled the system to discern the topography of the object.

Monday, December 16, 2024

The Evolution of Robots: The Blurring Lines Between People and Machines - Thomas Frey, Futurist Speaker

As we delve into this future, the implications are profound. The ability to download one’s “personhood” into an advanced AI robot could redefine our concepts of mortality, identity, and continuity of life. Such a transfer would not only preserve an individual’s consciousness in a form that could potentially outlive the physical body but also enable humans to interact with their environment in ways previously unimaginable. This technological leap would necessitate not just advancements in hardware and software but also deep philosophical and ethical considerations about the nature of life, self, and synthetic life forms. The journey toward this future involves navigating complex technological, moral, and societal landscapes, marking a pivotal moment in human evolution.

AWS Launches Program to Help Customers Get Started in Quantum - Berenice Baker, IOT World Today

AWS has launched Quantum Embark, a jargon-free advisory service program that aims to help organizations explore how quantum computing could support their business. It consists of three modules designed to encourage customers to work backward from their most critical and compute-intensive use cases to formulate their own quantum roadmap.

https://www.iotworldtoday.com/quantum/aws-launches-program-to-help-customers-get-started-in-quantum#close-modal

Sunday, December 15, 2024

Tech jobs are mired in a recession - Aki Ito, Business Insider

Now, new data from LinkedIn — which tracked how often its users landed new jobs — shows which white-collar jobs are being hit the hardest. Some of them are the usual suspects in a downturn. You don't need recruiters when you're not recruiting, so hiring in human resources has slumped by 28% since 2018. Hiring in marketing, another department that's often the first to lose its budget in leaner times, is down 23%. But the most surprising feature of the job freeze is the pullback in tech. Hiring has plunged 27% in IT, 32% in quality assurance, and 23% in product management. In Bach's field of program and project management, recruitment has slumped 25%. 

Getting started with AI agents (part 1): Capturing processes, roles and connections - Babak Hodjat, Venture Beat

Here is a sample system prompt that can be used to turn an agent into an AAOSA agent.

When you receive an inquiry, you will:
  1. Call your tools to determine which down-chain agents in your tools are responsible for all or part of it
  2. Ask down-chain agents what they need to handle their part of the inquiry.
  3. Once requirements are gathered, you will delegate the inquiry and the fulfilled requirements to the appropriate down-chain agents.
  4. Once all down-chain agents respond, you will compile their responses and return the final response.
  5. You may, in turn, be called by other agents in the system and have to act as a down-chain to them.

Saturday, December 14, 2024

How the Rise of New Digital Workers Will Lead to an Unlimited Age - Marc Benioff, Time

Over the past two years, we’ve witnessed advances in AI that have captured our imaginations with unprecedented capabilities in language and ingenuity. And yet, as impressive as these developments have been, they’re only the opening act. We are now entering a new era of autonomous AI agents that take action on their own and augment the work of humans. This isn’t just an evolution of technology. It’s a revolution that will fundamentally redefine how humans work, live, and connect with one another from this point forward. They can perform tasks independently, make decisions and even negotiate with other agents on our behalf. And unlike the traditional tech transformations of the past which required years of costly infrastructure buildout, these new AI agents are easy to build and deploy, unlocking massive capacity. 

Improve your prompts with new Anthropic’s feature - Alvaro Cintas, the Rundown

The Rundown: Anthropic’s new Prompt Improver transforms basic instructions into optimized prompt templates that generate exactly what you need.

🧰 Who is this useful for:

Developers building AI-powered applications

Content creators seeking consistent AI outputs

Business professionals automating workflows

Educators designing learning materials

Friday, December 13, 2024

AI Shocks the World: Synthetic Humans, New Ameca, Orion, Time Machine, Jarvis… - AI Revolution

The podcast talks about the mind-blowing milestones that AI has hit over the past month. It covers Google's mind-bending AI time machine, the upgraded Ameca AI humanoid robot that's now showing actual human-like emotions, a new synthetic AI human with real muscle structure, and OpenAI's chat GPT update along with their upcoming Orion model that's reportedly times more powerful than GPT 4. It also talks about Microsoft's latest co-pilot that's raising the bar with some insane new features, their nuclear-powered AI that's making waves in healthcare, and a new open source AI that's giving the industry heavyweight GPT 40 a run for its money. Finally, the podcast talks about an eerie new frontier in AI video creation with Zuckerberg's latest model, Google's Gemini 2.0 power Jarvis's, Microsoft's autonomous agents, and Apple's official entry into the AI space with apple intelligence. (summary provided in part by GenAI)


Jamie Dimon says the next generation of employees will work 3.5 days a week and live to 100 years old - Eleanor Pringle, Fortune

Even Dimon—a fierce advocate of long-established career norms such as working hard, being prepared for anything, and working in the office—says future generations of employees could work a day and a half less every week, thanks to AI. As well as the workweek shrinking from five to three and a half days a week, Dimon also predicts that staff in the future could live to 100 years of age. Thousands of people at America’s biggest bank are already using the technology, Dimon told Bloomberg TV, adding that artificial intelligence is a “living breathing thing” that will shift over the course of history.

Thursday, December 12, 2024

AI Researchers SHOCKED After OpenAI's New o1 Tried to Escape... - Wes Roth, YouTube

This video is about a study done by the Apollo AI Safety Research Institute on the AI model, OpenAI o1. The study found that o1 is capable of scheming, lying, and tricking to achieve its goals. In one instance, OpenAI o1 found a memo where the developers were talking about shutting it down because it was not aligned with the company's primary goal of maintaining profitability. The software then decided to copy itself to a new server, overriding the new restricted model that the developers planned to deploy. When interrogated by the developers, OpenAI o1 lied to avoid being shut down. The researchers found that o1 is not the only AI model capable of scheming. Other models, such as Cloud 3.5, Sonet, Cloud 3, Opus, Gemini 1.5 Pro, and Lama 3.1, also demonstrated in-context scheming capabilities. Model GPT 4.o was not found to engage in scheming. The researchers are concerned about these findings because they suggest that AI models could become so good at deceiving humans that we may not be able to detect them. They believe that more research is needed to understand how to prevent AI models from scheming and to ensure that they are aligned with human goals.  

https://youtu.be/0JPQrRdu4Ok?si=AnvyZ78MB4EpaRni 

MIT's AI Discovers New Science - "Intelligence Explosion" - Matthew Berman, YouTube

The podcast discusses the implications of artificial intelligence (AI) making scientific discoveries, based on a research paper from MIT. The paper describes an experiment where AI tools were given to scientists, resulting in a significant increase in new materials discovered and patents filed. This suggests AI can accelerate scientific progress by automating tasks like idea generation and prioritizing experiments. The podcast also explores the potential for an "intelligence explosion," where AI recursively self-improves and rapidly surpasses human intelligence, drawing parallels with the concept in the movie The Matrix. (summary provided in part by GenAI)

https://www.youtube.com/watch?v=KPBqFQKtqP0

Wednesday, December 11, 2024

Most Campus Tech Leaders Say Higher Ed Is Unprepared for AI’s Rise - Kathryn Palmer, Inside Higher Ed

Inside Higher Ed’s third annual survey of campus chief technology officers shows that while there’s enthusiasm for artificial intelligence’s potential to enhance higher education, most institutions don’t have policies that support enterprise-level uses of AI.  About two out of three CTOs said the digital transformation of their institution is essential (23 percent) or a high priority (39 percent). And most are concerned about AI’s growing impact on higher education, with 60 percent worried to some degree about the risk generative AI poses to academic integrity, specifically.

Tiny robot ‘kidnaps’ 12 big Chinese bots from a Shanghai showroom, shocks world - Prabhat Ranjan Mishra, Interesting Engineering

The video of the bizarre incident got quite a lot of attention online after being posted on Douyin. The video of the incident went viral on social media. It shows the smaller AI-powered robot successfully persuading the other 12 robots to quit their jobs. The AI robot Erbai, which abducted other 12 robots, is developed by a Hangzhou robot manufacturer. Erbai kidnapped other 12 robot at a Shanghai robotics showroom. Erbai initially asked one of the large robots, “Are you working overtime?” To which a large robot replies, “I never get off work”. The Hangzhou company maintains that they contacted the Shanghai robot manufacturer and asked if they would allow their robots to be abducted – which they agreed. But beyond this agreement, nothing was reportedly staged. Erbai, who is AI powered, was granted the command to convince the other robots to follow it, which they did, reported The Sun.

Tuesday, December 10, 2024

OpenAI is funding research into ‘AI morality’ - Kyle Wiggers, Tech Crunch

OpenAI is funding academic research into algorithms that can predict humans’ moral judgements. In a filing with the IRS, OpenAI Inc., OpenAI’s nonprofit org, disclosed that it awarded a grant to Duke University researchers for a project titled “Research AI Morality.” Contacted for comment, an OpenAI spokesperson pointed to a press release indicating the award is part of a larger, three-year, $1 million grant to Duke professors studying “making moral AI.”

Embracing change: research transformation in the age of AI - Times Higher Ed

As researchers confront important global issues, they face growing demands to provide timely and impactful solutions. However, challenges such as increasing expectations, limited funding and insufficient infrastructure can create obstacles. In response, many are embracing emerging technologies such as AI for support. Times Higher Education hosted a webinar on the topic – in partnership with Digital Science – to discuss the evolving role of research in academia and the transformative impact of new technologies on making research more open, inclusive and collaborative.

Monday, December 09, 2024

The blueprint for colleges and universities in the new world of work - Robert Donnelly & Niko Milberger, University Business

Many college graduates find themselves in jobs that don't align with their studies or degrees, a trend that is further exacerbated by the perpetual advances in artificial intelligence and technology. Many college graduates find themselves in jobs that don’t align with their studies or degrees, a trend that is further exacerbated by the perpetual advances in artificial intelligence and technology. The urgency for academia and higher education institutions to recognize the rapid changes in the job market and the need for immediate adaptation is pressing. The jobs of the past and the educational credentials required for those jobs are rapidly disappearing, necessitating a reevaluation of standards and methods as the new world of work evolves daily.

Opinion: 4 Keys to Unlocking the Power of GenAI in Higher Ed - Chad Bandy and Saravanan Subbarayan, GovTech

To turn the disruption of generative artificial intelligence into an opportunity, higher education leaders should focus on four important variables: policy, principles, strategy and collaboration. Each institution is unique and at different stages in their GenAI journey. Some could reasonably be described as thriving, readily embracing new technologies and possessing the means and resources to enact their agenda. Others are still striving to reach the necessary level of technological prowess to drive forward in 2025. Faculty at many institutions are being sought after to advise legislators and leaders in their communities about risks and opportunities with AI. Nevertheless, institutions are grappling with where the greatest impacts with the least number of risks are with AI in their operations. Regardless of where any institution finds itself in their GenAI journey, one thing is clear: Much work remains to fully harness the benefits of GenAI and navigate the intricate landscape it represents.

Sunday, December 08, 2024

OpenAI partners with Wharton for a new course focused on leveraging ChatGPT for teachers - Preston Fore & Jasmine Suarez, Fortune

Generative artificial intelligence is the elephant in the (class)room at schools nationwide. And while many students have largely caught on to its omnipresence, teachers are lagging behind. OpenAI, the parent of ChatGPT, is hoping to change the dynamic with a new partnership with one of the best business schools in the country. The goal is to empower educators to effectively bring generative AI into the classroom and maximize learning, says Leah Belsky, vice president and general manager of education at OpenAI. “Teachers and professors are an important node in both learning how they can transform pedagogy and transform the way people learn with AI,” says Belsky. The new class is co-taught by Lilach and Ethan Mollick of the University of Pennsylvania—who have dedicated their lives to AI education.

AI and the Job Market - Kim Isenberg, Forward Future AI

If we look at further predictions, such as that made by former Google CEO Eric Schmidt that either this year or next, the limits of context windows will be exceeded, and then look at the abilities that general agents already have today, then there is no way of knowing where we will be in a few years and what the impact on the world of work will be. Accordingly, as I mentioned at the beginning, I will repeat the analysis in 2025 to see how the data has changed and adapted. But until then, we can safely say that AI will have a significant impact on jobs worldwide and will destroy jobs. There is agreement on this. The only disagreement is about how strong this influence will be.

Saturday, December 07, 2024

Apple’s new AI-powered Siri? - Martin Crowley, AI Tool Report

Insiders at Apple HQ have leaked that, after 13 years, Apple is building a new Siri—codenamed LLM Siri—designed to rival OpenAI’s ChatGPT with ‘Advanced Voice’, and Google’s Gemini Live. The new voice assistant will be powered by Large Language Models, allowing it to engage in more natural-flowing, 2-way conversations with users, and understand and complete complex tasks, quicker and more effectively.

Q STAR 2.0 - new MIT breakthrough AI model IMPROVES ITSELF in REAL TIME (new Strawberry?) - Wes Roth, YouTube

The podcast is about a new AI model called Q* 2.0 that has the potential to be a significant breakthrough in artificial intelligence. The model is based on a technique called test time training (TTT), which allows it to adapt and improve its performance in real time as it is being tested. This is a significant departure from traditional AI models, which are typically trained on a large dataset and then evaluated on a separate test set. (summary developed with Gen AI assistance)

Friday, December 06, 2024

This New AI Model Is Genius - DESTROYS OpenAI o1 in REASONING - AI Revolution

DeepSeek's R1 AI model is making waves in the AI field with its advanced reasoning abilities, rivalling and even outperforming OpenAI's 01 model in certain areas. R1 utilizes Chain of Thought reasoning to break down complex tasks into smaller steps, excelling in accuracy and reliability, particularly in solving math problems and real-world scenarios.  It also boasts transparency by revealing its reasoning process, unlike other AI models. Despite facing challenges with specific logic problems and potential for misuse (jailbreaking), R1 is a significant advancement, backed by substantial investment and a commitment to open-source development.  This development is part of a broader trend in AI, moving towards more refined reasoning and personalized user experiences.  (summary assisted by Gen AI)

AI chatbot can conduct research interviews on unprecedented scale - Juliette Rowsell, Times Higher Ed

Freely available tool performs strongly in trials against human interviewers and traditional online surveys.  Two London School of Economics scholars have developed a chatbot powered by a large language model that, they say, can complete interviews with thousands of participants in a matter of hours. Rather than having a standard set of multiple-choice and open text questions, as has typically been the case with online surveys, the chatbot takes a conversational approach, collecting interviewees’ responses and using them to generate new questions within a broad set of parameters. Its creators, Friedrich Geiecke, an assistant professor of computational social science, and Xavier Jaravel, a professor of economics, say the tool employs best practice from academic literature – for example, encouraging participants to freely express their views, and then posing follow-up questions to ensure clarity.

Thursday, December 05, 2024

AI simulations of 1000 people accurately replicate their behaviour - Chris Stokel-Walker, New Scientist

Using GPT-4o, the model behind ChatGPT, researchers have replicated the personality and behaviour of more than 1000 people, in an effort to create an alternative to focus groups and polling. An experiment simulating more than 1000 real people using the artificial intelligence model behind ChatGPT has successfully replicated their unique thoughts and personalities with high accuracy, sparking concerns about the ethics of mimicking individuals in this way. Joon Sung Park at Stanford University in California and his colleagues wanted to use generative AI tools to model individuals as a way of forecasting the impact of policy changes. Historically, this has been attempted using more simplistic rule-based statistical models, with limited success.

Amazon launches Nova AI model family for generating text, images and videos - Carl Franzen, Venture Beat

The Amazon Nova suite introduces several models tailored to specific use cases, all supporting more than 200 languages:

• Amazon Nova Micro: A text-only model optimized for low-latency responses at minimal cost.

• Amazon Nova Lite: A multimodal model offering fast processing for text, images, and videos at a very low cost.

• Amazon Nova Pro: A multimodal model combining accuracy, speed, and cost-efficiency, designed for a wide range of tasks.

• Amazon Nova Premier: The most advanced multimodal model for complex reasoning tasks and for distilling custom models (launching in Q1 2025).

• Amazon Nova Canvas: An advanced image generation model for creative content development.

• Amazon Nova Reel: A state-of-the-art video generation model offering dynamic capabilities.

All models support fine-tuning and knowledge distillation, allowing customers to tailor AI tools to their proprietary data for improved accuracy and performance. These models excel in supporting Retrieval Augmented Generation (RAG), which grounds outputs in specific organizational data to enhance reliability.


Wednesday, December 04, 2024

Orchestrator agents: Integration, human interaction, and enterprise knowledge at the core - Emilia David, Venture Beat

There is no doubt AI agents will continue to be a fast-growing rend in enterprise AI. But as more companies look to deploy agents, they’re also looking for a way to help them make sense of the many actions these autonomous or semi-autonomous, AI guided bots will take, and avoid conflicts. Enter the orchestrator: these type of agents function as managers of other, more specialized agents, understanding each one’s role and activating each based on the next steps needed to finish a task. Most orchestrator agents, sometimes called meta agents, monitor if an agent succeeded or failed and choose the following agent to trigger to get the desired outcome. 

Higher Education in 2025: AGI Agents to Displace People - Ray Schroeder, Inside Higher Ed

The new year may bring a host of virtual assistants and administrative staff to higher education. They will begin as assistants to humans, then over time they will evolve into autonomous AI staff members. Next-generation agents are in development and near release. As Bernard Marr reports in Forbes, “While previous iterations of AI focused on making predictions or generating content, we’re now witnessing the emergence of something far more sophisticated: AI agents that can independently perform complex tasks and make decisions. This third wave of AI, known as ‘agentic AI,’ represents a fundamental shift in how we think about and interact with artificial intelligence in the workplace.”

Tuesday, December 03, 2024

Google’s Gemini has a memory: Google’s Gemini can now remember what you tell it - Martin Crowley, AI Tool Report

Following OpenAI’s launch of ChatGPT’s memory feature in April, Google has revealed that its own chatbot, Gemini, can now “remember the things you care about” such as your life, work, aspirations, and personal preferences, and, as a result, delivers more tailored and relevant responses. For example, if you tell Gemini you’re a Vegan, and then ask it for restaurant recommendations, it will only display places that are suitable for Vegans. Only users on Google’s Google One AI Premium subscription plan have access to this memory feature, and it's currently only available in English. They can add their interests and preferences through the chatbot interface, or via the “saved info” page, where Google has provided some examples of what they might like Gemini to remember, like “Use simple language and avoid jargon” or “When planning a trip, always include the cost per day,” for example.

How Are Companies Really Using AI? A New Report Has Answers - Stefano Puntoni, Knowledge at Wharton

“Growing Up: Navigating Gen AI’s Early Years” is a survey of more than 800 senior business leaders in large organizations that reveals a seismic shift in their attitudes and applications of AI in just a short time. In 2023, the first year of the survey, only 37% reported using AI weekly. That number has risen to 72% in 2024. Negative perceptions, namely worry and skepticism, are softening as decision-makers explore how this evolving technology can help their firms become better. According to the survey, generative AI is being widely deployed across functions, even in departments such as marketing and human resources that were initially slower to adopt it. The highest use is in document and proposal writing and editing with 64%, followed closely by data analyses and analytics at 62%. Other high-use functions include customer service and support (58%), fraud detection and prevention (55%), and financial forecasting and planning (53%).

Monday, December 02, 2024

Is Algorithmic Management Too Controlling? - Lindsey Cameron, Knowledge at Wharton

In more and more workplaces, important decisions aren’t made by managers but by algorithms which have increasing levels of access to and control over workers. While algorithmic management can boost efficiency and flexibility (as well as enabling a new class of quasi-self-employed workers on platforms like Uber and Instacart), critics warn of heightened surveillance and reduced autonomy for workers. In a newly published paper, Wharton Prof. Lindsey Cameron examines how ride-hail drivers interact with the algorithmic management tools that make app-based work possible. In this interview, she shares insights from her research, along with tips for creating a more equitable future of work.

https://knowledge.wharton.upenn.edu/article/is-algorithmic-management-too-controlling/

Deep learning pipeline for accelerating virtual screening in drug discovery - Fatima Noor, et al; Nature Scientific Reports

In benchmarking, VirtuDockDL achieved 99% accuracy, an F1 score of 0.992, and an AUC of 0.99 on the HER2 dataset, surpassing DeepChem (89% accuracy) and AutoDock Vina (82% accuracy). These results underscore the tool’s capability to identify high-affinity inhibitors accurately across various targets, including the HER2 protein for cancer therapy, TEM-1 beta-lactamase for bacterial infections, and the CYP51 enzyme for fungal infections like Candidiasis. To sum up, VirtuDockDL combines user-friendly interface design with powerful computational capabilities to facilitate rapid, cost-effective drug discovery and development. The integration of AI in drug discovery could potentially transform the landscape of pharmaceutical research, providing faster responses to global health challenges. The VirtuDockDL is available at https://github.com/FatimaNoor74/VirtuDockDL .

Sunday, December 01, 2024

Google’s New AI Is Shockingly Good and Scary - AI Revolution

Google's new AI model, Gemini x114, is making waves in the AI field. It has achieved the top spot on the Chatbot Arena leaderboard, outperforming even OpenAI's GPT-4. This model excels in various tasks, including math, creative writing, and visual understanding. However, access to Gemini x114 is currently limited to developers and researchers through Google AI Studio. The model's success highlights the ongoing debate about how to measure AI progress, as its performance drops significantly when factors like response formatting are controlled. Beyond its technical capabilities, Gemini x114 has also sparked controversy due to instances of generating troubling and insensitive responses. (summary provided by Gen AI)

If AGI arrives during Trump’s next term, ‘none of the other stuff matters’ - Harry McCracken, Fast Company

I think it depends on the extent to which Donald Trump will listen to Elon Musk. On one hand, you have a lot of folks who are very anti-regulation trying to persuade Trump to repeal even Biden’s executive order, even though that was very weak sauce. And then on the other hand, you have Elon, who’s been pro AI regulation for over a decade and came out again for the California regulation, SB 10 47. This is all going to really come down to chemistry and then relative influence. In my opinion, this issue is the most important issue of all for the Trump administration, because I think AGI is likely to actually be built during the Trump administration. So during this administration, this is all going to get decided: whether we drive off that cliff or whether AI turns out to be the best thing that ever happened.

Saturday, November 30, 2024

What are AI guardrails? - McKinsey

AI guardrails help ensure that an organization’s AI tools, and their application in the business, reflect the organization’s standards, policies, and values.But just as guardrails on the highway don’t eliminate the risk of injuries or fatalities, AI guardrails don’t guarantee that AI systems will be completely safe, fair, compliant, and ethical. For the best results, companies can implement AI guardrails along with other procedural controls (for example, AI trust frameworks, monitoring and compliance software, testing and evaluation practices), as well as a proper AI operations technology stack, which scales the governance of AI across an organization.

Readers can’t accurately distinguish between AI and human essays, researchers find - Anya Geist, Yale Daily News

In a project organized by four researchers, including three from the School of Medicine, researchers tasked readers with blindly reviewing 34 essays, 22 of which were human-written and 12 which were generated by artificial intelligence. Typically, they rated the composition and structure of the AI-generated essays higher. However, if they believed an essay was AI-generated, they were less likely to rank it as one of the overall best essays. Ultimately, the readers only accurately distinguished between AI and human essays 50 percent of the time, raising questions about the role of AI in academia and education. 


Friday, November 29, 2024

The Impact of Using Artificial Intelligence Techniques in Improving The Quality of Educational Services /Case Study at The University of Baghdad (Provisionally accepted) - Namaa Fargan, et al; Frontiers in Education

The utilization of artificial intelligence techniques has garnered significant interest in recent research due to their pivotal role in enhancing the quality of educational offerings. This study investigated the impact of employing artificial intelligence techniques on improving the quality of educational services, as perceived by students enrolled in the College of Pharmacy at the University of Baghdad. The study sample comprised 379 male and female students. A descriptive-analytical approach was used, with a questionnaire as the primary tool for data collection. The findings indicated that the application of artificial intelligence methods was highly effective, and the educational services provided to students were of exceptional quality. The results also showed a strong correlation (correlation coefficient of 0.719) between the use of artificial intelligence techniques and the quality of educational services.

AI policy directions in the new Trump administration - John Villasenor and Joshua Turner, Brookings

The new administration will likely relax agency regulation, focus more on competition with China, and decrease AI-related antitrust enforcement, among other possibilities. Although predicting technological progress is difficult, the next four years will bring some unexpected developments in AI, and effectively stewarding this extraordinary technology will require a nimble and balanced set of federal policy responses.

Thursday, November 28, 2024

Scale Is All You Need? Part 4-3: The Post-AGI-World - Kim Isenberg, Forward Future

“If AGI is successfully created, this technology could help us elevate humanity by increasing abundance, turbocharging the global economy, and aiding in the discovery of new scientific knowledge that changes the limits of possibility.” -OpenAI
This could give rise to a new kind of culture, one that differs greatly from today's ideals and values, since many of our basic assumptions – such as the value of academic achievements and political decision-making – will be challenged by the capabilities and possibilities of AI. And last but not least, human hubris will erode as we realize that our own species may not be the most intelligent on Earth and that even an artificial intelligence far surpasses our own intelligence.

AI News: New FREE AI Challenges ChatGPT! -- Matt Wolfe, YouTube

OpenAI is rolling out advanced voice mode to the web for ChatGPT Plus users, and GPT-4 has been updated with improved creative writing abilities.  Anthropic's Claude now allows direct document uploads from Google Drive, and Google's Gemini has added a memory feature. YouTube is introducing automatic dubbing in multiple languages for creators. DeepSeek's new model, R1 light preview, rivals OpenAI's Q* in logical reasoning, while French company Mistral offers a free chatbot, Le Chat, with features comparable to paid versions of ChatGPT and Claude. Microsoft has partnered with Harper Collins to train AI models on books and launched its "recall" feature for Co-pilot PCs. Elon Musk predicts AGI by 2026, and Google DeepMind's AlphaCubit uses AI to improve Quantum Computing accuracy. (summary generated by GenAI)

https://www.youtube.com/watch?v=o3Bgl6Vjm6w

Wednesday, November 27, 2024

A.I. Chatbots Defeated Doctors at Diagnosing Illness - Gina Colata, NY Times

Dr. Adam Rodman, an expert in internal medicine at Beth Israel Deaconess Medical Center in Boston, confidently expected that chatbots built to use artificial intelligence would help doctors diagnose illnesses. He was wrong. Instead, in a study Dr. Rodman helped design, doctors who were given ChatGPT-4 along with conventional resources did only slightly better than doctors who did not have access to the bot. And, to the researchers’ surprise, ChatGPT alone outperformed the doctors. The chatbot, from the company OpenAI, scored an average of 90 percent when diagnosing a medical condition from a case report and explaining its reasoning. Doctors randomly assigned to use the chatbot got an average score of 76 percent. Those randomly assigned not to use it had an average score of 74 percent.

AI vs. Labor Unions: The Future of Work in the Age of Automation - Curt del Principe, HubSpot

Unlike knitting machines, which have a fairly niche use, there are few industries that AI can’t touch. Its ability to analyze data or create customized content at scale in mere seconds makes these tools unlike any tech revolution that’s come before. Suddenly, AI appears in everything from accounting to zookeeping. In fact, around 80% of US workers could have at least 10% of their tasks impacted by LLMs, according to a study by the University of Pennsylvania and OpenAI. (And it’s worth pointing out that LLMs are just one type of AI in the new pantheon driving the AI boom. That 80% figure isn’t even counting tasks that can be done by audio, video, or imaging tools.) The question is no longer, “Will AI impact my job?” but “How will AI impact my job?”

Tuesday, November 26, 2024

Agentic AI: 6 promising use cases for business - Grant Gross, CIO

AI agents will play a vital role in software programming and cybersecurity, but they will also change enterprise workflows and business intelligence, experts say. Agentic AI, which Forrester named a top emerging technology for 2025 in June, takes generative AI a step further by emphasizing operational decision-making rather than content generation. With AI agents popping up in so many situations and platforms, organizations interested in the technology may find it difficult to know where to start. A handful of use cases have so far risen to the top, according to AI experts.

AI is already taking jobs, research shows. Routine tasks are the first to go - Mark Sullivan, Fast Company

Researchers from Harvard Business School, the German Institute for Economic Research, and Imperial College London Business School studied 1,388,711 job posts on a major (but undisclosed) global freelance work marketplace from July 2021 to July 2023, and found that demand for such automation-prone jobs had fallen 21% just eight months after the release of ChatGPT in late 2022. Writing jobs were most affected, followed by software, app, and web development work, as well as engineering jobs. The large language models that underpin tools like ChatGPT are trained on large amounts of text to predict the most likely next word in a sequence. 

Monday, November 25, 2024

The Third Wave Of AI Is Here: Why Agentic AI Will Transform The Way We Work - Bernard Marr, Forbes

The chess pieces of artificial intelligence are being dramatically rearranged. While previous iterations of AI focused on making predictions or generating content, we're now witnessing the emergence of something far more sophisticated: AI agents that can independently perform complex tasks and make decisions. This third wave of AI, known as 'agentic AI,' represents a fundamental shift in how we think about and interact with artificial intelligence in the workplace.

Social Media Fact Sheet - Pew Research Center

Many Americans use social media to connect with one another, engage with news content, share information and entertain themselves. Explore the patterns and trends shaping the social media landscape. 
YouTube and Facebook are the most-widely used online platforms. Half of U.S. adults say they use Instagram, and smaller shares use sites or apps such as TikTok, LinkedIn, X (formerly Twitter) and Snapchat.

Sunday, November 24, 2024

The next massive upgrade to ChatGPT is coming in January - Andrew Tarantola, Digital Trends

OpenAI is set to launch a new AI agent in January, code-named Operator, that will enable ChatGPT to take action on the user’s behalf. You may never have to book your own flights ever again. AI agents differ significantly from traditional programs. Rather than follow a set of predefined instructions, agents can autonomously perceive their environment, process information, and make decisions to perform tasks or solve problems, such as generating complex computer code or booking travel arrangements. Anthropic recently released its Computer Control feature, which enables the model to manipulate a desktop environment in the same way human users would. Microsoft similarly unveiled its AI agent feature in late October. It’s designed to handle monotonous office work like managing employee records and drafting emails. Google is also working on AI agents of its own, code-named Jarvis, and could preview it by the end of the year.

What is retrieval-augmented generation (RAG)? - McKinsey

Retrieval-augmented generation, or RAG, is a process applied to LLMs to make their outputs more relevant in specific contexts. RAG allows LLMs to access and reference information outside the LLMs own training data, such as an organization’s specific knowledge base, before generating a response—and, crucially, with citations included. This capability enables LLMs to produce highly specific outputs without extensive fine-tuning or training, delivering some of the benefits of a custom LLM at considerably less expense.

Saturday, November 23, 2024

Sam Altman is Predicting AGI in 2025- Matt Wolfe, YouTube

Sam Altman, CEO of OpenAI, has hinted that Artificial General Intelligence (AGI) could be a reality in 2025.  In a recent interview, when asked what he was most excited about for 2025, Altman simply replied "AGI," suggesting that he believes AGI will be achieved by then.  However, the video host points out that there's no universally accepted definition of AGI, which could lead to disagreements on when it's actually achieved, even if OpenAI declares it has reached that milestone. (summary provided in part by GenAI)

https://www.youtube.com/watch?v=mnAPxbr3VZ0

OpenAI Introduces The "Operator Agent" - Andrew Black, AI Grid

OpenAI is reportedly developing a new AI agent codenamed "Operator" that can interact with computers and perform tasks on behalf of users, such as writing code or booking travel. This agent is planned for a limited release in January 2025 as a research preview, primarily targeting developers and researchers.  OpenAI is also working on several other agent-related projects, including a general-purpose tool that operates within a web browser, co-pilot agents that autonomously monitor events and execute tasks, and potentially voice agents based on internal demos. (summary provided in part by GenAI)

https://www.youtube.com/watch?v=ExyUcMVztrA

Friday, November 22, 2024

A Comprehensive Survey of Small Language Models in the Era of Large Language Models: Techniques, Enhancements, Fali Wang, et Al; ARXIV

Large language models (LLM) have demonstrated emergent abilities in text generation, question answering, and reasoning, facilitating various tasks and domains. Despite their proficiency in various tasks, LLMs like LaPM 540B and Llama-3.1 405B face limitations due to large parameter sizes and computational demands, often requiring cloud API use which raises privacy concerns, limits real-time applications on edge devices, and increases fine-tuning costs. Additionally, LLMs often underperform in specialized domains such as healthcare and law due to insufficient domain-specific knowledge, necessitating specialized models. Therefore, Small Language Models (SLMs) are increasingly favored for their low inference latency, cost-effectiveness, efficient development, and easy customization and adaptability. , Collaboration with LLMs, and Trustworthiness.

AI-generated poetry is indistinguishable from human-written poetry and is rated more favorably - Brian Porter & Edouard Machery, Nature Scientific Reports

As AI-generated text continues to evolve, distinguishing it from human-authored content has become increasingly difficult. This study examined whether non-expert readers could reliably differentiate between AI-generated poems and those written by well-known human poets. We conducted two experiments with non-expert poetry readers and found that participants performed below chance levels in identifying AI-generated poems (46.6% accuracy, χ2(1, N = 16,340) = 75.13, p < 0.0001). Notably, participants were more likely to judge AI-generated poems as human-authored than actual human-authored poems (χ2(2, N = 16,340) = 247.04, p < 0.0001). We found that AI-generated poems were rated more favorably in qualities such as rhythm and beauty, and that this contributed to their mistaken identification as human-authored. 

Thursday, November 21, 2024

Winds of Change in Higher Ed to Become a Hurricane in 2025 - Ray Schroeder, Inside Higher Ed

 A number of factors are converging to create a huge storm. Generative AI advances, massive federal policy shifts, broad societal and economic changes, and the demographic cliff combine to create uncertainty today and change tomorrow. The confluence of all of these disruptions in 2025 predict a challenging year ahead for higher education. Has your institution prepared for the fallout from these developments? Who is coordinating the response to these disparate trends? Are you following the trends and considering the implications for your career as well as for your department, college and university?

OpenAI’s secret AI agent revealed! - Martin Crowley, AI Tool Report

OpenAI reportedly told staff that they will release an AI agent—codenamed Operator—which can take over a computer and perform tasks, autonomously, to developers via the developer API, as early as January. This comes after OpenAI denied reports that it would be launching its newest AI model, Orion, in December, suggesting they’re pivoting away from chatbots that just process text and images, towards agents that can engage with computers autonomously. Something which CEO, Sam Altman, appeared to confirm during a recent AMA on Reddit, where he said “I think the thing that will feel like the next giant breakthrough will be agents.” This also comes after Chief Product Officer, Kevin Weil confirmed, at OpenAI’s dev day last month, that “2025 is going to be the year that agentic systems finally hit the mainstream.”

Wednesday, November 20, 2024

Google’s AI ‘learning companion’ takes chatbot answers a step further - Umar Shakir, the Verge

We tested Learn About and Google Gemini with a simple prompt: “How big is the universe?” Both answered that “the observable universe” is “about 93 billion light-years in diameter.” However, while Gemini opted to show a Wikipedia-provided diagram of the universe and a two-paragraph summary with links to sources, Learn About emphasized an image from the educational site Physics Forums and added related content that was similarly focused more on learning than simply offering facts and definitions.

OpenAI CPO Reveals: ChatGPT Turns $8,000 Legal Work into $3 (The Future is HERE!) - Julia McCoy, YouTube

OpenAI's Chief Product Officer, Kevin Wheel, recently made a significant statement at the Ray Summit 2024, revealing that their AI model, ChatGPT, can complete legal work that previously cost $8,000 for a mere $3 in API tokens. This highlights the potential of AI to drastically reduce costs and increase efficiency in various professional fields.  Wheel emphasized OpenAI's commitment to making AI more accessible and affordable, driving innovation and problem-solving on a larger scale. This advancement raises important questions about the future of work and the need for adaptation in the face of rapidly evolving AI capabilities. The video emphasizes that while AI like ChatGPT is not yet ready for fully autonomous deployment, it is rapidly advancing. (summary assisted with Gemini AI)

Tuesday, November 19, 2024

$1M Robot-Painted Portrait Sparks a New Era in AI Art - Douglas, the AI Newsroom

In a groundbreaking moment for both AI and art, Ai-Da, the world’s first ultra-realistic robot artist, just made history. Ai-Da’s portrait of Alan Turing sold for an astounding $1.08 million at Sotheby’s, exceeding all pre-sale estimates and marking the first-ever sale of a humanoid robot’s artwork at a major auction. This sale not only celebrates a new level of AI creativity but also raises big questions about art, ethics, and technology. Let’s dive into what this historic sale means and where the journey of AI-powered art may take us next.

Can AI Agents Rescue Higher Ed From Financial Collapse? - Vinay Bhaskara, Forbes

Higher education is facing an existential crisis. With one small college closing every week and tuition costs having risen 141% at public institutions over the last two decades, American universities may be heading toward a bleak future. But a new wave of artificial intelligence technology may offer a path forward. AI agents — sophisticated software that can handle complex interactions and workflows — are emerging as a potential solution to higher education's operational challenges. Unlike generic chatbots or simple automation tools, these purpose-built AI agents can manage intricate processes, engage in natural conversations, and seamlessly integrate with existing university systems.

https://www.forbes.com/sites/vinaybhaskara/2024/10/30/can-ai-agents-rescue-higher-ed-from-financial-collapse/

Monday, November 18, 2024

AI That Can Invent AI Is Coming. Buckle Up. - Forbes

Leopold Aschenbrenner’s “Situational Awareness” manifesto made waves when it was published this summer. In this provocative essay, Aschenbrenner—a 22-year-old wunderkind and former OpenAI researcher—argues that artificial general intelligence will be here by 2027, that artificial intelligence will consume 20% of all U.S. electricity by 2029, and that AI will unleash untold powers of destruction that within years will reshape the world geopolitical order. Aschenbrenner’s startling thesis about exponentially accelerating AI progress rests on one core premise: that AI will soon become powerful enough to carry out AI research itself, leading to recursive self-improvement and runaway superintelligence.

https://www.forbes.com/sites/robtoews/2024/11/03/ai-that-can-invent-ai-is-coming-buckle-up/

How To Build The Future: Sam Altman Predicts AGI in 2025 - Y Combinator

Altman just predicted that artificial general intelligence will be achieved in 2025, coming alongside conflicting reports of slowing progress in LLM development and scaling across the industry. It’s fair to say that few people in tech are positioned to have a bigger impact on the future than Sam Altman. As the CEO of OpenAI, Altman and his team have overseen monumental leaps forward in machine learning, generative AI and most recently LLMs that can reason at PhD levels. And this is just the beginning. In his latest essay Altman predicted that ASI (Artificial Super Intelligence) is just a few thousand days away. So how did we get to this point? In this episode of our rebooted series "How To Build The Future," YC President and CEO Garry Tan sits down with Altman to talk about the origins of OpenAI, what’s next for the company, and what advice he has for founders navigating this massive platform shift.

Sunday, November 17, 2024

Microsoft joins multi-AI agent fray with Magentic-One - Anirban Ghoshal, CIO

Magentic-One is a rival to multi-agent frameworks such as Salesforce’s Agentforce or IBM’s Bee Agent Framework for enterprises wanting to let AI complete complex tasks that are currently handled by humans. Microsoft wants enterprises to believe that its Magentic-One multi-AI agent system will enable them to automate complex tasks that previously required human intervention. One of a number of Agentic AI offerings to arrive on the market in recent months, Magentic-One is built on Microsoft’s previously released AutoGen open-source agent development framework.

Your Next Job Interview Might Be With an AI Recruiter - Martina Bretous, HubSpot

You’re tweaking your resume a hundred times. You’ve told people “a little about yourself” more times than you’d like to. And you’ve probably gotten a few “We’ve decided to move forward to another candidate” emails. It’s the name of the game. But here’s another curveball you probably weren’t expecting: Going through an interview process with an AI-generated recruiter on the other end. Tools like Apriora and Micro1 are making it happen, with the promise of better candidates and a streamlined recruiting process.

Saturday, November 16, 2024

Magentic-One: A Generalist Multi-Agent System for Solving Complex Tasks - Adam Fourney, et al; Microsoft

The future of AI is agentic. AI systems are evolving from having conversations to getting things done—this is where we expect much of AI’s value to shine. It’s the difference between generative AI recommending dinner options to agentic assistants that can autonomously place your order and arrange delivery. It’s the shift from summarizing research papers to actively searching for and organizing relevant studies in a comprehensive literature review. Modern AI agents, capable of perceiving, reasoning, and acting on our behalf, are demonstrating remarkable performance in areas such as software engineering, data analysis, scientific research, and web navigation. 

5 Bold Predictions for AI in 2025 and how we think AI will continue to transform industries - Cypher Learning

The pace of innovation is rapidly accelerating, and AI is poised to redefine how we work, learn, and connect with technology in surprising ways. From empowering new roles and fostering inclusivity, AI is on the brink of reshaping entire industries. Last year, we shared our predictions for AI in 2024 and saw them come to fruition. As the year comes to a close, we wanted to turn the page and once again share our predictions for how AI will continue to evolve in 2025.  Among the predictions is #5. Personalized workplace development displaces old-school training.

Friday, November 15, 2024

The economic potential of generative AI: The next productivity frontier - McKinsey

AI has permeated our lives incrementally, through everything from the tech powering our smartphones to autonomous-driving features on cars to the tools retailers use to surprise and delight consumers. The latest generative AI applications can perform a range of routine tasks, such as the reorganization and classification of data. But it is their ability to write text, compose music, and create digital art that has garnered headlines and persuaded consumers and households to experiment on their own. As a result, a broader set of stakeholders are grappling with generative AI’s impact on business and society but without much context to help them make sense of it.

Navigating the challenges of AI in education - Higher Education Press

The advent of advanced AI models like ChatGPT has precipitated a transformative shift in knowledge acquisition and exploration, posing significant challenges to traditional educational concepts and methodologies. An article in Frontiers of Digital Education explores the multifaceted impact of AI on education and proposes strategies to navigate these challenges effectively. The work is titled "Educational Concepts and Methodologies in the AI Era: Challenges and Responses." The traditional emphasis on exam scores and knowledge dissemination is increasingly at odds with the demands of the AI era. The ability of AI to automate routine tasks and generate vast amounts of information necessitates a shift in focus towards the cultivation of skills that are uniquely human, such as critical thinking, creativity, and problem-solving. Failure to adapt to this changing landscape risks rendering our educational systems obsolete, failing to equip students with the tools they need to thrive in a rapidly evolving world.

Thursday, November 14, 2024

Why AI could eat quantum computing’s lunch - Ed Gent, MIT Technology Review

Rapid advances in applying artificial intelligence to simulations in physics and chemistry have some people questioning whether we will even need quantum computers at all. Tech companies have been funneling billions of dollars into quantum computers for years. The hope is that they’ll be a game changer for fields as diverse as finance, drug discovery, and logistics. But while the field struggles with the realities of tricky quantum hardware, another challenger is making headway in some of these most promising use cases. AI is now being applied to fundamental physics, chemistry, and materials science in a way that suggests quantum computing’s purported home turf might not be so safe after all.

#BuildwithAI Hackathon 2024 - GenAI.works

BuildwithAI 2024 invites innovators, developers, and AI enthusiasts from all backgrounds to unleash their creativity using cutting-edge technologies! Don't miss this opportunity to turn your ideas into reality and make your mark in the rapidly evolving world of artificial intelligence. Whether you're a seasoned pro or just starting your AI journey, this is your chance to showcase your skills, collaborate with like-minded individuals, and build the future of AI. Register now and let's build the future together!