Monday, May 20, 2024

OpenAI CEO Sam Altman says the AI revolution should be regulated like airlines by an "international agency" to avert global harm to humanity - Kevin Okemwa, Windows Central

OpenAI CEO Sam Altman recently appeared on the All-In podcast, where he discussed the development of GPT-5, AI regulation, a potential AI-powered iPhone competitor, and the OpenAI fiasco that led to his firing and reinstatement as CEO. Altman calls the iPhone the greatest piece of technology humanity has ever made and says competing with it would be a considerable undertaking. He argues that AI regulation will be crucial in the not-so-distant future, when powerful AI systems could pose a significant threat to humanity, and that AI should be regulated the way airplanes are: by an international agency that ensures the safety testing of these advances.

Navigating the future of work: A case for a robot tax in the age of AI - Michael J. Ahn, Brookings

Artificial intelligence and robotics promise unprecedented efficiency while creating a risk of job loss for human workers. A targeted tax on companies that deploy AI and robotics capable of autonomous decision-making could provide economic support for displaced workers, as well as an incentive for companies to weigh automation decisions more deliberately, particularly when the benefits are marginal. Implementing such a tax may require extending legal personhood to robots, not to grant robots human rights but to create a structured basis for interactions between robots, individuals, and the state.

Sunday, May 19, 2024

Google 3D calls coming in 2025 - Martin Crowley, AI Tool Report

Google originally introduced the concept of 3D video conferencing calls via a project codenamed ‘Project Starline’ at its 2021 Google I/O conference. Now, three years later, it has announced that the technology will be available for consumer use as early as next year. Project Starline uses 3D imaging, AI, and special cameras and screens to create 3D images of remote people on video calls, so it looks and feels like they’re in the room with you, having a natural face-to-face conversation. Together with HP, known for its computing expertise, Google has created this video calling technology so that users can talk, gesture, and maintain eye contact as if they were physically present, sitting opposite you. After months of testing and private technical reviews, Google says the immersive 3D experience reduces video meeting fatigue by 31%, resulting in a 12% faster reaction time on general day-to-day tasks.

Google takes on GPT-4o with Project Astra, an AI agent that understands dynamics of the world - Shubham Sharma, Venture Beat

Today, at its annual I/O developer conference in Mountain View, Google made a ton of announcements focused on AI, including Project Astra – an effort to build a universal AI agent of the future. An early version was demoed at the conference; the idea, however, is to build a multimodal AI assistant that sits alongside the user as a helper, sees and understands the dynamics of the world, and responds in real time to help with routine tasks and questions. The premise is similar to what OpenAI showcased yesterday with GPT-4o-powered ChatGPT. That said, as GPT-4o begins to roll out over the coming weeks for ChatGPT Plus subscribers, Google appears to be moving a tad slower.

Saturday, May 18, 2024

2024 EDUCAUSE Horizon Report | Teaching and Learning Edition - EDUCAUSE

This report profiles the trends and key technologies and practices shaping the future of teaching and learning, and envisions a number of scenarios for that future. It is based on the perspectives and expertise of a global panel of leaders from across the higher education landscape.

Element451 Introduces Gen AI Assistants for Higher Education - Kate Lucariello, Campus Technology

The two generative AI tools provide personalized help to students and staff, and fast, accurate, and timely information, the company said. Developed specifically for higher education, Bolt AI Assistants are personalized "team members" that give "tailored responses to specific objectives" and are "trained to handle specific roles and tasks," the company said in a release. Currently available AI Assistants include: Academic advisors; Career coaches; Financial aid advisors; Admissions advisors; Campus life advisors; Support peers; Program advisors; Design assistants; Copywriters; Campaign strategists; Events managers; Marketing advisors; and Data analysts.

Friday, May 17, 2024

Superhuman? What does it mean for AI to be better than a human? And how can we tell? - Ethan Mollick, One Useful Thing

No matter what happens next, today, as anyone who uses AI knows, we do not have an AI that does every task better than a human, or even most tasks. But that doesn’t mean that AI hasn’t achieved superhuman levels of performance in some surprisingly complex jobs, at least if we define superhuman as better than most humans, or even most experts. What makes these areas of superhuman performance interesting is that they often involve very “human” tasks that seem to require empathy and judgement. For example: if you debate with an AI, it is 87% more likely to persuade you to its assigned viewpoint than an average human debater would be. GPT-4 helps people reappraise a difficult emotional situation better than 85% of humans, beating human advice-givers on the effectiveness, novelty, and empathy of their reappraisals.

https://www.oneusefulthing.org/p/superhuman

TikTok Starts Automatically Labeling AI-Produced Synthetic Media - Eric Hal Schwartz, Voicebot.ai

TikTok has begun rolling out a new feature to recognize and label content created with generative AI automatically. This Content Credentials tool comes out of TikTok’s new partnership with the Coalition for Content Provenance and Authenticity (C2PA) and is part of a larger strategy by the social media video platform to reduce misinformation and boost transparency around what it refers to as AI-generated content (AIGC). The new system kicks in when people upload a video or image to TikTok, detecting synthetic content and applying a label that identifies it as such. The tool currently reviews images and videos, but it is expected to be extended to audio content soon.
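Under the hood, Content Credentials attach a signed C2PA manifest to the media file; a platform can then check that manifest for an assertion that the asset was machine-generated and label it accordingly. The sketch below illustrates that decision in Python under stated assumptions: read_c2pa_manifest() is a hypothetical stand-in for a real C2PA SDK call (stubbed so the sketch runs end to end), the manifest layout is simplified, and the IPTC "trainedAlgorithmicMedia" digital source type is the value commonly used to mark AI-generated media. This is not TikTok's actual implementation.

# Simplified sketch of a "label AI-generated uploads" check in the spirit
# of TikTok's Content Credentials integration. Not TikTok's actual code.

AI_SOURCE_TYPE = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def read_c2pa_manifest(path: str) -> dict | None:
    """Hypothetical stand-in for a real C2PA SDK call.

    Returns a tiny, simplified example manifest so the sketch runs;
    a real implementation would parse and verify the embedded,
    cryptographically signed manifest from the file at `path`.
    """
    return {
        "assertions": [
            {"label": "c2pa.actions",
             "data": {"digitalSourceType": AI_SOURCE_TYPE}},
        ]
    }

def should_label_as_aigc(path: str) -> bool:
    """Return True if the upload carries an 'AI-generated' assertion."""
    manifest = read_c2pa_manifest(path)
    if not manifest:
        return False  # no Content Credentials attached; nothing to label
    return any(
        assertion.get("data", {}).get("digitalSourceType") == AI_SOURCE_TYPE
        for assertion in manifest.get("assertions", [])
    )

if should_label_as_aigc("upload.mp4"):
    print("Attach the 'AI-generated' label to this upload")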

Thursday, May 16, 2024

Professors Worry About ‘Digital Surveillance’ of Their Work - Jack Grove, Inside Higher Ed

More than eight in 10 professors say universities’ excessive use of digital technologies is harming academic freedom, according to a survey of academics in the United Kingdom. The poll of more than 2,000 scholars conducted for the University and College Union (UCU), which represents 120,000 faculty and staff members in the U.K., highlights growing unease over the digital tools commonly used in academe, such as the virtual learning environments used to facilitate teaching, electronic systems to evaluate teaching performance and metrics-based systems such as SciVal that enable managers to scrutinize research publications and citations.

AI and the future of creativity: Takeaways from Music Matters 2024 - Krystal Coe, MusicTech

Thursday’s programme began with a discussion on the exponential growth of Generative AI across creative industries and the massive ecosystem that’s been built on the technology. “Once in a while in history, you see technology that will potentially change lives. I think we are at that stage right now,” said Kevin Chan, chief partner officer of Microsoft Singapore. He explained that the magic of Generative AI lies in the way it allows creators — both big and small — to “do much more with less”. For musicians, this means freeing up precious time to focus on what matters to them: making music. Asked about the impact of AI on the creator economy, Harari surprised the audience, saying that the technology is not going to change things drastically: “I don’t think it’s going to be as disruptive as people think it’s going to be,” he said. The executive also expressed scepticism towards fully AI-generated avatars because “you still need the sprinkle of human touch”.

Wednesday, May 15, 2024

Colleges Adapt to Non-traditional Realities - Joseph Bednar, Business West

At the recent ceremony that officially installed him as chancellor of UMass Amherst, Javier Reyes noted that attitudes about higher education are changing, while rapid advancements in technology, with artificial intelligence at the center, are forcing colleges and universities to find new ways to meet their obligations. “How does higher education respond to these challenges?” he asked. “How do we meet the needs of today’s students — students who are increasingly mobile and more agile? How do we meet the needs of a changing society? How do we remain nimble and adapt so that our students are prepared to be active and engaged members of their communities today, tomorrow, and for decades to come?” That’s a lot to unpack, but UMass will focus on six key areas, Reyes explained: education, research and creative activity, translation and knowledge transfer, engagement, inclusivity and wellness, and financial and operational viability.

Change in higher education — we must quickly adapt to changing, unequal digital environment - Letlhokwa George Mpedi, Daily Maverick

The question for leadership in higher education is: how do we harness these changes to propel us in a direction where we are positioned to provide access, maintain and accelerate knowledge production for the benefit of society, meaningfully contribute to the public and social good, and keep abreast of a shifting terrain that constantly poses challenges? There are, of course, unique considerations in South Africa. We are 30 years into democracy this year, and South African higher education has shifted from an elite, race-based system to one that can be termed “massified”. Yet the shadows and vestiges of apartheid persist as the buoyancy of the economy has faltered. What do the sages say about the future of higher education? On the radar currently, and for the foreseeable future, are sweeping changes in technology and the advent of artificial intelligence (AI). These technologies have arrived like a tsunami and can and must change traditional teaching, learning and research methods.

Tuesday, May 14, 2024

What OpenAI did: A new model opens up new possibilities - Ethan Mollick, One Useful Thing

OpenAI released a new AI model, GPT-4o, with some interesting capabilities. It also maintains the OpenAI tradition of terrible names for AI models (the “o” means “omni” - more on that shortly). Previously, new AI models from the major AI labs have focused on how smart the model is. GPT-4o appears to be a step up over GPT-4 and is the smartest model I have used. However, it does not represent a major leap over the previous version of GPT-4, the way that GPT-4 was a 10x improvement over the free GPT-3.5. That has to wait, presumably, until GPT-5, which is apparently still scheduled for some future release. But what it does do is quite interesting.

THE podcast: the future of XR and immersive learning - Monica Arés, Times Higher Education

Imagine a learning environment where an AI professor fields infinite student questions, where business students practise difficult conversations with an avatar that models an array of personas and reactions, and where automated feedback is not static but dynamic and individualised. Artificial intelligence and XR tools are changing education and preparing students to live and work in an unpredictable world. In this conversation, immersive technology expert Monica Arés tells us about the evolution of edtech from the early days of virtual reality, the potential of combining artificial intelligence and extended reality to unlock curiosity and learning (and the costs that come with these tools), and what she thinks teaching technology will look like in 2034. Hint: it’s a personalised, creative world with fewer screens.

Hello GPT-4o - OpenAI

GPT-4o (“o” for “omni”) is a step towards much more natural human-computer interaction—it accepts as input any combination of text, audio, and image and generates any combination of text, audio, and image outputs. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on text in English and code, with significant improvement on text in non-English languages, while also being much faster and 50% cheaper in the API. GPT-4o is especially better at vision and audio understanding compared to existing models.
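For developers, the practical change is that a single model endpoint now handles mixed text-and-image prompts. Below is a minimal sketch using OpenAI's Python SDK; the model name "gpt-4o" comes from the announcement, while the prompt text and image URL are illustrative placeholders. Audio input and output were not yet generally exposed through the API at launch, so the example sticks to text and vision.

# Minimal sketch: a mixed text + image request to GPT-4o via the OpenAI
# Python SDK. The prompt and image URL are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe what is happening in this image."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)

print(response.choices[0].message.content)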