Thu 16 May 2024:
OpenAI on Monday launched a new AI model and desktop version of ChatGPT, along with an updated user interface, the company’s latest effort to expand use of its popular chatbot.
The update brings GPT-4 to everyone, including OpenAI’s free users, technology chief Mira Murati said in a livestreamed event. She added that the new model, GPT-4o, is “much faster,” with improved capabilities across text, vision and audio. OpenAI said it eventually plans to allow users to video chat with ChatGPT.
The “o” in GPT-4o stands for “omni.” The new model lets ChatGPT handle 50 different languages with improved speed and quality, and it will also be available via OpenAI’s API, making it possible for developers to begin building applications with the new model today, Murati said.
She added that GPT-4o is twice as fast as, and half the cost of, GPT-4 Turbo.
GPT-4o is set to be free for all users, and paid users will continue to “have up to five times the capacity limits” of free users, Murati added.
In a blog post, OpenAI says GPT-4o’s capabilities “will be rolled out iteratively (with extended red team access starting today),” with its text and image capabilities beginning to appear in ChatGPT today.
Moreover, OpenAI CEO Sam Altman posted that the model is “natively multimodal,” meaning it can generate content and understand commands in voice, text, or images.
“Developers who want to tinker with GPT-4o will have access to the API, which is half the price and twice as fast as GPT-4-turbo,” Altman added on X.
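For developers, a call to the new model through OpenAI’s Python SDK would look roughly like the following. This is a minimal sketch, not code from the announcement: it assumes the model identifier is "gpt-4o", that the `openai` package is installed, and that an API key is available in the `OPENAI_API_KEY` environment variable.

```python
# Minimal sketch of calling GPT-4o via the OpenAI Python SDK.
# Assumes the OPENAI_API_KEY environment variable is set and that
# "gpt-4o" is the model identifier exposed through the API.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # the new model named in the announcement
    messages=[
        {"role": "user", "content": "Say hello in three languages."},
    ],
)

# The model's reply is in the first choice of the response.
print(response.choices[0].message.content)
```

The same endpoint accepts the older GPT-4 Turbo identifier, so switching an existing application to GPT-4o is, per Altman’s pricing claim, largely a matter of changing the `model` parameter.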
The features bring speech and video to all users, free or paid, and will be rolled out over the next few weeks. The key point is just how much of a difference using voice and video to interact with GPT-4o makes.
The changes, OpenAI told viewers on the livestream, are aimed at “reducing the friction” between “humans and machines”, and “bringing AI to everyone”.
OpenAI researcher Mark Chen said the model is able to “perceive your emotion,” adding the model can also handle users interrupting it. The team also asked it to analyze a user’s facial expression to comment on the emotions the person may be experiencing.
Race to add AI-powered chatbots
OpenAI, Microsoft and Google are at the helm of a generative AI gold rush as companies in seemingly every industry race to add AI-powered chatbots and agents to key services to avoid being left behind by competitors. Earlier this month, OpenAI rival Anthropic announced its first-ever enterprise offering and a free iPhone app.
A record $29.1 billion was invested across nearly 700 generative AI deals in 2023, an increase of more than 260% from the prior year, according to PitchBook. The market is predicted to top $1 trillion in revenue within a decade.
Some in the industry have raised concerns about the speed at which untested new services are coming to market, and academics and ethicists are distressed about the technology’s tendency to propagate bias.
SOURCE: INDEPENDENT PRESS AND NEWS AGENCIES