GPT-4o has been introduced for ChatGPT, and it will be free!
OpenAI officially introduced ChatGPT's new model, GPT-4o, at its recent Spring Update event! The most significant announcement was that the new model will be made available not only to premium subscribers but to everyone, free of charge.
What can GPT-4o do?
The innovations in GPT-4o will be rolled out to users over the coming weeks. Additionally, the requirement to sign up for ChatGPT is being eliminated.
Moreover, free users will also gain access to the data, code, and image tools, including the ability to analyze images, without paying. This raises the question of whether a $20-per-month ChatGPT subscription is still worth it.
OpenAI’s CTO, Mira Murati, notes that the biggest benefit for paying users will be the ability to send five times as many requests to GPT-4o per day as the free plan allows.
ChatGPT is used regularly by over 100 million people, and GPT-4o will be significantly more efficient than previous GPT-4 versions. Let’s dive in.
GPT-4o can reason across audio, text, and images
In this new version, you’ll be able to have real-time conversations with ChatGPT, much like Siri, and handle nearly any task. The model looks far superior to Siri, however, because it can not only respond to your voice but also interpret the images coming from your camera.
One of GPT-4o’s biggest innovations is its live conversation feature. The model works voice to voice: instead of first converting speech to text, it listens to the audio directly.
In a demo of this feature, when an OpenAI staff member breathed heavily into the microphone, the assistant offered advice on improving breathing technique, even quipping, “you are not a vacuum cleaner, calm down.”
The model is responsive enough that you don’t have to wait for it to finish a sentence; you can interrupt it in real time. It can also perceive emotional states.
You can change ChatGPT’s voice however you like
In another demo, part of the ChatGPT Voice update, the assistant’s voice was shown to be not only natural but also dramatic and emotional. The model can produce different voices, varying its tone and even singing.
GPT-4o can interpret your camera’s live feed in real time
One of the new ChatGPT’s features is its live vision capability, essentially the ability to “see” through your phone’s camera. During the demo, the team showed ChatGPT an equation they had just written and asked the AI to help solve it. The model not only provided answers but also offered step-by-step guidance.
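For developers, the same kind of image-plus-question request can be made against OpenAI's public Chat Completions API. The sketch below only builds the request payload rather than sending it; the base64 data-URL format for image input follows OpenAI's API documentation, and the question text is just an illustrative stand-in for the equation demo.

```python
# Minimal sketch: building a GPT-4o vision request payload (not sending it).
# The data-URL image format follows OpenAI's Chat Completions API docs.
import base64

def build_vision_request(image_bytes: bytes, question: str) -> dict:
    """Build a Chat Completions payload with one image and one text prompt."""
    data_url = "data:image/jpeg;base64," + base64.b64encode(image_bytes).decode()
    return {
        "model": "gpt-4o",
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
    }

# With the official client, the payload would be sent roughly like this:
#   from openai import OpenAI
#   client = OpenAI()  # reads OPENAI_API_KEY from the environment
#   reply = client.chat.completions.create(
#       **build_vision_request(photo_bytes, "Help me solve this equation step by step."))
payload = build_vision_request(b"\xff\xd8fake-jpeg", "Help me solve this equation step by step.")
print(payload["model"])  # gpt-4o
```

This is the text-and-image path of the API; the live camera interaction shown in the demo used the app itself, not this endpoint.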
A GPT-4o desktop application is coming
Running on a Mac with a strikingly natural voice, GPT-4o can now view and analyze written code, explain what it sees, and point out potential issues. During the demo, the AI examined a graph and provided real-time feedback and information.
It can also perform real-time translation
During another demo, the OpenAI team showcased ChatGPT Voice as a live translation tool: it took Mira Murati’s Italian, translated it into English, and then translated the English responses back into Italian.
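The text side of this demo is easy to approximate with a system prompt that tells the model to interpret between the two languages. The sketch below only constructs the request; the instruction wording is an assumption for illustration, not OpenAI's actual demo prompt.

```python
# Minimal sketch: a bidirectional Italian/English translation request for
# GPT-4o. The system prompt wording is an assumption, not OpenAI's demo prompt.

SYSTEM_PROMPT = (
    "You are a live interpreter. When you receive Italian, reply with the "
    "English translation; when you receive English, reply with the Italian "
    "translation. Output only the translation."
)

def build_translation_request(utterance: str) -> dict:
    """Build a Chat Completions payload for one utterance to be translated."""
    return {
        "model": "gpt-4o",
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": utterance},
        ],
    }

req = build_translation_request("Ciao, come stai?")
print(req["messages"][0]["role"])  # system
```

In the live demo the translation happened voice to voice inside the app; this text-based version simply shows the prompting pattern behind it.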
The AI can read your emotions from the facial expressions you show the camera
In a demo of this, when shown a smiling face, the AI recognized the person’s mood and asked, “Would you like to share the reason for your good energy?”
While no firm release date has been given for GPT-4o, it’s said to arrive within the next few weeks. What are your thoughts? Don’t forget to share them with us in the comments section below.