From www.tomsguide.com
OpenAI has released an update to its advanced GPT-4-Turbo artificial intelligence model, bringing with it “majorly improved” responses and analysis capabilities.
Initially the model, which includes AI vision technology for analyzing and understanding content from video, images and audio, is only available to developers, but OpenAI said these features will come to ChatGPT soon.
This is the first time GPT-4-Turbo with vision technology has been made available to third-party developers, which could result in some compelling new apps and services around fashion, coding and even gaming.
The new model also brings the knowledge cut-off date forward to December 2023, the point at which training on the AI finished. Previously the cut-off was April 2023.
What is GPT-4-Turbo?
"GPT-4 Turbo with Vision is now generally available in the API. Vision requests can now also use JSON mode and function calling. https://t.co/cbvJjij3uL Below are some great ways developers are building with vision. Drop yours in a reply 🧵" — April 9, 2024
Most of the focus for GPT-4-Turbo is on improving life for developers accessing the OpenAI model through an API call. The company says the new update will streamline workflows and enable more efficient apps: previously, separate models were needed for image and text, whereas the updated model handles both in a single request.
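As a rough illustration of what that single-request workflow looks like, the sketch below assembles a chat-completions payload that mixes text and an image and enables the newly supported JSON mode. The model name, prompt and image URL are illustrative assumptions, and the actual API call (which needs the openai package and a key) is left commented out.

```python
# Hypothetical sketch of a GPT-4 Turbo with Vision request.
# Model identifier, prompt and image URL are assumptions for illustration.

def build_vision_request(prompt: str, image_url: str) -> dict:
    """Assemble a chat-completions payload combining text and an image,
    with JSON mode enabled (now supported for vision requests)."""
    return {
        "model": "gpt-4-turbo",  # assumed model identifier
        "response_format": {"type": "json_object"},  # JSON mode
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_vision_request(
    "Describe this outfit as JSON with keys 'items' and 'style'.",
    "https://example.com/outfit.jpg",  # placeholder image
)
# response = client.chat.completions.create(**payload)  # requires an API key
```

Because the text and the image travel in one `messages` entry, a developer no longer has to route the image to one model and the text to another.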
In the future, the model and its vision analysis capabilities will be expanded upon and added to consumer apps like ChatGPT, making its understanding of image and video more efficient.
This is something Google has started to roll out with Gemini Pro 1.5, although for now, like OpenAI, the search giant has restricted it to platforms used by developers rather than consumers.
One of the most high profile applications is the viral coding agent Devin from Cognition Labs which is able to craft complex applications from a prompt.
What can you do with GPT-4-Turbo?
GPT-4 hasn’t performed particularly well in recent benchmark tests against newer models, including Claude 3 Opus and Google’s Gemini. Some smaller models are also outperforming it on specific tasks.
The updates should change that, or at least add new compelling features for enterprise customers until GPT-5 comes out.
The update retains the 128,000-token context window, equivalent to about a 300-page book. That's not the largest on the market, but it's enough for most use cases.
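The "300-page book" comparison can be sanity-checked with common rules of thumb (these ratios are approximations, not OpenAI figures): roughly 0.75 English words per token and about 300 words per printed page.

```python
# Back-of-the-envelope check of the "300-page book" comparison.
# Both ratios below are rough rules of thumb, not official figures.

context_tokens = 128_000
words = context_tokens * 0.75   # ~0.75 English words per token -> 96,000 words
pages = words / 300             # ~300 words per printed page  -> 320 pages

print(f"{context_tokens:,} tokens ≈ {words:,.0f} words ≈ {pages:.0f} pages")
```

Under these assumptions the window comes out at roughly 320 pages, in line with the article's ballpark figure.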
To date OpenAI has focused on audio analysis and understanding in addition to text and images inside ChatGPT. The new update brings video to more people. When this comes to ChatGPT, users may be able to upload short video clips and have the AI summarize the content or pick out key moments.