OpenAI's new GPT-4 is smarter than ever! Here is everything it can do
ChatGPT has entered our lives, and much has started to change. OpenAI's new GPT-4 can comprehend data in both text and images.
Following Google's introduction of Workspace AI, the most recent version of OpenAI's generative pre-trained transformer system, GPT-4, was released on Tuesday. The new system brings brand-new features alongside new additions. Unlike the current-generation GPT-3.5, the new and improved GPT-4 can also generate text from image inputs. The OpenAI team wrote on Tuesday that while it "exhibits human-level performance on various professional and academic benchmarks," it "is less capable than humans in many real-world situations."
GPT-4 is smarter than ever
According to OpenAI, GPT-4 will be made accessible through ChatGPT and the API. For access, you must be a ChatGPT Plus subscriber, and a usage cap will apply while the new model is being tested. API access to the new model is managed through a queue. "GPT-4 is more reliable, creative, and able to manage much more nuanced instructions than GPT-3.5," the OpenAI team wrote.
The newly added multi-modal input function produces text outputs, whether in natural language, programming code, or another format, from a broad range of mixed text and image inputs. In essence, you can now have it scan through marketing and sales reports, complete with all of their graphs and statistics, or even book pages.
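To make the idea of mixed text-and-image input concrete, here is a minimal sketch of what a multi-modal request payload to a chat-style API might look like. The field names ("image_url", the content-part structure) and the helper function are assumptions for illustration, not an API confirmed by this article:

```python
import json

def build_multimodal_request(prompt_text, image_url):
    """Assemble a hypothetical request body that mixes a text prompt
    with a reference to an image, such as a sales chart."""
    return {
        "model": "gpt-4",  # model name as announced; payload shape is assumed
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt_text},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_multimodal_request(
    "Summarize the trends shown in this sales chart.",
    "https://example.com/q1-sales-chart.png",
)
print(json.dumps(payload, indent=2))
```

The point is simply that one request can carry both modalities side by side; the model then answers in plain text about what the image shows.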