Today, OpenAI announced that GPT-4, its latest text-generating model, is now generally available through its API. As of this afternoon, all existing OpenAI API developers “with a history of successful payments” can access GPT-4. The company plans to open access to new developers by the end of this month, and to begin raising rate limits after that “depending on compute availability.”
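For developers who now have access, a GPT-4 request is a JSON payload sent to OpenAI’s chat completions endpoint (`https://api.openai.com/v1/chat/completions`). The sketch below builds a minimal request body with only the standard library; the prompt is a placeholder, and in practice you would POST the payload with an `Authorization: Bearer` header or use OpenAI’s official client library.

```python
import json

# Real endpoint for chat-based models such as GPT-4.
API_URL = "https://api.openai.com/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "gpt-4") -> str:
    """Serialize a minimal chat-completion request body as JSON.

    Only "model" and "messages" are required; other parameters
    (temperature, max_tokens, etc.) are optional.
    """
    body = {
        "model": model,
        "messages": [
            {"role": "user", "content": prompt},
        ],
    }
    return json.dumps(body)

# Hypothetical prompt, for illustration only.
payload = build_chat_request("Summarize the GPT-4 launch announcement.")
print(payload)
```

Sending this body to the endpoint above (with a valid API key) returns the model’s reply in the response’s `choices` list.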
The range is growing!
Since March, “millions of developers have requested access to the GPT-4 API, and the range of innovative products leveraging GPT-4 is growing every day,” OpenAI wrote in a blog post. The company envisions a future in which chat-based models can support any use case. GPT-4 can generate text (including code) and accept both image and text inputs, an improvement over GPT-3.5, which accepted only text, and it performs at “human level” on a variety of professional and academic benchmarks. Like OpenAI’s earlier GPT models, GPT-4 was trained on a mix of publicly available data and data licensed by OpenAI.
The image-understanding capability isn’t yet available to all OpenAI customers. For now, OpenAI is testing it with a single partner, Be My Eyes, and it hasn’t said when it will open the feature up to its wider customer base. It’s worth remembering that GPT-4, like even the best generative AI models today, is far from flawless. It “hallucinates” facts, sometimes confidently, and it stumbles on hard problems, at times introducing security vulnerabilities into the code it generates, because it doesn’t learn from its mistakes.
Developers will eventually be able to fine-tune GPT-4, as well as GPT-3.5 Turbo, a more recent but less capable text-generating model (and one of the original models powering ChatGPT), with their own data, as is already possible with several of OpenAI’s other text-generating models. According to OpenAI, that capability should arrive later this year.
The race to build generative AI has intensified since GPT-4’s release in March. Anthropic, for instance, recently expanded the context window of Claude, its flagship text-generating model (still in preview), from 9,000 tokens to 100,000 tokens. A context window is the text the model takes into account before generating new text, and tokens are the sub-word units that text is broken into; the word “fantastic,” for example, would be split into the tokens “fan,” “tas” and “tic.”
GPT-4 previously held the crown for context window size, topping out at 32,000 tokens. Models with small context windows tend to “forget” the content of even very recent conversations, causing them to veer off topic.
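That “forgetting” follows directly from how a fixed context window is enforced: once a conversation exceeds the limit, the oldest turns are dropped before the model ever sees the prompt. Here is a minimal sketch of that truncation, using whitespace-separated words as a crude stand-in for real sub-word tokens (production systems count tokens with an actual tokenizer):

```python
def truncate_to_window(turns: list[str], max_tokens: int) -> list[str]:
    """Keep the most recent conversation turns whose combined
    (approximate) token count fits within max_tokens.

    Older turns are silently dropped, which is why models with
    small context windows "forget" the start of a conversation.
    """
    kept: list[str] = []
    used = 0
    for turn in reversed(turns):       # walk newest-to-oldest
        n = len(turn.split())          # word count as a token proxy
        if used + n > max_tokens:
            break                      # everything older is discarded
        kept.append(turn)
        used += n
    return list(reversed(kept))        # restore chronological order

# Hypothetical conversation history, for illustration.
history = ["hello there model",
           "please remember my name is Ada",
           "what is my name"]
window = truncate_to_window(history, max_tokens=8)
```

With a budget of 8 “tokens,” only the final turn survives, so the model answering “what is my name” never sees the turn that stated the name, which is exactly the off-topic drift described above.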