OpenAI Upgrades Models, Lowers API Prices

OpenAI has unveiled new models including an upgraded GPT-4 Turbo, with enhanced capabilities for tasks like code generation.

OpenAI has made additional models available, including an upgraded preview of GPT-4 Turbo, and has reduced the price of access to the GPT-3.5 Turbo application programming interface (API).

Furthermore, OpenAI has introduced new methods for developers to handle API keys and gain a better understanding of API utilization.

OpenAI says the upgraded GPT-4 Turbo completes tasks such as code generation more thoroughly than the previous preview model, aiming to reduce instances of ‘laziness’ where the model fails to finish a task.

The announcement came in a blog post. This is the third time in the past year that OpenAI has cut the price of its GPT-3.5 Turbo models to help its customers scale.

For GPT-3.5 Turbo, OpenAI has introduced a new model, gpt-3.5-turbo-0125. Input prices have been cut by 50 percent, to $0.0005 per thousand tokens.

Output prices have been reduced by 25 percent, to $0.0015 per thousand tokens.

ChatGPT users began voicing dissatisfaction with the chatbot in December 2023, pointing to the lack of upgrades to GPT-4 as the reason for its tendency to decline tasks.
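As a rough illustration of the reported gpt-3.5-turbo-0125 rates (an estimate only; actual billing depends on OpenAI's current price list):

```python
# Reported rates: $0.0005 per 1,000 input tokens, $0.0015 per 1,000 output tokens.
INPUT_PRICE_PER_1K = 0.0005
OUTPUT_PRICE_PER_1K = 0.0015

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in US dollars for one API call."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K + \
           (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# A request with 2,000 input tokens and 1,000 output tokens:
print(round(estimate_cost(2000, 1000), 6))  # 0.0025
```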

GPT-4 Turbo, trained on data as recent as April 2023, addresses the laziness issues reported by users of GPT-4, which was trained on data available only up to September 2021.

OpenAI also introduced new embedding models, which are smaller artificial intelligence models. OpenAI describes embeddings as sequences of numbers that represent concepts within content such as natural language or code.

Embeddings, a type of artificial intelligence technology, help computers better understand and work with written language. They do this by converting words and sentences into a numerical format that computers can process.

Think of embeddings as translators: they convert human language into a very specific code that computers can understand and work with.
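The "translator" idea can be sketched with toy vectors. The numbers below are made up by hand to stand in for real embeddings, which a model such as text-embedding-3-small would produce with hundreds of dimensions:

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Measure how closely two embedding vectors point in the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings": similar meanings get similar numbers.
cat = [0.9, 0.1, 0.0]
kitten = [0.8, 0.2, 0.1]
car = [0.0, 0.1, 0.9]

# "cat" sits closer to "kitten" than to "car" in this vector space.
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, car))  # True
```

Comparing the vectors rather than the words themselves is what lets a computer judge that two pieces of text mean similar things.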

There is also a technique known as retrieval-augmented generation, in which an AI system looks up relevant information before generating an answer rather than producing one from scratch, yielding responses that are more accurate and pertinent.

It is like an artificial intelligence that, rather than guessing an answer, does a quick search through a reference book and then tells you what it finds.
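That "look it up first" step can be sketched as a nearest-neighbour search over pre-computed document vectors. The vectors here are again hand-made toy numbers; a real system would embed the query and documents with an embedding model:

```python
import math

def cosine(a, b):
    """Similarity between two vectors: 1.0 means pointing the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy document store: each text is paired with a hand-made "embedding".
documents = {
    "GPT-3.5 Turbo input prices were cut to $0.0005 per thousand tokens": [0.9, 0.1],
    "A football star's NFT collaboration faces legal challenges": [0.1, 0.9],
}

def retrieve(query_vector):
    """Return the stored text whose vector is most similar to the query."""
    return max(documents, key=lambda text: cosine(query_vector, documents[text]))

# A query about API pricing, "embedded" (by hand) near the first document:
best = retrieve([0.8, 0.2])
print(best)
```

The retrieved text would then be handed to the language model as context, so the final answer is grounded in looked-up material instead of guesswork.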

Two new models produce these embeddings: “text-embedding-3-small” and a more powerful version, “text-embedding-3-large.” Both are available now.

The terms “small” and “large” refer to the models’ capacity. The “small” model is lighter and more efficient, while the “large” model is like a more comprehensive translator, able to represent material in a more nuanced way.

Now that these models are available, they can be used in applications that require effective retrieval and use of information. Put simply, these new tools are like smarter, more efficient computer translators.

They help applications better understand human language and locate the information they need from enormous databases more quickly, resulting in more accurate and useful responses from artificial intelligence systems.

Other artificial intelligence (AI) models, such as Google’s Gemini, compete with OpenAI’s GPT-4. On benchmarks involving complex mathematics and specialized coding, Gemini scored higher than GPT-4.

However, some contend that the scores could differ if Gemini’s advanced model were tested against GPT-4 Turbo instead.

Additionally, OpenAI plans to create a way for GPT builders to earn revenue from their customized AI systems. To start, builders in the United States will be paid based on how much users engage with their GPTs.

Meanwhile, subscribers to paid ChatGPT plans will get first access to the GPT store.
