Models

Overview

The OpenAI API is powered by a diverse set of models with different capabilities and price points. You can also make limited customizations to our original base models for your specific use case with fine-tuning.

MODELS | DESCRIPTION
GPT-4 (limited beta) | A set of models that improve on GPT-3.5 and can understand as well as generate natural language or code
GPT-3.5 | A set of models that improve on GPT-3 and can understand as well as generate natural language or code
DALL·E (beta) | A model that can generate and edit images given a natural language prompt
Whisper (beta) | A model that can convert audio into text
Embeddings | A set of models that can convert text into a numerical form
Moderation | A fine-tuned model that can detect whether text may be sensitive or unsafe
GPT-3 | A set of models that can understand and generate natural language
Codex (deprecated) | A set of models that can understand and generate code, including translating natural language to code

We have also published open source models including Point-E, Whisper, Jukebox, and CLIP.

Visit our model index for researchers to learn more about which models have been featured in our research papers and the differences between model series like InstructGPT and GPT-3.5.

GPT-4 (limited beta)

GPT-4 is a large multimodal model (accepting text inputs and emitting text outputs today, with image inputs coming in the future) that can solve difficult problems with greater accuracy than any of our previous models, thanks to its broader general knowledge and advanced reasoning capabilities. Like gpt-3.5-turbo, GPT-4 is optimized for chat but works well for traditional completions tasks. Learn how to use GPT-4 in our chat guide.

GPT-4 is currently in a limited beta and only accessible to those who have been granted access. Please join the waitlist to get access when capacity is available.

LATEST MODEL | DESCRIPTION | MAX TOKENS | TRAINING DATA
gpt-4 | More capable than any GPT-3.5 model, able to do more complex tasks, and optimized for chat. Will be updated with our latest model iteration. | 8,192 tokens | Up to Sep 2021
gpt-4-0314 | Snapshot of gpt-4 from March 14th 2023. Unlike gpt-4, this model will not receive updates, and will only be supported for a three month period ending on June 14th 2023. | 8,192 tokens | Up to Sep 2021
gpt-4-32k | Same capabilities as the base gpt-4 model but with 4x the context length. Will be updated with our latest model iteration. | 32,768 tokens | Up to Sep 2021
gpt-4-32k-0314 | Snapshot of gpt-4-32k from March 14th 2023. Unlike gpt-4-32k, this model will not receive updates, and will only be supported for a three month period ending on June 14th 2023. | 32,768 tokens | Up to Sep 2021

For many basic tasks, the difference between GPT-4 and GPT-3.5 models is not significant. However, in more complex reasoning situations, GPT-4 is much more capable than any of our previous models.
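
As a quick illustration, here is a minimal sketch of calling gpt-4 through the chat completions endpoint using the openai Python library; the prompt is a placeholder, and the library is assumed to read your API key from the OPENAI_API_KEY environment variable.

```python
import openai  # assumes `pip install openai` and OPENAI_API_KEY set in the environment

# Ask gpt-4 to handle a reasoning-heavy task via the chat completions endpoint.
response = openai.ChatCompletion.create(
    model="gpt-4",  # or pin a snapshot such as "gpt-4-0314"
    messages=[
        {"role": "system", "content": "You are a careful math tutor."},
        {"role": "user", "content": "A train leaves at 3:40pm and arrives at 5:05pm. How long is the trip?"},
    ],
)

print(response["choices"][0]["message"]["content"])
```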

GPT-3.5

GPT-3.5 models can understand and generate natural language or code. Our most capable and cost effective model in the GPT-3.5 family is gpt-3.5-turbo which has been optimized for chat but works well for traditional completions tasks as well.

LATEST MODEL | DESCRIPTION | MAX TOKENS | TRAINING DATA
gpt-3.5-turbo | Most capable GPT-3.5 model and optimized for chat at 1/10th the cost of text-davinci-003. Will be updated with our latest model iteration. | 4,096 tokens | Up to Sep 2021
gpt-3.5-turbo-0301 | Snapshot of gpt-3.5-turbo from March 1st 2023. Unlike gpt-3.5-turbo, this model will not receive updates, and will only be supported for a three month period ending on June 1st 2023. | 4,096 tokens | Up to Sep 2021
text-davinci-003 | Can do any language task with better quality, longer output, and consistent instruction-following than the curie, babbage, or ada models. Also supports inserting completions within text. | 4,097 tokens | Up to Jun 2021
text-davinci-002 | Similar capabilities to text-davinci-003 but trained with supervised fine-tuning instead of reinforcement learning | 4,097 tokens | Up to Jun 2021
code-davinci-002 | Optimized for code-completion tasks | 8,001 tokens | Up to Jun 2021

We recommend using gpt-3.5-turbo over the other GPT-3.5 models because of its lower cost.

OpenAI models are non-deterministic, meaning that identical inputs can yield different outputs. Setting temperature to 0 will make the outputs mostly deterministic, but a small amount of variability may remain.
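
For example, a completions-style request with temperature set to 0 will return mostly (but not perfectly) repeatable outputs. The sketch below uses the openai Python library with text-davinci-003; the prompt is a placeholder and the API key is assumed to come from the OPENAI_API_KEY environment variable.

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

# temperature=0 makes outputs mostly deterministic, but small variations can remain.
response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Translate 'good morning' into French.",
    temperature=0,
    max_tokens=20,
)

print(response["choices"][0]["text"].strip())
```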

Feature-specific models

While the new gpt-3.5-turbo model is optimized for chat, it works very well for traditional completion tasks. The original GPT-3.5 models are optimized for text completion.

Our endpoints for creating embeddings and editing text use their own sets of specialized models.
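
For instance, the edits endpoint takes the text to modify plus an instruction rather than a single prompt. A minimal sketch, assuming the openai Python library and the text-davinci-edit-001 model listed in the compatibility table further below:

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

# The edits endpoint takes input text plus an instruction describing how to change it.
response = openai.Edit.create(
    model="text-davinci-edit-001",
    input="The qick brown fox jumps over teh lazy dog.",
    instruction="Fix the spelling mistakes.",
)

print(response["choices"][0]["text"])
```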

Finding the right model

Experimenting with gpt-3.5-turbo is a great way to find out what the API is capable of doing. After you have an idea of what you want to accomplish, you can stay with gpt-3.5-turbo or try another model and optimize around its capabilities.

You can use the GPT comparison tool that lets you run different models side-by-side to compare outputs, settings, and response times and then download the data into an Excel spreadsheet.

DALL·E (beta)

DALL·E is an AI system that can create realistic images and art from a description in natural language. We currently support the ability, given a prompt, to create a new image of a certain size, edit an existing image, or create variations of a user-provided image.

The current DALL·E model available through our API is the 2nd iteration of DALL·E with more realistic, accurate, and 4x greater resolution images than the original model. You can try it through our Labs interface or via the API.
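
As a rough sketch of the image generation endpoint using the openai Python library (the prompt and size are placeholders, and the API key is assumed to come from the OPENAI_API_KEY environment variable):

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

# Generate a single 1024x1024 image from a natural language prompt.
response = openai.Image.create(
    prompt="A watercolor painting of a lighthouse at sunrise",
    n=1,
    size="1024x1024",
)

print(response["data"][0]["url"])  # URL of the generated image
```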

Whisper (beta)

Whisper is a general-purpose speech recognition model. It is trained on a large dataset of diverse audio and is also a multi-task model that can perform multilingual speech recognition as well as speech translation and language identification. The Whisper v2-large model is currently available through our API with the whisper-1 model name.

Currently, there is no difference between the open source version of Whisper and the version available through our API. However, through our API we offer an optimized inference process, which makes running Whisper much faster than through other means. For more technical details on Whisper, you can read the paper.
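
A minimal transcription sketch with the openai Python library, assuming a local audio file named audio.mp3 and an API key in the OPENAI_API_KEY environment variable:

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

# Transcribe a local audio file with the whisper-1 model.
with open("audio.mp3", "rb") as audio_file:
    transcript = openai.Audio.transcribe("whisper-1", audio_file)

print(transcript["text"])
```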

Embeddings

Embeddings are a numerical representation of text that can be used to measure the relatedness between two pieces of text. Our second generation embedding model, text-embedding-ada-002, is designed to replace the previous 16 first-generation embedding models at a fraction of the cost. Embeddings are useful for search, clustering, recommendations, anomaly detection, and classification tasks. You can read more about our latest embedding model in the announcement blog post.
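
To illustrate how relatedness can be measured, the sketch below embeds two strings with text-embedding-ada-002 and compares them with cosine similarity; the cosine_similarity helper is illustrative, not part of the API, and the inputs are placeholders.

```python
import math
import openai  # assumes OPENAI_API_KEY is set in the environment

def cosine_similarity(a, b):
    # Illustrative helper: cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

response = openai.Embedding.create(
    model="text-embedding-ada-002",
    input=["The cat sat on the mat.", "A feline rested on the rug."],
)

vec_a = response["data"][0]["embedding"]
vec_b = response["data"][1]["embedding"]
print(cosine_similarity(vec_a, vec_b))  # closer to 1.0 means more related
```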

Moderation

The Moderation models are designed to check whether content complies with OpenAI's usage policies. The models provide classification capabilities that look for content in the following categories: hate, hate/threatening, self-harm, sexual, sexual/minors, violence, and violence/graphic. You can find out more in our moderation guide.

Moderation models take in an arbitrarily sized input that is automatically broken up to fit the model's specific context window.

MODEL | DESCRIPTION
text-moderation-latest | Most capable moderation model. Accuracy will be slightly higher than the stable model.
text-moderation-stable | Almost as capable as the latest model, but slightly older.
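
A minimal sketch of calling the moderation endpoint with the openai Python library; the input string is a placeholder and the API key is assumed to come from the OPENAI_API_KEY environment variable:

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

# Classify a piece of text against OpenAI's usage policy categories.
response = openai.Moderation.create(input="Some user-generated text to check.")

result = response["results"][0]
print(result["flagged"])      # True if any category is flagged
print(result["categories"])   # per-category boolean flags
```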

GPT-3

GPT-3 models can understand and generate natural language. These models were superseded by the more powerful GPT-3.5 generation models. However, the original GPT-3 base models (davinci, curie, ada, and babbage) are currently the only models that are available to fine-tune.

LATEST MODEL | DESCRIPTION | MAX TOKENS | TRAINING DATA
text-curie-001 | Very capable, faster and lower cost than Davinci. | 2,049 tokens | Up to Oct 2019
text-babbage-001 | Capable of straightforward tasks, very fast, and lower cost. | 2,049 tokens | Up to Oct 2019
text-ada-001 | Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost. | 2,049 tokens | Up to Oct 2019
davinci | Most capable GPT-3 model. Can do any task the other models can do, often with higher quality. | 2,049 tokens | Up to Oct 2019
curie | Very capable, but faster and lower cost than Davinci. | 2,049 tokens | Up to Oct 2019
babbage | Capable of straightforward tasks, very fast, and lower cost. | 2,049 tokens | Up to Oct 2019
ada | Capable of very simple tasks, usually the fastest model in the GPT-3 series, and lowest cost. | 2,049 tokens | Up to Oct 2019
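
Since these base models are the ones available for fine-tuning, here is a rough sketch of that flow with the openai Python library: upload a JSONL training file, then start a fine-tune job against a base model. The file name is a placeholder and the API key is assumed to come from the OPENAI_API_KEY environment variable.

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

# 1. Upload the prompt/completion training data (JSONL, one example per line).
upload = openai.File.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

# 2. Start a fine-tune job against one of the GPT-3 base models.
job = openai.FineTune.create(
    training_file=upload["id"],
    model="davinci",  # or curie, babbage, ada
)

print(job["id"], job["status"])
```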

Codex (deprecated)

The Codex models are now deprecated. They were descendants of our GPT-3 models that could understand and generate code. Their training data contains both natural language and billions of lines of public code from GitHub. Learn more.

They’re most capable in Python and proficient in over a dozen languages including JavaScript, Go, Perl, PHP, Ruby, Swift, TypeScript, SQL, and even Shell.

The following Codex models are now deprecated:

LATEST MODEL | DESCRIPTION | MAX TOKENS | TRAINING DATA
code-davinci-002 | Most capable Codex model. Particularly good at translating natural language to code. In addition to completing code, also supports inserting completions within code. | 8,001 tokens | Up to Jun 2021
code-davinci-001 | Earlier version of code-davinci-002 | 8,001 tokens | Up to Jun 2021
code-cushman-002 | Almost as capable as Davinci Codex, but slightly faster. This speed advantage may make it preferable for real-time applications. | Up to 2,048 tokens |
code-cushman-001 | Earlier version of code-cushman-002 | Up to 2,048 tokens |

For more, visit our guide on working with Codex.

Model endpoint compatibility

ENDPOINT | MODEL NAME
/v1/chat/completions | gpt-4, gpt-4-0314, gpt-4-32k, gpt-4-32k-0314, gpt-3.5-turbo, gpt-3.5-turbo-0301
/v1/completions | text-davinci-003, text-davinci-002, text-curie-001, text-babbage-001, text-ada-001, davinci, curie, babbage, ada
/v1/edits | text-davinci-edit-001, code-davinci-edit-001
/v1/audio/transcriptions | whisper-1
/v1/audio/translations | whisper-1
/v1/fine-tunes | davinci, curie, babbage, ada
/v1/embeddings | text-embedding-ada-002, text-search-ada-doc-001
/v1/moderations | text-moderation-stable, text-moderation-latest

This list does not include our first-generation embedding models nor our DALL·E models.
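
To see which models your account can access at runtime, you can list them through the models endpoint; a short sketch with the openai Python library, assuming the API key comes from the OPENAI_API_KEY environment variable:

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

# List the models currently available to your account / API key.
models = openai.Model.list()
for model in models["data"]:
    print(model["id"])
```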

Continuous model upgrades

With the release of gpt-3.5-turbo, some of our models are now being continually updated. In order to mitigate the chance of model changes affecting our users in an unexpected way, we also offer model versions that will stay static for 3 month periods. With the new cadence of model updates, we are also giving people the ability to contribute evals to help us improve the model for different use cases. If you are interested, check out the OpenAI Evals repository.

The following models are the temporary snapshots that will be deprecated at the specified date. If you want to use the latest model version, use the standard model names like gpt-4 or gpt-3.5-turbo.

MODEL NAME | DEPRECATION DATE
gpt-3.5-turbo-0301 | June 1st, 2023
gpt-4-0314 | June 14th, 2023
gpt-4-32k-0314 | June 14th, 2023
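
For example, to opt out of continuous upgrades until a snapshot's deprecation date, pass the dated model name instead of the standard alias; a minimal sketch with the openai Python library (message content is a placeholder):

```python
import openai  # assumes OPENAI_API_KEY is set in the environment

# Pin to the March 1st 2023 snapshot rather than the continuously updated alias.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0301",  # static until June 1st, 2023; "gpt-3.5-turbo" tracks the latest
    messages=[{"role": "user", "content": "Hello!"}],
)

print(response["choices"][0]["message"]["content"])
```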