OpenAI Model List

Full list of supported models and their context windows.

Model Name            Context Window (tokens)
GPT-4o                128,000
GPT-4 Turbo           128,000
GPT-4                 8,192
GPT-3.5 Turbo         16,385
Embedding V3 large    8,191
Embedding V3 small    8,191
Embedding Ada 002     8,191
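For programmatic use, the table above can be captured as a simple lookup. The sketch below uses the informal display names from the table as keys; they are not the official OpenAI API model IDs:

```python
# Context windows (in tokens), copied from the table above.
# Keys are informal display names, NOT official API model IDs.
CONTEXT_WINDOWS = {
    "GPT-4o": 128_000,
    "GPT-4 Turbo": 128_000,
    "GPT-4": 8_192,
    "GPT-3.5 Turbo": 16_385,
    "Embedding V3 large": 8_191,
    "Embedding V3 small": 8_191,
    "Embedding Ada 002": 8_191,
}

def context_window(model: str) -> int:
    """Return the context window for a model; raises KeyError if unknown."""
    return CONTEXT_WINDOWS[model]
```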

FAQ:

• What is OpenAI?

OpenAI is a research and deployment company whose stated mission is to develop artificial general intelligence (AGI) in a way that benefits all of humanity.

• What is LLM Token Counter?

LLM Token Counter is a tool for managing token limits across a wide range of popular large language models (LLMs), including GPT-3.5, GPT-4, Claude-3, Llama-3, and many others. I am committed to continuously expanding the supported models and improving the tool. If you need assistance or have suggestions for additional features, please feel free to reach out to me via email.

• Why use an LLM Token Counter?

Every LLM has a fixed context window, so it is crucial to ensure that the token count of your prompt falls within that limit. Exceeding it may cause the request to be rejected, the input to be truncated, or the model to produce unexpected or undesirable output.
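As a minimal sketch of such a pre-flight check: given a prompt's token count (however it was obtained) and the number of tokens reserved for the model's response, compare their sum against the context window. The function name and structure here are illustrative, not part of any library:

```python
def fits_context(prompt_tokens: int, max_response_tokens: int,
                 context_window: int) -> bool:
    """True if the prompt plus the reserved response budget fits the window."""
    return prompt_tokens + max_response_tokens <= context_window

# Example: a 15,000-token prompt with 2,000 tokens reserved for the reply
# does not fit GPT-3.5 Turbo's 16,385-token window (17,000 > 16,385),
# but the same prompt with a 1,000-token budget does (16,000 <= 16,385).
```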

• How does the LLM Token Counter work?

LLM Token Counter uses Transformers.js, a JavaScript port of the renowned Hugging Face Transformers library. Tokenizers are loaded directly in your browser, so the token count is computed entirely client-side, and the calculation is remarkably fast.

• Will I leak my prompt?

No. The token count is computed client-side, so your prompt never leaves your device: it is not transmitted to the server or to any external entity.