Anthropic Model List

Full list of supported models and their context windows.

Model Name           Context Window (tokens)
Claude 3.5 Sonnet    200,000
Claude 3.5 Haiku     200,000
Claude 3 Opus        200,000
Claude 3 Sonnet      200,000
Claude 3 Haiku       200,000
Claude 2.1           200,000
Claude 2.0           100,000
Claude Instant 1.2   100,000

FAQ:

• What is Anthropic?

Anthropic is an AI safety and research company founded in 2021 by former OpenAI employees, focused on developing AI systems that are interpretable, steerable, and safe. The company aims to align AI technologies with human values and ensure their beneficial impact on society. Anthropic's work involves cutting-edge research in AI alignment and large-scale model training.

• What is LLM Token Counter?

LLM Token Counter is a tool designed to help users manage token limits for a wide range of popular Large Language Models (LLMs), including GPT-3.5, GPT-4, Claude-3, Llama-3, and many others. I am committed to continuously expanding the supported models and improving the tool so you get the best possible experience when working with generative AI. If you need assistance or have suggestions for additional features, please feel free to reach out to me via email.

• Why use an LLM Token Counter?

Every LLM has a fixed context window, so it is crucial to ensure that the token count of your prompt falls within that limit. Exceeding the limit may cause requests to fail, input to be truncated, or the model to produce unexpected or undesirable outputs. A minimal pre-flight check is sketched below.
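
As a rough illustration, such a check simply compares the measured token count against the model's context window from the table above, leaving headroom for the response. The constant and helper names below are hypothetical, not part of LLM Token Counter itself.

```js
// Minimal sketch of a pre-flight context-window check (hypothetical helper,
// not part of LLM Token Counter). The 200,000-token figure is Claude 3.5
// Sonnet's context window from the table above.
const CLAUDE_3_5_SONNET_CONTEXT_WINDOW = 200000;

function fitsInContextWindow(promptTokens, expectedResponseTokens = 1024) {
  // The prompt and the model's response share the same context window,
  // so reserve room for the expected output tokens as well.
  return promptTokens + expectedResponseTokens <= CLAUDE_3_5_SONNET_CONTEXT_WINDOW;
}
```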

• How does the LLM Token Counter work?

LLM Token Counter works by using Transformers.js, a JavaScript port of the well-known Hugging Face Transformers library. Tokenizers are loaded and run directly in your browser, so the token count is calculated entirely client-side, and the efficient tokenizer implementations make the calculation remarkably fast. A minimal sketch of this approach is shown below.
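
For example, client-side token counting with Transformers.js might look like the following. This is a minimal sketch assuming the '@xenova/transformers' package and the 'Xenova/claude-tokenizer' tokenizer repository; both are illustrative choices, not necessarily what LLM Token Counter uses internally.

```js
// Minimal sketch of counting tokens in the browser with Transformers.js.
// '@xenova/transformers' and 'Xenova/claude-tokenizer' are illustrative
// choices, not necessarily what LLM Token Counter uses internally.
import { AutoTokenizer } from '@xenova/transformers';

async function countTokens(prompt) {
  // The tokenizer files are downloaded once and cached by the browser;
  // the prompt text itself never leaves the page.
  const tokenizer = await AutoTokenizer.from_pretrained('Xenova/claude-tokenizer');
  return tokenizer.encode(prompt).length; // encode() returns an array of token ids
}

countTokens('Hello, Claude!').then((count) => console.log(`Token count: ${count}`));
```

Running the tokenizer in the browser is also what makes the privacy guarantee in the next answer possible: only the tokenizer files are fetched from a server, never the prompt.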

• Will I leak my prompt?

No, you will not leak your prompt. The token count calculation is performed client-side, ensuring that your prompt remains secure and confidential. Your data privacy is of utmost importance, and this approach guarantees that your sensitive information is never transmitted to the server or any external entity.