Token Count
Controlled node
Overview
The Token Count node calculates the number of tokens in a given text using the specific tokenizer of a selected AI model. This is essential for ensuring your prompts and context fit within the model's context window. The node also outputs the remaining token capacity for the selected model.
Different AI models use different tokenization schemes, so token counts can vary significantly between models for the same text. This node ensures accurate counting by using the actual tokenizer of your target model.
Model options
Intellectible provides token counting for several AI models, each with different context window sizes:
| Model | Description | Context Size |
|---|---|---|
| ultra light | Lightweight model tokenizer, great for quick estimates. | 128,000 |
| standard | Standard instruction-tuned model tokenizer. | 128,000 |
| advanced reasoning | Reasoning model tokenizer. | 128,000 |
| large context | Very large context window model tokenizer. | 10,000,000 |
Token counts vary by model because different models use different tokenization algorithms. For example, GPT-style models typically use BPE (Byte Pair Encoding), while other models may use different schemes. Always select the model you plan to use for generation to get accurate counts.
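Why the same text yields different counts can be illustrated with a toy sketch. These are not real model tokenizers: the whitespace and 4-character schemes below are simplified stand-ins, chosen only to show that the counting scheme, not the text, determines the count.

```python
text = "Token counts vary between models."

# Scheme A: whitespace tokenization -- one token per word.
words = text.split()

# Scheme B: a crude subword scheme -- split words longer than 4
# characters into 4-character pieces, loosely mimicking how BPE
# breaks rare words into multiple tokens.
subwords = []
for w in words:
    subwords.extend(w[i:i + 4] for i in range(0, len(w), 4))

print(len(words))     # 5 tokens under scheme A
print(len(subwords))  # 9 tokens under scheme B -- same text, different count
```

The gap between the two counts grows with longer, rarer words, which is why estimating with one model's tokenizer and generating with another can silently overrun the context window.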
Inputs
| Input | Type | Description | Default |
|---|---|---|---|
| Run | Event | Triggers the token counting operation. | - |
| Model | Enum | The model tokenizer to use for counting. | standard |
| Text | Text | The text content to analyze for token count. | - |
Outputs
| Output | Type | Description |
|---|---|---|
| Done | Event | Fires when the token counting is complete. |
| Count | Number | The total number of tokens in the input text. |
| Remaining | Number | The remaining tokens available in the model's context window (Context Size - Count). Returns NaN if calculation fails. |
Runtime behavior and defaults
By default, the node uses the standard model tokenizer. When triggered by the Run event, it processes the input text and outputs both the absolute token count and the remaining capacity.
The Remaining output is calculated by subtracting the token count from the selected model's context window size. This is useful for determining how much additional content (like system prompts or conversation history) can be added before hitting the model's limit.
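The Remaining calculation can be sketched as plain arithmetic. This is a minimal sketch assuming the context sizes from the table above; the node's internal implementation may differ, but the documented behavior (Context Size - Count, NaN on failure) is what it reproduces.

```python
import math

# Context window sizes from the model table (in tokens).
CONTEXT_SIZES = {
    "ultra light": 128_000,
    "standard": 128_000,
    "advanced reasoning": 128_000,
    "large context": 10_000_000,
}

def remaining_tokens(model: str, count: float) -> float:
    """Remaining = Context Size - Count; NaN if the calculation fails."""
    size = CONTEXT_SIZES.get(model)
    if size is None or math.isnan(count):
        return math.nan
    return size - count

print(remaining_tokens("standard", 1_500))       # 126500
print(remaining_tokens("unknown model", 100))    # nan
```

A Remaining value near zero (or negative) is the signal to truncate, summarize, or chunk before handing the text to a generation node.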
If the Text input is empty or invalid, the node returns a count of 0.
Example
A common use case is checking token count before sending text to an AI Write node:
- Create a Text node with your prompt content
- Connect the Text node's output to the Token Count node's Text input
- Set the Token Count node's Model input to match the model you plan to use in your AI Write node
- Connect a Start node (or other trigger) to the Token Count Run input
- Connect the Token Count Done event to your AI Write node's Run input
- Use the Count or Remaining outputs to conditionally truncate text or split it into chunks if it exceeds your desired threshold
This ensures you never exceed the model's context window, preventing errors in downstream AI generation nodes.