Token Count

Controlled node

Overview

The Token Count node calculates the number of tokens in a given text using the specific tokenizer of a selected AI model. This is essential for ensuring your prompts and context fit within the model's context window limitations. The node also outputs the remaining token capacity for the selected model.

Different AI models use different tokenization schemes, so token counts can vary significantly between models for the same text. This node ensures accurate counting by using the actual tokenizer of your target model.

Model options

Intellectible provides token counting for several AI models, each with different context window sizes:

| Model | Description | Context Size |
| --- | --- | --- |
| ultra light | Lightweight model tokenizer, great for quick estimates. | 128,000 |
| standard | Standard instruction-tuned model tokenizer. | 128,000 |
| advanced reasoning | Reasoning model tokenizer. | 128,000 |
| large context | Very large context window model tokenizer. | 10,000,000 |

Why token counts differ
Why token counts differ

Token counts vary by model because each model uses a different tokenization algorithm. GPT-style models, for example, typically use Byte Pair Encoding (BPE), while other models use different vocabularies and merge rules, so the same text can split into a different number of tokens. Always select the model you plan to use for generation so the count reflects its actual tokenizer.
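To make the difference concrete, here is a toy illustration with two simplified counting schemes. These are not the actual tokenizers behind the node's model options; they only demonstrate that different algorithms applied to the same text yield different counts.

```python
# Toy illustration of why token counts differ between tokenizers.
# Neither scheme is a real model tokenizer -- they are deliberately
# simple stand-ins that split the same text differently.

def whitespace_tokens(text: str) -> int:
    """Word-level scheme: one token per whitespace-separated word."""
    return len(text.split())

def char_pair_tokens(text: str) -> int:
    """Crude BPE-like scheme: roughly one token per two characters."""
    return (len(text) + 1) // 2

text = "Tokenization schemes vary between models."
print(whitespace_tokens(text))  # word-level count
print(char_pair_tokens(text))   # character-pair count is much larger
```

The same sentence produces very different totals under the two schemes, which is exactly why the node uses the selected model's own tokenizer rather than a generic estimate.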

Inputs

| Input | Type | Description | Default |
| --- | --- | --- | --- |
| Run | Event | Triggers the token counting operation. | - |
| Model | Enum | The model tokenizer to use for counting. | standard |
| Text | Text | The text content to analyze for token count. | - |

Outputs

| Output | Type | Description |
| --- | --- | --- |
| Done | Event | Fires when the token counting is complete. |
| Count | Number | The total number of tokens in the input text. |
| Remaining | Number | The remaining tokens available in the model's context window (Context Size − Count). Returns NaN if the calculation fails. |

Runtime behavior and defaults

By default, the node uses the standard model tokenizer. When triggered by the Run event, it processes the input text and outputs both the absolute token count and the remaining capacity.

The Remaining output is calculated by subtracting the token count from the selected model's context window size. This is useful for determining how much additional content (like system prompts or conversation history) can be added before hitting the model's limit.

If the text input is empty or invalid, the node will return a count of 0.
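The behavior described above can be sketched in a few lines. This is a hedged model of the node, not its implementation: the context sizes come from the model table, while `count_tokens` is a hypothetical stand-in for the real model tokenizer (here approximated as about four characters per token).

```python
import math

# Context window sizes from the model table above.
CONTEXT_SIZES = {
    "ultra light": 128_000,
    "standard": 128_000,
    "advanced reasoning": 128_000,
    "large context": 10_000_000,
}

def count_tokens(text: str) -> int:
    """Hypothetical stand-in for the model tokenizer (~4 chars/token)."""
    return max(1, len(text) // 4) if text else 0

def token_count_node(text, model="standard"):
    """Mirror the node's outputs: returns (Count, Remaining)."""
    # Empty or invalid input yields a count of 0.
    count = count_tokens(text) if isinstance(text, str) else 0
    context = CONTEXT_SIZES.get(model)
    # Remaining = Context Size - Count; NaN if the calculation fails
    # (e.g. the model's context size is unknown).
    remaining = context - count if context is not None else math.nan
    return count, remaining
```

For example, an empty Text input produces a Count of 0 and a Remaining equal to the full context size of the selected model.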

Example

A common use case is checking token count before sending text to an AI Write node:

  1. Create a Text node with your prompt content
  2. Connect the Text node's output to the Token Count node's Text input
  3. Set the Token Count Model to match the model you plan to use in your AI Write node
  4. Connect a Start node (or other trigger) to the Token Count Run input
  5. Connect the Token Count Done event to your AI Write node's Run input
  6. Use the Count or Remaining outputs to conditionally truncate text or split it into chunks if it exceeds your desired threshold

This ensures you never exceed the model's context window, preventing errors in downstream AI generation nodes.
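Step 6 above mentions truncating or chunking text that exceeds a threshold. A minimal sketch of that downstream logic is shown below; the `max_tokens` budget and the roughly four-characters-per-token estimate are illustrative assumptions, not values defined by the product.

```python
# Hypothetical downstream logic for step 6: split text into pieces that
# each fit within an approximate token budget. The 4-chars-per-token
# ratio is an assumed estimate, not the node's actual tokenizer.

def split_into_chunks(text: str, max_tokens: int, chars_per_token: int = 4) -> list[str]:
    """Split text into chunks of at most ~max_tokens tokens each."""
    max_chars = max_tokens * chars_per_token
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)] or [""]

prompt = "lorem " * 1000
chunks = split_into_chunks(prompt, max_tokens=500)
```

In a real flow, you would route each chunk through the AI Write node separately (or truncate to the first chunk) whenever Count exceeds your threshold.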