AI Write

Controlled node

Overview

A workhorse node for text generation tasks, AI Write uses a large language model to generate text based on a provided prompt.

By default, the output is locked to a seed (1), meaning the node should generate the same output every time, though this occasionally fails. If you want a different output on every run, select Randomize Output. If none of the node inputs change and Randomize Output is unchecked, the generated output is cached, allowing for faster development.

Model options

Intellectible provides a number of different text generation models to choose from for AI Write. Each has a different token cost and performance profile.

| Model | Description | Context Size |
| --- | --- | --- |
| ultra light | Cheap, fast lightweight model, great for summarizing. Uses fewer tokens. | 128,000 |
| standard | State-of-the-art instruction-tuned model suitable for most text generation tasks. | 128,000 |
| standard reasoning | Good for reasoning tasks with moderate cost. | 128,000 |
| intermediate reasoning | Better reasoning capabilities than standard. | 128,000 |
| advanced reasoning | Excellent for reasoning tasks but nearly 10x more expensive than standard. Use sparingly. | 128,000 |
| advanced reasoning 2 | State-of-the-art reasoning model (Kimi K2.5) with high reasoning effort. | 128,000 |
| large context | Very large context window for processing extensive documents. | 10,000,000 |
Do not exceed the context window

Your AI Write operation won't work if you exceed the model context window size. The sum of your token input and maximum output should be less than the context window size. Use the Token Count node to check the token count of any text.
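The rule above can be sketched as a simple budget check. This is an illustrative helper, not part of the Intellectible API; the context sizes are taken from the model table above.

```python
# Hypothetical sketch of the context-window rule: input tokens plus the
# maximum output must stay below the model's context size.
CONTEXT_SIZES = {
    "ultra light": 128_000,
    "standard": 128_000,
    "standard reasoning": 128_000,
    "intermediate reasoning": 128_000,
    "advanced reasoning": 128_000,
    "advanced reasoning 2": 128_000,
    "large context": 10_000_000,
}

def fits_in_context(model: str, input_tokens: int, max_tokens: int) -> bool:
    """Return True if input plus maximum output fits in the model's window."""
    return input_tokens + max_tokens < CONTEXT_SIZES[model]

print(fits_in_context("standard", 120_000, 2_000))  # True: 122,000 < 128,000
print(fits_in_context("standard", 127_000, 2_000))  # False: 129,000 >= 128,000
```

In a workflow, the `input_tokens` value would come from the Token Count node.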

Output events

There are two output event options for this node:

  • Stream: This allows you to receive the output as it is generated. This is useful for output into chat interfaces or other applications where you want to see the text as it is being generated.
  • Done: This fires when the node has finished running. This is useful for triggering other nodes or actions in your workflow.

Inputs

| Input | Type | Description | Default |
| --- | --- | --- | --- |
| Model | Enum | The model to use for text generation. Options: ultra light, standard, standard reasoning, intermediate reasoning, advanced reasoning, advanced reasoning 2, large context | standard |
| Prompt | Text | Provides the context or topic for the AI to write about. | - |
| Max Tokens | Number | The maximum number of tokens to generate. Tokens are subword units, not whole words or characters. | 2000 |
| Temperature | Number | Controls the level of randomness or creativity in the generated text (0.0 to 1.0+). | 0.7 |
| Seed | Number | Sets the random seed for the AI model, allowing for reproducible results. | 1 |
| Randomize Output | Boolean | Determines whether the output is randomized. If randomized, the seed is ignored. | false |
| Run | Event | Triggers the node to run. | - |

Outputs

| Output | Type | Description |
| --- | --- | --- |
| Output | Text | Contains the generated text. |
| Stream | Event | Fires each time the output accumulates during generation. |
| Done | Event | Fires when the node has finished generating text. |

Runtime Behavior

Caching

When Randomize Output is unchecked and none of the node inputs change between runs, the AI Write node caches its output. This means subsequent runs will return the same generated text instantly without consuming additional tokens or calling the LLM API again.
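Conceptually, this behaves like a cache keyed on the node's inputs. The sketch below is illustrative only (not the actual implementation); `run_ai_write` and `generate` are hypothetical names.

```python
import hashlib
import json

# Illustrative sketch: the cache key is a hash of all node inputs, so
# unchanged inputs return the stored output without calling the LLM again.
_cache: dict[str, str] = {}

def run_ai_write(inputs: dict, generate) -> str:
    # Randomized runs bypass the cache entirely.
    if inputs.get("randomize_output"):
        return generate(inputs)
    key = hashlib.sha256(
        json.dumps(inputs, sort_keys=True).encode()
    ).hexdigest()
    if key not in _cache:
        _cache[key] = generate(inputs)  # LLM is called only on a cache miss
    return _cache[key]
```

With this structure, a second run with identical inputs returns instantly and consumes no additional tokens, matching the behavior described above.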

Streaming

If you connect to the Stream event, the node will emit partial results as they are generated by the model. This is useful for real-time applications like chat interfaces. The Done event fires only after the complete text has been generated.

Seed and Reproducibility

The Seed input ensures reproducible outputs. When the same seed, prompt, and model settings are used (and Randomize Output is false), the node will generate identical text across multiple runs.
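The principle is the same as seeding any random number generator: a fixed seed fixes the sample sequence. The snippet below uses Python's `random` module as a stand-in for the model's sampler; the real LLM behaves analogously (modulo the occasional failures noted earlier).

```python
import random

# Stand-in for a seeded sampler: the same seed yields an identical
# "token" sequence on every run, independent of global RNG state.
def sample_tokens(seed: int, vocab: list[str], n: int) -> list[str]:
    rng = random.Random(seed)  # per-call RNG seeded deterministically
    return [rng.choice(vocab) for _ in range(n)]

vocab = ["the", "robot", "paints", "slowly"]
assert sample_tokens(1, vocab, 5) == sample_tokens(1, vocab, 5)  # reproducible
```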

Example

Here's a simple example of using AI Write to generate a creative story:

  1. Add a Text node with the prompt: "Write a short story about a robot learning to paint"
  2. Connect the Text node's output to the Prompt input of AI Write
  3. Trigger the Run input with a Start node or button
  4. Connect the Output to a Show node to display the result

For a streaming example:

  1. Connect the Stream event to a UI element that updates in real-time
  2. The text will appear character by character as the model generates it
  3. Use the Done event to trigger a "Generation Complete" notification when finished
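The Stream/Done flow above can be sketched as an event consumer. This is a conceptual illustration, not Intellectible code; `consume`, `on_stream`, and `on_done` are hypothetical names, and the assumption that Stream carries the accumulated text so far follows the Outputs table.

```python
# Hypothetical sketch of the streaming flow: Stream fires per chunk with
# the text so far; Done fires once with the complete output.
def consume(chunks, on_stream, on_done):
    accumulated = ""
    for chunk in chunks:
        accumulated += chunk
        on_stream(accumulated)  # Stream event: partial result, updates the UI
    on_done(accumulated)        # Done event: full text, triggers follow-up nodes

events = []
consume(
    ["Once ", "upon ", "a time"],
    events.append,
    lambda text: events.append("DONE: " + text),
)
```

Here the UI handler would redraw on each Stream event, and the Done handler would show the "Generation Complete" notification.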