
Get Ranking Scores

Controlled node

Overview

The Get Ranking Scores node uses an AI reranking model to evaluate how relevant one or more texts are to a specific query. This is particularly useful for search result ranking, document relevance scoring, and reranking retrieved passages from vector or text searches.

The node takes a task description that defines the ranking criteria, a query to rank against, and one or more texts to evaluate. It returns relevance scores that indicate how well each text matches the query according to the specified task.

Model options

Intellectible provides the following reranking model:

| Model | Description |
| --- | --- |
| V1 Advanced | Uses the Qwen3 Reranker 8B model for high-quality relevance scoring and ranking. |

Inputs

| Input | Type | Description | Default |
| --- | --- | --- | --- |
| Run | Event | Fires when the node starts running. | - |
| Model | Enum | The reranking model to use. Currently only V1 Advanced is supported. | v1advanced |
| Task | Text | Describes how to relate the text to the query (e.g., "Given a search query, retrieve relevant passages that answer the query"). | Given a search query, retrieve relevant passages that answer the query |
| Query | Text | The search query or question to rank texts against. | - |
| Text | Text | A single text or an array of texts to be ranked/scored. | - |

Outputs

| Output | Type | Description |
| --- | --- | --- |
| Done | Event | Fires when the node has finished processing. |
| Scores | Data | If the input Text was an array, returns an array of relevance scores. If Text was a single string, returns a single score. Returns null if inputs are invalid or no scores could be generated. |

Runtime behavior and defaults

When triggered by the Run event, the node processes the inputs as follows:

  1. Input validation: If any required inputs (task, query, text, or model) are missing, the node outputs null for scores and returns immediately.
  2. Text normalization: The Text input is converted to an array of strings. If a single text string is provided, it is wrapped in an array.
  3. Ranking execution: The node calls the reranking AI model with the specified task, query, and documents.
  4. Output handling:
    • If the input was an array of texts, the output is an array of scores in the same order.
    • If the input was a single text, the output is a single score.
    • Returns null if the model returns no scores.
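The four steps above can be sketched as follows. This is a minimal illustration, not the node's actual implementation: `run_get_ranking_scores`, `rerank_fn`, and `fake_rerank` are hypothetical names, and the keyword-overlap scorer merely stands in for the real reranking model call.

```python
def run_get_ranking_scores(model, task, query, text, rerank_fn):
    """Sketch of the node's Run handling; `rerank_fn` stands in for the model call."""
    # 1. Input validation: any missing input -> null (None) scores
    if not model or not task or not query or not text:
        return None
    # 2. Text normalization: wrap a single string in an array
    docs = text if isinstance(text, list) else [text]
    # 3. Ranking execution: score each document against the query
    scores = rerank_fn(task=task, query=query, documents=docs)
    # 4. Output handling: mirror the input shape; null if no scores
    if not scores:
        return None
    return scores if isinstance(text, list) else scores[0]

# Illustrative stand-in for the model: score by keyword overlap with the query.
def fake_rerank(task, query, documents):
    terms = set(query.lower().split())
    return [len(terms & set(d.lower().split())) / max(len(terms), 1)
            for d in documents]

print(run_get_ranking_scores("v1advanced", "retrieve relevant passages",
                             "refund policy",
                             ["our refund policy", "shipping"],
                             fake_rerank))  # [1.0, 0.0]
```

Note how the output shape mirrors the input: passing the single string "our refund policy" instead of the array would return the single score 1.0.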

The default task description is optimized for search scenarios where you want to find passages that answer a specific query, but you can customize this to fit other ranking scenarios (e.g., "Rank these product descriptions by how well they match the user's shopping intent").

Example usage

Scenario: Reranking search results from a database query.

  1. Connect the Text input to an array of document passages retrieved from a previous search (e.g., from Vector Search Database or Text Search Database nodes).
  2. Set the Query input to the user's original search query.
  3. Optionally customize the Task description if you need specific ranking criteria (e.g., "Rank these legal documents by relevance to the case description").
  4. Trigger the Run event to get relevance scores.
  5. Use the Scores output with a Sort List node or similar to reorder your results by relevance.
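Step 5 can be sketched as below. The passages and score values are illustrative, and the sort merely stands in for a Sort List node wired to the Scores output:

```python
def rerank_results(passages, scores):
    """Pair each passage with its relevance score and order best-first."""
    ranked = sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)
    return [passage for passage, _ in ranked]

passages = ["refund policy text", "shipping times text", "return window text"]
scores = [0.41, 0.07, 0.88]  # illustrative values from the Scores output
print(rerank_results(passages, scores))
# ['return window text', 'refund policy text', 'shipping times text']
```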

Tip: This node works well in combination with vector search nodes. First retrieve candidate documents using vector similarity, then use Get Ranking Scores to rerank the top results for better relevance.