RIGHTBRAIN BLOG
Comparing different task versions in Rightbrain
How to build and compare different task configurations, including new prompt versions and alternative models


The Compare feature enables you to experiment with different task configurations and continuously improve performance. You can access the Compare view immediately after a Task Run.

What Can I Compare?
In the Compare view, you see two task configurations side by side:
Right Panel: Your current task, which displays the response and outputs from your most recent run.
Left Panel: An editable version where you can adjust:
User Prompt: Defines the task for the LLM.
System Prompt: Provides additional context and constraints.
Temperature: Adjusts the variation in responses.
Model: Choose from leading proprietary or open-source models.
Output Format: Specify structured formats that integrate seamlessly with your database.
You can also supply a new input that will be used for both configurations.
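To make these fields concrete, here is a minimal sketch of a baseline configuration and an edited candidate, written as plain Python data. The field names and values are illustrative assumptions, not Rightbrain's actual schema.

```python
# Illustrative sketch only: the field names below are assumptions,
# not Rightbrain's actual configuration schema.
baseline = {
    "user_prompt": "Summarise the customer email in two sentences.",
    "system_prompt": "You are a concise support assistant.",
    "temperature": 0.2,  # lower values give more deterministic responses
    "model": "claude-3-5-sonnet",
    "output_format": {"summary": "string", "sentiment": "string"},
}

# The editable left panel starts from your current task and overrides
# whichever fields you want to test.
candidate = {**baseline, "model": "deepseek-r1", "temperature": 0.5}

# The same new input is run through both configurations.
shared_input = {"email": "Hi, my order arrived damaged. Can I get a refund?"}
```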

Comparing Tasks in Action
Testing a New Prompt
Let’s compare a new prompt against your existing configuration: edit the User Prompt in the left panel, run both versions against the same input, and review the two outputs side by side.
Evaluating a New Model
Curious about testing a different model? You can instantly compare models—for example, DeepSeek R1 versus Claude 3.5 Sonnet. This allows you to continuously evaluate and deploy new models, uncovering opportunities for both performance and efficiency gains.
Task Revisions
Every time you update your task configuration, you have the option to save it as a new revision. These revisions can be:
Tested side by side with existing versions.
Promoted directly into your active production pipeline.
Each revision is assigned a unique revision ID, which you can use for version logging and A/B testing. You can find the revision ID in the response from each task run.
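As a sketch of how that might look in client code, assuming the task-run endpoint returns JSON containing a revision identifier (the URL, headers, and field names below are hypothetical placeholders, not the documented Rightbrain API):

```python
import requests

# Hypothetical endpoint and field names; consult the Rightbrain API
# docs for the real task-run URL and response schema.
resp = requests.post(
    "https://api.example.com/tasks/<task_id>/run",
    headers={"Authorization": "Bearer <api_key>"},
    json={"email": "Hi, my order arrived damaged. Can I get a refund?"},
)
result = resp.json()

# Store the revision ID alongside the output so results can later be
# grouped by task version for A/B comparison.
print(result.get("revision_id"), result.get("response"))
```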
If your task is currently running in a staging or production environment and you’d like to promote a new revision, simply click “Promote to Active Revision.” This action immediately replaces the existing version in your pipeline. In other words, the API endpoint for running the task will now use the new active version.
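One practical consequence is that callers don’t change anything when a revision is promoted: the endpoint stays the same, and only the active revision behind it changes. A sketch, under the same hypothetical endpoint as above:

```python
import requests

def run_task(payload: dict) -> dict:
    # Same hypothetical endpoint as before; the caller never references
    # a specific revision, only the task.
    resp = requests.post(
        "https://api.example.com/tasks/<task_id>/run",
        headers={"Authorization": "Bearer <api_key>"},
        json=payload,
    )
    return resp.json()

before = run_task({"email": "..."})
# ... click "Promote to Active Revision" in the Rightbrain UI ...
after = run_task({"email": "..."})

# The same call now reports a different revision ID, confirming the
# new version is serving traffic.
assert before.get("revision_id") != after.get("revision_id")
```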