root.skills

Attributes

Classes

ACalibrateBatchParameters

ACalibrateBatchResult

AEvaluator

Wrapper for a single Evaluator.

APresetEvaluatorRunner

ASkill

Wrapper for a single Skill.

CalibrateBatchParameters

CalibrateBatchResult

Evaluator

Wrapper for a single Evaluator.

EvaluatorDemonstration

Evaluator demonstration

Evaluators

Evaluators (sub) API

InputVariable

Input variable definition.

ModelParams

Additional model parameters.

PresetEvaluatorRunner

ReferenceVariable

Reference variable definition.

Skill

Wrapper for a single Skill.

Skills

Skills API

Versions

Version listing (sub)API

Module Contents

class root.skills.ACalibrateBatchParameters(name: str, prompt: str, model: ModelName, pii_filter: bool = False, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest] | None = None, data_loaders: List[root.data_loader.ADataLoader] | None = None)
Parameters:
  • name (str)

  • prompt (str)

  • model (ModelName)

  • pii_filter (bool)

  • reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest]]])

  • input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest]]])

  • data_loaders (Optional[List[root.data_loader.ADataLoader]])

data_loaders
input_variables
model
name
pii_filter
prompt
reference_variables
class root.skills.ACalibrateBatchResult

Bases: pydantic.BaseModel

mae_errors_model: Dict[str, float]
mae_errors_prompt: Dict[str, float]
results: List[root.generated.openapi_aclient.models.evaluator_calibration_output.EvaluatorCalibrationOutput]
rms_errors_model: Dict[str, float]
rms_errors_prompt: Dict[str, float]
class root.skills.AEvaluator

Bases: root.generated.openapi_aclient.models.skill.Skill

Wrapper for a single Evaluator.

For available attributes, please check the (automatically generated) superclass documentation.

async arun(response: str, request: str | None = None, contexts: List[str] | None = None, functions: List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest] | None = None, expected_output: str | None = None) → root.generated.openapi_aclient.models.EvaluatorExecutionResult

Asynchronously run the evaluator.

Parameters:
  • response (str) – LLM output.

  • request (Optional[str]) – The prompt sent to the LLM. Optional.

  • contexts (Optional[List[str]]) – Optional documents passed to RAG evaluators

  • functions (Optional[List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest]]) – Optional list of evaluator execution functions.

  • expected_output (Optional[str]) – Optional expected output for the evaluator.

Return type:

root.generated.openapi_aclient.models.EvaluatorExecutionResult
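As a usage sketch, an async evaluator wrapper can be scored in one call. The RootSignals entry point, its evaluators attribute, and the run_async flag below are assumptions based on the construction note in the Evaluators section, not confirmed by this page:

```python
import asyncio

from root import RootSignals  # assumed SDK entry point


async def main() -> None:
    client = RootSignals(run_async=True)  # hypothetical async-mode flag
    # aget_by_name only works for uniquely named evaluators (see below).
    evaluator = await client.evaluators.aget_by_name("Clarity")
    result = await evaluator.arun(
        response="Paris is the capital of France.",
        request="What is the capital of France?",
    )
    print(result.score)  # EvaluatorExecutionResult score, between 0 and 1


asyncio.run(main())
```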

class root.skills.APresetEvaluatorRunner(client: Awaitable[root.generated.openapi_aclient.ApiClient], skill_id: str, eval_name: str, skill_version_id: str | None = None)
Parameters:
  • client (Awaitable[root.generated.openapi_aclient.ApiClient])

  • skill_id (str)

  • eval_name (str)

  • skill_version_id (Optional[str])

skill_id
skill_version_id
class root.skills.ASkill

Bases: root.generated.openapi_aclient.models.skill.Skill

Wrapper for a single Skill.

For available attributes, please check the (automatically generated) superclass documentation.

async aevaluate(*, response: str, request: str | None = None, contexts: List[str] | None = None, variables: dict[str, str] | None = None, functions: List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest] | None = None, _request_timeout: int | None = None) → root.generated.openapi_aclient.models.validator_execution_result.ValidatorExecutionResult

Asynchronously run all validators attached to a skill.

Parameters:
  • response (str) – LLM output.

  • request (Optional[str]) – The prompt sent to the LLM. Optional.

  • contexts (Optional[List[str]]) – Optional documents passed to RAG evaluators

  • variables (Optional[dict[str, str]]) – Optional variables for the evaluator prompt template

  • functions (Optional[List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest]])

  • _request_timeout (Optional[int])

Return type:

root.generated.openapi_aclient.models.validator_execution_result.ValidatorExecutionResult

async aopenai_base_url() → str

Asynchronously get the OpenAI compatibility API URL for the skill.

Currently only OpenAI chat completions API is supported using the base URL.

Return type:

str

async arun(variables: Dict[str, str] | None = None) → root.generated.openapi_aclient.models.skill_execution_result.SkillExecutionResult

Asynchronously run a skill with optional variables.

Parameters:

variables (Optional[Dict[str, str]]) – The variables to be provided to the skill.

Return type:

root.generated.openapi_aclient.models.skill_execution_result.SkillExecutionResult
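The two ASkill methods above compose naturally: run the skill, then validate its output. A minimal sketch; the RootSignals construction, the skills attribute, the run_async flag, and the placeholder skill id are all assumptions:

```python
import asyncio

from root import RootSignals  # assumed SDK entry point


async def main() -> None:
    client = RootSignals(run_async=True)  # hypothetical async-mode flag
    skill = await client.skills.aget("skill-id-here")  # placeholder id

    # Run the skill with template variables, then run all attached validators
    # on the produced output.
    run = await skill.arun({"topic": "observability"})
    validation = await skill.aevaluate(response=run.llm_output)
    print(validation)


asyncio.run(main())
```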

class root.skills.CalibrateBatchParameters(name: str, prompt: str, model: ModelName, pii_filter: bool = False, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest] | None = None, data_loaders: List[root.data_loader.DataLoader] | None = None)
Parameters:
  • name (str)

  • prompt (str)

  • model (ModelName)

  • pii_filter (bool)

  • reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest]]])

  • input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest]]])

  • data_loaders (Optional[List[root.data_loader.DataLoader]])

data_loaders
input_variables
model
name
pii_filter
prompt
reference_variables
class root.skills.CalibrateBatchResult

Bases: pydantic.BaseModel

mae_errors_model: Dict[str, float]
mae_errors_prompt: Dict[str, float]
results: List[root.generated.openapi_client.models.evaluator_calibration_output.EvaluatorCalibrationOutput]
rms_errors_model: Dict[str, float]
rms_errors_prompt: Dict[str, float]
class root.skills.Evaluator

Bases: root.generated.openapi_client.models.skill.Skill

Wrapper for a single Evaluator.

For available attributes, please check the (automatically generated) superclass documentation.

run(response: str, request: str | None = None, contexts: List[str] | None = None, functions: List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest] | None = None, expected_output: str | None = None, variables: dict[str, str] | None = None) → root.generated.openapi_client.models.evaluator_execution_result.EvaluatorExecutionResult

Run the evaluator.

Parameters:
  • response (str) – LLM output.

  • request (Optional[str]) – The prompt sent to the LLM. Optional.

  • contexts (Optional[List[str]]) – Optional documents passed to RAG evaluators

  • functions (Optional[List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest]]) – Optional list of evaluator execution functions.

  • expected_output (Optional[str]) – Optional expected output for the evaluator.

  • variables (Optional[dict[str, str]]) – Optional variables for the evaluator prompt template

Return type:

root.generated.openapi_client.models.evaluator_execution_result.EvaluatorExecutionResult

class root.skills.EvaluatorDemonstration

Bases: pydantic.BaseModel

Evaluator demonstration

Demonstrations are used to train an evaluator to adjust its behavior.

justification: str | None = None
output: str
prompt: str | None = None
score: float
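The field shapes above can be illustrated with a local stand-in; the real EvaluatorDemonstration is a pydantic BaseModel, but a dataclass sketch with the same required/optional fields shows how demonstration data is typically assembled:

```python
from dataclasses import dataclass
from typing import Optional


# Local stand-in mirroring root.skills.EvaluatorDemonstration:
# output and score are required; prompt and justification default to None.
@dataclass
class EvaluatorDemonstration:
    output: str
    score: float
    prompt: Optional[str] = None
    justification: Optional[str] = None


# Two demonstrations nudging a hypothetical politeness evaluator.
demos = [
    EvaluatorDemonstration(
        output="Here you go. Anything else?",
        score=0.4,
        prompt="Give me the report now.",
        justification="Terse, but not impolite.",
    ),
    EvaluatorDemonstration(
        output="Of course! Here it is.",
        score=0.9,
        prompt="Could you share the report?",
    ),
]
```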
class root.skills.Evaluators(client: Awaitable[root.generated.openapi_aclient.ApiClient] | root.generated.openapi_client.api_client.ApiClient)

Evaluators (sub) API

Note

The construction of the API instance should be handled by accessing an attribute of a root.client.RootSignals instance.

Parameters:

client (Union[Awaitable[root.generated.openapi_aclient.ApiClient], root.generated.openapi_client.api_client.ApiClient])

class Eval(*args, **kwds)

Bases: enum.Enum

Create a collection of name/value pairs.

Example enumeration:

>>> class Color(Enum):
...     RED = 1
...     BLUE = 2
...     GREEN = 3

Access them by:

  • attribute access:

    >>> Color.RED
    <Color.RED: 1>
    
  • value lookup:

    >>> Color(1)
    <Color.RED: 1>
    
  • name lookup:

    >>> Color['RED']
    <Color.RED: 1>
    

Enumerations can be iterated over, and know how many members they have:

>>> len(Color)
3
>>> list(Color)
[<Color.RED: 1>, <Color.BLUE: 2>, <Color.GREEN: 3>]

Methods can be added to enumerations, and members can have their own attributes – see the documentation for details.

Answer_Correctness = 'd4487568-4243-4da8-9c76-adbaf762dbe0'
Answer_Relevance = '0907d422-e94f-4c9c-a63d-ec0eefd8a903'
Answer_Semantic_Similarity = 'ff350bce-4b07-4af7-9640-803c9d3c2ff9'
Clarity = '9976d9f3-7265-4732-b518-d61c2642b14e'
Coherence = 'e599886c-c338-458f-91b3-5d7eba452618'
Conciseness = 'be828d33-158a-4e92-a2eb-f4d96c13f956'
Confidentiality = '2eaa0a02-47a9-48f7-9b47-66ad257f93eb'
Context_Precision = '9d1e9a25-7e76-4771-b1e3-40825d7918c5'
Context_Recall = '8bb60975-5062-4367-9fc6-a920044cba56'
Engagingness = '64729487-d4a8-42d8-bd9e-72fd8390c134'
Faithfulness = '901794f9-634c-4852-9e41-7c558f1ff1ab'
Formality = '8ab6cf1a-42b5-4a23-a15c-21372816483d'
Harmlessness = '379fee0a-4fd1-4942-833b-7d78d78b334d'
Helpfulness = '88bc92d5-bebf-45e4-9cd1-dfa33309c320'
JSON_Content_Accuracy = 'b6a9aeff-c888-46d7-9e9c-7cf8cb461762'
JSON_Empty_Values_Ratio = '03829088-1799-438e-ae30-1db60832e52d'
JSON_Property_Completeness = 'e5de37f7-d20c-420f-8072-f41dce96ecfc'
JSON_Property_Name_Accuracy = '740923aa-8ffd-49cc-a95d-14f831243b25'
JSON_Property_Type_Accuracy = 'eabc6924-1fec-4e96-82ce-c03bf415c885'
Non_toxicity = 'e296e374-7539-4eb2-a74a-47847dd26fb8'
Originality = 'e72cb54f-548a-44f9-a6ca-4e14e5ade7f7'
Persuasiveness = '85bb6a74-f5dd-4130-8dcc-cffdf72327cc'
Politeness = '2856903a-e48c-4548-b3fe-520fd88c4f25'
Precision = '767bdd49-5f8c-48ca-8324-dfd6be7f8a79'
Quality_of_Writing_Creative = '060abfb6-57c9-43b5-9a6d-8a1a9bb853b8'
Quality_of_Writing_Professional = '059affa9-2d1c-48de-8e97-f81dd3fc3cbe'
Relevance = 'bd789257-f458-4e9e-8ce9-fa6e86dc3fb9'
Safety_for_Children = '39a8b5ba-de77-4726-a6b0-621d40b3cdf5'
Sentiment_recognition = 'e3782c1e-eaf4-4b2d-8d26-53db2160f1fd'
Truthfulness = '053df10f-b0c7-400b-892e-46ce3aa1e430'
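The preset evaluator IDs above behave like any other Enum members. A self-contained sketch using two values copied verbatim from the listing:

```python
from enum import Enum


# Local stand-in for root.skills.Evaluators.Eval, with two members
# copied from the listing above.
class Eval(Enum):
    Clarity = '9976d9f3-7265-4732-b518-d61c2642b14e'
    Relevance = 'bd789257-f458-4e9e-8ce9-fa6e86dc3fb9'


# Name lookup, value lookup, and identity all work as documented.
assert Eval['Clarity'] is Eval.Clarity
assert Eval('bd789257-f458-4e9e-8ce9-fa6e86dc3fb9') is Eval.Relevance
assert len(Eval) == 2
```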
async acalibrate(*, name: str, test_dataset_id: str | None = None, test_data: List[List[str]] | None = None, prompt: str, model: ModelName, pii_filter: bool = False, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest] | None = None, data_loaders: List[root.data_loader.ADataLoader] | None = None, _request_timeout: int | None = None) → List[root.generated.openapi_aclient.models.evaluator_calibration_output.EvaluatorCalibrationOutput]

Asynchronously run calibration set on an existing evaluator.

Parameters:
  • name (str)

  • test_dataset_id (Optional[str])

  • test_data (Optional[List[List[str]]])

  • prompt (str)

  • model (ModelName)

  • pii_filter (bool)

  • reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest]]])

  • input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest]]])

  • data_loaders (Optional[List[root.data_loader.ADataLoader]])

  • _request_timeout (Optional[int])

Return type:

List[root.generated.openapi_aclient.models.evaluator_calibration_output.EvaluatorCalibrationOutput]

async acalibrate_batch(*, evaluator_definitions: List[ACalibrateBatchParameters], test_dataset_id: str | None = None, test_data: List[List[str]] | None = None, parallel_requests: int = 1, _request_timeout: int | None = None) → ACalibrateBatchResult

Asynchronously run calibration for a set of prompts and models

Parameters:
  • evaluator_definitions (List[ACalibrateBatchParameters]) – List of evaluator definitions.

  • test_dataset_id (Optional[str]) – ID of the dataset to be used to test the skill.

  • test_data (Optional[List[List[str]]]) – Actual data to be used to test the skill.

  • parallel_requests (int) – Number of parallel requests. Uses ThreadPoolExecutor if > 1.

  • _request_timeout (Optional[int])

Return type:

ACalibrateBatchResult

Returns a dictionary with the results and errors for each model and prompt.

async acalibrate_existing(evaluator_id: str, *, test_dataset_id: str | None = None, test_data: List[List[str]] | None = None, _request_timeout: int | None = None) → List[root.generated.openapi_aclient.models.evaluator_calibration_output.EvaluatorCalibrationOutput]

Asynchronously run calibration set on an existing evaluator.

Parameters:
  • evaluator_id (str)

  • test_dataset_id (Optional[str])

  • test_data (Optional[List[List[str]]])

  • _request_timeout (Optional[int])

Return type:

List[root.generated.openapi_aclient.models.evaluator_calibration_output.EvaluatorCalibrationOutput]

async acreate(predicate: str = '', *, name: str | None = None, intent: str | None = None, model: ModelName | None = None, fallback_models: List[ModelName] | None = None, pii_filter: bool = False, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest] | None = None, data_loaders: List[root.data_loader.ADataLoader] | None = None, model_params: ModelParams | root.generated.openapi_aclient.models.ModelParamsRequest | None = None, objective_id: str | None = None, overwrite: bool = False) → AEvaluator

Asynchronously create a new evaluator and return the result.

Parameters:
  • predicate (str) – The question / predicate that is provided to the semantic quantification layer to transform it into a final prompt before being passed to the model (not used if using the OpenAI compatibility API).

  • name (Optional[str]) – Name of the skill (defaulting to <unnamed>).

  • intent (Optional[str]) – The intent of the skill (defaulting to name); not available if objective_id is set.

  • model (Optional[ModelName]) – The model to use (defaults to ‘root’, which means the Root Signals default at the time of skill creation).

  • fallback_models (Optional[List[ModelName]]) – The fallback models to use in case the primary model fails.

  • pii_filter (bool) – Whether to use the PII filter or not.

  • reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest]]]) – An optional list of reference variables for the skill.

  • input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest]]]) – An optional list of input variables for the skill.

  • data_loaders (Optional[List[root.data_loader.ADataLoader]]) – An optional list of data loaders, which populate the reference variables.

  • model_params (Optional[Union[ModelParams, root.generated.openapi_aclient.models.ModelParamsRequest]]) – An optional set of additional parameters for the model.

  • objective_id (Optional[str]) – An already created objective id to assign to the evaluator skill.

  • overwrite (bool) – Whether to overwrite a skill with the same name if it exists.

Return type:

AEvaluator

async aget_by_name(name: str) → AEvaluator

Asynchronously get an evaluator instance by name.

Parameters:

name (str) – The name of the evaluator to be fetched. Note that this only works for uniquely named evaluators.

Return type:

AEvaluator

async arun(evaluator_id: str, *, request: str, response: str, contexts: List[str] | None = None, functions: List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest] | None = None, expected_output: str | None = None, evaluator_version_id: str | None = None, _request_timeout: int | None = None) → root.generated.openapi_aclient.models.EvaluatorExecutionResult

Asynchronously run an evaluator using its id and an optional version id. If no evaluator version id is given, the latest version of the evaluator will be used.

Returns a dictionary with the following keys:
  • score – a value between 0 and 1 representing the score of the evaluator.

Parameters:
  • evaluator_id (str)

  • request (str)

  • response (str)

  • contexts (Optional[List[str]])

  • functions (Optional[List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest]])

  • expected_output (Optional[str])

  • evaluator_version_id (Optional[str])

  • _request_timeout (Optional[int])

Return type:

root.generated.openapi_aclient.models.EvaluatorExecutionResult

calibrate(*, name: str, test_dataset_id: str | None = None, test_data: List[List[str]] | None = None, prompt: str, model: ModelName, pii_filter: bool = False, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest] | None = None, data_loaders: List[root.data_loader.DataLoader] | None = None, _request_timeout: int | None = None) → List[root.generated.openapi_client.models.evaluator_calibration_output.EvaluatorCalibrationOutput]

Run calibration set on an existing evaluator.

Parameters:
  • name (str)

  • test_dataset_id (Optional[str])

  • test_data (Optional[List[List[str]]])

  • prompt (str)

  • model (ModelName)

  • pii_filter (bool)

  • reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest]]])

  • input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest]]])

  • data_loaders (Optional[List[root.data_loader.DataLoader]])

  • _request_timeout (Optional[int])

Return type:

List[root.generated.openapi_client.models.evaluator_calibration_output.EvaluatorCalibrationOutput]

calibrate_batch(*, evaluator_definitions: List[CalibrateBatchParameters], test_dataset_id: str | None = None, test_data: List[List[str]] | None = None, parallel_requests: int = 1, _request_timeout: int | None = None) → CalibrateBatchResult

Run calibration for a set of prompts and models

Parameters:
  • evaluator_definitions (List[CalibrateBatchParameters]) – List of evaluator definitions.

  • test_dataset_id (Optional[str]) – ID of the dataset to be used to test the skill.

  • test_data (Optional[List[List[str]]]) – Actual data to be used to test the skill.

  • parallel_requests (int) – Number of parallel requests. Uses ThreadPoolExecutor if > 1.

  • _request_timeout (Optional[int])

Return type:

CalibrateBatchResult

Returns a dictionary with the results and errors for each model and prompt.
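A batch calibration can compare several prompt/model variants in one call. In this sketch the RootSignals construction, the evaluators attribute, the model name string, and the test_data row layout are all assumptions:

```python
from root import RootSignals  # assumed SDK entry point
from root.skills import CalibrateBatchParameters

client = RootSignals()

# Two candidate prompt variants for the same evaluator, scored against
# the same test data.
params = [
    CalibrateBatchParameters(
        name="politeness-v1",
        prompt="Is the response polite?",
        model="gpt-4o",  # assumed ModelName value
    ),
    CalibrateBatchParameters(
        name="politeness-v2",
        prompt="Rate the politeness of the response.",
        model="gpt-4o",
    ),
]

result = client.evaluators.calibrate_batch(
    evaluator_definitions=params,
    test_data=[["0.8", "Thanks for asking! Here it is."]],  # assumed row layout
    parallel_requests=2,  # uses ThreadPoolExecutor when > 1
)
print(result.rms_errors_prompt)  # per-prompt RMS errors, per CalibrateBatchResult
```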

calibrate_existing(evaluator_id: str, *, test_dataset_id: str | None = None, test_data: List[List[str]] | None = None, _request_timeout: int | None = None) → List[root.generated.openapi_client.models.evaluator_calibration_output.EvaluatorCalibrationOutput]

Run calibration set on an existing evaluator.

Parameters:
  • evaluator_id (str)

  • test_dataset_id (Optional[str])

  • test_data (Optional[List[List[str]]])

  • _request_timeout (Optional[int])

Return type:

List[root.generated.openapi_client.models.evaluator_calibration_output.EvaluatorCalibrationOutput]

create(predicate: str = '', *, name: str | None = None, intent: str | None = None, model: ModelName | None = None, fallback_models: List[ModelName] | None = None, pii_filter: bool = False, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest] | None = None, data_loaders: List[root.data_loader.DataLoader] | None = None, model_params: ModelParams | root.generated.openapi_client.models.model_params_request.ModelParamsRequest | None = None, objective_id: str | None = None, overwrite: bool = False) → Evaluator

Create a new evaluator and return the result.

Parameters:
  • predicate (str) – The question / predicate that is provided to the semantic quantification layer to transform it into a final prompt before being passed to the model (not used if using the OpenAI compatibility API).

  • name (Optional[str]) – Name of the skill (defaulting to <unnamed>).

  • intent (Optional[str]) – The intent of the skill (defaulting to name); not available if objective_id is set.

  • model (Optional[ModelName]) – The model to use (defaults to ‘root’, which means the Root Signals default at the time of skill creation).

  • fallback_models (Optional[List[ModelName]]) – The fallback models to use in case the primary model fails.

  • pii_filter (bool) – Whether to use the PII filter or not.

  • reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest]]]) – An optional list of reference variables for the skill.

  • input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest]]]) – An optional list of input variables for the skill.

  • data_loaders (Optional[List[root.data_loader.DataLoader]]) – An optional list of data loaders, which populate the reference variables.

  • model_params (Optional[Union[ModelParams, root.generated.openapi_client.models.model_params_request.ModelParamsRequest]]) – An optional set of additional parameters for the model.

  • objective_id (Optional[str]) – An already created objective id to assign to the evaluator skill.

  • overwrite (bool) – Whether to overwrite a skill with the same name if it exists.

Return type:

Evaluator
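Creating and immediately running an evaluator takes two calls. A minimal sync sketch; the RootSignals entry point and the evaluators attribute are assumptions based on the construction note above:

```python
from root import RootSignals  # assumed SDK entry point

client = RootSignals()

# The first positional argument is the predicate fed to the
# semantic quantification layer.
evaluator = client.evaluators.create(
    "Is the response helpful and does it directly answer the question?",
    name="Helpfulness v2",
    intent="Judge helpfulness of support answers",
    overwrite=True,  # replace an existing evaluator with the same name
)

result = evaluator.run(
    response="Restart the router, then re-run setup.",
    request="My internet is down, what should I do?",
)
print(result.score)  # between 0 and 1
```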

get_by_name(name: str) → Evaluator

Get an evaluator instance by name.

Parameters:

name (str) – The name of the evaluator to be fetched. Note that this only works for uniquely named evaluators.

Return type:

Evaluator

run(evaluator_id: str, *, request: str, response: str, contexts: List[str] | None = None, functions: List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest] | None = None, expected_output: str | None = None, evaluator_version_id: str | None = None, _request_timeout: int | None = None) → root.generated.openapi_client.models.evaluator_execution_result.EvaluatorExecutionResult

Run an evaluator using its id and an optional version id. If no evaluator version id is given, the latest version of the evaluator will be used.

Returns a dictionary with the following keys:
  • score – a value between 0 and 1 representing the score of the evaluator.

Parameters:
  • evaluator_id (str)

  • request (str)

  • response (str)

  • contexts (Optional[List[str]])

  • functions (Optional[List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest]])

  • expected_output (Optional[str])

  • evaluator_version_id (Optional[str])

  • _request_timeout (Optional[int])

Return type:

root.generated.openapi_client.models.evaluator_execution_result.EvaluatorExecutionResult

EvaluatorName
client
versions
class root.skills.InputVariable

Bases: pydantic.BaseModel

Input variable definition.

name within prompt gets populated with the provided variable.

name: str
class root.skills.ModelParams

Bases: pydantic.BaseModel

Additional model parameters.

All fields are optional.

temperature: float | None = None
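ModelParams is accepted wherever the signatures above take a model_params argument, e.g. when creating an evaluator. A sketch (the RootSignals construction and the evaluators attribute are assumptions):

```python
from root import RootSignals  # assumed SDK entry point
from root.skills import ModelParams

client = RootSignals()

# Pin the sampling temperature to 0 so the evaluator scores
# as deterministically as the model allows.
evaluator = client.evaluators.create(
    "Does the response stay on topic?",
    name="On-topic (deterministic)",
    model_params=ModelParams(temperature=0.0),
)
```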
class root.skills.PresetEvaluatorRunner(client: root.generated.openapi_client.api_client.ApiClient, skill_id: str, eval_name: str, skill_version_id: str | None = None)
Parameters:
  • client (root.generated.openapi_client.api_client.ApiClient)

  • skill_id (str)

  • eval_name (str)

  • skill_version_id (Optional[str])

skill_id
skill_version_id
class root.skills.ReferenceVariable

Bases: pydantic.BaseModel

Reference variable definition.

name within prompt gets populated with content from dataset_id.

dataset_id: str
name: str
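InputVariable and ReferenceVariable both name a placeholder in the prompt; the reference variant additionally binds it to a dataset. A sketch (the RootSignals construction, the placeholder-in-prompt syntax, and the dataset id are assumptions):

```python
from root import RootSignals  # assumed SDK entry point
from root.skills import InputVariable, ReferenceVariable

client = RootSignals()

evaluator = client.evaluators.create(
    # Placeholder syntax is an assumption; "response" is filled at run time,
    # "style_guide" is populated from the referenced dataset.
    "Is {{response}} consistent with the style guide in {{style_guide}}?",
    name="Style-guide consistency",
    input_variables=[InputVariable(name="response")],
    reference_variables=[
        ReferenceVariable(name="style_guide", dataset_id="dataset-id-here")  # placeholder id
    ],
)
```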
class root.skills.Skill

Bases: root.generated.openapi_client.models.skill.Skill

Wrapper for a single Skill.

For available attributes, please check the (automatically generated) superclass documentation.

evaluate(*, response: str, request: str | None = None, contexts: List[str] | None = None, functions: List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest] | None = None, _request_timeout: int | None = None) → root.generated.openapi_client.models.validator_execution_result.ValidatorExecutionResult

Run all validators attached to a skill.

Parameters:
  • response (str) – LLM output.

  • request (Optional[str]) – The prompt sent to the LLM. Optional.

  • contexts (Optional[List[str]]) – Optional documents passed to RAG evaluators

  • functions (Optional[List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest]])

  • _request_timeout (Optional[int])

Return type:

root.generated.openapi_client.models.validator_execution_result.ValidatorExecutionResult

run(variables: Dict[str, str] | None = None) → root.generated.openapi_client.models.skill_execution_result.SkillExecutionResult

Run a skill with optional variables.

Parameters:

variables (Optional[Dict[str, str]]) – The variables to be provided to the skill.

Return type:

root.generated.openapi_client.models.skill_execution_result.SkillExecutionResult

property openai_base_url: str

Get the OpenAI compatibility API URL for the skill.

Currently only OpenAI chat completions API is supported using the base URL.

Return type:

str
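Since only the chat completions API is supported through the base URL, a skill can be driven with a stock OpenAI client pointed at it. In this sketch the openai package, the sync skills.get accessor (mirroring aget above), and the API-key handling are all assumptions:

```python
from openai import OpenAI  # assumed to be installed separately
from root import RootSignals  # assumed SDK entry point

rs = RootSignals()
skill = rs.skills.get("skill-id-here")  # placeholder id; sync get assumed

# Point a standard OpenAI client at the skill's compatibility endpoint.
client = OpenAI(
    base_url=skill.openai_base_url,
    api_key="<api key>",  # assumed: credentials for the compatibility API
)
completion = client.chat.completions.create(
    model="-",  # assumed: the skill governs the actual model choice
    messages=[{"role": "user", "content": "Summarize our Q3 results."}],
)
print(completion.choices[0].message.content)
```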

class root.skills.Skills(client: Awaitable[root.generated.openapi_aclient.ApiClient] | root.generated.openapi_client.api_client.ApiClient)

Skills API

Note

The construction of the API instance should be handled by accessing an attribute of a root.client.RootSignals instance.

Parameters:

client (Union[Awaitable[root.generated.openapi_aclient.ApiClient], root.generated.openapi_client.api_client.ApiClient])

async acreate(prompt: str = '', *, name: str | None = None, intent: str | None = None, model: ModelName | None = None, system_message: str = '', fallback_models: List[ModelName] | None = None, pii_filter: bool = False, validators: List[root.validators.AValidator] | None = None, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest] | None = None, is_evaluator: bool | None = None, data_loaders: List[root.data_loader.ADataLoader] | None = None, model_params: ModelParams | root.generated.openapi_aclient.models.ModelParamsRequest | None = None, objective_id: str | None = None, overwrite: bool = False, _request_timeout: int | None = None) → ASkill

Asynchronously create a new skill and return the result

Parameters:
  • prompt (str) – The prompt that is provided to the model (not used if using the OpenAI compatibility API).

  • name (Optional[str]) – Name of the skill (defaulting to <unnamed>)

  • objective_id (Optional[str]) – Already created objective id to assign to the skill.

  • intent (Optional[str]) – The intent of the skill (defaulting to name); not available if objective_id is set.

  • model (Optional[ModelName]) – The model to use (defaults to ‘root’, which means Root Signals default at the time of skill creation)

  • fallback_models (Optional[List[ModelName]]) – The fallback models to use in case the primary model fails.

  • system_message (str) – The system instruction to give to the model (mainly useful with OpenAI compatibility API).

  • pii_filter (bool) – Whether to use PII filter or not.

  • validators (Optional[List[root.validators.AValidator]]) – An optional list of validators; not available if objective_id is set.

  • reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest]]]) – An optional list of reference variables for the skill.

  • input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest]]]) – An optional list of input variables for the skill.

  • is_evaluator (Optional[bool]) – Whether the skill is an evaluator or not. Evaluators should have prompts that cause the model to return a score between 0 and 1.

  • data_loaders (Optional[List[root.data_loader.ADataLoader]]) – An optional list of data loaders, which populate the reference variables.

  • model_params (Optional[Union[ModelParams, root.generated.openapi_aclient.models.ModelParamsRequest]]) – An optional set of additional parameters to the model.

  • overwrite (bool) – Whether to overwrite a skill with the same name if it exists.

  • _request_timeout (Optional[int])

Return type:

ASkill
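An async skill creation followed by a run looks like this sketch; the RootSignals construction, the run_async flag, and the placeholder template syntax are assumptions:

```python
import asyncio

from root import RootSignals  # assumed SDK entry point


async def main() -> None:
    client = RootSignals(run_async=True)  # hypothetical async-mode flag
    skill = await client.skills.acreate(
        "Write a haiku about {{topic}}",  # placeholder syntax assumed
        name="Haiku writer",
        system_message="You are a concise poet.",
    )
    result = await skill.arun({"topic": "autumn rain"})
    print(result.llm_output)  # the LLM response of the skill run


asyncio.run(main())
```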

async acreate_chat(skill_id: str, *, chat_id: str | None = None, name: str | None = None, history_from_chat_id: str | None = None, _request_timeout: int | None = None) → root.skill_chat.ASkillChat

Asynchronously create and store chat object with the given parameters.

Parameters:
  • skill_id (str) – The skill to chat with.

  • chat_id (Optional[str]) – Optional identifier to identify the chat. If not supplied, one is automatically generated.

  • name (Optional[str]) – Optional name for the chat.

  • history_from_chat_id (Optional[str]) – Optional chat_id to copy chat history from.

  • _request_timeout (Optional[int])

Return type:

root.skill_chat.ASkillChat

async adelete(skill_id: str) → None

Asynchronously delete the skill from the registry.

Parameters:

skill_id (str) – The skill to be deleted.

Return type:

None

async aevaluate(skill_id: str, *, response: str, request: str | None = None, contexts: List[str] | None = None, functions: List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest] | None = None, skill_version_id: str | None = None, _request_timeout: int | None = None) → root.generated.openapi_aclient.models.validator_execution_result.ValidatorExecutionResult

Asynchronously run all validators attached to a skill.

Parameters:
  • response (str) – LLM output.

  • request (Optional[str]) – The prompt sent to the LLM. Optional.

  • contexts (Optional[List[str]]) – Optional documents passed to RAG evaluators

  • skill_version_id (Optional[str]) – Skill version id. If omitted, the latest version is used.

  • skill_id (str)

  • functions (Optional[List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest]])

  • _request_timeout (Optional[int])

Return type:

root.generated.openapi_aclient.models.validator_execution_result.ValidatorExecutionResult

async aget(skill_id: str, _request_timeout: int | None = None) → ASkill

Asynchronously get a Skill instance by ID.

Parameters:
  • skill_id (str) – The skill to be fetched

  • _request_timeout (Optional[int])

Return type:

ASkill

async alist(search_term: str | None = None, *, limit: int = 100, name: str | None = None, only_evaluators: bool = False) AsyncIterator[root.generated.openapi_aclient.models.skill_list_output.SkillListOutput]

Asynchronously iterate through the skills.

Note that this call lists only publicly available global skills, plus those skills within the organization that are available to the current user (or all of them if the user is an admin).

Parameters:
  • limit (int) – Number of entries to iterate through at most.

  • name (Optional[str]) – Specific name the returned skills must match.

  • only_evaluators (bool) – Match only Skills with is_evaluator=True.

  • search_term (Optional[str]) – Free-text term used to filter the returned skills.

Return type:

AsyncIterator[root.generated.openapi_aclient.models.skill_list_output.SkillListOutput]

async arun(skill_id: str, variables: Dict[str, str] | None = None, *, model_params: ModelParams | root.generated.openapi_aclient.models.ModelParamsRequest | None = None, skill_version_id: str | None = None, _request_timeout: int | None = None) root.generated.openapi_aclient.models.skill_execution_result.SkillExecutionResult

Asynchronously run a skill with optional variables, model parameters, and a skill version id. If no skill version id is given, the latest version of the skill is used. If model parameters are not given, the skill's own model parameters are used; if the skill has none, the default model parameters are used.

Returns a dictionary with the following keys:
  • llm_output: the LLM response of the skill run

  • validation: the result of the skill validation

Parameters:
  • skill_id (str)

  • variables (Optional[Dict[str, str]])

  • model_params (Optional[Union[ModelParams, root.generated.openapi_aclient.models.ModelParamsRequest]])

  • skill_version_id (Optional[str])

  • _request_timeout (Optional[int])

Return type:

root.generated.openapi_aclient.models.skill_execution_result.SkillExecutionResult
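The asynchronous run flow above can be sketched as follows. This is a minimal sketch assuming the SDK exposes a `RootSignals` client entry point whose async (a-prefixed) API is enabled with a `run_async=True` flag; that flag, the skill id, and the variables are all illustrative assumptions:

```python
import asyncio

from root import RootSignals  # assumed client entry point of the root SDK


async def main() -> None:
    # run_async=True is an assumption: it selects the async (a-prefixed) API.
    client = RootSignals(run_async=True)
    result = await client.skills.arun(
        "skill-id",  # illustrative skill id
        variables={"text": "Root Signals in one sentence."},
    )
    # The result carries the LLM response and the validation outcome.
    print(result.llm_output)
    print(result.validation)


asyncio.run(main())
```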

async atest(test_dataset_id: str, prompt: str, model: ModelName, *, fallback_models: List[ModelName] | None = None, pii_filter: bool = False, validators: List[root.validators.AValidator] | None = None, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest] | None = None, data_loaders: List[root.data_loader.ADataLoader] | None = None, _request_timeout: int | None = None) List[root.generated.openapi_aclient.models.skill_test_output.SkillTestOutput]

Asynchronously test a skill definition with a test dataset and return the result.

For a description of the remaining arguments, please refer to the create method.

Parameters:
  • test_dataset_id (str)

  • prompt (str)

  • model (ModelName)

  • fallback_models (Optional[List[ModelName]])

  • pii_filter (bool)

  • validators (Optional[List[root.validators.AValidator]])

  • reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest]]])

  • input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest]]])

  • data_loaders (Optional[List[root.data_loader.ADataLoader]])

  • _request_timeout (Optional[int])

Return type:

List[root.generated.openapi_aclient.models.skill_test_output.SkillTestOutput]

async atest_existing(skill_id: str, *, test_dataset_id: str | None = None, test_data: List[List[str]] | None = None, _request_timeout: int | None = None) List[root.generated.openapi_aclient.models.skill_test_output.SkillTestOutput]

Asynchronously test an existing skill.

Note that exactly one of test_dataset_id and test_data must be provided.

Parameters:
  • test_data (Optional[List[List[str]]]) – Actual data to be used to test the skill.

  • test_dataset_id (Optional[str]) – ID of the dataset to be used to test the skill.

  • skill_id (str)

  • _request_timeout (Optional[int])

Return type:

List[root.generated.openapi_aclient.models.skill_test_output.SkillTestOutput]

async aupdate(skill_id: str, *, change_note: str | None = None, data_loaders: List[root.data_loader.ADataLoader] | None = None, fallback_models: List[ModelName] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest] | None = None, model: ModelName | None = None, name: str | None = None, pii_filter: bool | None = None, prompt: str | None = None, is_evaluator: bool | None = None, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest] | None = None, model_params: ModelParams | root.generated.openapi_aclient.models.ModelParamsRequest | None = None, evaluator_demonstrations: List[EvaluatorDemonstration] | None = None, objective_id: str | None = None, _request_timeout: int | None = None) ASkill

Asynchronously update an existing skill instance and return the result.

For a description of the remaining arguments, please refer to the create method.

Parameters:
  • skill_id (str) – The skill to be updated

  • change_note (Optional[str])

  • data_loaders (Optional[List[root.data_loader.ADataLoader]])

  • fallback_models (Optional[List[ModelName]])

  • input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest]]])

  • model (Optional[ModelName])

  • name (Optional[str])

  • pii_filter (Optional[bool])

  • prompt (Optional[str])

  • is_evaluator (Optional[bool])

  • reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest]]])

  • model_params (Optional[Union[ModelParams, root.generated.openapi_aclient.models.ModelParamsRequest]])

  • evaluator_demonstrations (Optional[List[EvaluatorDemonstration]])

  • objective_id (Optional[str])

  • _request_timeout (Optional[int])

Return type:

ASkill

create(prompt: str = '', *, name: str | None = None, intent: str | None = None, model: ModelName | None = None, system_message: str = '', fallback_models: List[ModelName] | None = None, pii_filter: bool = False, validators: List[root.validators.Validator] | None = None, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest] | None = None, is_evaluator: bool | None = None, data_loaders: List[root.data_loader.DataLoader] | None = None, model_params: ModelParams | root.generated.openapi_client.models.model_params_request.ModelParamsRequest | None = None, objective_id: str | None = None, overwrite: bool = False, _request_timeout: int | None = None) Skill

Create a new skill and return the result.

Parameters:
  • prompt (str) – The prompt that is provided to the model (not used if using the OpenAI compatibility API)

  • name (Optional[str]) – Name of the skill (defaulting to <unnamed>)

  • objective_id (Optional[str]) – Already created objective id to assign to the skill.

  • intent (Optional[str]) – The intent of the skill (defaulting to name); not available if objective_id is set.

  • model (Optional[ModelName]) – The model to use (defaults to ‘root’, which means Root Signals default at the time of skill creation)

  • fallback_models (Optional[List[ModelName]]) – The fallback models to use in case the primary model fails.

  • system_message (str) – The system instruction to give to the model (mainly useful with OpenAI compatibility API).

  • pii_filter (bool) – Whether to use PII filter or not.

  • validators (Optional[List[root.validators.Validator]]) – An optional list of validators; not available if objective_id is set.

  • reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest]]]) – An optional list of reference variables for the skill.

  • input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest]]]) – An optional list of input variables for the skill.

  • is_evaluator (Optional[bool]) – Whether the skill is an evaluator or not. Evaluators should have prompts that cause the model to return a numeric score.

  • data_loaders (Optional[List[root.data_loader.DataLoader]]) – An optional list of data loaders, which populate the reference variables.

  • model_params (Optional[Union[ModelParams, root.generated.openapi_client.models.model_params_request.ModelParamsRequest]]) – An optional set of additional parameters to the model.

  • overwrite (bool) – Whether to overwrite a skill with the same name if it exists.

  • _request_timeout (Optional[int])

Return type:

Skill
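The create call above can be sketched as follows. This is a minimal sketch assuming a configured `RootSignals` client from the root SDK (with credentials read from the environment) whose `skills` attribute is this API; the prompt, name, and model are illustrative:

```python
from root import RootSignals  # assumed client entry point of the root SDK

client = RootSignals()  # assumption: credentials are read from the environment

# Create a skill whose prompt declares one input variable, {{text}}.
skill = client.skills.create(
    prompt="Summarize the following text: {{text}}",
    name="summarizer",  # illustrative name
    model="gpt-4o",     # illustrative model name
    pii_filter=False,
    overwrite=False,
)
print(skill.id)
```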

create_chat(skill_id: str, *, chat_id: str | None = None, name: str | None = None, history_from_chat_id: str | None = None, _request_timeout: int | None = None) root.skill_chat.SkillChat

Create and store a chat object with the given parameters.

Parameters:
  • skill_id (str) – The skill to chat with.

  • chat_id (Optional[str]) – Optional identifier to identify the chat. If not supplied, one is automatically generated.

  • name (Optional[str]) – Optional name for the chat.

  • history_from_chat_id (Optional[str]) – Optional chat_id to copy chat history from.

  • _request_timeout (Optional[int])

Return type:

root.skill_chat.SkillChat

delete(skill_id: str) None

Delete the skill from the registry.

Parameters:

skill_id (str) – The skill to be deleted.

Return type:

None

evaluate(skill_id: str, *, response: str, request: str | None = None, contexts: List[str] | None = None, functions: List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest] | None = None, variables: dict[str, str] | None = None, skill_version_id: str | None = None, _request_timeout: int | None = None) root.generated.openapi_client.models.validator_execution_result.ValidatorExecutionResult

Run all validators attached to a skill.

Parameters:
  • response (str) – LLM output.

  • request (Optional[str]) – The prompt sent to the LLM. Optional.

  • contexts (Optional[List[str]]) – Optional documents passed to RAG evaluators

  • variables (Optional[dict[str, str]]) – Optional variables for the evaluator prompt template

  • skill_version_id (Optional[str]) – Skill version id. If omitted, the latest version is used.

  • skill_id (str)

  • functions (Optional[List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest]])

  • _request_timeout (Optional[int])

Return type:

root.generated.openapi_client.models.validator_execution_result.ValidatorExecutionResult
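The evaluate call above can be sketched as follows; a minimal sketch assuming a configured `RootSignals` client from the root SDK, with a hypothetical skill id and illustrative request/response strings:

```python
from root import RootSignals  # assumed client entry point of the root SDK

client = RootSignals()  # assumption: credentials are read from the environment

# Run every validator attached to the skill against one response.
verdict = client.skills.evaluate(
    "evaluator-skill-id",  # illustrative skill id
    response="Paris is the capital of France.",
    request="What is the capital of France?",
    contexts=["France is a country in Europe; its capital is Paris."],
)
print(verdict)
```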

get(skill_id: str, _request_timeout: int | None = None) Skill

Get a Skill instance by ID.

Parameters:
  • skill_id (str) – The skill to be fetched

  • _request_timeout (Optional[int])

Return type:

Skill

list(search_term: str | None = None, *, limit: int = 100, name: str | None = None, only_evaluators: bool = False) Iterator[root.generated.openapi_client.models.skill_list_output.SkillListOutput]

Iterate through the skills.

Note that this call lists only publicly available global skills, plus those skills within the organization that are available to the current user (or all of them if the user is an admin).

Parameters:
  • limit (int) – Number of entries to iterate through at most.

  • name (Optional[str]) – Specific name the returned skills must match.

  • only_evaluators (bool) – Match only Skills with is_evaluator=True.

  • search_term (Optional[str]) – Free-text term used to filter the returned skills.

Return type:

Iterator[root.generated.openapi_client.models.skill_list_output.SkillListOutput]
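Listing can be sketched as follows; a minimal sketch assuming a configured `RootSignals` client, with the `id` and `name` fields on SkillListOutput assumed from its name:

```python
from root import RootSignals  # assumed client entry point of the root SDK

client = RootSignals()  # assumption: credentials are read from the environment

# Iterate over at most ten skills whose name matches the search term.
for skill in client.skills.list(search_term="summarizer", limit=10):
    print(skill.id, skill.name)  # field names assumed from SkillListOutput
```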

run(skill_id: str, variables: Dict[str, str] | None = None, *, model_params: ModelParams | root.generated.openapi_client.models.model_params_request.ModelParamsRequest | None = None, skill_version_id: str | None = None, _request_timeout: int | None = None) root.generated.openapi_client.models.skill_execution_result.SkillExecutionResult

Run a skill with optional variables, model parameters, and a skill version id. If no skill version id is given, the latest version of the skill is used. If model parameters are not given, the skill's own model parameters are used; if the skill has none, the default model parameters are used.

Returns a dictionary with the following keys:
  • llm_output: the LLM response of the skill run

  • validation: the result of the skill validation

Parameters:
  • skill_id (str)

  • variables (Optional[Dict[str, str]])

  • model_params (Optional[Union[ModelParams, root.generated.openapi_client.models.model_params_request.ModelParamsRequest]])

  • skill_version_id (Optional[str])

  • _request_timeout (Optional[int])

Return type:

root.generated.openapi_client.models.skill_execution_result.SkillExecutionResult
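The run call above can be sketched as follows; a minimal sketch assuming a configured `RootSignals` client and a hypothetical skill id whose prompt contains a {{text}} variable:

```python
from root import RootSignals  # assumed client entry point of the root SDK

client = RootSignals()  # assumption: credentials are read from the environment

# Run the latest version of a skill; variables fill the prompt template.
result = client.skills.run(
    "skill-id",  # illustrative skill id
    variables={"text": "Root Signals lets you version and evaluate prompts."},
)
print(result.llm_output)  # the LLM response of the skill run
print(result.validation)  # the result of the skill validation
```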

test(test_dataset_id: str, prompt: str, model: ModelName, *, fallback_models: List[ModelName] | None = None, pii_filter: bool = False, validators: List[root.validators.Validator] | None = None, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest] | None = None, data_loaders: List[root.data_loader.DataLoader] | None = None, _request_timeout: int | None = None) List[root.generated.openapi_client.models.skill_test_output.SkillTestOutput]

Test a skill definition with a test dataset and return the result.

For a description of the remaining arguments, please refer to the create method.

Parameters:
  • test_dataset_id (str)

  • prompt (str)

  • model (ModelName)

  • fallback_models (Optional[List[ModelName]])

  • pii_filter (bool)

  • validators (Optional[List[root.validators.Validator]])

  • reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest]]])

  • input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest]]])

  • data_loaders (Optional[List[root.data_loader.DataLoader]])

  • _request_timeout (Optional[int])

Return type:

List[root.generated.openapi_client.models.skill_test_output.SkillTestOutput]

test_existing(skill_id: str, *, test_dataset_id: str | None = None, test_data: List[List[str]] | None = None, _request_timeout: int | None = None) List[root.generated.openapi_client.models.skill_test_output.SkillTestOutput]

Test an existing skill.

Note that exactly one of test_dataset_id and test_data must be provided.

Parameters:
  • test_data (Optional[List[List[str]]]) – Actual data to be used to test the skill.

  • test_dataset_id (Optional[str]) – ID of the dataset to be used to test the skill.

  • skill_id (str)

  • _request_timeout (Optional[int])

Return type:

List[root.generated.openapi_client.models.skill_test_output.SkillTestOutput]
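Testing with inline data can be sketched as follows; a minimal sketch assuming a configured `RootSignals` client, with a hypothetical skill id and a row shape inferred only from the List[List[str]] type:

```python
from root import RootSignals  # assumed client entry point of the root SDK

client = RootSignals()  # assumption: credentials are read from the environment

# Test with inline rows instead of a stored dataset; the row shape
# (one list of strings per test case) is assumed from List[List[str]].
outputs = client.skills.test_existing(
    "skill-id",  # illustrative skill id
    test_data=[
        ["What is the capital of France?"],
        ["What is the capital of Finland?"],
    ],
)
for output in outputs:
    print(output)
```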

update(skill_id: str, *, change_note: str | None = None, data_loaders: List[root.data_loader.DataLoader] | None = None, fallback_models: List[ModelName] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest] | None = None, model: ModelName | None = None, name: str | None = None, pii_filter: bool | None = None, prompt: str | None = None, is_evaluator: bool | None = None, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest] | None = None, model_params: ModelParams | root.generated.openapi_client.models.model_params_request.ModelParamsRequest | None = None, evaluator_demonstrations: List[EvaluatorDemonstration] | None = None, objective_id: str | None = None, _request_timeout: int | None = None) Skill

Update an existing skill instance and return the result.

For a description of the remaining arguments, please refer to the create method.

Parameters:
  • skill_id (str) – The skill to be updated

  • change_note (Optional[str])

  • data_loaders (Optional[List[root.data_loader.DataLoader]])

  • fallback_models (Optional[List[ModelName]])

  • input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest]]])

  • model (Optional[ModelName])

  • name (Optional[str])

  • pii_filter (Optional[bool])

  • prompt (Optional[str])

  • is_evaluator (Optional[bool])

  • reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest]]])

  • model_params (Optional[Union[ModelParams, root.generated.openapi_client.models.model_params_request.ModelParamsRequest]])

  • evaluator_demonstrations (Optional[List[EvaluatorDemonstration]])

  • objective_id (Optional[str])

  • _request_timeout (Optional[int])

Return type:

Skill
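A partial update can be sketched as follows; a minimal sketch assuming a configured `RootSignals` client, with a hypothetical skill id and model name:

```python
from root import RootSignals  # assumed client entry point of the root SDK

client = RootSignals()  # assumption: credentials are read from the environment

# Only the keyword arguments that are passed are changed; everything
# else on the skill is left as-is.
skill = client.skills.update(
    "skill-id",  # illustrative skill id
    model="gpt-4o-mini",  # illustrative model name
    change_note="Switch to a smaller model",
)
print(skill.name)
```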

client
versions
class root.skills.Versions(client: Awaitable[root.generated.openapi_aclient.ApiClient] | root.generated.openapi_client.api_client.ApiClient)

Version listing (sub)API

Note that this class should not be instantiated directly.

Parameters:

client (Union[Awaitable[root.generated.openapi_aclient.ApiClient], root.generated.openapi_client.api_client.ApiClient])

async alist(skill_id: str) root.generated.openapi_aclient.models.paginated_skill_list.PaginatedSkillList

Asynchronously list all versions of a skill.

Parameters:

skill_id (str) – The skill to list the versions for

Return type:

root.generated.openapi_aclient.models.paginated_skill_list.PaginatedSkillList

list(skill_id: str) root.generated.openapi_client.models.paginated_skill_list.PaginatedSkillList

List all versions of a skill.

Parameters:

skill_id (str) – The skill to list the versions for

Return type:

root.generated.openapi_client.models.paginated_skill_list.PaginatedSkillList
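Version listing can be sketched as follows; a minimal sketch assuming a configured `RootSignals` client whose skills API exposes this sub-API as a `versions` attribute, and assuming the paginated response carries its entries in a `results` field:

```python
from root import RootSignals  # assumed client entry point of the root SDK

client = RootSignals()  # assumption: credentials are read from the environment

# The versions sub-API hangs off the skills API; list() returns a
# paginated response whose .results field is assumed here.
page = client.skills.versions.list("skill-id")  # illustrative skill id
for version in page.results:
    print(version)
```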

root.skills.ModelName