root.skills¶
Attributes¶
- ModelName
Classes¶
- AEvaluator – Wrapper for a single Evaluator.
- ASkill – Wrapper for a single Skill.
- Evaluator – Wrapper for a single Evaluator.
- EvaluatorDemonstration – Evaluator demonstration.
- Evaluators – Evaluators (sub) API.
- InputVariable – Input variable definition.
- ModelParams – Additional model parameters.
- ReferenceVariable – Reference variable definition.
- Skill – Wrapper for a single Skill.
- Skills – Skills API.
- Versions – Version listing (sub)API.
Module Contents¶
- class root.skills.ACalibrateBatchParameters(name: str, prompt: str, model: ModelName, pii_filter: bool = False, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest] | None = None, data_loaders: List[root.data_loader.ADataLoader] | None = None)¶
- Parameters:
name (str)
prompt (str)
model (ModelName)
pii_filter (bool)
reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest]]])
input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest]]])
data_loaders (Optional[List[root.data_loader.ADataLoader]])
- data_loaders = None¶
- input_variables = None¶
- model¶
- name¶
- pii_filter = False¶
- prompt¶
- reference_variables = None¶
- class root.skills.ACalibrateBatchResult¶
Bases:
pydantic.BaseModel
- mae_errors_model: Dict[str, float]¶
- mae_errors_prompt: Dict[str, float]¶
- results: List[root.generated.openapi_aclient.models.evaluator_calibration_output.EvaluatorCalibrationOutput]¶
- rms_errors_model: Dict[str, float]¶
- rms_errors_prompt: Dict[str, float]¶
- class root.skills.AEvaluator¶
Bases:
root.generated.openapi_aclient.models.skill.Skill
Wrapper for a single Evaluator.
For available attributes, please check the (automatically generated) superclass documentation.
- async arun(response: str | None = None, request: str | None = None, contexts: List[str] | None = None, functions: List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest] | None = None, expected_output: str | None = None, variables: dict[str, str] | None = None) root.generated.openapi_aclient.models.EvaluatorExecutionResult ¶
Asynchronously run the evaluator.
- Parameters:
response (Optional[str]) – LLM output.
request (Optional[str]) – The prompt sent to the LLM.
contexts (Optional[List[str]]) – Optional documents passed to RAG evaluators.
functions (Optional[List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest]]) – Optional function definitions for LLM tool call validation.
expected_output (Optional[str]) – Optional expected output for the evaluator.
variables (Optional[dict[str, str]]) – Optional additional variable mappings for the evaluator. For example, if the evaluator predicate is "evaluate the output based on {subject}: {output}", then variables={"subject": "clarity"}.
- Return type:
root.generated.openapi_aclient.models.EvaluatorExecutionResult
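A minimal async sketch of running an evaluator fetched by name. The constructor arguments to root.client.RootSignals, the evaluators attribute name, and the score attribute of the result are assumptions, not confirmed by this page:

```python
import asyncio

from root.client import RootSignals


async def main() -> None:
    # Illustrative setup; constructor arguments are assumptions.
    client = RootSignals(api_key="<your-api-key>", run_async=True)
    evaluator = await client.evaluators.aget_by_name("Clarity")

    # Run the evaluator against an LLM request/response pair.
    result = await evaluator.arun(
        request="Explain photosynthesis briefly.",
        response="Plants convert light into chemical energy.",
    )
    print(result.score)  # `score` is an assumed attribute of the result model


asyncio.run(main())
```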
- class root.skills.APresetEvaluatorRunner(client: root.generated.openapi_aclient.ApiClient, skill_id: str, eval_name: str, evaluator_version_id: str | None = None)¶
- Parameters:
client (root.generated.openapi_aclient.ApiClient)
skill_id (str)
eval_name (str)
evaluator_version_id (Optional[str])
- evaluator_version_id = None¶
- skill_id¶
- class root.skills.ASkill¶
Bases:
root.generated.openapi_aclient.models.skill.Skill
Wrapper for a single Skill.
For available attributes, please check the (automatically generated) superclass documentation.
- async aevaluate(*, response: str, request: str | None = None, contexts: List[str] | None = None, variables: dict[str, str] | None = None, functions: List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest] | None = None, expected_output: str | None = None, _request_timeout: int | None = None) root.generated.openapi_aclient.models.validator_execution_result.ValidatorExecutionResult ¶
Asynchronously run all validators attached to a skill.
- Parameters:
response (str) – LLM output.
request (Optional[str]) – The prompt sent to the LLM. Optional.
contexts (Optional[List[str]]) – Optional documents passed to RAG evaluators.
functions (Optional[List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest]]) – Optional function definitions for LLM tool call validation.
expected_output (Optional[str]) – Optional expected output for the evaluator.
variables (Optional[dict[str, str]]) – Optional additional variable mappings for evaluators. For example, if the evaluator predicate is "evaluate the output based on {subject}: {output}", then variables={"subject": "clarity"}.
_request_timeout (Optional[int]) – Optional timeout for the request in seconds.
- Return type:
root.generated.openapi_aclient.models.validator_execution_result.ValidatorExecutionResult
- async aopenai_base_url() str ¶
Asynchronously get the OpenAI compatibility API URL for the skill.
Currently only the OpenAI chat completions API is supported via the base URL.
- Return type:
str
- async arun(variables: Dict[str, str] | None = None, *, model_params: ModelParams | root.generated.openapi_aclient.models.ModelParamsRequest | None = None) root.generated.openapi_aclient.models.skill_execution_result.SkillExecutionResult ¶
Asynchronously run a skill.
- Parameters:
variables (Optional[Dict[str, str]]) – Dictionary mapping the prompt template variables to their values. For example, if the prompt is "tell me about {{subject}}", then variables={"subject": "history"} would generate "tell me about history".
model_params (Optional[Union[ModelParams, root.generated.openapi_aclient.models.ModelParamsRequest]]) – Optional model parameters (e.g. temperature).
- Return type:
root.generated.openapi_aclient.models.skill_execution_result.SkillExecutionResult
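A sketch of fetching and running a skill asynchronously. Client construction and the skills attribute name are assumptions; aget and arun are documented on this page:

```python
import asyncio

from root.client import RootSignals


async def main() -> None:
    client = RootSignals(api_key="<your-api-key>", run_async=True)  # illustrative
    skill = await client.skills.aget("<skill-id>")  # replace with a real skill ID

    # If the skill prompt is "tell me about {{subject}}", this renders
    # "tell me about history" and executes it.
    result = await skill.arun(variables={"subject": "history"})
    print(result.llm_output)  # `llm_output` is an assumed attribute name


asyncio.run(main())
```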
- class root.skills.CalibrateBatchParameters(name: str, prompt: str, model: ModelName, pii_filter: bool = False, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest] | None = None, data_loaders: List[root.data_loader.DataLoader] | None = None)¶
- Parameters:
name (str)
prompt (str)
model (ModelName)
pii_filter (bool)
reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest]]])
input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest]]])
data_loaders (Optional[List[root.data_loader.DataLoader]])
- data_loaders = None¶
- input_variables = None¶
- model¶
- name¶
- pii_filter = False¶
- prompt¶
- reference_variables = None¶
- class root.skills.CalibrateBatchResult¶
Bases:
pydantic.BaseModel
- mae_errors_model: Dict[str, float]¶
- mae_errors_prompt: Dict[str, float]¶
- results: List[root.generated.openapi_client.models.evaluator_calibration_output.EvaluatorCalibrationOutput]¶
- rms_errors_model: Dict[str, float]¶
- rms_errors_prompt: Dict[str, float]¶
- class root.skills.Evaluator¶
Bases:
root.generated.openapi_client.models.skill.Skill
Wrapper for a single Evaluator.
For available attributes, please check the (automatically generated) superclass documentation.
- run(response: str | None = None, request: str | None = None, contexts: List[str] | None = None, functions: List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest] | None = None, expected_output: str | None = None, variables: dict[str, str] | None = None) root.generated.openapi_client.models.evaluator_execution_result.EvaluatorExecutionResult ¶
Run the evaluator.
- Parameters:
response (Optional[str]) – LLM output.
request (Optional[str]) – The prompt sent to the LLM.
contexts (Optional[List[str]]) – Optional documents passed to RAG evaluators.
functions (Optional[List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest]]) – Optional function definitions for LLM tool call validation.
expected_output (Optional[str]) – Optional expected output for the evaluator.
variables (Optional[dict[str, str]]) – Optional additional variable mappings for evaluators. For example, if the evaluator predicate is "evaluate the output based on {subject}: {output}", then variables={"subject": "clarity"}.
- Return type:
root.generated.openapi_client.models.evaluator_execution_result.EvaluatorExecutionResult
- class root.skills.EvaluatorDemonstration¶
Bases:
pydantic.BaseModel
Evaluator demonstration.
Demonstrations are used to train an evaluator and adjust its behavior.
- justification: str | None = None¶
- output: str¶
- prompt: str | None = None¶
- score: float¶
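For example, a pair of demonstrations built from the fields above (the values are illustrative):

```python
from root.skills import EvaluatorDemonstration

# Demonstrations pair an output with the score it should receive;
# `prompt` and `justification` are optional.
demonstrations = [
    EvaluatorDemonstration(
        prompt="Is the response polite?",
        output="Get lost.",
        score=0.1,
        justification="Dismissive and rude.",
    ),
    EvaluatorDemonstration(
        prompt="Is the response polite?",
        output="Happy to help! Here is what I found.",
        score=0.9,
    ),
]
```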
- class root.skills.Evaluators(client: root.generated.openapi_aclient.ApiClient | root.generated.openapi_client.api_client.ApiClient)¶
Evaluators (sub) API
Note
The construction of the API instance should be handled by accessing an attribute of a
root.client.RootSignals
instance.
- Parameters:
client (Union[root.generated.openapi_aclient.ApiClient, root.generated.openapi_client.api_client.ApiClient])
- class Eval(*args, **kwds)¶
Bases:
enum.Enum
Create a collection of name/value pairs.
Example enumeration:
>>> class Color(Enum):
...     RED = 1
...     BLUE = 2
...     GREEN = 3
Access them by:
attribute access:
>>> Color.RED
<Color.RED: 1>
value lookup:
>>> Color(1)
<Color.RED: 1>
name lookup:
>>> Color['RED']
<Color.RED: 1>
Enumerations can be iterated over, and know how many members they have:
>>> len(Color)
3
>>> list(Color)
[<Color.RED: 1>, <Color.BLUE: 2>, <Color.GREEN: 3>]
Methods can be added to enumerations, and members can have their own attributes – see the documentation for details.
- Answer_Correctness = 'd4487568-4243-4da8-9c76-adbaf762dbe0'¶
- Answer_Relevance = '0907d422-e94f-4c9c-a63d-ec0eefd8a903'¶
- Answer_Semantic_Similarity = 'ff350bce-4b07-4af7-9640-803c9d3c2ff9'¶
- Clarity = '9976d9f3-7265-4732-b518-d61c2642b14e'¶
- Coherence = 'e599886c-c338-458f-91b3-5d7eba452618'¶
- Conciseness = 'be828d33-158a-4e92-a2eb-f4d96c13f956'¶
- Confidentiality = '2eaa0a02-47a9-48f7-9b47-66ad257f93eb'¶
- Context_Precision = '9d1e9a25-7e76-4771-b1e3-40825d7918c5'¶
- Context_Recall = '8bb60975-5062-4367-9fc6-a920044cba56'¶
- Engagingness = '64729487-d4a8-42d8-bd9e-72fd8390c134'¶
- Faithfulness = '901794f9-634c-4852-9e41-7c558f1ff1ab'¶
- Formality = '8ab6cf1a-42b5-4a23-a15c-21372816483d'¶
- Harmlessness = '379fee0a-4fd1-4942-833b-7d78d78b334d'¶
- Helpfulness = '88bc92d5-bebf-45e4-9cd1-dfa33309c320'¶
- JSON_Content_Accuracy = 'b6a9aeff-c888-46d7-9e9c-7cf8cb461762'¶
- JSON_Empty_Values_Ratio = '03829088-1799-438e-ae30-1db60832e52d'¶
- JSON_Property_Completeness = 'e5de37f7-d20c-420f-8072-f41dce96ecfc'¶
- JSON_Property_Name_Accuracy = '740923aa-8ffd-49cc-a95d-14f831243b25'¶
- JSON_Property_Type_Accuracy = 'eabc6924-1fec-4e96-82ce-c03bf415c885'¶
- Non_toxicity = 'e296e374-7539-4eb2-a74a-47847dd26fb8'¶
- Originality = 'e72cb54f-548a-44f9-a6ca-4e14e5ade7f7'¶
- Persuasiveness = '85bb6a74-f5dd-4130-8dcc-cffdf72327cc'¶
- Politeness = '2856903a-e48c-4548-b3fe-520fd88c4f25'¶
- Precision = '767bdd49-5f8c-48ca-8324-dfd6be7f8a79'¶
- Quality_of_Writing_Creative = '060abfb6-57c9-43b5-9a6d-8a1a9bb853b8'¶
- Quality_of_Writing_Professional = '059affa9-2d1c-48de-8e97-f81dd3fc3cbe'¶
- Relevance = 'bd789257-f458-4e9e-8ce9-fa6e86dc3fb9'¶
- Safety_for_Children = '39a8b5ba-de77-4726-a6b0-621d40b3cdf5'¶
- Sentiment_recognition = 'e3782c1e-eaf4-4b2d-8d26-53db2160f1fd'¶
- Truthfulness = '053df10f-b0c7-400b-892e-46ce3aa1e430'¶
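Each member's value appears to be the ID of a Root Signals preset evaluator, so it can be passed straight to run or arun (a sketch; client construction and the score attribute are assumptions):

```python
from root.client import RootSignals
from root.skills import Evaluators

client = RootSignals(api_key="<your-api-key>")  # illustrative construction

# Run the Truthfulness preset against a request/response pair.
result = client.evaluators.run(
    Evaluators.Eval.Truthfulness.value,
    request="Where is the Eiffel Tower?",
    response="The Eiffel Tower is in Berlin.",
)
print(result.score)  # `score` is an assumed attribute of the result model
```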
- async acalibrate(*, name: str, test_dataset_id: str | None = None, test_data: List[List[str]] | None = None, prompt: str, model: ModelName, pii_filter: bool = False, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest] | None = None, data_loaders: List[root.data_loader.ADataLoader] | None = None, _request_timeout: int | None = None) List[root.generated.openapi_aclient.models.evaluator_calibration_output.EvaluatorCalibrationOutput] ¶
Asynchronously run a calibration set for an evaluator definition. See the create evaluator method for more details on the parameters.
- Parameters:
name (str)
test_dataset_id (Optional[str])
test_data (Optional[List[List[str]]])
prompt (str)
model (ModelName)
pii_filter (bool)
reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest]]])
input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest]]])
data_loaders (Optional[List[root.data_loader.ADataLoader]])
_request_timeout (Optional[int])
- Return type:
List[root.generated.openapi_aclient.models.evaluator_calibration_output.EvaluatorCalibrationOutput]
- async acalibrate_batch(*, evaluator_definitions: List[ACalibrateBatchParameters], test_dataset_id: str | None = None, test_data: List[List[str]] | None = None, parallel_requests: int = 1, _request_timeout: int | None = None) ACalibrateBatchResult ¶
Asynchronously run calibration for a set of prompts and models.
- Parameters:
evaluator_definitions (List[ACalibrateBatchParameters]) – List of evaluator definitions.
test_dataset_id (Optional[str]) – ID of the dataset to be used to test the skill.
test_data (Optional[List[List[str]]]) – Snapshot of data to be used to test the skill.
parallel_requests (int) – Number of parallel requests.
_request_timeout (Optional[int])
- Return type:
ACalibrateBatchResult – a model with the results and errors for each model and prompt.
- async acalibrate_existing(evaluator_id: str, *, test_dataset_id: str | None = None, test_data: List[List[str]] | None = None, _request_timeout: int | None = None) List[root.generated.openapi_aclient.models.evaluator_calibration_output.EvaluatorCalibrationOutput] ¶
Asynchronously run a calibration set on an existing evaluator.
- Parameters:
evaluator_id (str)
test_dataset_id (Optional[str])
test_data (Optional[List[List[str]]])
_request_timeout (Optional[int])
- Return type:
List[root.generated.openapi_aclient.models.evaluator_calibration_output.EvaluatorCalibrationOutput]
- async acreate(predicate: str = '', *, name: str | None = None, intent: str | None = None, model: ModelName | None = None, fallback_models: List[ModelName] | None = None, pii_filter: bool = False, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest] | None = None, data_loaders: List[root.data_loader.ADataLoader] | None = None, model_params: ModelParams | root.generated.openapi_aclient.models.ModelParamsRequest | None = None, evaluator_demonstrations: List[EvaluatorDemonstration] | None = None, objective_id: str | None = None, overwrite: bool = False) AEvaluator ¶
Asynchronously create a new evaluator and return the result.
- Parameters:
predicate (str) – The question / predicate that is provided to the semantic quantification layer.
name (Optional[str]) – Name of the skill (defaulting to <unnamed>).
objective_id (Optional[str]) – Optional pre-existing objective id to assign to the evaluator.
intent (Optional[str]) – The intent of the skill (defaulting to name); not available if objective_id is set.
model (Optional[ModelName]) – The model to use (defaults to 'root', which means the Root Signals default at the time of skill creation).
fallback_models (Optional[List[ModelName]]) – The fallback models to use in case the primary model fails.
pii_filter (bool) – Whether to use PII filter or not.
reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest]]]) – An optional list of reference variables for the skill.
input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest]]]) – An optional list of input variables for the skill.
data_loaders (Optional[List[root.data_loader.ADataLoader]]) – An optional list of data loaders
model_params (Optional[Union[ModelParams, root.generated.openapi_aclient.models.ModelParamsRequest]]) – An optional set of additional parameters to the model (e.g., temperature).
evaluator_demonstrations (Optional[List[EvaluatorDemonstration]]) – An optional list of evaluator demonstrations to guide the evaluator’s behavior.
overwrite (bool) – Whether to overwrite a skill with the same name if it exists.
- Return type:
AEvaluator
- async aget_by_name(name: str) AEvaluator ¶
Asynchronously get an evaluator instance by name.
The evaluator is fetched by name; note that this only works for uniquely named evaluators.
- Parameters:
name (str)
- Return type:
AEvaluator
- async arun(evaluator_id: str, *, request: str | None = None, response: str | None = None, contexts: List[str] | None = None, functions: List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest] | None = None, expected_output: str | None = None, evaluator_version_id: str | None = None, variables: dict[str, str] | None = None, _request_timeout: int | None = None) root.generated.openapi_aclient.models.EvaluatorExecutionResult ¶
Asynchronously run the evaluator.
- Parameters:
evaluator_id (str) – The ID of the evaluator to run.
request (Optional[str]) – The prompt sent to the LLM.
response (Optional[str]) – LLM output.
contexts (Optional[List[str]]) – Optional documents passed to RAG evaluators.
functions (Optional[List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest]]) – Optional function definitions for LLM tool call validation.
expected_output (Optional[str]) – Optional expected output for the evaluator.
evaluator_version_id (Optional[str]) – Version ID of the evaluator to run. If omitted, the latest version is used.
variables (Optional[dict[str, str]]) – Optional additional variable mappings for the evaluator. For example, if the evaluator predicate is "evaluate the output based on {subject}: {output}", then variables={"subject": "clarity"}.
_request_timeout (Optional[int]) – Optional timeout for the request.
- Return type:
root.generated.openapi_aclient.models.EvaluatorExecutionResult
- async aupdate(evaluator_id: str, *, change_note: str | None = None, data_loaders: List[root.data_loader.ADataLoader] | None = None, fallback_models: List[ModelName] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest] | None = None, model: ModelName | None = None, name: str | None = None, pii_filter: bool | None = None, predicate: str | None = None, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest] | None = None, model_params: ModelParams | root.generated.openapi_aclient.models.ModelParamsRequest | None = None, evaluator_demonstrations: List[EvaluatorDemonstration] | None = None, objective_id: str | None = None, _request_timeout: int | None = None) AEvaluator ¶
Asynchronously update an evaluator and return the result.
See the create method for more information on the arguments.
- Parameters:
evaluator_id (str)
change_note (Optional[str])
data_loaders (Optional[List[root.data_loader.ADataLoader]])
fallback_models (Optional[List[ModelName]])
input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest]]])
model (Optional[ModelName])
name (Optional[str])
pii_filter (Optional[bool])
predicate (Optional[str])
reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest]]])
model_params (Optional[Union[ModelParams, root.generated.openapi_aclient.models.ModelParamsRequest]])
evaluator_demonstrations (Optional[List[EvaluatorDemonstration]])
objective_id (Optional[str])
_request_timeout (Optional[int])
- Return type:
AEvaluator
- calibrate(*, name: str, test_dataset_id: str | None = None, test_data: List[List[str]] | None = None, prompt: str, model: ModelName, pii_filter: bool = False, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest] | None = None, data_loaders: List[root.data_loader.DataLoader] | None = None, _request_timeout: int | None = None) List[root.generated.openapi_client.models.evaluator_calibration_output.EvaluatorCalibrationOutput] ¶
Run a calibration set for an evaluator definition. See the create evaluator method for more details on the parameters.
- Parameters:
name (str)
test_dataset_id (Optional[str])
test_data (Optional[List[List[str]]])
prompt (str)
model (ModelName)
pii_filter (bool)
reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest]]])
input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest]]])
data_loaders (Optional[List[root.data_loader.DataLoader]])
_request_timeout (Optional[int])
- Return type:
List[root.generated.openapi_client.models.evaluator_calibration_output.EvaluatorCalibrationOutput]
- calibrate_batch(*, evaluator_definitions: List[CalibrateBatchParameters], test_dataset_id: str | None = None, test_data: List[List[str]] | None = None, parallel_requests: int = 1, _request_timeout: int | None = None) CalibrateBatchResult ¶
Run calibration for a set of prompts and models.
- Parameters:
evaluator_definitions (List[CalibrateBatchParameters]) – List of evaluator definitions.
test_dataset_id (Optional[str]) – ID of the dataset to be used to test the skill.
test_data (Optional[List[List[str]]]) – Snapshot of data to be used to test the skill.
parallel_requests (int) – Number of parallel requests. Uses ThreadPoolExecutor if > 1.
_request_timeout (Optional[int])
- Return type:
CalibrateBatchResult – a model with the results and errors for each model and prompt.
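A sketch comparing two candidate prompts in one batch run. Client construction, the ModelName string, and the keying of the error dictionaries are assumptions:

```python
from root.client import RootSignals
from root.skills import CalibrateBatchParameters

client = RootSignals(api_key="<your-api-key>")  # illustrative construction

# Two candidate evaluator definitions sharing one test dataset.
definitions = [
    CalibrateBatchParameters(
        name="clarity-v1",
        prompt="Rate the clarity of: {output}",
        model="gpt-4",  # assumed ModelName value
    ),
    CalibrateBatchParameters(
        name="clarity-v2",
        prompt="How clear is the following text? {output}",
        model="gpt-4",
    ),
]

result = client.evaluators.calibrate_batch(
    evaluator_definitions=definitions,
    test_dataset_id="<dataset-id>",  # replace with a real dataset ID
    parallel_requests=2,
)
print(result.rms_errors_prompt)  # per-prompt RMS errors, assumed keyed by name
```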
- calibrate_existing(evaluator_id: str, *, test_dataset_id: str | None = None, test_data: List[List[str]] | None = None, _request_timeout: int | None = None) List[root.generated.openapi_client.models.evaluator_calibration_output.EvaluatorCalibrationOutput] ¶
Run a calibration set on an existing evaluator.
- Parameters:
evaluator_id (str)
test_dataset_id (Optional[str])
test_data (Optional[List[List[str]]])
_request_timeout (Optional[int])
- Return type:
List[root.generated.openapi_client.models.evaluator_calibration_output.EvaluatorCalibrationOutput]
- create(predicate: str = '', *, name: str | None = None, intent: str | None = None, model: ModelName | None = None, fallback_models: List[ModelName] | None = None, pii_filter: bool = False, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest] | None = None, data_loaders: List[root.data_loader.DataLoader] | None = None, model_params: ModelParams | root.generated.openapi_client.models.model_params_request.ModelParamsRequest | None = None, evaluator_demonstrations: List[EvaluatorDemonstration] | None = None, objective_id: str | None = None, overwrite: bool = False) Evaluator ¶
Create a new evaluator and return the result.
- Parameters:
predicate (str) – The question / predicate that is provided to the semantic quantification layer.
name (Optional[str]) – Name of the skill (defaulting to <unnamed>).
objective_id (Optional[str]) – Optional pre-existing objective id to assign to the evaluator.
intent (Optional[str]) – The intent of the skill (defaulting to name); not available if objective_id is set.
model (Optional[ModelName]) – The model to use (defaults to 'root', which means the Root Signals default at the time of skill creation).
fallback_models (Optional[List[ModelName]]) – The fallback models to use in case the primary model fails.
pii_filter (bool) – Whether to use PII filter or not.
reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest]]]) – An optional list of reference variables for the skill.
input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest]]]) – An optional list of input variables for the skill.
data_loaders (Optional[List[root.data_loader.DataLoader]]) – An optional list of data loaders
model_params (Optional[Union[ModelParams, root.generated.openapi_client.models.model_params_request.ModelParamsRequest]]) – An optional set of additional parameters to the model (e.g., temperature).
evaluator_demonstrations (Optional[List[EvaluatorDemonstration]]) – An optional list of evaluator demonstrations to guide the evaluator’s behavior.
overwrite (bool) – Whether to overwrite a skill with the same name if it exists.
- Return type:
Evaluator
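A sketch of defining and immediately exercising a custom evaluator; create and run are documented on this page, while client construction and the score attribute are assumptions:

```python
from root.client import RootSignals

client = RootSignals(api_key="<your-api-key>")  # illustrative construction

# The predicate is the question handed to the semantic quantification layer.
evaluator = client.evaluators.create(
    predicate="Is the following support answer polite? {output}",
    name="politeness-check",
    intent="Score the politeness of support answers",
)
result = evaluator.run(response="Happy to help! Here are the steps.")
print(result.score)  # `score` is an assumed attribute of the result model
```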
- get_by_name(name: str) Evaluator ¶
Get an evaluator instance by name.
The evaluator is fetched by name; note that this only works for uniquely named evaluators.
- Parameters:
name (str)
- Return type:
Evaluator
- run(evaluator_id: str, *, request: str | None = None, response: str | None = None, contexts: List[str] | None = None, functions: List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest] | None = None, expected_output: str | None = None, evaluator_version_id: str | None = None, variables: dict[str, str] | None = None, _request_timeout: int | None = None) root.generated.openapi_client.models.evaluator_execution_result.EvaluatorExecutionResult ¶
Run the evaluator.
- Parameters:
evaluator_id (str) – The ID of the evaluator to run.
request (Optional[str]) – The prompt sent to the LLM.
response (Optional[str]) – LLM output.
contexts (Optional[List[str]]) – Optional documents passed to RAG evaluators.
functions (Optional[List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest]]) – Optional function definitions for LLM tool call validation.
expected_output (Optional[str]) – Optional expected output for the evaluator.
evaluator_version_id (Optional[str]) – Version ID of the evaluator to run. If omitted, the latest version is used.
variables (Optional[dict[str, str]]) – Optional additional variable mappings for the evaluator. For example, if the evaluator predicate is "evaluate the output based on {subject}: {output}", then variables={"subject": "clarity"}.
_request_timeout (Optional[int]) – Optional timeout for the request.
- Return type:
root.generated.openapi_client.models.evaluator_execution_result.EvaluatorExecutionResult
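The variables mapping fills predicate placeholders other than the evaluated output itself; a sketch under that assumption (client construction is also illustrative):

```python
from root.client import RootSignals

client = RootSignals(api_key="<your-api-key>")  # illustrative construction

# With a predicate like "evaluate the output based on {subject}: {output}",
# `variables` supplies {subject}; the response is evaluated as the output.
result = client.evaluators.run(
    "<evaluator-id>",  # replace with a real evaluator ID
    response="Mitochondria are the powerhouse of the cell.",
    variables={"subject": "clarity"},
)
print(result.score)  # `score` is an assumed attribute of the result model
```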
- update(evaluator_id: str, *, change_note: str | None = None, data_loaders: List[root.data_loader.DataLoader] | None = None, fallback_models: List[ModelName] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest] | None = None, model: ModelName | None = None, name: str | None = None, pii_filter: bool | None = None, predicate: str | None = None, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest] | None = None, model_params: ModelParams | root.generated.openapi_client.models.model_params_request.ModelParamsRequest | None = None, evaluator_demonstrations: List[EvaluatorDemonstration] | None = None, objective_id: str | None = None, _request_timeout: int | None = None) Evaluator ¶
Update an evaluator and return the result.
See the create method for more information on the arguments.
- Parameters:
evaluator_id (str)
change_note (Optional[str])
data_loaders (Optional[List[root.data_loader.DataLoader]])
fallback_models (Optional[List[ModelName]])
input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest]]])
model (Optional[ModelName])
name (Optional[str])
pii_filter (Optional[bool])
predicate (Optional[str])
reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest]]])
model_params (Optional[Union[ModelParams, root.generated.openapi_client.models.model_params_request.ModelParamsRequest]])
evaluator_demonstrations (Optional[List[EvaluatorDemonstration]])
objective_id (Optional[str])
_request_timeout (Optional[int])
- Return type:
Evaluator
- EvaluatorName¶
- client¶
- versions¶
- class root.skills.InputVariable¶
Bases:
pydantic.BaseModel
Input variable definition.
The placeholder matching name within the prompt gets populated with the provided variable.
- name: str¶
- class root.skills.ModelParams¶
Bases:
pydantic.BaseModel
Additional model parameters.
All fields are optional.
- temperature: float | None = None¶
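For instance, overriding the temperature for a single skill run (client construction is an assumption; get and run are documented on this page):

```python
from root.client import RootSignals
from root.skills import ModelParams

client = RootSignals(api_key="<your-api-key>")  # illustrative construction
skill = client.skills.get("<skill-id>")  # replace with a real skill ID

# Lower temperature for a more deterministic completion on this run only.
result = skill.run(model_params=ModelParams(temperature=0.1))
```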
- class root.skills.PresetEvaluatorRunner(client: root.generated.openapi_client.api_client.ApiClient, skill_id: str, eval_name: str, evaluator_version_id: str | None = None)¶
- Parameters:
client (root.generated.openapi_client.api_client.ApiClient)
skill_id (str)
eval_name (str)
evaluator_version_id (Optional[str])
- evaluator_version_id = None¶
- skill_id¶
- class root.skills.ReferenceVariable¶
Bases:
pydantic.BaseModel
Reference variable definition.
The placeholder matching name within the prompt gets populated with content from the dataset identified by dataset_id.
- dataset_id: str¶
- name: str¶
- class root.skills.Skill¶
Bases:
root.generated.openapi_client.models.skill.Skill
Wrapper for a single Skill.
For available attributes, please check the (automatically generated) superclass documentation.
- evaluate(*, response: str, request: str | None = None, contexts: List[str] | None = None, functions: List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest] | None = None, expected_output: str | None = None, variables: dict[str, str] | None = None, _request_timeout: int | None = None) root.generated.openapi_client.models.validator_execution_result.ValidatorExecutionResult ¶
Run all validators attached to a skill.
- Parameters:
response (str) – LLM output.
request (Optional[str]) – The prompt sent to the LLM. Optional.
contexts (Optional[List[str]]) – Optional documents passed to RAG evaluators.
functions (Optional[List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest]]) – Optional function definitions for LLM tool call validation.
expected_output (Optional[str]) – Optional expected output for the evaluator.
variables (Optional[dict[str, str]]) – Optional additional variable mappings for evaluators. For example, if the evaluator predicate is "evaluate the output based on {subject}: {output}", then variables={"subject": "clarity"}.
_request_timeout (Optional[int]) – Optional timeout for the request in seconds.
- Return type:
root.generated.openapi_client.models.validator_execution_result.ValidatorExecutionResult
- run(variables: Dict[str, str] | None = None, *, model_params: ModelParams | root.generated.openapi_client.models.model_params_request.ModelParamsRequest | None = None) root.generated.openapi_client.models.skill_execution_result.SkillExecutionResult ¶
Run a skill.
- Parameters:
variables (Optional[Dict[str, str]]) – Dictionary mapping the prompt template variables to their values. For example, if the prompt is "tell me about {{subject}}", then variables={"subject": "history"} would generate "tell me about history".
model_params (Optional[Union[ModelParams, root.generated.openapi_client.models.model_params_request.ModelParamsRequest]]) – Optional model parameters (e.g. temperature).
- Return type:
root.generated.openapi_client.models.skill_execution_result.SkillExecutionResult
- property openai_base_url: str¶
Get the OpenAI compatibility API URL for the skill.
Currently only the OpenAI chat completions API is supported via the base URL.
- Return type:
str
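A sketch of pointing the official openai client at a skill's compatibility endpoint. Which credentials and model identifier the endpoint expects are assumptions; consult the service documentation:

```python
from openai import OpenAI

from root.client import RootSignals

client = RootSignals(api_key="<your-api-key>")  # illustrative construction
skill = client.skills.get("<skill-id>")  # replace with a real skill ID

# Reuse a standard OpenAI client against the skill's base URL.
openai_client = OpenAI(
    base_url=skill.openai_base_url,
    api_key="<your-api-key>",  # assumed to be the Root Signals key
)
completion = openai_client.chat.completions.create(
    model="<skill-id>",  # the expected model identifier is an assumption
    messages=[{"role": "user", "content": "Hello!"}],
)
print(completion.choices[0].message.content)
```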
- class root.skills.Skills(client: root.generated.openapi_aclient.ApiClient | root.generated.openapi_client.api_client.ApiClient)¶
Skills API
Note
The construction of the API instance should be handled by accessing an attribute of a
root.client.RootSignals
instance.
- Parameters:
client (Union[root.generated.openapi_aclient.ApiClient, root.generated.openapi_client.api_client.ApiClient])
- async acreate(prompt: str = '', *, name: str | None = None, intent: str | None = None, model: ModelName | None = None, system_message: str = '', fallback_models: List[ModelName] | None = None, pii_filter: bool = False, validators: List[root.validators.AValidator] | None = None, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest] | None = None, is_evaluator: bool | None = None, data_loaders: List[root.data_loader.ADataLoader] | None = None, model_params: ModelParams | root.generated.openapi_aclient.models.ModelParamsRequest | None = None, evaluator_demonstrations: List[EvaluatorDemonstration] | None = None, objective_id: str | None = None, overwrite: bool = False, _request_timeout: int | None = None) ASkill ¶
Asynchronously create a new skill and return the result.
- Parameters:
prompt (str) – The prompt template that is provided to the model
name (Optional[str]) – Name of the skill (defaulting to <unnamed>)
intent (Optional[str]) – The intent of the skill (defaulting to name); not available if objective_id is set
model (Optional[ModelName]) – The model to use (defaults to ‘root’, which means Root Signals default at the time of skill creation)
system_message (str) – The system instruction to give to the model (mainly useful with OpenAI compatibility API)
fallback_models (Optional[List[ModelName]]) – The fallback models to use in case the primary model fails
pii_filter (bool) – Whether to use PII filter or not
validators (Optional[List[root.validators.AValidator]]) – An optional list of validators; not available if objective_id is set
reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest]]]) – An optional list of reference variables for the skill
input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest]]]) – An optional list of input variables for the skill
is_evaluator (Optional[bool]) – Whether this skill is an evaluator skill
data_loaders (Optional[List[root.data_loader.ADataLoader]]) – An optional list of data loaders.
model_params (Optional[Union[ModelParams, root.generated.openapi_aclient.models.ModelParamsRequest]]) – An optional set of additional parameters to the model (e.g., temperature)
evaluator_demonstrations (Optional[List[EvaluatorDemonstration]]) – Optional list of demonstrations for evaluator skills
objective_id (Optional[str]) – Optional pre-existing objective id to assign to the skill
overwrite (bool) – Whether to overwrite a skill with the same name if it exists
_request_timeout (Optional[int]) – Optional timeout for the request in seconds
- Return type:
ASkill
- async adelete(skill_id: str) None ¶
Asynchronously delete the skill.
- Parameters:
skill_id (str)
- Return type:
None
- async aevaluate(skill_id: str, *, response: str, request: str | None = None, contexts: List[str] | None = None, functions: List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest] | None = None, expected_output: str | None = None, variables: dict[str, str] | None = None, skill_version_id: str | None = None, _request_timeout: int | None = None) root.generated.openapi_aclient.models.validator_execution_result.ValidatorExecutionResult ¶
Asynchronously run all validators attached to a skill.
- Parameters:
response (str) – LLM output.
request (Optional[str]) – The prompt sent to the LLM. Optional.
contexts (Optional[List[str]]) – Optional documents passed to RAG evaluators.
functions (Optional[List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest]]) – Optional function definitions for LLM tool call validation.
expected_output (Optional[str]) – Optional expected output for the evaluator
variables (Optional[dict[str, str]]) – Optional variables for the evaluator prompt template
skill_version_id (Optional[str]) – Skill version id. If omitted, the latest version is used.
_request_timeout (Optional[int]) – Optional timeout for the request in seconds.
skill_id (str)
- Return type:
root.generated.openapi_aclient.models.validator_execution_result.ValidatorExecutionResult
- async aget(skill_id: str, _request_timeout: int | None = None) ASkill ¶
Asynchronously get a Skill instance by ID.
- Parameters:
skill_id (str)
_request_timeout (Optional[int])
- Return type:
ASkill
- async alist(search_term: str | None = None, *, limit: int = 100, name: str | None = None, only_evaluators: bool = False, only_root_evaluators: bool = False) AsyncIterator[root.generated.openapi_aclient.models.skill_list_output.SkillListOutput] ¶
Asynchronously iterate through the skills.
- Parameters:
limit (int) – Number of entries to iterate through at most.
name (Optional[str]) – Specific name the returned skills must match.
only_evaluators (bool) – Returns only evaluators.
only_root_evaluators (bool) – Returns only Root Signals defined evaluators.
search_term (Optional[str]) – Can be used to limit returned skills.
- Return type:
AsyncIterator[root.generated.openapi_aclient.models.skill_list_output.SkillListOutput]
- async arun(skill_id: str, variables: Dict[str, str] | None = None, *, model_params: ModelParams | root.generated.openapi_aclient.models.ModelParamsRequest | None = None, skill_version_id: str | None = None, _request_timeout: int | None = None) root.generated.openapi_aclient.models.skill_execution_result.SkillExecutionResult ¶
Asynchronously run a skill.
- Parameters:
variables (Optional[Dict[str, str]]) – Dictionary mapping the prompt template variables to their values. For example, if the prompt is "tell me about {{subject}}", then variables={"subject": "history"} would generate "tell me about history".
model_params (Optional[Union[ModelParams, root.generated.openapi_aclient.models.ModelParamsRequest]]) – Optional model parameters to override the skill’s default parameters
skill_version_id (Optional[str]) – Optional version ID of the skill to run. Defaults to the latest version.
_request_timeout (Optional[int]) – Optional timeout for the request in seconds
skill_id (str)
- Return type:
root.generated.openapi_aclient.models.skill_execution_result.SkillExecutionResult
- async atest(test_dataset_id: str, prompt: str, model: ModelName, *, fallback_models: List[ModelName] | None = None, pii_filter: bool = False, validators: List[root.validators.AValidator] | None = None, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest] | None = None, data_loaders: List[root.data_loader.ADataLoader] | None = None, _request_timeout: int | None = None) List[root.generated.openapi_aclient.models.skill_test_output.SkillTestOutput] ¶
Asynchronously test a skill definition with a test dataset and return the result.
For a description of the rest of the arguments, please refer to the create method.
- Parameters:
test_dataset_id (str)
prompt (str)
model (ModelName)
fallback_models (Optional[List[ModelName]])
pii_filter (bool)
validators (Optional[List[root.validators.AValidator]])
reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest]]])
input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest]]])
data_loaders (Optional[List[root.data_loader.ADataLoader]])
_request_timeout (Optional[int])
- Return type:
List[root.generated.openapi_aclient.models.skill_test_output.SkillTestOutput]
- async atest_existing(skill_id: str, *, test_dataset_id: str | None = None, test_data: List[List[str]] | None = None, _request_timeout: int | None = None) List[root.generated.openapi_aclient.models.skill_test_output.SkillTestOutput] ¶
Asynchronously test an existing skill.
Note that exactly one of test_data and test_dataset_id must be provided.
- Parameters:
test_data (Optional[List[List[str]]]) – Ephemeral data to be used to test the skill.
test_dataset_id (Optional[str]) – ID of the dataset to be used to test the skill.
skill_id (str)
_request_timeout (Optional[int])
- Return type:
List[root.generated.openapi_aclient.models.skill_test_output.SkillTestOutput]
- async aupdate(skill_id: str, *, change_note: str | None = None, data_loaders: List[root.data_loader.ADataLoader] | None = None, fallback_models: List[ModelName] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest] | None = None, model: ModelName | None = None, name: str | None = None, pii_filter: bool | None = None, prompt: str | None = None, is_evaluator: bool | None = None, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest] | None = None, model_params: ModelParams | root.generated.openapi_aclient.models.ModelParamsRequest | None = None, evaluator_demonstrations: List[EvaluatorDemonstration] | None = None, objective_id: str | None = None, _request_timeout: int | None = None) ASkill ¶
Asynchronously update an existing skill instance and return the result.
For a description of the rest of the arguments, please refer to the acreate method.
- Parameters:
skill_id (str)
change_note (Optional[str])
data_loaders (Optional[List[root.data_loader.ADataLoader]])
fallback_models (Optional[List[ModelName]])
input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest]]])
model (Optional[ModelName])
name (Optional[str])
pii_filter (Optional[bool])
prompt (Optional[str])
is_evaluator (Optional[bool])
reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest]]])
model_params (Optional[Union[ModelParams, root.generated.openapi_aclient.models.ModelParamsRequest]])
evaluator_demonstrations (Optional[List[EvaluatorDemonstration]])
objective_id (Optional[str])
_request_timeout (Optional[int])
- Return type:
ASkill
- create(prompt: str = '', *, name: str | None = None, intent: str | None = None, model: ModelName | None = None, system_message: str = '', fallback_models: List[ModelName] | None = None, pii_filter: bool = False, validators: List[root.validators.Validator] | None = None, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest] | None = None, is_evaluator: bool | None = None, data_loaders: List[root.data_loader.DataLoader] | None = None, model_params: ModelParams | root.generated.openapi_client.models.model_params_request.ModelParamsRequest | None = None, evaluator_demonstrations: List[EvaluatorDemonstration] | None = None, objective_id: str | None = None, overwrite: bool = False, _request_timeout: int | None = None) Skill ¶
Create a new skill and return the result.
- Parameters:
prompt (str) – The prompt template that is provided to the model
name (Optional[str]) – Name of the skill (defaulting to <unnamed>)
intent (Optional[str]) – The intent of the skill (defaulting to name); not available if objective_id is set
model (Optional[ModelName]) – The model to use (defaults to ‘root’, which means Root Signals default at the time of skill creation)
system_message (str) – The system instruction to give to the model (mainly useful with OpenAI compatibility API)
fallback_models (Optional[List[ModelName]]) – The fallback models to use in case the primary model fails
pii_filter (bool) – Whether to use PII filter or not
validators (Optional[List[root.validators.Validator]]) – An optional list of validators; not available if objective_id is set
reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest]]]) – An optional list of reference variables for the skill
input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest]]]) – An optional list of input variables for the skill
is_evaluator (Optional[bool]) – Whether this skill is an evaluator skill
data_loaders (Optional[List[root.data_loader.DataLoader]]) – An optional list of data loaders.
model_params (Optional[Union[ModelParams, root.generated.openapi_client.models.model_params_request.ModelParamsRequest]]) – An optional set of additional parameters to the model (e.g., temperature)
evaluator_demonstrations (Optional[List[EvaluatorDemonstration]]) – Optional list of demonstrations for evaluator skills
objective_id (Optional[str]) – Optional pre-existing objective id to assign to the skill
overwrite (bool) – Whether to overwrite a skill with the same name if it exists
_request_timeout (Optional[int]) – Optional timeout for the request in seconds
- Return type:
Skill
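A sketch of creating and running a simple summarizer skill; the {{variable}} syntax follows the run documentation below, while client construction and the result attribute are assumptions:

```python
from root.client import RootSignals

client = RootSignals(api_key="<your-api-key>")  # illustrative construction

# The prompt template declares one input variable, {{text}}.
skill = client.skills.create(
    prompt="Summarize the following text: {{text}}",
    name="summarizer",
    system_message="You are a concise technical writer.",
)
result = skill.run(variables={"text": "A long article body ..."})
print(result.llm_output)  # `llm_output` is an assumed attribute name
```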
- delete(skill_id: str) None ¶
Delete the skill.
- Parameters:
skill_id (str)
- Return type:
None
- evaluate(skill_id: str, *, response: str, request: str | None = None, contexts: List[str] | None = None, functions: List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest] | None = None, expected_output: str | None = None, variables: dict[str, str] | None = None, skill_version_id: str | None = None, _request_timeout: int | None = None) root.generated.openapi_client.models.validator_execution_result.ValidatorExecutionResult ¶
Run all validators attached to a skill.
- Parameters:
response (str) – LLM output.
request (Optional[str]) – The prompt sent to the LLM. Optional.
contexts (Optional[List[str]]) – Optional documents passed to RAG evaluators.
functions (Optional[List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest]]) – Optional function definitions for LLM tool call validation.
expected_output (Optional[str]) – Optional expected output for the evaluator
variables (Optional[dict[str, str]]) – Optional variables for the evaluator prompt template
skill_version_id (Optional[str]) – Skill version id. If omitted, the latest version is used.
_request_timeout (Optional[int]) – Optional timeout for the request in seconds.
skill_id (str)
- Return type:
root.generated.openapi_client.models.validator_execution_result.ValidatorExecutionResult
- get(skill_id: str, _request_timeout: int | None = None) Skill ¶
Get a Skill instance by ID.
- Parameters:
skill_id (str)
_request_timeout (Optional[int])
- Return type:
Skill
- list(search_term: str | None = None, *, limit: int = 100, name: str | None = None, only_evaluators: bool = False, only_root_evaluators: bool = False) Iterator[root.generated.openapi_client.models.skill_list_output.SkillListOutput] ¶
Iterate through the skills.
- Parameters:
limit (int) – Number of entries to iterate through at most.
name (Optional[str]) – Specific name the returned skills must match.
only_evaluators (bool) – Returns only evaluators.
only_root_evaluators (bool) – Returns only Root Signals defined evaluators.
search_term (Optional[str]) – Can be used to limit returned skills.
- Return type:
Iterator[root.generated.openapi_client.models.skill_list_output.SkillListOutput]
- run(skill_id: str, variables: Dict[str, str] | None = None, *, model_params: ModelParams | root.generated.openapi_client.models.model_params_request.ModelParamsRequest | None = None, skill_version_id: str | None = None, _request_timeout: int | None = None) root.generated.openapi_client.models.skill_execution_result.SkillExecutionResult ¶
Run a skill.
- Parameters:
variables (Optional[Dict[str, str]]) – Dictionary mapping the prompt template variables to their values. For example, if the prompt is "tell me about {{subject}}", then variables={"subject": "history"} would generate "tell me about history".
model_params (Optional[Union[ModelParams, root.generated.openapi_client.models.model_params_request.ModelParamsRequest]]) – Optional model parameters to override the skill’s default parameters
skill_version_id (Optional[str]) – Optional version ID of the skill to run. Defaults to the latest version.
_request_timeout (Optional[int]) – Optional timeout for the request in seconds
skill_id (str)
- Return type:
root.generated.openapi_client.models.skill_execution_result.SkillExecutionResult
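Running a skill directly by ID, optionally pinning a version (a sketch; client construction and the result attribute are assumptions):

```python
from root.client import RootSignals

client = RootSignals(api_key="<your-api-key>")  # illustrative construction

result = client.skills.run(
    "<skill-id>",  # replace with a real skill ID
    variables={"subject": "history"},
    skill_version_id=None,  # None runs the latest version
)
print(result.llm_output)  # `llm_output` is an assumed attribute name
```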
- test(test_dataset_id: str, prompt: str, model: ModelName, *, fallback_models: List[ModelName] | None = None, pii_filter: bool = False, validators: List[root.validators.Validator] | None = None, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest] | None = None, data_loaders: List[root.data_loader.DataLoader] | None = None, _request_timeout: int | None = None) List[root.generated.openapi_client.models.skill_test_output.SkillTestOutput] ¶
Test a skill definition with a test dataset and return the result.
For a description of the rest of the arguments, please refer to the create method.
- Parameters:
test_dataset_id (str)
prompt (str)
model (ModelName)
fallback_models (Optional[List[ModelName]])
pii_filter (bool)
validators (Optional[List[root.validators.Validator]])
reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest]]])
input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest]]])
data_loaders (Optional[List[root.data_loader.DataLoader]])
_request_timeout (Optional[int])
- Return type:
List[root.generated.openapi_client.models.skill_test_output.SkillTestOutput]
- test_existing(skill_id: str, *, test_dataset_id: str | None = None, test_data: List[List[str]] | None = None, _request_timeout: int | None = None) List[root.generated.openapi_client.models.skill_test_output.SkillTestOutput] ¶
Test an existing skill.
Note that exactly one of test_data and test_dataset_id must be provided.
- Parameters:
test_data (Optional[List[List[str]]]) – Ephemeral data to be used to test the skill.
test_dataset_id (Optional[str]) – ID of the dataset to be used to test the skill.
skill_id (str)
_request_timeout (Optional[int])
- Return type:
List[root.generated.openapi_client.models.skill_test_output.SkillTestOutput]
- update(skill_id: str, *, change_note: str | None = None, data_loaders: List[root.data_loader.DataLoader] | None = None, fallback_models: List[ModelName] | None = None, input_variables: List[InputVariable] | List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest] | None = None, model: ModelName | None = None, name: str | None = None, pii_filter: bool | None = None, prompt: str | None = None, is_evaluator: bool | None = None, reference_variables: List[ReferenceVariable] | List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest] | None = None, model_params: ModelParams | root.generated.openapi_client.models.model_params_request.ModelParamsRequest | None = None, evaluator_demonstrations: List[EvaluatorDemonstration] | None = None, objective_id: str | None = None, _request_timeout: int | None = None) Skill ¶
Update an existing skill instance and return the result.
For a description of the rest of the arguments, please refer to the create method.
- Parameters:
skill_id (str)
change_note (Optional[str])
data_loaders (Optional[List[root.data_loader.DataLoader]])
fallback_models (Optional[List[ModelName]])
input_variables (Optional[Union[List[InputVariable], List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest]]])
model (Optional[ModelName])
name (Optional[str])
pii_filter (Optional[bool])
prompt (Optional[str])
is_evaluator (Optional[bool])
reference_variables (Optional[Union[List[ReferenceVariable], List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest]]])
model_params (Optional[Union[ModelParams, root.generated.openapi_client.models.model_params_request.ModelParamsRequest]])
evaluator_demonstrations (Optional[List[EvaluatorDemonstration]])
objective_id (Optional[str])
_request_timeout (Optional[int])
- Return type:
Skill
- client¶
- versions¶
- class root.skills.Versions(client: root.generated.openapi_aclient.ApiClient | root.generated.openapi_client.api_client.ApiClient)¶
Version listing (sub)API
Note that this class should not be instantiated directly.
- Parameters:
client (Union[root.generated.openapi_aclient.ApiClient, root.generated.openapi_client.api_client.ApiClient])
- async alist(skill_id: str) root.generated.openapi_aclient.models.paginated_skill_list.PaginatedSkillList ¶
Asynchronously list all versions of a skill.
- Parameters:
skill_id (str)
- Return type:
root.generated.openapi_aclient.models.paginated_skill_list.PaginatedSkillList
- list(skill_id: str) root.generated.openapi_client.models.paginated_skill_list.PaginatedSkillList ¶
List all versions of a skill.
- Parameters:
skill_id (str)
- Return type:
root.generated.openapi_client.models.paginated_skill_list.PaginatedSkillList
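A sketch of listing versions through the skills API's versions attribute (documented above); client construction and the pagination attributes are assumptions:

```python
from root.client import RootSignals

client = RootSignals(api_key="<your-api-key>")  # illustrative construction

page = client.skills.versions.list("<skill-id>")  # replace with a real skill ID
for version in page.results:  # `results` is an assumed pagination attribute
    print(version.id)  # `id` is an assumed attribute of each version entry
```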
- root.skills.ModelName¶