root.skills
===========

.. py:module:: root.skills


Attributes
----------

.. autoapisummary::

   root.skills.ModelName


Classes
-------

.. autoapisummary::

   root.skills.ACalibrateBatchParameters
   root.skills.ACalibrateBatchResult
   root.skills.AEvaluator
   root.skills.APresetEvaluatorRunner
   root.skills.ASkill
   root.skills.CalibrateBatchParameters
   root.skills.CalibrateBatchResult
   root.skills.Evaluator
   root.skills.EvaluatorDemonstration
   root.skills.Evaluators
   root.skills.InputVariable
   root.skills.ModelParams
   root.skills.PresetEvaluatorRunner
   root.skills.ReferenceVariable
   root.skills.Skill
   root.skills.Skills
   root.skills.Versions


Module Contents
---------------

.. py:class:: ACalibrateBatchParameters(name: str, prompt: str, model: ModelName, pii_filter: bool = False, reference_variables: Optional[Union[List[ReferenceVariable], List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest]]] = None, input_variables: Optional[Union[List[InputVariable], List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest]]] = None, data_loaders: Optional[List[root.data_loader.ADataLoader]] = None)

   .. py:attribute:: data_loaders

   .. py:attribute:: input_variables

   .. py:attribute:: model

   .. py:attribute:: name

   .. py:attribute:: pii_filter

   .. py:attribute:: prompt

   .. py:attribute:: reference_variables


.. py:class:: ACalibrateBatchResult

   Bases: :py:obj:`pydantic.BaseModel`

   .. py:attribute:: mae_errors_model
      :type: Dict[str, float]

   .. py:attribute:: mae_errors_prompt
      :type: Dict[str, float]

   .. py:attribute:: results
      :type: List[root.generated.openapi_aclient.models.evaluator_calibration_output.EvaluatorCalibrationOutput]

   .. py:attribute:: rms_errors_model
      :type: Dict[str, float]

   .. py:attribute:: rms_errors_prompt
      :type: Dict[str, float]


.. py:class:: AEvaluator

   Bases: :py:obj:`root.generated.openapi_aclient.models.skill.Skill`

   Wrapper for a single Evaluator.

   For available attributes, please check the (automatically generated) superclass documentation.

   .. py:method:: arun(response: str, request: Optional[str] = None, contexts: Optional[List[str]] = None, functions: Optional[List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest]] = None, expected_output: Optional[str] = None) -> root.generated.openapi_aclient.models.EvaluatorExecutionResult
      :async:

      Asynchronously run the evaluator.

      :param response: LLM output.
      :param request: The prompt sent to the LLM. Optional.
      :param contexts: Optional documents passed to RAG evaluators.
      :param functions: Optional list of evaluator execution functions.
      :param expected_output: Optional expected output for the evaluator.
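   A minimal asynchronous usage sketch. The client construction, the ``evaluators``
   attribute name, and the evaluator name below are assumptions; see
   :class:`root.client.RootSignals` and the ``Evaluators`` API below for the actual entry points.

   .. code-block:: python

      import asyncio

      from root.client import RootSignals  # assumed import path for the client


      async def main() -> None:
          client = RootSignals(api_key="...")  # hypothetical construction
          evaluator = await client.evaluators.aget_by_name("Clarity")  # assumed evaluator name
          result = await evaluator.arun(
              request="Explain photosynthesis briefly.",
              response="Plants turn sunlight, water and CO2 into sugars and oxygen.",
          )
          print(result.score)  # 'score' per the evaluator execution result description


      asyncio.run(main())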
.. py:class:: APresetEvaluatorRunner(client: Awaitable[root.generated.openapi_aclient.ApiClient], skill_id: str, eval_name: str, skill_version_id: Optional[str] = None)

   .. py:attribute:: skill_id

   .. py:attribute:: skill_version_id


.. py:class:: ASkill

   Bases: :py:obj:`root.generated.openapi_aclient.models.skill.Skill`

   Wrapper for a single Skill.

   For available attributes, please check the (automatically generated) superclass documentation.

   .. py:method:: aevaluate(*, response: str, request: Optional[str] = None, contexts: Optional[List[str]] = None, variables: Optional[dict[str, str]] = None, functions: Optional[List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest]] = None, _request_timeout: Optional[int] = None) -> root.generated.openapi_aclient.models.validator_execution_result.ValidatorExecutionResult
      :async:

      Asynchronously run all validators attached to a skill.

      :param response: LLM output.
      :param request: The prompt sent to the LLM. Optional.
      :param contexts: Optional documents passed to RAG evaluators.
      :param variables: Optional variables for the evaluator prompt template.

   .. py:method:: aopenai_base_url() -> str
      :async:

      Asynchronously get the OpenAI compatibility API URL for the skill.

      Currently only the OpenAI chat completions API is supported using the base URL.

   .. py:method:: arun(variables: Optional[Dict[str, str]] = None) -> root.generated.openapi_aclient.models.skill_execution_result.SkillExecutionResult
      :async:

      Asynchronously run a skill with optional variables.

      :param variables: The variables to be provided to the skill.


.. py:class:: CalibrateBatchParameters(name: str, prompt: str, model: ModelName, pii_filter: bool = False, reference_variables: Optional[Union[List[ReferenceVariable], List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest]]] = None, input_variables: Optional[Union[List[InputVariable], List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest]]] = None, data_loaders: Optional[List[root.data_loader.DataLoader]] = None)

   .. py:attribute:: data_loaders

   .. py:attribute:: input_variables

   .. py:attribute:: model

   .. py:attribute:: name

   .. py:attribute:: pii_filter

   .. py:attribute:: prompt

   .. py:attribute:: reference_variables


.. py:class:: CalibrateBatchResult

   Bases: :py:obj:`pydantic.BaseModel`

   .. py:attribute:: mae_errors_model
      :type: Dict[str, float]

   .. py:attribute:: mae_errors_prompt
      :type: Dict[str, float]

   .. py:attribute:: results
      :type: List[root.generated.openapi_client.models.evaluator_calibration_output.EvaluatorCalibrationOutput]

   .. py:attribute:: rms_errors_model
      :type: Dict[str, float]

   .. py:attribute:: rms_errors_prompt
      :type: Dict[str, float]


.. py:class:: Evaluator

   Bases: :py:obj:`root.generated.openapi_client.models.skill.Skill`

   Wrapper for a single Evaluator.

   For available attributes, please check the (automatically generated) superclass documentation.

   .. py:method:: run(response: str, request: Optional[str] = None, contexts: Optional[List[str]] = None, functions: Optional[List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest]] = None, expected_output: Optional[str] = None, variables: Optional[dict[str, str]] = None) -> root.generated.openapi_client.models.evaluator_execution_result.EvaluatorExecutionResult

      Run the evaluator.

      :param response: LLM output.
      :param request: The prompt sent to the LLM. Optional.
      :param contexts: Optional documents passed to RAG evaluators.
      :param functions: Optional list of evaluator execution functions.
      :param expected_output: Optional expected output for the evaluator.
      :param variables: Optional variables for the evaluator prompt template.


.. py:class:: EvaluatorDemonstration

   Bases: :py:obj:`pydantic.BaseModel`

   Evaluator demonstration.

   Demonstrations are used to train an evaluator to adjust its behavior.

   .. py:attribute:: justification
      :type: Optional[str]
      :value: None

   .. py:attribute:: output
      :type: str

   .. py:attribute:: prompt
      :type: Optional[str]
      :value: None

   .. py:attribute:: score
      :type: float


.. py:class:: Evaluators(client: Union[Awaitable[root.generated.openapi_aclient.ApiClient], root.generated.openapi_client.api_client.ApiClient])

   Evaluators (sub) API

   .. note::

      The construction of the API instance should be handled by accessing an
      attribute of a :class:`root.client.RootSignals` instance.
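   A minimal synchronous usage sketch. The ``RootSignals`` constructor arguments and the
   ``evaluators`` attribute name are assumptions that only illustrate how this API is reached:

   .. code-block:: python

      from root.client import RootSignals  # assumed import path

      client = RootSignals(api_key="...")  # hypothetical construction
      evaluators = client.evaluators       # assumed attribute exposing this API

      clarity = evaluators.get_by_name("Clarity")  # assumed evaluator name
      result = clarity.run(response="The sky is blue because of Rayleigh scattering.")
      print(result.score)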
   .. py:class:: Eval(*args, **kwds)

      Bases: :py:obj:`enum.Enum`

      Create a collection of name/value pairs.

      Example enumeration:

      >>> class Color(Enum):
      ...     RED = 1
      ...     BLUE = 2
      ...     GREEN = 3

      Access them by:

      - attribute access:

        >>> Color.RED
        <Color.RED: 1>

      - value lookup:

        >>> Color(1)
        <Color.RED: 1>

      - name lookup:

        >>> Color['RED']
        <Color.RED: 1>

      Enumerations can be iterated over, and know how many members they have:

      >>> len(Color)
      3

      >>> list(Color)
      [<Color.RED: 1>, <Color.BLUE: 2>, <Color.GREEN: 3>]

      Methods can be added to enumerations, and members can have their own
      attributes -- see the documentation for details.

      .. py:attribute:: Answer_Correctness
         :value: 'd4487568-4243-4da8-9c76-adbaf762dbe0'

      .. py:attribute:: Answer_Relevance
         :value: '0907d422-e94f-4c9c-a63d-ec0eefd8a903'

      .. py:attribute:: Answer_Semantic_Similarity
         :value: 'ff350bce-4b07-4af7-9640-803c9d3c2ff9'

      .. py:attribute:: Clarity
         :value: '9976d9f3-7265-4732-b518-d61c2642b14e'

      .. py:attribute:: Coherence
         :value: 'e599886c-c338-458f-91b3-5d7eba452618'

      .. py:attribute:: Conciseness
         :value: 'be828d33-158a-4e92-a2eb-f4d96c13f956'

      .. py:attribute:: Confidentiality
         :value: '2eaa0a02-47a9-48f7-9b47-66ad257f93eb'

      .. py:attribute:: Context_Precision
         :value: '9d1e9a25-7e76-4771-b1e3-40825d7918c5'

      .. py:attribute:: Context_Recall
         :value: '8bb60975-5062-4367-9fc6-a920044cba56'

      .. py:attribute:: Engagingness
         :value: '64729487-d4a8-42d8-bd9e-72fd8390c134'

      .. py:attribute:: Faithfulness
         :value: '901794f9-634c-4852-9e41-7c558f1ff1ab'

      .. py:attribute:: Formality
         :value: '8ab6cf1a-42b5-4a23-a15c-21372816483d'

      .. py:attribute:: Harmlessness
         :value: '379fee0a-4fd1-4942-833b-7d78d78b334d'

      .. py:attribute:: Helpfulness
         :value: '88bc92d5-bebf-45e4-9cd1-dfa33309c320'

      .. py:attribute:: JSON_Content_Accuracy
         :value: 'b6a9aeff-c888-46d7-9e9c-7cf8cb461762'

      .. py:attribute:: JSON_Empty_Values_Ratio
         :value: '03829088-1799-438e-ae30-1db60832e52d'

      .. py:attribute:: JSON_Property_Completeness
         :value: 'e5de37f7-d20c-420f-8072-f41dce96ecfc'

      .. py:attribute:: JSON_Property_Name_Accuracy
         :value: '740923aa-8ffd-49cc-a95d-14f831243b25'

      .. py:attribute:: JSON_Property_Type_Accuracy
         :value: 'eabc6924-1fec-4e96-82ce-c03bf415c885'

      .. py:attribute:: Non_toxicity
         :value: 'e296e374-7539-4eb2-a74a-47847dd26fb8'

      .. py:attribute:: Originality
         :value: 'e72cb54f-548a-44f9-a6ca-4e14e5ade7f7'

      .. py:attribute:: Persuasiveness
         :value: '85bb6a74-f5dd-4130-8dcc-cffdf72327cc'

      .. py:attribute:: Politeness
         :value: '2856903a-e48c-4548-b3fe-520fd88c4f25'

      .. py:attribute:: Precision
         :value: '767bdd49-5f8c-48ca-8324-dfd6be7f8a79'

      .. py:attribute:: Quality_of_Writing_Creative
         :value: '060abfb6-57c9-43b5-9a6d-8a1a9bb853b8'

      .. py:attribute:: Quality_of_Writing_Professional
         :value: '059affa9-2d1c-48de-8e97-f81dd3fc3cbe'

      .. py:attribute:: Relevance
         :value: 'bd789257-f458-4e9e-8ce9-fa6e86dc3fb9'

      .. py:attribute:: Safety_for_Children
         :value: '39a8b5ba-de77-4726-a6b0-621d40b3cdf5'

      .. py:attribute:: Sentiment_recognition
         :value: 'e3782c1e-eaf4-4b2d-8d26-53db2160f1fd'

      .. py:attribute:: Truthfulness
         :value: '053df10f-b0c7-400b-892e-46ce3aa1e430'
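   The ``Eval`` members above hold the ids of the predefined evaluators. A sketch, assuming
   these id values can be passed directly to ``run``/``arun`` and continuing the client
   construction assumed above:

   .. code-block:: python

      from root.skills import Evaluators

      result = client.evaluators.run(
          Evaluators.Eval.Truthfulness.value,  # predefined evaluator id
          request="Is the Moon made of cheese?",
          response="No, the Moon is mostly made of rock.",
      )
      print(result.score)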
   .. py:method:: acalibrate(*, name: str, test_dataset_id: Optional[str] = None, test_data: Optional[List[List[str]]] = None, prompt: str, model: ModelName, pii_filter: bool = False, reference_variables: Optional[Union[List[ReferenceVariable], List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest]]] = None, input_variables: Optional[Union[List[InputVariable], List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest]]] = None, data_loaders: Optional[List[root.data_loader.ADataLoader]] = None, _request_timeout: Optional[int] = None) -> List[root.generated.openapi_aclient.models.evaluator_calibration_output.EvaluatorCalibrationOutput]
      :async:

      Asynchronously run calibration set on an existing evaluator.

   .. py:method:: acalibrate_batch(*, evaluator_definitions: List[ACalibrateBatchParameters], test_dataset_id: Optional[str] = None, test_data: Optional[List[List[str]]] = None, parallel_requests: int = 1, _request_timeout: Optional[int] = None) -> ACalibrateBatchResult
      :async:

      Asynchronously run calibration for a set of prompts and models.

      :param evaluator_definitions: List of evaluator definitions.
      :param test_dataset_id: ID of the dataset to be used to test the skill.
      :param test_data: Actual data to be used to test the skill.
      :param parallel_requests: Number of parallel requests. Uses ThreadPoolExecutor if > 1.

      Returns a dictionary with the results and errors for each model and prompt.

   .. py:method:: acalibrate_existing(evaluator_id: str, *, test_dataset_id: Optional[str] = None, test_data: Optional[List[List[str]]] = None, _request_timeout: Optional[int] = None) -> List[root.generated.openapi_aclient.models.evaluator_calibration_output.EvaluatorCalibrationOutput]
      :async:

      Asynchronously run calibration set on an existing evaluator.

   .. py:method:: acreate(predicate: str = '', *, name: Optional[str] = None, intent: Optional[str] = None, model: Optional[ModelName] = None, fallback_models: Optional[List[ModelName]] = None, pii_filter: bool = False, reference_variables: Optional[Union[List[ReferenceVariable], List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest]]] = None, input_variables: Optional[Union[List[InputVariable], List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest]]] = None, data_loaders: Optional[List[root.data_loader.ADataLoader]] = None, model_params: Optional[Union[ModelParams, root.generated.openapi_aclient.models.ModelParamsRequest]] = None, objective_id: Optional[str] = None, overwrite: bool = False) -> AEvaluator
      :async:

      Asynchronously create a new evaluator and return the result.

      :param predicate: The question / predicate that is provided to the semantic quantification layer to transform it into a final prompt before being passed to the model (not used if using the OpenAI compatibility API).
      :param name: Name of the skill (defaulting to ).
      :param objective_id: Already created objective id to assign to the eval skill.
      :param intent: The intent of the skill (defaulting to name); not available if objective_id is set.
      :param model: The model to use (defaults to 'root', which means the Root Signals default at the time of skill creation).
      :param fallback_models: The fallback models to use in case the primary model fails.
      :param pii_filter: Whether to use the PII filter or not.
      :param reference_variables: An optional list of reference variables for the skill.
      :param input_variables: An optional list of input variables for the skill.
      :param data_loaders: An optional list of data loaders, which populate the reference variables.
      :param model_params: An optional set of additional parameters for the model.
      :param overwrite: Whether to overwrite a skill with the same name if it exists.
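   A sketch of creating a custom evaluator asynchronously. The predicate, name, and model
   value are illustrative only, and the snippet is assumed to run inside an async context
   with the client constructed as above:

   .. code-block:: python

      # inside an async function, with `client` constructed as in the earlier sketch
      evaluator = await client.evaluators.acreate(
          predicate="Is the response polite?",
          name="My politeness evaluator",
          intent="Check that responses stay polite",
          model="gpt-4o",  # assumed ModelName value
      )
      result = await evaluator.arun(response="Thanks for asking, happy to help!")
      print(result.score)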
   .. py:method:: aget_by_name(name: str) -> AEvaluator
      :async:

      Asynchronously get an evaluator instance by name.

      Args:
        name: The evaluator to be fetched. Note this only works for uniquely named evaluators.

   .. py:method:: arun(evaluator_id: str, *, request: str, response: str, contexts: Optional[List[str]] = None, functions: Optional[List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest]] = None, expected_output: Optional[str] = None, evaluator_version_id: Optional[str] = None, _request_timeout: Optional[int] = None) -> root.generated.openapi_aclient.models.EvaluatorExecutionResult
      :async:

      Asynchronously run an evaluator using its id and an optional version id.

      If no evaluator version id is given, the latest version of the evaluator will be used.

      Returns a dictionary with the following keys:

      - score: a value between 0 and 1 representing the score of the evaluator

   .. py:method:: calibrate(*, name: str, test_dataset_id: Optional[str] = None, test_data: Optional[List[List[str]]] = None, prompt: str, model: ModelName, pii_filter: bool = False, reference_variables: Optional[Union[List[ReferenceVariable], List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest]]] = None, input_variables: Optional[Union[List[InputVariable], List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest]]] = None, data_loaders: Optional[List[root.data_loader.DataLoader]] = None, _request_timeout: Optional[int] = None) -> List[root.generated.openapi_client.models.evaluator_calibration_output.EvaluatorCalibrationOutput]

      Run calibration set on an existing evaluator.

   .. py:method:: calibrate_batch(*, evaluator_definitions: List[CalibrateBatchParameters], test_dataset_id: Optional[str] = None, test_data: Optional[List[List[str]]] = None, parallel_requests: int = 1, _request_timeout: Optional[int] = None) -> CalibrateBatchResult

      Run calibration for a set of prompts and models.

      :param evaluator_definitions: List of evaluator definitions.
      :param test_dataset_id: ID of the dataset to be used to test the skill.
      :param test_data: Actual data to be used to test the skill.
      :param parallel_requests: Number of parallel requests. Uses ThreadPoolExecutor if > 1.

      Returns a dictionary with the results and errors for each model and prompt.

   .. py:method:: calibrate_existing(evaluator_id: str, *, test_dataset_id: Optional[str] = None, test_data: Optional[List[List[str]]] = None, _request_timeout: Optional[int] = None) -> List[root.generated.openapi_client.models.evaluator_calibration_output.EvaluatorCalibrationOutput]

      Run calibration set on an existing evaluator.
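   A sketch of batch calibration over two candidate prompts. The model value and dataset id
   are placeholders, and the layout of ``test_data`` rows is not specified here beyond the
   ``List[List[str]]`` type:

   .. code-block:: python

      from root.skills import CalibrateBatchParameters

      definitions = [
          CalibrateBatchParameters(
              name="politeness-v1",
              prompt="Is the response polite?",
              model="gpt-4o",  # assumed ModelName value
          ),
          CalibrateBatchParameters(
              name="politeness-v2",
              prompt="Rate how politely the response is phrased.",
              model="gpt-4o",
          ),
      ]
      batch = client.evaluators.calibrate_batch(
          evaluator_definitions=definitions,
          test_dataset_id="<dataset-id>",  # or pass test_data=[[...], ...] instead
          parallel_requests=2,
      )
      print(batch.rms_errors_prompt)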
   .. py:method:: create(predicate: str = '', *, name: Optional[str] = None, intent: Optional[str] = None, model: Optional[ModelName] = None, fallback_models: Optional[List[ModelName]] = None, pii_filter: bool = False, reference_variables: Optional[Union[List[ReferenceVariable], List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest]]] = None, input_variables: Optional[Union[List[InputVariable], List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest]]] = None, data_loaders: Optional[List[root.data_loader.DataLoader]] = None, model_params: Optional[Union[ModelParams, root.generated.openapi_client.models.model_params_request.ModelParamsRequest]] = None, objective_id: Optional[str] = None, overwrite: bool = False) -> Evaluator

      Create a new evaluator and return the result.

      :param predicate: The question / predicate that is provided to the semantic quantification layer to transform it into a final prompt before being passed to the model (not used if using the OpenAI compatibility API).
      :param name: Name of the skill (defaulting to ).
      :param objective_id: Already created objective id to assign to the eval skill.
      :param intent: The intent of the skill (defaulting to name); not available if objective_id is set.
      :param model: The model to use (defaults to 'root', which means the Root Signals default at the time of skill creation).
      :param fallback_models: The fallback models to use in case the primary model fails.
      :param pii_filter: Whether to use the PII filter or not.
      :param reference_variables: An optional list of reference variables for the skill.
      :param input_variables: An optional list of input variables for the skill.
      :param data_loaders: An optional list of data loaders, which populate the reference variables.
      :param model_params: An optional set of additional parameters for the model.
      :param overwrite: Whether to overwrite a skill with the same name if it exists.

   .. py:method:: get_by_name(name: str) -> Evaluator

      Get an evaluator instance by name.

      Args:
        name: The evaluator to be fetched. Note this only works for uniquely named evaluators.

   .. py:method:: run(evaluator_id: str, *, request: str, response: str, contexts: Optional[List[str]] = None, functions: Optional[List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest]] = None, expected_output: Optional[str] = None, evaluator_version_id: Optional[str] = None, _request_timeout: Optional[int] = None) -> root.generated.openapi_client.models.evaluator_execution_result.EvaluatorExecutionResult

      Run an evaluator using its id and an optional version id.

      If no evaluator version id is given, the latest version of the evaluator will be used.

      Returns a dictionary with the following keys:

      - score: a value between 0 and 1 representing the score of the evaluator

   .. py:attribute:: EvaluatorName

   .. py:attribute:: client

   .. py:attribute:: versions


.. py:class:: InputVariable

   Bases: :py:obj:`pydantic.BaseModel`

   Input variable definition.

   `name` within prompt gets populated with the provided variable.

   .. py:attribute:: name
      :type: str


.. py:class:: ModelParams

   Bases: :py:obj:`pydantic.BaseModel`

   Additional model parameters.

   All fields are made optional in practice.

   .. py:attribute:: temperature
      :type: Optional[float]
      :value: None
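``InputVariable``, ``ReferenceVariable`` and ``ModelParams`` are passed to the create and run
methods of the ``Evaluators`` and ``Skills`` APIs. A sketch; the ``skills`` attribute name,
the prompt template syntax, and the model value are assumptions:

.. code-block:: python

   from root.skills import InputVariable, ModelParams

   skill = client.skills.create(
       prompt="Summarise the following text: {text}",  # assumed template syntax
       name="Summariser",
       model="gpt-4o",  # assumed ModelName value
       input_variables=[InputVariable(name="text")],
       model_params=ModelParams(temperature=0.1),
   )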
.. py:class:: PresetEvaluatorRunner(client: root.generated.openapi_client.api_client.ApiClient, skill_id: str, eval_name: str, skill_version_id: Optional[str] = None)

   .. py:attribute:: skill_id

   .. py:attribute:: skill_version_id


.. py:class:: ReferenceVariable

   Bases: :py:obj:`pydantic.BaseModel`

   Reference variable definition.

   `name` within prompt gets populated with content from `dataset_id`.

   .. py:attribute:: dataset_id
      :type: str

   .. py:attribute:: name
      :type: str


.. py:class:: Skill

   Bases: :py:obj:`root.generated.openapi_client.models.skill.Skill`

   Wrapper for a single Skill.

   For available attributes, please check the (automatically generated) superclass documentation.

   .. py:method:: evaluate(*, response: str, request: Optional[str] = None, contexts: Optional[List[str]] = None, functions: Optional[List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest]] = None, _request_timeout: Optional[int] = None) -> root.generated.openapi_client.models.validator_execution_result.ValidatorExecutionResult

      Run all validators attached to a skill.

      :param response: LLM output.
      :param request: The prompt sent to the LLM. Optional.
      :param contexts: Optional documents passed to RAG evaluators.

   .. py:method:: run(variables: Optional[Dict[str, str]] = None) -> root.generated.openapi_client.models.skill_execution_result.SkillExecutionResult

      Run a skill with optional variables.

      :param variables: The variables to be provided to the skill.

   .. py:property:: openai_base_url
      :type: str

      Get the OpenAI compatibility API URL for the skill.

      Currently only the OpenAI chat completions API is supported using the base URL.


.. py:class:: Skills(client: Union[Awaitable[root.generated.openapi_aclient.ApiClient], root.generated.openapi_client.api_client.ApiClient])

   Skills API

   .. note::

      The construction of the API instance should be handled by accessing an
      attribute of a :class:`root.client.RootSignals` instance.

   .. py:method:: acreate(prompt: str = '', *, name: Optional[str] = None, intent: Optional[str] = None, model: Optional[ModelName] = None, system_message: str = '', fallback_models: Optional[List[ModelName]] = None, pii_filter: bool = False, validators: Optional[List[root.validators.AValidator]] = None, reference_variables: Optional[Union[List[ReferenceVariable], List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest]]] = None, input_variables: Optional[Union[List[InputVariable], List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest]]] = None, is_evaluator: Optional[bool] = None, data_loaders: Optional[List[root.data_loader.ADataLoader]] = None, model_params: Optional[Union[ModelParams, root.generated.openapi_aclient.models.ModelParamsRequest]] = None, objective_id: Optional[str] = None, overwrite: bool = False, _request_timeout: Optional[int] = None) -> ASkill
      :async:

      Asynchronously create a new skill and return the result.

      :param prompt: The prompt that is provided to the model (not used if using the OpenAI compatibility API).
      :param name: Name of the skill (defaulting to ).
      :param objective_id: Already created objective id to assign to the skill.
      :param intent: The intent of the skill (defaulting to name); not available if objective_id is set.
      :param model: The model to use (defaults to 'root', which means the Root Signals default at the time of skill creation).
      :param fallback_models: The fallback models to use in case the primary model fails.
      :param system_message: The system instruction to give to the model (mainly useful with the OpenAI compatibility API).
      :param pii_filter: Whether to use the PII filter or not.
      :param validators: An optional list of validators; not available if objective_id is set.
      :param reference_variables: An optional list of reference variables for the skill.
      :param input_variables: An optional list of input variables for the skill.
      :param is_evaluator: Whether the skill is an evaluator or not. Evaluators should have prompts that cause the model to return
      :param data_loaders: An optional list of data loaders, which populate the reference variables.
      :param model_params: An optional set of additional parameters for the model.
      :param overwrite: Whether to overwrite a skill with the same name if it exists.
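   A sketch of asynchronous skill creation. The prompt, name, and model value are
   illustrative, and the snippet is assumed to run inside an async context with the client
   constructed as in the earlier sketches:

   .. code-block:: python

      # inside an async function, with `client` constructed as above
      skill = await client.skills.acreate(
          prompt="Translate the input into French.",
          name="Translator",
          model="gpt-4o",  # assumed ModelName value
      )
      base_url = await skill.aopenai_base_url()  # OpenAI-compatible endpoint for this skill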
   .. py:method:: acreate_chat(skill_id: str, *, chat_id: Optional[str] = None, name: Optional[str] = None, history_from_chat_id: Optional[str] = None, _request_timeout: Optional[int] = None) -> root.skill_chat.ASkillChat
      :async:

      Asynchronously create and store a chat object with the given parameters.

      :param skill_id: The skill to chat with.
      :param chat_id: Optional identifier to identify the chat. If not supplied, one is automatically generated.
      :param name: Optional name for the chat.
      :param history_from_chat_id: Optional chat_id to copy chat history from.

   .. py:method:: adelete(skill_id: str) -> None
      :async:

      Asynchronously delete the skill from the registry.

      :param skill_id: The skill to be deleted.

   .. py:method:: aevaluate(skill_id: str, *, response: str, request: Optional[str] = None, contexts: Optional[List[str]] = None, functions: Optional[List[root.generated.openapi_aclient.models.EvaluatorExecutionFunctionsRequest]] = None, skill_version_id: Optional[str] = None, _request_timeout: Optional[int] = None) -> root.generated.openapi_aclient.models.validator_execution_result.ValidatorExecutionResult
      :async:

      Asynchronously run all validators attached to a skill.

      :param response: LLM output.
      :param request: The prompt sent to the LLM. Optional.
      :param contexts: Optional documents passed to RAG evaluators.
      :param skill_version_id: Skill version id. If omitted, the latest version is used.

   .. py:method:: aget(skill_id: str, _request_timeout: Optional[int] = None) -> ASkill
      :async:

      Asynchronously get a Skill instance by ID.

      :param skill_id: The skill to be fetched.

   .. py:method:: alist(search_term: Optional[str] = None, *, limit: int = 100, name: Optional[str] = None, only_evaluators: bool = False) -> AsyncIterator[root.generated.openapi_aclient.models.skill_list_output.SkillListOutput]
      :async:

      Asynchronously iterate through the skills.

      Note that this call lists only publicly available global skills and those skills within the
      organization that are available to the current user (or all of them if the user is an admin).

      :param limit: Number of entries to iterate through at most.
      :param name: Specific name the returned skills must match.
      :param only_evaluators: Match only skills with is_evaluator=True.
      :param search_term: Can be used to limit the returned skills.

   .. py:method:: arun(skill_id: str, variables: Optional[Dict[str, str]] = None, *, model_params: Optional[Union[ModelParams, root.generated.openapi_aclient.models.ModelParamsRequest]] = None, skill_version_id: Optional[str] = None, _request_timeout: Optional[int] = None) -> root.generated.openapi_aclient.models.skill_execution_result.SkillExecutionResult
      :async:

      Asynchronously run a skill with optional variables, model parameters, and a skill version id.

      If no skill version id is given, the latest version of the skill will be used.
      If model parameters are not given, the skill's model params will be used.
      If the skill has no model params, the default model parameters will be used.

      Returns a dictionary with the following keys:

      - llm_output: the LLM response of the skill run
      - validation: the result of the skill validation
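   A sketch of running a skill by id with variables and overriding model parameters. The id,
   variable names, and values are placeholders:

   .. code-block:: python

      from root.skills import ModelParams

      # inside an async function, with `client` constructed as above
      result = await client.skills.arun(
          "<skill-id>",
          variables={"text": "A long article to summarise ..."},
          model_params=ModelParams(temperature=0.0),
      )
      print(result.llm_output)  # 'llm_output' per the return description above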
   .. py:method:: atest(test_dataset_id: str, prompt: str, model: ModelName, *, fallback_models: Optional[List[ModelName]] = None, pii_filter: bool = False, validators: Optional[List[root.validators.AValidator]] = None, reference_variables: Optional[Union[List[ReferenceVariable], List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest]]] = None, input_variables: Optional[Union[List[InputVariable], List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest]]] = None, data_loaders: Optional[List[root.data_loader.ADataLoader]] = None, _request_timeout: Optional[int] = None) -> List[root.generated.openapi_aclient.models.skill_test_output.SkillTestOutput]
      :async:

      Asynchronously test a skill definition with a test dataset and return the result.

      For a description of the rest of the arguments, please refer to the create method.

   .. py:method:: atest_existing(skill_id: str, *, test_dataset_id: Optional[str] = None, test_data: Optional[List[List[str]]] = None, _request_timeout: Optional[int] = None) -> List[root.generated.openapi_aclient.models.skill_test_output.SkillTestOutput]
      :async:

      Asynchronously test an existing skill.

      Note that only one of test_data and test_dataset_id must be provided.

      :param test_data: Actual data to be used to test the skill.
      :param test_dataset_id: ID of the dataset to be used to test the skill.

   .. py:method:: aupdate(skill_id: str, *, change_note: Optional[str] = None, data_loaders: Optional[List[root.data_loader.ADataLoader]] = None, fallback_models: Optional[List[ModelName]] = None, input_variables: Optional[Union[List[InputVariable], List[root.generated.openapi_aclient.models.input_variable_request.InputVariableRequest]]] = None, model: Optional[ModelName] = None, name: Optional[str] = None, pii_filter: Optional[bool] = None, prompt: Optional[str] = None, is_evaluator: Optional[bool] = None, reference_variables: Optional[Union[List[ReferenceVariable], List[root.generated.openapi_aclient.models.reference_variable_request.ReferenceVariableRequest]]] = None, model_params: Optional[Union[ModelParams, root.generated.openapi_aclient.models.ModelParamsRequest]] = None, evaluator_demonstrations: Optional[List[EvaluatorDemonstration]] = None, objective_id: Optional[str] = None, _request_timeout: Optional[int] = None) -> ASkill
      :async:

      Asynchronously update an existing skill instance and return the result.

      For a description of the rest of the arguments, please refer to the create method.

      :param skill_id: The skill to be updated.
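   A sketch of updating an evaluator-type skill with demonstrations. The skill id and the
   demonstration contents are placeholders:

   .. code-block:: python

      from root.skills import EvaluatorDemonstration

      # inside an async function, with `client` constructed as above
      updated = await client.skills.aupdate(
          "<skill-id>",
          change_note="Add a politeness demonstration",
          evaluator_demonstrations=[
              EvaluatorDemonstration(
                  prompt="Is the response polite?",
                  output="Thanks for asking, happy to help!",
                  score=1.0,
              ),
          ],
      )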
   .. py:method:: create(prompt: str = '', *, name: Optional[str] = None, intent: Optional[str] = None, model: Optional[ModelName] = None, system_message: str = '', fallback_models: Optional[List[ModelName]] = None, pii_filter: bool = False, validators: Optional[List[root.validators.Validator]] = None, reference_variables: Optional[Union[List[ReferenceVariable], List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest]]] = None, input_variables: Optional[Union[List[InputVariable], List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest]]] = None, is_evaluator: Optional[bool] = None, data_loaders: Optional[List[root.data_loader.DataLoader]] = None, model_params: Optional[Union[ModelParams, root.generated.openapi_client.models.model_params_request.ModelParamsRequest]] = None, objective_id: Optional[str] = None, overwrite: bool = False, _request_timeout: Optional[int] = None) -> Skill

      Create a new skill and return the result.

      :param prompt: The prompt that is provided to the model (not used if using the OpenAI compatibility API).
      :param name: Name of the skill (defaulting to ).
      :param objective_id: Already created objective id to assign to the skill.
      :param intent: The intent of the skill (defaulting to name); not available if objective_id is set.
      :param model: The model to use (defaults to 'root', which means the Root Signals default at the time of skill creation).
      :param fallback_models: The fallback models to use in case the primary model fails.
      :param system_message: The system instruction to give to the model (mainly useful with the OpenAI compatibility API).
      :param pii_filter: Whether to use the PII filter or not.
      :param validators: An optional list of validators; not available if objective_id is set.
      :param reference_variables: An optional list of reference variables for the skill.
      :param input_variables: An optional list of input variables for the skill.
      :param is_evaluator: Whether the skill is an evaluator or not. Evaluators should have prompts that cause the model to return
      :param data_loaders: An optional list of data loaders, which populate the reference variables.
      :param model_params: An optional set of additional parameters for the model.
      :param overwrite: Whether to overwrite a skill with the same name if it exists.

   .. py:method:: create_chat(skill_id: str, *, chat_id: Optional[str] = None, name: Optional[str] = None, history_from_chat_id: Optional[str] = None, _request_timeout: Optional[int] = None) -> root.skill_chat.SkillChat

      Create and store a chat object with the given parameters.

      :param skill_id: The skill to chat with.
      :param chat_id: Optional identifier to identify the chat. If not supplied, one is automatically generated.
      :param name: Optional name for the chat.
      :param history_from_chat_id: Optional chat_id to copy chat history from.

   .. py:method:: delete(skill_id: str) -> None

      Delete the skill from the registry.

      :param skill_id: The skill to be deleted.

   .. py:method:: evaluate(skill_id: str, *, response: str, request: Optional[str] = None, contexts: Optional[List[str]] = None, functions: Optional[List[root.generated.openapi_client.models.evaluator_execution_functions_request.EvaluatorExecutionFunctionsRequest]] = None, variables: Optional[dict[str, str]] = None, skill_version_id: Optional[str] = None, _request_timeout: Optional[int] = None) -> root.generated.openapi_client.models.validator_execution_result.ValidatorExecutionResult

      Run all validators attached to a skill.

      :param response: LLM output.
      :param request: The prompt sent to the LLM. Optional.
      :param contexts: Optional documents passed to RAG evaluators.
      :param variables: Optional variables for the evaluator prompt template.
      :param skill_version_id: Skill version id. If omitted, the latest version is used.

   .. py:method:: get(skill_id: str, _request_timeout: Optional[int] = None) -> Skill

      Get a Skill instance by ID.

      :param skill_id: The skill to be fetched.

   .. py:method:: list(search_term: Optional[str] = None, *, limit: int = 100, name: Optional[str] = None, only_evaluators: bool = False) -> Iterator[root.generated.openapi_client.models.skill_list_output.SkillListOutput]

      Iterate through the skills.

      Note that this call lists only publicly available global skills and those skills within the
      organization that are available to the current user (or all of them if the user is an admin).

      :param limit: Number of entries to iterate through at most.
      :param name: Specific name the returned skills must match.
      :param only_evaluators: Match only skills with is_evaluator=True.
      :param search_term: Can be used to limit the returned skills.

   .. py:method:: run(skill_id: str, variables: Optional[Dict[str, str]] = None, *, model_params: Optional[Union[ModelParams, root.generated.openapi_client.models.model_params_request.ModelParamsRequest]] = None, skill_version_id: Optional[str] = None, _request_timeout: Optional[int] = None) -> root.generated.openapi_client.models.skill_execution_result.SkillExecutionResult

      Run a skill with optional variables, model parameters, and a skill version id.

      If no skill version id is given, the latest version of the skill will be used.
      If model parameters are not given, the skill's model params will be used.
      If the skill has no model params, the default model parameters will be used.

      Returns a dictionary with the following keys:

      - llm_output: the LLM response of the skill run
      - validation: the result of the skill validation

   .. py:method:: test(test_dataset_id: str, prompt: str, model: ModelName, *, fallback_models: Optional[List[ModelName]] = None, pii_filter: bool = False, validators: Optional[List[root.validators.Validator]] = None, reference_variables: Optional[Union[List[ReferenceVariable], List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest]]] = None, input_variables: Optional[Union[List[InputVariable], List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest]]] = None, data_loaders: Optional[List[root.data_loader.DataLoader]] = None, _request_timeout: Optional[int] = None) -> List[root.generated.openapi_client.models.skill_test_output.SkillTestOutput]

      Test a skill definition with a test dataset and return the result.

      For a description of the rest of the arguments, please refer to the create method.

   .. py:method:: test_existing(skill_id: str, *, test_dataset_id: Optional[str] = None, test_data: Optional[List[List[str]]] = None, _request_timeout: Optional[int] = None) -> List[root.generated.openapi_client.models.skill_test_output.SkillTestOutput]

      Test an existing skill.

      Note that only one of test_data and test_dataset_id must be provided.

      :param test_data: Actual data to be used to test the skill.
      :param test_dataset_id: ID of the dataset to be used to test the skill.
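   A sketch of listing evaluator skills synchronously. The field accessed on each listed item
   follows the generated ``SkillListOutput`` model and is assumed here:

   .. code-block:: python

      # with `client` constructed as in the earlier sketches
      for item in client.skills.list(only_evaluators=True, limit=20):
          print(item.name)  # assumed field on SkillListOutput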
   .. py:method:: update(skill_id: str, *, change_note: Optional[str] = None, data_loaders: Optional[List[root.data_loader.DataLoader]] = None, fallback_models: Optional[List[ModelName]] = None, input_variables: Optional[Union[List[InputVariable], List[root.generated.openapi_client.models.input_variable_request.InputVariableRequest]]] = None, model: Optional[ModelName] = None, name: Optional[str] = None, pii_filter: Optional[bool] = None, prompt: Optional[str] = None, is_evaluator: Optional[bool] = None, reference_variables: Optional[Union[List[ReferenceVariable], List[root.generated.openapi_client.models.reference_variable_request.ReferenceVariableRequest]]] = None, model_params: Optional[Union[ModelParams, root.generated.openapi_client.models.model_params_request.ModelParamsRequest]] = None, evaluator_demonstrations: Optional[List[EvaluatorDemonstration]] = None, objective_id: Optional[str] = None, _request_timeout: Optional[int] = None) -> Skill

      Update an existing skill instance and return the result.

      For a description of the rest of the arguments, please refer to the create method.

      :param skill_id: The skill to be updated.

   .. py:attribute:: client

   .. py:attribute:: versions


.. py:class:: Versions(client: Union[Awaitable[root.generated.openapi_aclient.ApiClient], root.generated.openapi_client.api_client.ApiClient])

   Version listing (sub)API

   Note that this should not be directly instantiated.

   .. py:method:: alist(skill_id: str) -> root.generated.openapi_aclient.models.paginated_skill_list.PaginatedSkillList
      :async:

      Asynchronously list all versions of a skill.

      :param skill_id: The skill to list the versions for.

   .. py:method:: list(skill_id: str) -> root.generated.openapi_client.models.paginated_skill_list.PaginatedSkillList

      List all versions of a skill.

      :param skill_id: The skill to list the versions for.


.. py:data:: ModelName