Type alias CriteriaEvalChainConfig

CriteriaEvalChainConfig: EvalConfig & {
    evaluatorType: "criteria";
    criteria?: Criteria | Record<string, string>;
    feedbackKey?: string;
    llm?: Toolkit;
}

Configuration to load a "CriteriaEvalChain" evaluator, which prompts an LLM to determine whether the model's prediction complies with the provided criteria.

Type declaration

  • evaluatorType: "criteria"
  • Optional criteria?: Criteria | Record<string, string>

    The "criteria" to insert into the prompt template used for evaluation. See the prompt at https://smith.langchain.com/hub/langchain-ai/criteria-evaluator for more information.

  • Optional feedbackKey?: string

    The feedback (or metric) name to use for the logged evaluation results. If none is provided, it defaults to the evaluationName.

  • Optional llm?: Toolkit

    The language model to use as the evaluator.

Param

The criteria to use for the evaluator.

Param

The language model to use for the evaluator.

Returns

The configuration for the evaluator.

Example

const evalConfig = {
  evaluators: [{
    evaluatorType: "criteria",
    criteria: "helpfulness"
  }]
};

Example

const evalConfig = {
  evaluators: [{
    evaluatorType: "criteria",
    criteria: {
      isCompliant: "Does the submission comply with the requirements of XYZ?"
    }
  }]
};
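Example

A sketch combining the object-form criteria above with an explicit feedbackKey, so results are logged under a custom metric name instead of the default evaluationName. The "compliance" name below is an illustrative assumption, not a library default.

```typescript
// Illustrative sketch: "compliance" is an assumed feedback name chosen
// for this example; any string works as the logged metric name.
const evalConfig = {
  evaluators: [{
    evaluatorType: "criteria" as const,
    criteria: {
      isCompliant: "Does the submission comply with the requirements of XYZ?"
    },
    // Results will be logged under "compliance" rather than the evaluationName.
    feedbackKey: "compliance"
  }]
};
```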

Generated using TypeDoc