Testing LLM apps is incomplete without scenarios to test against. You can specify these scenarios in your test dataset.

Each dataset consists of samples. Each sample has:

  • id (string; optional): Unique identifier for the sample
  • inputs (object): Input parameters that define a scenario. Keys are parameter names; values are the parameter values for this sample
  • expected (string; optional): The expected output, used as the ground truth for this sample
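
For example, a complete sample with all three fields might look like this (the id value here is illustrative):

{
  "id": "sample-1",
  "inputs": {
    "user_message": "Hi my name is John Doe."
  },
  "expected": "John Doe"
}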

Datasets can be specified in the configuration file directly, or imported from a file or HTTP endpoint.

Specify dataset directly

In this example, the LLM is asked to extract the user’s name from an incoming message. The message is provided under the user_message parameter in the dataset.

The LLM prompt can reference this value through the {{user_message}} placeholder.

"dataset": {
  "samples": [
    {
      "inputs": {
        "user_message": "Hi my name is John Doe."
      },
      "expected": "John Doe"
    },
    {
      "inputs": {
        "user_message": "This is Alice. Is anybody here?"
      },
      "expected": "Alice"
    }
  ]
}
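
For context, a run could reference this parameter in its prompt. This is a sketch; the exact key for the prompt depends on your runner configuration:

"prompt": "Extract the person's name from the following message and respond with only the name: {{user_message}}"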

Import from JSONL file

Specify a path to the JSONL file. Each line of the file must be a valid JSON object. On import, the keys of each object become the inputs of the corresponding sample.

Relative paths are resolved relative to the configuration file.

"dataset": {
  "path": "HumanEval.jsonl"
}
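
For illustration (these lines are not the actual HumanEval schema), a file with the following two lines would produce two samples, each with a user_message input:

{"user_message": "Hi my name is John Doe."}
{"user_message": "This is Alice. Is anybody here?"}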

Import Empirical JSON format

If your dataset follows the Empirical JSON format, you can import it from a file or an HTTP endpoint.

"dataset": {
  "path": "https://assets.empirical.run/datasets/json/spider-tiny.json"
}
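
Assuming the Empirical JSON format mirrors the inline samples structure shown above, a minimal file might look like the following. This is a sketch under that assumption, not an authoritative spec:

{
  "samples": [
    {
      "inputs": {
        "user_message": "Hi my name is John Doe."
      },
      "expected": "John Doe"
    }
  ]
}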