
Datasets

How datasets fit into the loop

So far, we've covered the first two steps of the AI engineering loop: tracing your application and monitoring its behavior live. Those give you visibility into what your system is actually doing and give you inspiration for improvement.

Now the question becomes: when you spot something worth improving, how do you test a change before deploying it to production? The next three steps of the loop cover exactly this, and it starts with datasets.

A dataset is a collection of test cases that you run your application against each time you make a change; each such run is called an experiment. Instead of deploying and hoping for the best, you get a repeatable, consistent check across a set of inputs that represent real-world usage.

The dataset item

A dataset is made up of items; each item represents one test case: a situation your application should be able to handle. Generally, an item has three fields:

  • Input (required)
  • Expected output (optional)
  • Metadata (optional)
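As a concrete illustration, one dataset item could look like the following. The field names here are illustrative; your evaluation tool may use slightly different keys:

```python
# One dataset item with the three fields. Only "input" is required.
item = {
    "input": "How do I get a refund for my subscription?",
    "expected_output": "billing_inquiry",  # optional reference for scoring
    "metadata": {"source": "production", "locale": "en"},  # optional context
}
```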

The three fields of a dataset item

Dataset Fields — When They Are Used

A good mental model is:

  • Input: the input needed for the task you're testing
  • Metadata: any additional context that's helpful when scoring the result, or to associate the dataset item with a use case
  • Expected output: defines what a correct or good response looks like

Common expected output patterns

Whether you need an expected output, and what it looks like, depends on which type of evaluator you use.

Reference-based versus reference-free evaluators

Some evaluators check the output against a predefined expected output (reference-based). Others assess the output without needing a ground truth to compare against (reference-free).

Exact match

The expected output is the literal correct answer. For example:

  • A classification task where the correct label is "billing_inquiry"
  • An extraction task where the expected entities are ["Paris", "Thursday"]
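An exact-match check is simple to write yourself. A sketch covering both examples above (for extraction, comparing sets makes the check order-insensitive):

```python
# Exact-match evaluation: the output must equal the reference exactly.
def exact_match(output, expected) -> bool:
    return output == expected

# For extraction, compare as sets so entity order doesn't matter.
def entities_match(output: list, expected: list) -> bool:
    return set(output) == set(expected)

print(exact_match("billing_inquiry", "billing_inquiry"))   # True
print(exact_match("refund_request", "billing_inquiry"))    # False
print(entities_match(["Thursday", "Paris"], ["Paris", "Thursday"]))  # True
```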

Reference answer

The expected output is a gold-standard response that shows what a good output looks like. The evaluator compares your application's output against this example, for instance by checking semantic similarity or whether the key points match.
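As a toy illustration of a reference-based comparison, the sketch below uses word overlap (Jaccard similarity) as a cheap stand-in for the embedding-based or LLM-based similarity you would use in practice:

```python
# Toy similarity check against a reference answer: Jaccard overlap of words.
# A real evaluator would use embeddings or an LLM judge instead.
def jaccard_similarity(output: str, reference: str) -> float:
    a = set(output.lower().split())
    b = set(reference.lower().split())
    return len(a & b) / len(a | b)

reference = "You can request a refund within 30 days of purchase"
output = "Refunds can be requested within 30 days of purchase"
print(round(jaccard_similarity(output, reference), 2))  # 0.46
```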

Evaluation criteria

The expected output is a list of checks or requirements the output should satisfy. For example:

  • "must mention the refund policy"
  • "must include a link to the help center"

The evaluator checks whether the output meets these criteria.
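A minimal sketch of a criteria-based evaluator, using simple substring tests on key phrases; in practice, an LLM judge typically performs each check:

```python
# Criteria-based evaluation: each entry in the expected output is a
# requirement the response must satisfy. Substring matching is a toy
# stand-in for an LLM-judge check.
def meets_criteria(output: str, criteria: list[str]) -> dict:
    return {c: c.lower() in output.lower() for c in criteria}

criteria = ["refund policy", "help center"]
output = "Per our refund policy, refunds take 5 days; see the help center."
results = meets_criteria(output, criteria)
print(results)  # every criterion maps to True or False
```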

Nothing

Sometimes no expected output is required at all. If you're just checking whether:

  • the tone is professional
  • the response is safe
  • the output follows a required format

then your dataset items need nothing beyond an input, because a reference-free evaluator can score these properties directly.
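For example, a format check is fully reference-free: it needs only the output. A sketch checking that a response is valid JSON containing a required key:

```python
import json

# Reference-free evaluator: validates the output's format without any
# ground truth. Here: is the output valid JSON containing a given key?
def is_valid_json_with_key(output: str, key: str) -> bool:
    try:
        return key in json.loads(output)
    except (json.JSONDecodeError, TypeError):
        return False

print(is_valid_json_with_key('{"answer": "42"}', "answer"))  # True
print(is_valid_json_with_key("not json at all", "answer"))   # False
```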

Combination of the above

Because you can run several evaluators on a single dataset item, the expected output field can also hold multiple types of reference data at once. It is a JSON field, so combining, say, an exact-match label with a list of criteria is straightforward.
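For instance, one item could carry a label for an exact-match evaluator, a reference answer for a similarity evaluator, and criteria for a judge, all in the same field (field names illustrative):

```python
# One item whose expected output serves three different evaluators at once.
item = {
    "input": "How do I get a refund?",
    "expected_output": {
        "label": "billing_inquiry",                      # exact match
        "reference_answer": "You can request a refund within 30 days.",
        "criteria": ["mentions the refund policy"],      # criteria check
    },
}
```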

What makes a good dataset

A good dataset mirrors what your system will encounter in production. If passing the dataset gives you confidence before deploying, it's doing its job.

Clear in scope. Each dataset should have a well-defined scope. That can be end-to-end if you treat internal steps as implementation details, or it can target an individual step like retrieval or summarization if that's the part you're trying to improve. You'll likely end up with multiple datasets, each with a clear purpose.

  • Granular datasets: faster and cheaper to run, easier to reason about
  • End-to-end datasets: catch issues that only surface when steps interact

The right size for the workflow. Some datasets are small and fast enough to run on every push as part of your CI/CD pipeline. Others are larger and more comprehensive, and are useful to run periodically but too slow for every minor change.

  • Exploring a single issue: ~10 items
  • Testing model capability boundaries: ~10 complex unsolved examples
  • CI checks on larger changes: 100–1000 items, covering the production distribution
  • Guardrail penetration testing: large and growing; add cases as they surface in production

Where to start

Start with the most concrete examples you have, then expand coverage once you know what you are trying to test.

  1. Pull examples from production traces that you've spotted and want to improve, used as-is, anonymized, or transformed by AI.
  2. Add hand-written cases based on predefined requirements, edge cases, or behaviors your agent must handle reliably.
  3. Generate synthetic examples with AI once you know which dimensions you want to cover more broadly.
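All three sources can feed the same dataset. A sketch of seeding one from the first two sources; the trace records and field names here are hypothetical placeholders:

```python
# Sketch: seed a dataset from production traces plus hand-written cases.
# Trace structure and field names are hypothetical.
production_traces = [
    {"input": "Cancel my plan immediately", "trace_id": "t-1"},
]
hand_written = [
    {"input": "", "expected_output": "unclear_request",
     "metadata": {"note": "edge case: empty input"}},
]

dataset = [
    {"input": t["input"],
     "metadata": {"source": "production", "trace_id": t["trace_id"]}}
    for t in production_traces
] + hand_written

print(len(dataset))  # 2
```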

What comes next

Once you have a dataset, the next step is running your system against it to see how changes affect output quality. This is what experiments are for.

