
ablate#


ablate turns deep learning experiments into structured, human-readable reports. It is built around five principles:

  • composability: sources, queries, blocks, and exporters can be freely combined

  • immutability: query operations never mutate runs in-place, enabling safe reuse and functional-style chaining

  • extensibility: sources, blocks, and exporters are designed to be easily extended with custom implementations

  • readability: reports are generated with humans in mind, making them shareable, inspectable, and format-agnostic

  • minimal friction: no servers, no databases, no heavy integrations, just Python and your existing logs

Currently, ablate supports ClearML, MLflow, TensorBoard, and WandB as experiment sources (plus a built-in Mock source for simulated runs), and exporters such as Markdown.

Installation#

Install ablate using pip:

pip install ablate

The following optional dependencies can be installed to enable additional features:

  • ablate[clearml] to use ClearML as an experiment source

  • ablate[mlflow] to use MLflow as an experiment source

  • ablate[tensorboard] to use TensorBoard as an experiment source

  • ablate[wandb] to use WandB as an experiment source

  • ablate[jupyter] to use ablate in a Jupyter notebook
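
Multiple extras can be combined in a single install using standard pip syntax (quoting prevents shell globbing):

pip install "ablate[wandb,jupyter]"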

Quickstart#

ablate is built around five composable modules: sources, queries, blocks, exporters, and reports.

Creating a Report#

To create your first Report, define one or more experiment sources. For example, the built-in Mock source can be used to simulate runs:

import ablate

source = ablate.sources.Mock(
    grid={"model": ["vgg", "resnet"], "lr": [0.01, 0.001]},
    num_seeds=2,
)

Each run in the mock source has accuracy, f1, and loss metrics, along with a seed parameter and the manually defined parameters model and lr. Next, the runs can be loaded and processed using functional-style queries, e.g. to sort by accuracy, group by seed, aggregate the results by mean, and finally collect all results into a single list:

runs = (
    ablate.queries.Query(source.load())
    .sort(ablate.queries.Metric("accuracy", direction="max"))
    .groupdiff(ablate.queries.Param("seed"))
    .aggregate("mean")
    .all()
)

Now that the runs are loaded and processed, a Report comprising multiple blocks can be created to structure the content:

report = ablate.Report(runs)
report.add(ablate.blocks.H1("Model Performance"))
report.add(
    ablate.blocks.Table(
        columns=[
            ablate.queries.Param("model", label="Model"),
            ablate.queries.Param("lr", label="Learning Rate"),
            ablate.queries.Metric("accuracy", direction="max", label="Accuracy"),
            ablate.queries.Metric("f1", direction="max", label="F1 Score"),
            ablate.queries.Metric("loss", direction="min", label="Loss"),
        ]
    )
)

Finally, the report can be exported to a desired format such as Markdown:

ablate.exporters.Markdown().export(report)

This will produce a report.md file with the following content:

# Model Performance

| Model   |   Learning Rate |   Accuracy |   F1 Score |    Loss |
|:--------|----------------:|-----------:|-----------:|--------:|
| resnet  |           0.01  |    0.94285 |    0.90655 | 0.0847  |
| vgg     |           0.01  |    0.92435 |    0.8813  | 0.0895  |
| resnet  |           0.001 |    0.9262  |    0.8849  | 0.0743  |
| vgg     |           0.001 |    0.92745 |    0.90875 | 0.08115 |

Combining Sources#

Multiple sources can be combined with the + operator, since loaded sources are plain lists of Run objects:

runs1 = ablate.sources.Mock(...).load()
runs2 = ablate.sources.Mock(...).load()

all_runs = runs1 + runs2  # combines both sources into a single list of runs
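
The combined list behaves like any other list of runs, so it can be passed directly into a query or a report:

query = ablate.queries.Query(all_runs)
report = ablate.Report(query.all())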

Functional Queries#

ablate queries are functionally pure: intermediate results are never modified and can therefore be reused:

runs = ablate.sources.Mock(...).load()

sorted_runs = ablate.queries.Query(runs).sort(
    ablate.queries.Metric("accuracy", direction="max")
)

filtered_runs = sorted_runs.filter(
    ablate.queries.Metric("accuracy", direction="max") > 0.9
)

sorted_runs.all()  # still contains all runs, sorted by accuracy
filtered_runs.all()  # only contains runs with accuracy > 0.9

Composing Reports#

By default, ablate reports populate blocks based on the global list of runs passed to the report during initialization. To create more complex reports, blocks can be populated with a custom list of runs using the runs parameter:

report = ablate.Report(sorted_runs.all())
report.add(ablate.blocks.H1("Report with Sorted Runs and Filtered Runs"))
report.add(ablate.blocks.H2("Sorted Runs"))
report.add(
    ablate.blocks.Table(
        columns=[
            ablate.queries.Param("model", label="Model"),
            ablate.queries.Param("lr", label="Learning Rate"),
            ablate.queries.Metric("accuracy", direction="max", label="Accuracy"),
        ]
    )
)
report.add(ablate.blocks.H2("Filtered Runs"))
report.add(
    ablate.blocks.Table(
        runs=filtered_runs.all(),  # use the filtered runs only for this block
        columns=[
            ablate.queries.Param("model", label="Model"),
            ablate.queries.Param("lr", label="Learning Rate"),
            ablate.queries.Metric("accuracy", direction="max", label="Accuracy"),
        ]
    )
)
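
As before, the finished report can then be exported to the desired format:

ablate.exporters.Markdown().export(report)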

Extending ablate#

ablate is designed to be extensible, allowing you to create custom sources, blocks, and exporters by implementing their respective abstract classes.
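
As a minimal sketch of a custom source, the example below subclasses a hypothetical ablate.sources.Source base class and implements load() to return Run objects; the base-class name, the ablate.Run constructor, and its keyword arguments are assumptions for illustration and may differ from the actual API:

import csv

import ablate


class CSVSource(ablate.sources.Source):  # base-class name is an assumption
    """Loads one run per row from a CSV file."""

    def __init__(self, path: str) -> None:
        self.path = path

    def load(self):
        runs = []
        with open(self.path, newline="") as f:
            for row in csv.DictReader(f):
                # ablate.Run and its keyword arguments are hypothetical;
                # map CSV columns onto parameters and metrics as appropriate.
                runs.append(
                    ablate.Run(
                        params={"model": row["model"], "lr": float(row["lr"])},
                        metrics={"accuracy": float(row["accuracy"])},
                    )
                )
        return runs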

To contribute to ablate, please refer to the contribution guide.
