The open-source tool for ML experiment comparison



Self-hosted, with full access to your metadata.

Explore & Compare

Easily search, group and aggregate metrics by any hyperparameter.


Activity view and a full dashboard for all your experiments.

How it works

- Aim is a Python package.
- Use it to track any dictionaries and metrics.
- Only two functions are needed to integrate it with your training code.
- Works with any Python script and any ML framework.
- Stores metadata logs locally.
- Comes with a powerful experiment comparison UI.

pip3 install aim

from aim import Run

run = Run()

# Save inputs, hparams or any other 'key: value' pairs
hyperparam_dict = {'learning_rate': 0.001, 'batch_size': 32}  # example values
run['hparams'] = hyperparam_dict

for epoch_number in range(10):
    # Log metrics to visualize performance
    metric_value = 1.0 / (epoch_number + 1)  # placeholder for a real metric, e.g. loss
    run.track(metric_value, name='metric_name', epoch=epoch_number)

aim up


Dashboard and Explore: full research context at hand

Use the dashboard to see your activity, search instantly by clicking on activity slots, or search by run/experiment.


Use Explore to view groups of experiments, compare and play with the runs/metrics.


Explore is the most advanced open-source AI experiment comparison tool available!

Search, Group and Aggregate

Search through everything you have tracked using Aim's pythonic query language. Super easy to use.
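As an illustration, a query in Aim's pythonic query language is an ordinary Python expression typed into the Explore search bar. The field names below (`metric.name`, `run.hparams.learning_rate`, `run.hparams.batch_size`) are hypothetical examples; in practice they mirror whatever you actually logged:

```python
# A sketch of an Explore search in Aim's pythonic query language.
# Field names are hypothetical -- they reflect what you logged,
# e.g. run['hparams'] = {'learning_rate': ...}.
query = 'metric.name == "loss" and run.hparams.learning_rate > 0.0001'

# Because the query is plain Python-expression syntax, it composes
# like any string:
stricter = query + ' and run.hparams.batch_size == 32'
print(stricter)
```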

Group and aggregate thousands of metrics to quickly see trends across hyperparameter-sensitive runs.
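Conceptually, "group and aggregate by hyperparameter" means bucketing runs by a hyperparameter value and summarizing each bucket's metric values. The plain-Python sketch below illustrates the idea (it is not Aim's implementation, and the run data is made up for the example):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical tracked runs: each has hyperparameters and a final metric.
runs = [
    {'hparams': {'lr': 0.01},  'final_loss': 0.42},
    {'hparams': {'lr': 0.01},  'final_loss': 0.38},
    {'hparams': {'lr': 0.001}, 'final_loss': 0.25},
    {'hparams': {'lr': 0.001}, 'final_loss': 0.27},
]

# Group the runs by the 'lr' hyperparameter...
groups = defaultdict(list)
for r in runs:
    groups[r['hparams']['lr']].append(r['final_loss'])

# ...then aggregate each group (here: mean loss per learning rate)
# to see the trend at a glance.
aggregated = {lr: mean(losses) for lr, losses in groups.items()}
print(aggregated)
```

Aim's UI does this interactively over everything you have tracked, so you can regroup by any logged hyperparameter without writing code.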


Use subplots to compare different metrics of the same runs.

Divide into subplots and monitor metrics from different perspectives.


Let's build Aim together

We need your help to constantly improve Aim for the community. If you are already using Aim or just getting started, join us to help build beautiful and effective open source tools for you.

> Join our Slack

> Get involved on GitHub


Aim Newsletter

Subscribe to our newsletter to receive regular updates about our latest releases, tutorials and blog posts!