Self-hosted, with full metadata access.
Explore & Compare
Easily search, group and aggregate metrics by any hyperparameter.
Activity view and a full dashboard for all experiments.
How it works
- Aim is a Python package.
- Use it to track metrics and any dictionaries of metadata.
- Only two functions are needed to integrate it with your training code.
- Works with any Python script and ML framework.
- Stores metadata logs locally.
- Comes with the most powerful experiment comparison UI.
pip3 install aim
from aim import Run

run = Run()

# Save inputs, hparams or any other 'key: value' pairs
run['hparams'] = hyperparam_dict

for step in range(10):
    # Log metrics to visualize performance
    run.track(metric_value, name='metric_name', epoch=epoch_number)
Dashboard and Explore: full research context at hand
Use the dashboard to see your activity, instantly search by clicking on activity slots, and search by run or experiment.
Use Explore to view groups of experiments, compare them, and dig into individual runs and metrics.
Explore is the most advanced open-source AI experiment comparison tool available!
Search, Group and Aggregate
Search through everything you have tracked using Aim's Pythonic query language. It is super easy to use.
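The query language is made of plain Python-style expressions over run and metric attributes. The snippet below builds a couple of illustrative query strings of the kind you would type into the Explore search bar; the hyperparameter names (learning_rate, batch_size) are hypothetical examples, and the exact attributes depend on what your runs tracked.

```python
# Illustrative Aim-style query strings. The hyperparameter names below are
# hypothetical; use whatever keys you stored under run['hparams'].
loss_query = 'metric.name == "loss" and run.hparams.learning_rate < 0.001'
acc_query = 'metric.name == "accuracy" and run.hparams.batch_size == 64'

print(loss_query)
print(acc_query)
```

Because queries are expressions over the metadata you logged yourself, any `key: value` pair saved on a run is searchable.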
Group and aggregate thousands of metrics to quickly spot trends across hyperparameter-sensitive runs.
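Conceptually, grouping and aggregation work like this: runs are bucketed by a hyperparameter value, and each bucket's metric curves are collapsed into an aggregate such as a per-step mean. The sketch below is plain Python over made-up run data, not Aim's actual API; it only illustrates the idea behind the UI.

```python
from collections import defaultdict

# Hypothetical tracked runs: each has hparams and a per-step loss curve.
runs = [
    {'hparams': {'lr': 0.01}, 'loss': [1.0, 0.6, 0.4]},
    {'hparams': {'lr': 0.01}, 'loss': [0.9, 0.7, 0.5]},
    {'hparams': {'lr': 0.1},  'loss': [1.2, 1.1, 1.0]},
]

# Group the metric curves by the 'lr' hyperparameter.
groups = defaultdict(list)
for run in runs:
    groups[run['hparams']['lr']].append(run['loss'])

# Aggregate each group into a mean curve, step by step.
mean_curves = {
    lr: [sum(step_values) / len(step_values) for step_values in zip(*curves)]
    for lr, curves in groups.items()
}

print(mean_curves)
```

With two runs at lr=0.01, the aggregate curve is the per-step average of their losses, which is exactly the kind of trend line Explore draws for a group.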
Use subplots to compare different metrics of the same runs.
Divide the view into subplots and monitor metrics from different perspectives.