An easy-to-use &
supercharged open-source
ML experiment tracker

Aim logs your training runs, provides a beautiful UI to compare them, and offers an API to query them programmatically.

⭐️ Star Aim on GitHub
Out now: Aim 3.10
Get started in under a minute, on any environment
Check out our GitHub repo or documentation to learn more
pip install aim
from aim import Run
# Initialize a new run
run = Run()
# Log run parameters
run["hparams"] = {
    "learning_rate": 0.001,
    "batch_size": 32,
}
# Log metrics
for i in range(10):
    run.track(i, name='loss', step=i, context={ "subset":"train" })
    run.track(i, name='acc', step=i, context={ "subset":"train" })
aim up

Why use Aim?

Compare runs easily to build models faster
  • Group and aggregate 100s of metrics
  • Analyze and learn correlations
  • Query with easy pythonic search
Check out our GitHub
Deep dive into details
of each run for easy debugging
  • Explore hparams, metrics, images,
    distributions, audio, text, ...
  • Track plotly and matplotlib plots
  • Analyze system resource usage
Check out our GitHub
Have all relevant information
centralized for easy governance
  • Centralized dashboard to view all your runs
  • Use SDK to query/access tracked runs
  • You own your data - Aim is open source and self-hosted
Check out our GitHub


Play with Aim before installing it!
Check out our demos to see the full functionality

Machine translation

Training logs of a neural translation model (from WMT'19 competition).


Lightweight GAN

Training logs of the 'lightweight' GAN proposed at ICLR 2021.

FastSpeech 2

Training logs of Microsoft's "FastSpeech 2: Fast and High-Quality End-to-End Text to Speech".

Simple MNIST

Simple MNIST training logs.

Subscribe to Our Newsletter

Subscribe to our newsletter to receive regular updates about our latest releases, tutorials and blog posts!