
An easy-to-use & supercharged open-source experiment tracker
Aim logs your training runs, provides a beautiful UI to compare them, and an API to query them programmatically.
Check out our GitHub
Get started in under a minute, in any environment
pip install aim
import aim
import math
# Initialize a new run
run = aim.Run()
# Log hyper-parameters
run["hparams"] = {
    "learning_rate": 0.001,
    "batch_size": 32,
}
# Log metrics
for step in range(100):
    run.track(math.sin(step), name='sine')
    run.track(math.cos(step), name='cosine')
# Launch the web UI
aim up
Now check out our GitHub repo or the documentation to learn more.
Why use Aim?
Compare runs easily to build models faster
- Group and aggregate 100s of metrics
- Analyze and learn correlations
- Query with easy pythonic search

Deep dive into details of each run for easy debugging
- Explore hparams, metrics, images, distributions, audio, text, ...
- Track plotly and matplotlib plots
- Analyze system resource usage

Have all relevant information centralized for easy governance
- Centralized dashboard to view all your runs
- Use SDK to query/access tracked runs
- You own your data - Aim is open source and self-hosted.

Demos
Play with Aim before installing it! Check out our demos to see the full functionality.
Machine translation
Training logs of a neural translation model (from the WMT'19 competition).
lightweight-GAN
Training logs of the 'lightweight' GAN proposed at ICLR 2021.
FastSpeech 2
Training logs of Microsoft's "FastSpeech 2: Fast and High-Quality End-to-End Text to Speech".
Simple MNIST
Simple MNIST training logs.

