An easy-to-use & supercharged open-source experiment tracker

Aim logs your training runs, enables a beautiful UI to compare them and an API to query them programmatically.

Check out our GitHub
Integrations: PyTorch, PyTorch Lightning, Hugging Face, TensorFlow, Keras, XGBoost, Amazon, MLflow, TensorBoard

Get started
in under a minute and on any environment

# Install Aim from PyPI (run in your shell)
pip install aim

import aim
import math

# Initialize a new run
run = aim.Run()

# Log hyper-parameters
run["hparams"] = {
    "learning_rate": 0.001,
    "batch_size": 32,
}

# Log metrics
for step in range(100):
    run.track(math.sin(step), name='sine')
    run.track(math.cos(step), name='cosine')

# Launch the Aim UI (run in your shell)
aim up

Now check out our GitHub repo or documentation to learn more.

Why use Aim?

Compare runs easily to build models faster

  • Group and aggregate 100s of metrics
  • Analyze and learn correlations
  • Query with easy pythonic search
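The pythonic search mentioned above evaluates a boolean expression against each run's attributes. As a rough conceptual sketch in plain Python (this is not Aim's implementation, and the run data and field names below are made up for illustration):

```python
# Conceptual sketch of pythonic search: keep only the runs for which
# a boolean expression over their attributes is true.
runs = [
    {"hash": "a1", "hparams": {"learning_rate": 0.001, "batch_size": 32}},
    {"hash": "b2", "hparams": {"learning_rate": 0.01,  "batch_size": 64}},
]

# Similar in spirit to a query like: run.hparams.learning_rate < 0.005
matched = [r for r in runs if r["hparams"]["learning_rate"] < 0.005]

print([r["hash"] for r in matched])  # -> ['a1']
```

In Aim the same style of expression is typed into the UI's search bar or passed to the SDK, and it is evaluated against every tracked run.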

Deep dive into details of each run for easy debugging

  • Explore hparams, metrics, images, distributions, audio, text, ...
  • Track plotly and matplotlib plots
  • Analyze system resource usage
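System resource tracking can be pictured as periodic sampling of process statistics alongside the training metrics. A stdlib-only sketch of the idea (Aim performs this sampling automatically; `sample_system_stats` is a hypothetical helper, not part of Aim's API):

```python
import resource

def sample_system_stats():
    # Snapshot of this process's resource usage (Unix-only stdlib module);
    # illustrates the kind of per-step stats an experiment tracker records.
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {
        # Peak resident set size (KB on Linux, bytes on macOS)
        "max_rss": usage.ru_maxrss,
        # CPU seconds spent in user mode
        "cpu_user_s": usage.ru_utime,
    }

stats = sample_system_stats()
```

A tracker would record a sample like this at a fixed interval and plot it next to the run's metrics.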

Have all relevant information centralized for easy governance

  • Centralized dashboard to view all your runs
  • Use SDK to query/access tracked runs
  • You own your data: Aim is open source and self-hosted
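The SDK access mentioned above can be sketched with Aim's `Repo` class. A minimal sketch, assuming Aim 3.x is installed and a tracked `.aim` repository exists in the current directory (the path and query string below are placeholders):

```python
from aim import Repo

# Open the local .aim repository (path is an assumption; adjust as needed)
repo = Repo(".")

# Query tracked metrics with a pythonic expression
for run_metrics in repo.query_metrics('metric.name == "sine"').iter_runs():
    for metric in run_metrics:
        steps, values = metric.values.sparse_numpy()
        print(metric.run.hash, len(values))
```

The same repository can be queried from any machine that can read it, which is what makes the self-hosted, centralized setup practical.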

Subscribe to Our Newsletter

Subscribe to our newsletter to receive regular updates about our latest releases, tutorials and blog posts!

Subscribe