MLRun Community Edition

Build, deploy, and scale AI, without managing infrastructure

Why use MLRun CE?

Works with your stack

Plug into your existing tools: Kubeflow, Spark, Kafka, Grafana, and more. No lock-in, no rewrites. Need more? Add any third-party tool to MLRun’s flexible, open architecture.

Infrastructure, simplified

Run locally or on Kubernetes with a single command. No need to configure pipelines, containers, or clusters from scratch.
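A minimal sketch of what this looks like with the MLRun Python SDK, assuming a placeholder `trainer.py` script with a `train` handler; the project name and parameters are illustrative:

```python
import mlrun

# Create (or load) a project; the name and context path are placeholders.
project = mlrun.get_or_create_project("quickstart", context="./")

# Wrap existing code as an MLRun function (kind="job" runs as a Kubernetes job).
fn = mlrun.code_to_function(
    name="trainer",
    filename="trainer.py",  # placeholder script containing a `train` handler
    kind="job",
    image="mlrun/mlrun",
)

# The same call runs in-process on your laptop (local=True)
# or on the cluster (local=False) once MLRun CE is installed.
run = fn.run(handler="train", params={"epochs": 3}, local=True)
```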

Effortless scaling for any workload

Whether you’re running batch jobs or real-time applications, MLRun handles scaling, resource allocation, and performance tuning for you.
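As a sketch, resource requests and autoscaling bounds can be declared directly on a function; the function name, image, and values below are illustrative assumptions:

```python
import mlrun

# A real-time serving function; the name and image are placeholders.
serving = mlrun.new_function("my-endpoint", kind="serving", image="mlrun/mlrun")

# Declare compute requests/limits instead of hand-writing Kubernetes specs.
serving.with_requests(cpu="500m", mem="1G")
serving.with_limits(cpu=2, mem="4G")

# Let the real-time engine scale between replica bounds under load.
serving.spec.min_replicas = 1
serving.spec.max_replicas = 4
```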

At the core of MLRun CE

MLRun is an open-source AI orchestration framework for managing ML and generative AI applications across their lifecycle. It automates data preparation, model tuning, customization, validation and optimization of ML models, LLMs and live AI applications over elastic resources.
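A sketch of that lifecycle in code; the project name, script files, and dataset path below are placeholders, and `workflow.py` is assumed to chain the steps together:

```python
import mlrun

project = mlrun.get_or_create_project("lifecycle-demo", context="./")

# Register the project's functions; the scripts are placeholders.
project.set_function("data_prep.py", name="data-prep", kind="job", image="mlrun/mlrun")
project.set_function("train.py", name="train", kind="job", image="mlrun/mlrun")

# Register and run a multi-step workflow (data prep -> train -> validate).
project.set_workflow("main", "workflow.py")
project.run("main", arguments={"dataset": "s3://my-bucket/data.csv"}, watch=True)
```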

Get started with working AI pipelines

Use ready-to-run examples for LLMs, batch pipelines and real-time applications
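For example, ready-made functions can be pulled from the MLRun function hub with a `hub://` URI; the `describe` function, input name, and dataset URL below are used purely for illustration:

```python
import mlrun

project = mlrun.get_or_create_project("examples", context="./")

# Import a ready-made function from the MLRun function hub.
describe = mlrun.import_function("hub://describe")

# Run it against a sample dataset; the URL and input name are placeholders.
describe.run(inputs={"table": "https://example.com/data/iris.csv"}, local=True)
```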

MLRun connects orchestration, data, compute and monitoring in one unified workflow

Components

  • Projects management
  • Batch & real-time functions
  • Experiment tracking (see the sketch after this list)
  • Model monitoring
  • Alerts & notifications
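
To illustrate experiment tracking, here is a minimal sketch of a handler whose results and model artifacts are captured by MLRun; the metric value and model payload are placeholders:

```python
import mlrun

def train(context: mlrun.MLClientCtx, lr: float = 0.01):
    """Placeholder handler showing how MLRun tracks runs."""
    accuracy = 0.95  # stand-in for a real training metric

    # Results and models are logged to the project's experiment tracking UI.
    context.log_result("accuracy", accuracy)
    context.log_model(
        "my-model",
        body=b"<serialized model bytes>",  # placeholder payload
        model_file="model.pkl",
    )
```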