Plug into your existing tools: Kubeflow, Spark, Kafka, Grafana, and more. No lock-in, no rewrites. Need more? Add any third-party tool to MLRun's flexible, open architecture.
Run locally or on Kubernetes with a single command. No need to configure pipelines, containers, or clusters from scratch.
Whether you’re running batch jobs or real-time applications, MLRun handles scaling, resource allocation, and performance tuning for you.
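As a minimal sketch of the local-first workflow described above (assuming MLRun is installed via `pip install mlrun`; the handler name `train`, job name `trainer`, and the logged `accuracy` value are illustrative, not part of MLRun itself):

```python
def train(context=None):
    # Toy training handler: computes a metric and, when executed through
    # MLRun, logs it to the run's results so it shows up in the MLRun UI.
    accuracy = 0.95  # illustrative placeholder value
    if context is not None:
        context.log_result("accuracy", accuracy)
    return accuracy

def run_local():
    # Requires MLRun installed. code_to_function wraps this file as an
    # MLRun job; run(local=True) executes it in-process with one call.
    # Pointing the same function at a Kubernetes runtime (and dropping
    # local=True) runs it on a cluster without code changes.
    import mlrun
    fn = mlrun.code_to_function(
        "trainer", kind="job", filename=__file__, handler="train"
    )
    return fn.run(local=True)
```

The same `train` handler is plain Python, so it can also be called directly without MLRun during development.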
MLRun is an open-source AI orchestration framework for managing ML and generative AI applications across their lifecycle. It automates data preparation and the tuning, customization, validation and optimization of ML models, LLMs and live AI applications over elastic resources.
Use ready-to-run examples for LLMs, batch pipelines and real-time applications.