
Fine-Tuning in MLRun: How to Get Started

How to fine-tune an existing LLM quickly and easily with MLRun, with two practical, hands-on examples.

Fine-tuning is the practice of training a pre-existing AI model on new, focused data. By enhancing the model’s domain-specific performance, organizations can make their LLMs production-ready and turn their generative AI applications into a competitive differentiator. In this blog, we’ll explore how MLRun simplifies and accelerates fine-tuning workflows with two practical, hands-on examples, which you can easily follow and replicate.

What is Fine-Tuning? Why Does it Matter?

Fine-tuning is a machine learning method where a pre-trained model is further trained on a specialized dataset to adapt it to specific tasks or domains. Fine-tuning involves modifying the model’s internal parameters based on new data (rather than the model’s output), to enhance its performance for particular applications. This makes the model more specialized for specific tasks and valuable for business use cases.

Fine-tuning is considered a resource-efficient method because it leverages pre-trained models, rather than having to train a new model from scratch. However, resources for the fine-tuning process itself need to be managed efficiently to ensure cost-effectiveness.

In AI pipelines, fine-tuning takes place in the development pipelines, after data is collected and initial models are trained. Before deploying the model, it’s recommended to evaluate the fine-tuned model and ensure it meets required standards.

How Can MLRun Help Fine-Tune Your LLM?

MLRun provides pre-made functions that tune the model, track the model and dataset, dynamically allocate GPUs in a K8s cluster, and more. MLRun can then serve the newly fine-tuned model at an endpoint and monitor it over time with custom metrics and guardrails (see example #1 below).
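For instance, here is a minimal, hedged sketch of wrapping a fine-tuning script as an MLRun job with GPU resources; the script name, handler and parameters are placeholders, not MLRun's pre-made functions.

import mlrun

project = mlrun.get_or_create_project("llm-fine-tuning", user_project=True)

# Register a (hypothetical) fine-tuning script as a Kubernetes job
fine_tune_fn = project.set_function(
    "fine_tune.py",               # placeholder training script
    name="fine-tune-llm",
    kind="job",
    image="mlrun/mlrun-gpu",
    handler="train",              # placeholder handler inside the script
)
fine_tune_fn.with_limits(gpus=1, mem="32G")   # let MLRun allocate GPU/memory on the cluster

# Run it; MLRun tracks the run, its parameters and any logged model/dataset artifacts
run = project.run_function(fine_tune_fn, params={"epochs": 1})
print(run.outputs)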

You can find these functions in the MLRun function hub and documentation. Below, we show two examples of how to fine-tune with MLRun.

How to Fine-Tune with MLRun

Let’s take a look at two examples of how to fine-tune with MLRun. You can follow along with the tutorials on your own.

Example 1: Automated Monitoring and Fine-Tuning Loop

Fine-tuning can take place after an application is developed and in the monitoring phase. By observing how the model performs in production, such as how it handles edge cases, evolving user behavior, or domain-specific nuance, teams can fine-tune the model to correct drift, improve accuracy and adapt to changing conditions. This ongoing refinement ensures the model stays aligned with business goals and user expectations over time, allowing for continuous improvement based on real-world feedback.

Here’s how it works:

  • The deployed model is monitored in real time.
  • When the monitor detects poor performance, it triggers a workflow that initiates fine-tuning.
  • After training, the improved model is automatically redeployed, and its performance is reassessed.

This setup creates a continuous learning loop where the model self-corrects based on real-world usage, ensuring it stays aligned with domain-specific behavior.

In this example, with a banking gen AI chatbot, the application is evaluated to ensure the chatbot only responds to banking-related queries. If it answers irrelevant questions, an automated feedback loop using ORPO kicks in to fine-tune and redeploy the model.

ORPO (Odds Ratio Preference Optimization) integrates supervised fine-tuning (SFT) and preference alignment by leveraging a simple log odds ratio term to create a penalty for disfavored responses and a strong adaptation signal for the chosen response. This approach is computationally efficient and doesn’t require a separate reference model or reward model, making it a simpler and more powerful alternative to methods like DPO or RLHF.
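For illustration, here is a minimal sketch of ORPO fine-tuning using the Hugging Face TRL library; the model name, dataset file and hyperparameters are placeholder assumptions rather than the demo's actual configuration, and argument names can differ between TRL versions.

from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_model = "meta-llama/Llama-2-7b-hf"                  # placeholder base model
model = AutoModelForCausalLM.from_pretrained(base_model)
tokenizer = AutoTokenizer.from_pretrained(base_model)

# ORPO expects preference data: each record has a prompt, a chosen (on-topic)
# response and a rejected (e.g. off-topic) response flagged by monitoring
dataset = load_dataset("json", data_files="banking_preferences.jsonl", split="train")

config = ORPOConfig(
    output_dir="./orpo-banking-chatbot",
    beta=0.1,                          # weight of the odds-ratio penalty term
    per_device_train_batch_size=2,
    num_train_epochs=1,
)
trainer = ORPOTrainer(model=model, args=config, train_dataset=dataset, tokenizer=tokenizer)
trainer.train()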

Example 2: Lightweight Fine-Tuning Pipeline

Fine-tuning can help adapt a model to a required use case, before application deployment. This allows for more accurate, relevant and context-aware responses tailored to the specific needs of the target domain or user group.

Here’s how it works:

  • A small dataset is created or obtained.
  • Fine-tuning is performed, for example via LoRA adapters and 8-bit quantization, to reduce training overhead. LoRA (low-rank adaptation) adapts ML models for specific uses without retraining the entire model; see the sketch after this example.
  • Training progress and results are automatically tracked and stored.
  • Once tuned, the model is deployed and tested for tone adaptation.

This is ideal for quick iterations, experimentation with model behavior (e.g., tone or persona), or domain adaptation without full-scale retraining.

In this example, the model’s outputs are transformed to emulate a specific tone of voice (here, pirate speak). The pre-trained LLM (LLaMA 2 7B) is fine-tuned using a customized dataset (Databricks Dolly-15k).
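As a rough sketch (assuming the Hugging Face transformers and peft libraries; the hyperparameters below are illustrative, not the demo's exact values), the LoRA-plus-8-bit setup looks roughly like this:

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # load weights in 8-bit
    device_map="auto",
)

# Attach small trainable LoRA adapters instead of updating all 7B parameters
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()   # typically well under 1% of the full model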


MLRun v1.8 Release: with Smarter Model Monitoring, Alerts and Tracking

MLRun v1.8 adds features to make LLM and ML evaluation and monitoring more accessible, practical and resource-efficient.

Today we’re announcing MLRun 1.8, now available to the community. This latest version adds to the series of improvements to LLM monitoring released in 1.7, with in-platform alerts, plus several more improvements that help you track and evaluate models and navigate the platform with ease.

Read all the details below:

1. In-Platform Alerts

MLRun v1.7 introduced a flexible monitoring infrastructure, the ability to monitor unstructured data, metrics customization, and more.

MLRun v1.8 builds on these capabilities and now includes monitoring alerts built into the MLRun UI.

Users can set up alerts on criteria such as:

  • Performance degradation
  • Resource spikes
  • Compliance indicators
  • And more

Once alerted, users can click through to the flagged issues and investigate directly in MLRun, without having to context switch to external monitoring systems.

2. Experiment Tracking for Document-based Models

Experiment tracking is used to measure metrics, compare results, reproduce experiments and optimize models. This is a core MLRun capability.

Now, MLRun v1.8 supports experiment tracking for document-based models, like LLMs. This is enabled through the LangChain API, which is integrated into vector databases.

Users can track their documents as artifacts, with metadata like:

  • Loader type
  • Producer information
  • Collection details
  • And more

3. Model Evaluation Before Deployment

Debugging LLMs is a complicated process. It requires: 1) deployment, 2) realizing there’s an issue, 3) identifying the root cause, 4) analysis and evaluation, 5) fixing, and 6) redeploying. This process is long, technologically complex and resource-intensive. It’s also prone to errors.

In MLRun v1.8, this process is shorter and more resource-efficient. Users can now monitor and evaluate models before deploying them. MLRun runs the model, returning performance results without consuming unnecessary compute resources.

4. Enhanced UI Experience with Pagination

Managing large-scale projects across teams requires a reliable and user-friendly system.

Following user requests, MLRun v1.8 includes pagination, to enhance responsiveness and reduce scrolling and performance bottlenecks arising from long page loading times.


Bringing (Gen) AI from Laptop to Production with MLRun

Find out how MLRun replaces manual deployment processes, allowing you to get from your notebook to production in just a few lines of code.

MLRun is an open-source framework that orchestrates the entire generative AI lifecycle, from development to deployment in Kubernetes. In this article, we’ll show how MLRun replaces manual deployment processes, allowing you to get from your notebook to production in just a few lines of code.

What is the Traditional AI Application Lifecycle?

As a data professional, you’re probably familiar with the following process:

  1. You want to run a batch fine-tuning job for your LLM, but your code requires a lot of memory, CPUs and/or GPUs. It also needs a number of Python packages to run and fine-tune the LLM.
  2. You must run your code on your K8s cluster because your local computer doesn’t have enough resources. For this, you need to create a K8s resource and maybe a new Docker image with the new Python requirements.
  3. Once you’ve successfully run the function on the K8s cluster, you need to version and track your experiment results (in this case the LLM and fine-tune job results). This is essential to understand where and why you need to improve your fine-tune job.
  4. In some projects, the model inference is done in a batch, in others it’s in real-time. If this is a real-time deployment, you need to create a K8s resource that serves the model with the user prompts or create a batch job that does the same. Both should run in the K8s cluster for production testing, and you’ll need to manage those resources by yourself.
  5. Once you serve the model, you need to monitor and test how your model is behaving and if the model outputs meet your criteria for deployment in production, using accuracy, performance or other custom metrics.
  6. Once your project is ready to deploy in production systems, you need to run some of the steps above in the production cluster again.

What are the Challenges in the Traditional AI Lifecycle?

The traditional process described above is fraught with challenges:

  • Engineering Technologies and Resources – Data teams, DevOps and engineers each use different technologies and frameworks. This creates technological friction and inconsistency across AI pipelines and silos between teams, demanding a solution to streamline and automate the entire process.
  • Resource Management – AI models, and especially LLMs, often require substantial memory and GPU resources, which are in low supply and costly. Plus, compute requirements are not consistent throughout the workflow. For example, data processing and training might require more resources. Enterprise teams need a solution to auto-scale and to allocate and monitor deployment resources easily.
  • Versioning and Experiment Tracking – Distributed systems are convoluted and dispersed and teams lack holistic visibility into them, making it complex to track changes, metrics and results for each model or artifact. This requires versioning capabilities and artifact management solutions.
  • Data Privacy – LLMs may handle sensitive user data, which needs to be safeguarded to protect user privacy and abide by compliance requirements. Guardrails must be implemented in any live business application.
  • Monitoring – Production models can degrade over time due to data drift and changing real-world conditions, leading to poor performance. Plus, LLMs might hallucinate or have inherent bias, requiring LiveOps and guardrails.
  • Kubernetes Complexity – Deploying models or running a user workflow in production requires extensive understanding of Kubernetes, like the ability to manage and deploy K8s resources, collect necessary logs and tune resource requests and limits. Most data professionals have expertise in other technologies, so it is challenging for them to run a job effectively, serve the model and understand how their code behaves in production for monitoring purposes.

The Core Advantages of MLRun

MLRun addresses these challenges by allowing you to easily run your local code in K8s production environments as a batch job or a remote real-time deployment. MLRun eliminates the need to worry about the complexity of Kubernetes, abstracting and streamlining the process. MLRun also supports scaling and configuring resources, such as GPU, memory and CPU. It provides a simple way to scale resources, without requiring users to understand the inner workings of Kubernetes.

What’s left is simply to monitor the functionality and behavior of your AI system once it’s live, which can also take place in MLRun.

Here’s how MLRun achieves this:

  • Orchestration – MLRun orchestrates workflows across all AI development and deployment tasks like data preprocessing, model training and fine-tuning, serving, etc. These pipelines are modular and components can be swapped out and replaced, future-proofing the architecture.
  • Auto-Scaling – MLRun allows auto-scaling deployments across the Kubernetes cluster.
  • Containerized Environment – MLRun packages models, code and dependencies into containers for Kubernetes-based deployment.
  • Serverless Model Serving – MLRun integrates with Nuclio, a high-performance serverless framework, to enable lightweight and scalable deployments.
  • Version Control – MLRun provides built-in versioning for datasets, code and models, ensuring reproducibility.
  • Artifact Management System – MLRun manages the artifact registry and enables managing artifacts by types (models, datasets and others), labels and tags. In addition MLRun stores relevant metadata such as model features, stats and more.
  • Real-Time Monitoring – MLRun integrates monitoring capabilities to track model performance, latency and resource utilization of individual workflows and deployments, and more – in real time.
  • Logs Forwarding – MLRun supports logs forwarding, and a clear and easy UI logs screen for debugging and checking your deployment logs.
  • Framework Integrations – MLRun integrates seamlessly with popular ML and deep learning frameworks like TensorFlow, PyTorch, Hugging Face and scikit-learn.

What is the AI model Lifecycle with MLRun? 

Here’s what the same process looks like, but with MLRun:

Before MLRun: You want to run a batch fine-tuning job for your LLM, but your code requires a lot of memory, CPUs and/or GPUs, plus a number of Python packages to run and fine-tune the LLM.
After MLRun: This flow is very simple. You only need to connect your local IDE to MLRun, create a project, create an MLRun function and run your code using the relevant resources. With this flow, you can develop and run your code in a Kubernetes cluster from the beginning of the development phase, with only a few lines of code.

Before MLRun: You must run your code on your K8s cluster because your local computer doesn’t have enough resources. For this, you need to create a K8s resource and maybe a new Docker image with the new Python requirements.
After MLRun: To run your code in a Kubernetes cluster, create an MLRun function that runs your Python code, then add the amount of resources (memory, CPU and GPU) and the Python requirements. MLRun will use those values, run your fine-tuning job in Kubernetes and manage the deployment.

Before MLRun: Once you’ve successfully run the function on the K8s cluster, you need to version and track your experiment results (the LLM and the fine-tuning job results). This is essential to understand where and why you need to improve your fine-tuning job.
After MLRun: Now that you have a model that has been fine-tuned by the MLRun function, you can track the model artifact in the MLRun artifact registry, with the model version, labels and model metrics.

Before MLRun: In some projects, model inference is done in batch, in others in real time. If this is a real-time deployment, you need to create a K8s resource that serves the model with the user prompts, or create a batch job that does the same. Both should run in the K8s cluster for production testing, and you need to manage those resources yourself.
After MLRun: In MLRun, you can do both. You can serve your LLM in real time, or collect the prompts and run the same flow in batch for LLM evaluation, in just a couple of lines of code.

Before MLRun: Once you serve the model, you need to monitor and test how your model is behaving and whether the model outputs meet your criteria for deployment in production, using accuracy, performance or other custom metrics.
After MLRun: Once you serve the model, monitor your LLM inputs and outputs and check the model’s performance and usage by enabling MLRun model monitoring. This is an essential part of model development, helping you understand whether you need to retrain the model or adjust its outputs so they meet your criteria for deployment in production.

Before MLRun: Once your project is ready to deploy in production systems, you need to run some of the steps above again in the production cluster.
After MLRun: Once your project is ready for production, you can easily move the same project configuration from the development system to the production system using MLRun CI/CD automation.

MLRun can take your code and run and manage your functions and artifacts in Kubernetes environments from your first deployment. This allows you to focus on development and decreases the time needed to deploy AI projects in production, while maintaining a production-first mindset.

How to Get Started with MLRun

1. On your laptop, install MLRun and configure your remote environment. Now you have your MLRun environment ready to develop your project from your laptop to production.

2. Create your MLRun project by using the MLRun SDK.
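As a minimal sketch (the API URL below is a placeholder), steps 1 and 2 can look like this:

import os
import mlrun

# Point the local SDK at the remote MLRun API (you can also keep this in an
# mlrun.env file and load it with mlrun.set_env_from_file("mlrun.env"))
os.environ["MLRUN_DBPATH"] = "https://mlrun-api.example.com"

project = mlrun.get_or_create_project("my-llm-project", user_project=True)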

3. Run your Python code as an MLRun function. For a remote or batch function, you can run your code locally or on your K8s cluster from the beginning of the development phase (always keep a production-first mindset). You can also log models and other artifact types to your experiment tracking system.

4. Based on the run and the experiment tracking, you can monitor your results and make the path to production easier and more convenient.


MLRun Customer Support Gen AI Copilot

A generative AI copilot is an interactive gen AI assistant that is designed to amplify human capabilities while working together interactively. The term “gen AI co-pilot” is inspired by the aviation concept of a copilot, who assists the main pilot to ensure smooth and successful flying. You can develop your own copilot with open-source MLRun, which will orchestrate the AI pipelines at scale with pre-built components.

In this blog post, we’ll dive into the concept of a gen AI copilot and show a demo of building one with MLRun.

What is a Gen AI Copilot?

A copilot in generative AI is an AI-powered assistant designed to work interactively and collaboratively with humans in real-time to enhance our capabilities. This could include conducting tasks like automating repetitive assignments, generating drafts, retrieving information, transcription of conversations, analyzing data, providing insights, writing and testing code, or generating content. With a copilot, we can work faster, more effectively and at a larger scale.

Generative AI Copilot Examples

Some of the most popular copilots in use today are:

  • Microsoft Copilot: Assists with document creation, data analysis and communication.
  • GitHub Copilot: A coding assistant that helps developers write, debug and optimize code.
  • Design Copilots: Tools like Canva’s AI features that assist in creating visually appealing designs based on user input.
  • Customer Support Copilots: AI systems that help agents by suggesting responses, retrieving data, or automating routine queries.

Customer Support Gen AI Copilot Workflows

A gen AI copilot leverages LLMs to understand user input, process it, and generate relevant outputs for tasks such as answering questions, creating content, or writing code. It combines specialized tools or APIs to tailor responses. With RAG, it can also fetch and incorporate real-time data, ensuring accuracy and relevance.

The system adapts through user feedback, integrates with external tools for automation, and maintains privacy and compliance standards to deliver secure, efficient, and personalized assistance across various domains.

Workflows are the sequences of tasks or actions that the copilot automates or assists with, based on user input and specific goals. They typically involve multi-step operations, integrations with external tools, and contextual understanding to ensure tasks are completed effectively.

 

A customer support copilot, for example, might include the following workflows:

  1. Client Profile Retrieval – Automatically fetch detailed client information, such as name, address, account details, family status, preferences and previous engagements with the organization. This involves retrieving data from CRM systems, previous interactions (e.g., emails, chats or calls) and other internal databases. The goal is to provide the support representative with a holistic view of the client to personalize the conversation.
  2. Transcribing the Conversation – Creating a transcript of the conversation so it can be used for further analysis and any required follow-ups.
  3. Retrieving Information from Online and Internal Sources – Identifying requirements in the call, like documents or benchmarks, and bringing them to the human representative to use on the call and enhance the customer experience.
  4. Follow-up Email Management – Automating personalized email communications with action items based on the conversation. The copilot will also ensure these emails are clear, concise and aligned with the tone and professionalism of the organization.
  5. Data Compliance and Logging – Ensuring all client interactions adhere to regulatory standards. For example, automatically logging the client interaction into the organization’s system while ensuring compliance with data protection and regulatory standards (e.g., GDPR, HIPAA), flagging any sensitive or non-compliant elements for review and maintaining a secure audit trail for accountability.

Why Build a Copilot with MLRun?

MLRun is an open-source AI orchestration framework that simplifies and accelerates the development and deployment of AI models. Building a copilot with MLRun allows for:

  1. End-to-End AI Workflow Management – MLRun provides an integrated environment to manage the entire machine learning lifecycle: data preparation, model training and validation, deployment and monitoring.
  2. Scalability – MLRun leverages K8s for scalable and distributed processing, enables scalable, event-driven workflows without infrastructure overhead and works with public cloud vendors for elasticity.
  3. Collaboration and Reproducibility – MLRun facilitates collaboration among data scientists, ML engineers and developers by organizing code configurations and experiments in shared environments, versioning and automations. 
  4. Customizability – Every copilot has unique requirements. MLRun enables the creation of tailored pipelines and algorithms specific to the co-pilot’s domain (e.g., customer support, code generation).
  5. Pre-Built Components – MLRun provides ready-to-use functions and templates for common machine learning tasks: preprocessing, model training, evaluation, real-time or batch inference pipelines, monitoring and logging, and more.
  6. Real-Time Capabilities – MLRun integrates with real-time data streams and deploys optimized serving functions for fast and reliable inference.
  7. Monitoring and Observability – MLRun offers comprehensive monitoring for copilots in production, covering hallucination, bias, toxicity, performance and more. It also provides tools to retrain and redeploy models as needed.

Use Case Example: Wealth Management Customer Support Copilot

Customer service copilots can serve multiple use cases, from a 24/7 support call center to escalation management to global multilingual support. In the example below, you can see a demo of an MLRun copilot. It shows what such a copilot could look like in a private banking client relationship management scenario.

Meet Miss Chen, who recently invested in green energy bonds and is looking for advice on reinvesting additional funds. Together with the copilot, the banker identifies and recommends a relevant investment opportunity based on the client’s history. In addition, the co-pilot helps the agent anticipate future opportunities, like biotech investments, based on client interests, which expands the bank’s role in the client’s portfolio.

 The banker also proactively shares research materials from reputable sources, retrieved by the copilot, to support informed decision-making. This fosters a sense of trust and expertise while generating more business for the bank.

The copilot emphasizes personalized service, strategic investment advice and proactive support for the client’s needs. It helps the human agent provide personal touches, such as acknowledging the client’s daughter’s achievements and offering tailored solutions, to build trust and loyalty. This long-term retention through proactive service ensures steady revenue from high-net-worth clients. 

In the end, the co-pilot can create a hyper-personalized follow-up email based on the conversation for accountability and to close the deal.

You can watch the demo of this copilot here.


How to Connect MLRun to an External Monitoring Application


As organizations transition from experimenting with LLMs to deploying gen AI applications and driving business value, data professionals face operationalization challenges. These include hallucinations, bias, model misuse, PII leakage, harmful content, inaccuracy, and more. Detecting and addressing these issues requires robust monitoring solutions in the AI pipeline. 

By ensuring monitoring is part of AI pipeline orchestration, data professionals can implement a continuous feedback loop. The monitoring results can be used to fine-tune models, ensuring they are high-performing, reliable and accurate. This ensures risks are mitigated before reaching production, allowing for the integrity and operational stability of gen AI applications. 

MLRun can integrate with any monitoring application, regardless of its ecosystem. This means users can use MLRun to orchestrate their gen AI application, including tasks like data preparation, model tuning, customization, validation and model optimization. Then, they can view monitoring results either in MLRun or their monitoring application of choice, and feed the results back to the AI pipeline.

How to Integrate Your Monitoring Application with MLRun: 3 Steps to Success

Integrating MLRun with an external monitoring application is simple and straightforward. Here’s how it works:

Step 1: Find the SDK or API of Your External Application

Integrating with your monitoring application takes place through their SDK or API. Explore and identify your application’s SDK or find the API endpoints, request payloads and response structure in the documentation.

Step 2: Define a Python Class for Integration

In MLRun, implement a Python class that inherits from MLRun’s ModelMonitoringApplication base class.

This class must include the do_tracking method, which defines the logic for interacting with the external application through the API or SDK.

The do_tracking method returns a list of key-value metrics and outcomes, including details like detected drift or model performance metrics. This abstraction ensures compatibility with any monitoring application.

Step 3: Register and Deploy the Monitoring Function

After defining the Python class, register it as a monitoring function in MLRun. Use the set_model_monitoring_function method to add the function to your MLRun project and deploy it.

Once deployed, the monitoring application integrates seamlessly into the MLRun workflow.
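Here is a hedged sketch of the pattern described in steps 2 and 3; exact class names, import paths and result types vary between MLRun versions, and the drift value below stands in for whatever your external application's SDK actually returns.

import mlrun
from mlrun.common.schemas.model_monitoring.constants import ResultKindApp, ResultStatusApp
from mlrun.model_monitoring.applications import (
    ModelMonitoringApplicationBase,
    ModelMonitoringApplicationResult,
)

class ExternalToolMonitoringApp(ModelMonitoringApplicationBase):
    def do_tracking(self, monitoring_context):
        # Here you would call your external tool's SDK/API with the sampled
        # inference data and translate its report into MLRun results
        drift_score = 0.2   # placeholder for the value returned by the external tool
        return [
            ModelMonitoringApplicationResult(
                name="data_drift",
                value=drift_score,
                kind=ResultKindApp.data_drift,
                status=ResultStatusApp.potential_detection,
            )
        ]

# Step 3: register the class as a monitoring function on the project and deploy it
project = mlrun.get_or_create_project("monitoring-demo", user_project=True)
project.set_model_monitoring_function(
    func="external_tool_app.py",            # file containing the class above
    application_class="ExternalToolMonitoringApp",
    name="external-monitor",
)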

You can see an example of how this works with open-source Evidently right here.

Why Integrate Your Monitoring Application with MLRun?

MLRun offers several key advantages for integrating external monitoring applications:

  1. Generic and Modular Design – Integrate any monitoring tool, whether it’s open-source, an industry-standard application or a custom-built solution.
  2. Ease of Integration – Developers can rely on SDKs or APIs provided by monitoring tools, ensuring compatibility without extensive rework.
  3. Centralized Monitoring – All monitoring activities, regardless of the tool, are centralized within the MLRun environment, allowing for fine-tuning of the LLM.
  4. Scalability – Organizations can adapt as their monitoring needs evolve, leveraging MLRun to integrate new tools as required.

Get Started Now

Model monitoring is foundational for maintaining reliable gen AI applications. MLRun simplifies the process by offering a generic, modular approach to integrating external monitoring applications. Whether your organization uses a market-leading tool or a custom-built solution, MLRun can fit seamlessly into your monitoring strategy.

Get started with MLRun today.


Launching MLRun 1.7: Gen AI and LLM Monitoring

V1.7 brings significant LLM monitoring enhancements, helping users ensure the integrity and operational stability of LLMs in production environments.

As the open-source maintainers of MLRun, we’re proud to announce the release of MLRun v1.7.  MLRun is an open-source AI orchestration tool that accelerates the deployment of gen AI applications, with features such as LLM monitoring, data management, guardrails and more. We provide ready-made scenarios that can be easily implemented by teams in organizations. This new release is packed with powerful features designed to make gen AI deployments more flexible and faster than ever before.

Specifically, V1.7 brings significant LLM monitoring enhancements, helping users ensure the integrity and operational stability of LLMs in production environments. Additional updates introduce performance optimizations, multi-project management, and more.

Read all the details below:

1. Flexible Monitoring Infrastructure

MLRun 1.7 introduces a new, flexible monitoring infrastructure that enables seamless integration of external tools and applications into AI pipelines, using APIs and pre-built integration points. This includes tools for external logging, alerting, metrics systems, etc. 

For instance, users can now:

  • Track custom metrics that are specifically tailored to business needs, such as user-defined success metrics or domain-specific KPIs.
  • Integrate with open-source tools like Evidently, which enables advanced tracking of model performance metrics (e.g., distribution shifts, data quality, and accuracy).
  • Leverage external logging services to centralize logs and improve the visibility of pipeline activities

2. Better Monitoring of Unstructured Data

Given that LLMs primarily handle unstructured data, one of the key advances in MLRun 1.7 is its enhanced ability to track this kind of data with more precision.

A common way to monitor LLMs is to create another model that would act as a judge. See a demo of how this works.

3. Endpoint Metrics UI and Customization

MLRun 1.7 introduces a new endpoint metrics UI. Its expanded endpoint monitoring capabilities allow users to:

  • Select and investigate different endpoint metrics, such as accuracy and response times.
  • View various metrics related to model endpoints, such as the number of activations or event counts.
  • Visualize trends through time series and histogram views
  • Customize the monitoring time frame, such as looking at data from the past week or another specified period.

For example, a time-series chart could indicate a bottleneck in the inference pipeline or model scaling issues.

The ability to track, visualize, and analyze endpoint performance enables teams to adjust operational parameters or retrain models as soon as performance starts to degrade. This reduces downtime or adverse effects in production environments.

With these capabilities, users can now customize their monitoring stacks per their business and tech stack requirements. Future releases will continue to enhance these capabilities, with more features and integrations for monitoring. This will allow for even greater flexibility and user control. So please share your feedback, so we can extend them based on your needs.

4. Spotlight: Gen AI Banking Chatbot Demo

See a gen AI banking chatbot that uses MLRun’s new monitoring capabilities for fine-tuning, ensuring it only answers banking-related questions. This helps address the risks associated with gen AI, like hallucinations, inaccuracies, bias, harmful content, and more.

Watch the demo here.

5. Simplified Docker Deployment Workflow

Version 1.7 simplifies the process of deploying Docker images, making it easier for users to run applications and models. Previously, deploying applications or models via Docker required manual configuration and integration steps with open-source Nuclio. Now, users can simply provide a Docker image and deploy it with minimal setup.

This improvement opens up development workflow possibilities. For example, users can more easily integrate custom UIs or dashboards that can interact with deployed models, allowing for more advanced and customized monitoring capabilities.

6. Cross-Project View

For enterprises working on multiple projects across diverse teams, keeping track of workflows and active jobs can become overwhelming. MLRun 1.7 introduces a cross-project view that consolidates all activities across projects into a single, centralized dashboard.

The cross-project view provides real-time visibility into all active jobs, workflows, and ML models across different projects. Users can:

  • Monitor multiple projects to see which workflows and jobs are running, completed, or failed.
  • Identify issues in specific projects more quickly and effectively.

This is especially valuable for organizations with complex environments where multiple teams may be working on different but interrelated projects.

7. Community-Driven Innovations and Performance Enhancements

Finally, MLRun 1.7 introduces improvements based on the invaluable feedback from you, our community users. We listened to the requirements and are releasing features that provide value in areas the community cares about most. This version introduces improved UI responsiveness, more efficient handling of large datasets, and a host of usability fixes. We look forward to your continued feedback on this version and the upcoming ones as well.

Join the Conversation

We’re looking forward to hearing your feedback about MLRun 1.7 and your future needs for the upcoming versions. Join the community and share your insights and requirements.

Read the full changelog.

Explore MLRun 1.7.


LLM as a Judge: Practical Example with Open-Source MLRun


LLMs can be used for evaluating other models, a method known as “LLM as a Judge”. This approach leverages the unique capabilities of LLMs to assess and monitor the performance and accuracy of models. In this blog, we will show a practical example of operationalizing and de-risking an LLM as a Judge with the open-source MLRun platform.

Brief Reminder: What is LLM as a Judge?

“LLM as a judge” refers to using LLMs to evaluate the performance and output of AI models. The LLM can analyze the results based on predefined metrics such as accuracy, relevance, or efficiency. It may also be used to compare the quality of generated content, analyze how models handle specific tasks, or provide insights into strengths and weaknesses.

Why Use LLM as a Judge?

LLM as a Judge is an evaluation approach that helps bring applications to production and derives value from them much faster. This is because LLM as a Judge allows for:

  • Availability – LLMs operate 24/7, providing instant feedback in time-sensitive contexts.
  • Adaptability – Prompt engineering allows easily adjusting evaluation criteria.

What to Look Out for When Using LLM as a Judge

When using a Large Language Model (LLM) as a judge for evaluating other models, several significant risks must be carefully considered to avoid faulty conclusions:

  • Bias propagation – LLMs are trained on vast datasets that may contain inherent biases related to race, gender, or culture. If these biases are not addressed, they can directly affect the evaluation process, leading to unjust or skewed assessments of the models being tested.
  • Over-reliance on language and syntax – The LLM may favor models that produce more fluent or persuasive language over those that generate more accurate or innovative content. This creates the risk of misleading results.
  • Hallucinations – When the LLM generates plausible-sounding but incorrect or irrelevant information. This becomes problematic during model evaluation as the LLM might misinterpret the data or generate false positives/negatives in its assessment.
  • Ground truth or benchmarking – The LLM might inaccurately assess models in specialized fields like law, medicine, or science. Without access to verifiable facts or empirical data, the LLM may rely too heavily on its own internal reasoning processes, which can be flawed, resulting in unreliable judgments.
  • Model drift – Updates to the LLM or changes in its underlying data can shift its evaluation standards over time, leading to inconsistency in assessments.
  • Model updates – When using third-party LLMs, updates to the model might change its performance or even break it.

Addressing these risks requires thorough validation, human oversight, careful design of evaluation criteria, and evaluation of the judge model itself for the task. This will ensure reliable and fair outcomes when using an LLM as an evaluator.

How to Operationalize Your LLM as a Judge in MLRun

In this example, we’ll show how to implement LLM as a Judge as part of your monitoring system with MLRun. You can view the full steps with code examples here.

Here’s how it works:

  1. Create an LLM-as-a-Judge monitoring application (or use the one shown in the demo).
  2. Set it in the MLRun project as a monitoring application.
  3. Deploy it and enjoy.

To prompt-engineer the judge, you can follow these best practices:

  1. Create an evaluation set the judge can be scored on.
  2. Build a prompt that explains the metric and its scores, and add multiple examples the LLM can learn from (see the sketch after this list).
  3. Try it out with a few examples.
  4. Run the evaluation set and check the performance.
  5. Do it periodically to ensure the judge is on track.
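A hypothetical judge-prompt template (illustrative only, not the demo's actual prompt) that follows these practices could look like this:

# Hypothetical evaluation prompt: explains the metric and scores, and includes an example
JUDGE_PROMPT = """You are evaluating a banking chatbot's answer.
Score relevance from 1 (completely off-topic) to 5 (fully on-topic banking answer).
Explain your score in one sentence, then output "Score: <n>".

Example:
Question: How do I reset my online banking password?
Answer: Go to the login page and click "Forgot password".
Score: 5

Question: {question}
Answer: {answer}
"""

def build_judge_prompt(question: str, answer: str) -> str:
    # Fill the template with the interaction you want the judge LLM to score
    return JUDGE_PROMPT.format(question=question, answer=answer)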

Conclusion

LLM as a Judge is a useful method that can scale model evaluation. With MLRun, you can quickly fine-tune and deploy the LLM that will be used as a Judge, so you can operationalize and de-risk your gen AI applications. Follow this demo to see how.

Just getting started with gen AI? Start with MLRun now.


How to Operationalize Your Own Customized Application for Monitoring LLMs with MLRun


LLM monitoring helps optimize for accuracy and efficiency, detect bias and ensure security and privacy. But common metrics like BLEU and ROUGE aren’t always accurate enough for LLM monitoring. By developing your own monitoring application, you can customize and tailor the metrics you need, monitor in real-time, integrate with other systems, and more. In this blog post, we explain how to do this with MLRun.

Why Monitor LLMs and Gen AI Applications?

Monitoring generative AI applications and LLMs is an essential step in the AI pipeline. By monitoring, data professionals ensure models are accurate and bring business value. It also helps remove the risks associated with gen AI.

Overall, LLM monitoring can help:

  • Manage resources and reduce operational costs.
  • Optimize for efficiency and accuracy, ensuring model reliability at a given task and checking if it needs to go into another phase of development.
  • Detect errors, biases, or inaccuracies in outputs, ensuring they meet quality standards.
  • Identify and mitigate ethical issues like bias and toxicity, before they become public concerns.
  • Ensure data privacy and security, to prevent data leakage, violation of privacy regulations, and more
  • Meet compliance regulations.
  • Understand how users interact with the model.
  • Build trust among stakeholders.

Key LLM Metrics to Track

There are many trackable LLM metrics, which can help meet the objectives detailed above. These include first-level metrics, model-related metrics, data metrics and more.

If the pipeline is: X -> Model -> Y

  • Data metrics check X.
  • Accuracy metrics check Y and sometimes Y | X (Y given X).
  • Performance metrics check the arrows.

Given this, the common metrics include:

  • Performance Optimization – Latency, throughput, resource utilization (CPU/GPU memory usage), data drift, sensibleness and specificity.
  • LLM Evaluation (Accuracy) – Perplexity, BLEU (Bilingual Evaluation Understudy), ROUGE (Recall-Oriented Understudy for Gisting Evaluation), METEOR (Metric for Evaluation of Translation with Explicit Ordering), F1 score and accuracy.
  • Data Metrics – Data drift
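As a quick illustration (using the Hugging Face evaluate package, which is an assumption rather than something the post prescribes), computing one of these accuracy metrics looks like this:

import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["The account was closed yesterday."],
    references=["The customer's account was closed yesterday."],
)
print(scores)   # ROUGE-1/2/L scores between 0 and 1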

Additional metrics that can be monitored include:

  • User Engagement – Session length, token efficiency
  • Ethical Compliance – Adherence to guidelines, like privacy, non-discrimination, transparency and fairness.

In addition to these, data engineers and scientists can also come up with their own metrics, based on use cases and requirements. This is valuable for monitoring LLMs, since these popular metrics don’t always cover unique LLM monitoring needs.

For example:

  • Logic monitoring metrics, which evaluate the logical processes and decision-making pathways of a system. They include input classification, response consistency, error detection, decision pathway analysis, and performance measurements.
  • Domain-specific metrics or evaluation methods, including industry-specific terminologies, contextual relevance, or specialized linguistic nuances.
  • Bias detection algorithms that operate based on your organization’s ethical standards and regulatory requirements.

Benefits of Operationalizing Your Own Monitoring Application

By developing your own monitoring application, you can monitor LLMs based on the metrics you need, to ensure your LLM is fully-optimized to your use case. This will ensure it brings business value and help avoid LLM risks that have technological and business implications.

By developing and deploying your own monitoring application you can:

  • Tailor evaluation criteria to align closely with your specific use case or domain, maximizing business value.
  • Incorporate real-time monitoring, alerting you about anomalies or performance issues as they occur.
  • Integrate your monitoring application seamlessly with other internal systems or workflows
  • Future-proof to adapt as new models and technologies emerge, keeping your application relevant and up-to-date.
  • Generate customized reports tailored to your organization’s specific needs, providing actionable insights and data-driven decision-making.

How to Easily Develop a Monitoring Application for Your LLM with MLRun

Open-source MLRun provides a radically simplified solution, allowing anyone to develop and deploy their own monitoring application in a few simple lines of code. Inherit the `ModelMonitoringApplication` class, implement one method and that’s it!

You can see the full tutorial with code snippets and examples in the MLRun documentation.

Get started with MLRun now.


Deploying Hugging Face LLM Models with MLRun


Hugging Face has become a leading model repository, offering user-friendly tools for building, training and deploying ML models and LLMs. In combination with MLRun, an open-source platform that automates data prep, tuning, validating and optimizing ML models and LLMs over elastic resources, Hugging Face empowers data scientists and engineers to bring their models to production more quickly and efficiently.

This blog post introduces Hugging Face and MLRun, demonstrating the benefits of using them together. It is based on the webinar “How to Easily Deploy Your Hugging Face Models to Production”, which includes a live demo of deploying a Hugging Face model with MLRun. The demo covers data preparation, a real application pipeline, post-processing and model retraining.

You can also watch the webinar, featuring Julien Simon, Chief Evangelist at Hugging Face, Noah Gift, MLOps expert and author, and Yaron Haviv, co-founder and CTO of Iguazio (acquired by McKinsey).

Hugging Face and LLMs

Hugging Face has gained recognition for its open-source library, Transformers, which provides easy access to pre-trained models. These include LLMs like BERT, GPT-2, GPT-3, T5 and others. These models can be used for various NLP tasks such as text generation, classification, translation, summarization and more.

By providing a repository of pre-trained models that users can fine-tune for specific applications, Hugging Face significantly reduces the time and resources required to develop powerful NLP systems. This enables a broader range of organizations to leverage advanced language technologies, thus democratizing access to LLMs.

The impact of Hugging Face’s LLMs spans various industries, including healthcare, finance, education and entertainment. For instance, in healthcare, LLMs can assist in analyzing medical records, extracting relevant information and supporting clinical decision-making. In finance, these models can enhance customer service through chatbots and automate the analysis of financial documents.

Now let’s see how Hugging Face LLMs can be operationalized.

Deploying Your Hugging Face LLM Model with MLRun

MLRun is an open-source MLOps orchestration framework that enables managing continuous ML and gen AI applications across their lifecycle, quickly and at scale. Capabilities include:

  • Automating data preparation, tuning, validation and model optimization
  • Deploying scalable real-time serving and application pipelines that include models, data and business logic
  • Built-in observability and monitoring for data, models and resources
  • Automated retraining and re-tuning
  • Flexible deployment options (multi-cloud, hybrid and on-prem)

Using MLRun with Hugging Face

Deploying Hugging Face models to production is streamlined with MLRun. Below, we’ll outline the steps to build a serving pipeline with your model and then retrain or calibrate it with a training flow that processes data, optimizes the model and redeploys it.

Workflow #1: Building a Serving Pipeline

  1. Start by setting up a new project in MLRun.
  2. Add a Serving Function – Define a serving function with the necessary steps. A basic serving function may include intercepting a message, pre-processing, performing sentiment analysis with the Hugging Face model and post-processing. You can expand this with additional steps and branching as needed.

Hugging Face models are integrated into MLRun, so you only need to specify the models you want to use.

  3. Simulate Locally – MLRun provides a simulator for your serving function, allowing you to test it locally.
  4. Test the Model – Push requests into the pipeline to verify its functionality. Debug as necessary.
  5. Deploy the model as a real-world endpoint. This involves running a simple command, with MLRun handling the backend processes like building containers, pushing to repositories, and serving the pipeline. This results in an elastic, auto-scaling service.
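As a rough sketch of what such a serving graph can look like (the file name and handler names below are hypothetical steps you would define yourself, not the webinar's exact code):

import mlrun

project = mlrun.get_or_create_project("hf-serving-demo", user_project=True)
serving_fn = project.set_function(
    "workflow.py",                 # hypothetical file with the step handlers
    name="sentiment-serving",
    kind="serving",
    image="mlrun/mlrun",
    requirements=["transformers", "torch"],
)

# Chain pre-processing, the Hugging Face sentiment step and post-processing
graph = serving_fn.set_topology("flow", engine="async")
(
    graph.to(handler="preprocess", name="preprocess")
    .to(handler="run_sentiment_model", name="sentiment")
    .to(handler="postprocess", name="postprocess")
    .respond()
)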

Workflow #2: Building a Training Pipeline

  1. Begin by creating a new project in MLRun.
  2. Register Training Functions – Define the training functions, including the training methods, evaluation criteria and any other necessary information.
  3. Set the Workflow – Outline the training steps, such as preparing datasets, training based on the prepared data, optimizing the model, and deploying the function. Models can be deployed to various environments (production, development, staging) simultaneously. These workflows can be triggered automatically with CI systems.
  4. Run the Pipeline – Execute the training pipeline, which can be monitored through MLRun’s UI. Since MLRun supports Hugging Face, training artifacts are saved for comparisons, experiment tracking, and more.
  5. Test the Pipeline – Verify that the model’s predictions have changed following the training.
  6. Deploy the newly trained model.

Integrating Hugging Face with MLRun significantly shortens the model development, training, testing, deployment and monitoring processes. This helps operationalize gen AI, effectively and efficiently.

Learn more about MLRun and Hugging Face for your gen AI workflows.


Open Source MLOps and LLMOps Orchestration with MLRun: Quick Start Tutorial


MLRun is an open-source MLOps and gen AI orchestration framework designed to manage and automate the machine learning lifecycle. This includes everything from data ingestion and preprocessing to model training, deployment and monitoring, as well as de-risking. MLRun provides a unified framework for data scientists and developers to transform their ML code into scalable, production-ready applications.

In this blog post, we’ll show you how to get started with MLRun: creating a dataset, training the model, serving and deploying. You can also follow along by watching the video this blog post is based on or through the docs.

When starting your first MLRun project, don’t forget to star us on GitHub.

Now let’s get started.

Creating Your First MLRun Project

An MLRun project helps organize and manage the various components and stages of an ML or gen AI workflow in an automated and streamlined manner. It integrates components like datasets, code, models and configurations into a single container. By doing so, it supports collaboration, ensures version control, enhances reproducibility and allows for logging and monitoring.

  1. Install and import MLRun (more details on how to do this are in the docs).
  2. Create a project with project = mlrun.get_or_create_project(name="quick-tutorial", user_project=True).

This will create the project object, which will be used to add and execute functions.

  3. Now for the dataset. This only requires a simple script with one Python function that grabs a dataset from scikit-learn and returns it as a pandas dataframe.

%%writefile data-prep.py

import pandas as pd
from sklearn.datasets import load_breast_cancer


def breast_cancer_generator():
    """
    A function which generates the breast cancer dataset
    """
    breast_cancer = load_breast_cancer()
    breast_cancer_dataset = pd.DataFrame(
        data=breast_cancer.data, columns=breast_cancer.feature_names
    )
    breast_cancer_labels = pd.DataFrame(data=breast_cancer.target, columns=["label"])
    breast_cancer_dataset = pd.concat(
        [breast_cancer_dataset, breast_cancer_labels], axis=1
    )

    return breast_cancer_dataset, "label"

This is regular Python. MLRun will automatically log the returned dataset and the label column name.

  4. Create an MLRun function using project.set_function, together with the name of the Python file and parameters specifying requirements. These could include running the function as a job with a certain Docker image.

data_gen_fn = project.set_function(
    "data-prep.py",
    name="data-prep",
    kind="job",
    image="mlrun/mlrun",
    handler="breast_cancer_generator",
)

project.save()  # save the project with the latest config

  5. Save the project.
  6. Run the function with project.run_function together with the required parameters. For example, for running in a local environment, use (local=True); otherwise it runs at scale in Kubernetes. Notice the `returns` parameter, where we specify what MLRun should log from the function’s returned objects.

gen_data_run = project.run_function(
    "data-prep",
    local=True,
    returns=["dataset", "label_column"],
)

  7. Open the MLRun UI.
  8. View artifacts like the logged data sets, the label column, metadata and more.

Training the Model

Now let’s see how to train a model using the dataset that we just created. Instead of creating a brand new MLRun function, we can import one from the MLRun function hub.

  1. Go to the function hub.


You will find a number of useful and powerful functions out-of-the-box. We’ll use the Auto trainer function.

  2. Import it by pointing to the marketplace and specifying the function name:

# Import the function
trainer = mlrun.import_function("hub://auto_trainer")

In this case, one of the inputs is the dataset from our previous run (gen_data_run).

trainer_run = project.run_function(
    trainer,
    inputs={"dataset": gen_data_run.outputs["dataset"]},
    params={
        "model_class": "sklearn.ensemble.RandomForestClassifier",
        "train_test_split_size": 0.2,
        "label_columns": gen_data_run.results["label_column"],
        "model_name": "breast_cancer_classifier",
    },
    handler="train",
)

 

The default is local=False, which means it will run behind the scenes on Kubernetes.

You will be able to see the pod and the print out statements.

  3. Open the MLRun UI, which will display more details and artifacts, for example the parameters passed in, the evaluation metrics, the model itself and more.

Serving the Model

Now we can serve the trained model.

  1. Type mlrun.new_function and select the kind as serving.

serving_fn = mlrun.new_function(
    "breast-cancer-classifier-serving",
    image="mlrun/mlrun",
    kind="serving",
    requirements=["scikit-learn~=1.3.0"],
)

 

  2. Add your model to the serving function using serving_fn.add_model and the path to the model.
  • The path to the model is the output of the training job.
  • The class name specifies the model’s serving class, where the API is. There are built-in classes in MLRun, like the scikit-learn model server used in this example.

serving_fn.add_model(
    "breast_cancer_classifier_endpoint",
    class_name="mlrun.frameworks.SKLearnModelServer",
    model_path=trainer_run.outputs["model"],
)

 

In this example, we are using scikit-learn, but you can choose your preferred framework from the supported list, or customize your own. You can read more about this in the docs.

The example below shows a simple, singular model. There are also more advanced models that include steps for data enrichment, pre-processing, post-processing, data transformations, aggregations and more.

Read more about real-time serving here.

  3. Test the serving function using a mock server that simulates the model deployment. This lets you make sure everything is behaving as expected without having to deploy.

# Create a mock (simulator of the real-time function)

server = serving_fn.to_mock_server()

Use the mock server `test` method (server.test) to test the model server.

The result is a mock model server: you can send data inputs to it, and it behaves exactly like the deployed model server.
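For example (a usage sketch, assuming the dataset artifact logged earlier and the v2 model protocol used by MLRun serving functions):

# Pull a couple of rows from the logged dataset and send them to the mock server
import mlrun

df = mlrun.get_dataitem(gen_data_run.outputs["dataset"]).as_df()
sample = df.drop(columns=["label"]).head(2).values.tolist()

server.test(
    path="/v2/models/breast_cancer_classifier_endpoint/infer",
    body={"inputs": sample},
)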

Deploying the Model

Finally, it’s time to deploy to production with a single line of code.

  1. Use the `deploy` method:

serving_fn.deploy()

This will take the code, all the parameters, the pre- and post-processing, etc., package them up in a container deployed on Kubernetes and expose them as an endpoint. The endpoint contains your transformation, pre- and post-processing, business logic, etc. This is all deployed at once, while supporting rolling upgrades, scale, etc.

  2. Now, send data and see if you get a response as expected. Use the serving function `invoke` method (serving_fn.invoke) to send data from the notebook, as shown below.
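A minimal invocation sketch, reusing the sample rows from the mock-server test above:

serving_fn.invoke(
    path="/v2/models/breast_cancer_classifier_endpoint/infer",
    body={"inputs": sample},
)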

That’s it! You now know how to use MLRun to manage and deploy ML models. As you can see, MLRun is more than just training and deploying models to an endpoint. It is an open-source machine learning platform that helps you build a production-ready application, covering everything from data transformations to your business logic to model deployment and more.

Start using MLRun today.

Get more tutorials here.
