Unify AI Workflows on your MLOps & LLMOps infrastructure

Write Pythonic workflows with baked-in MLOps practices that power all your AI applications. At scale. Trusted by 100s of companies in production.

Trusted by 1,000s of top companies to standardize their AI workflows


Continually Improve Your RAG Apps in Production

Building a retrieval-augmented generation system involves many moving parts – from ingesting documents and creating embeddings to making the right data available to your deployed app. ZenML handles that complexity for you by tracking all your pipelines and artifacts in a single control plane.

This allows you to stay on top of your system and build a flywheel to continuously improve your RAG performance.

Fine-Tune Models with Confidence

ZenML makes fine-tuning large language models reproducible and hassle-free. Define your training pipeline once – ZenML handles data versioning, experiment tracking, and pushing the new model to production.

LLM drifting, or new data arriving? Simply re-run the pipeline, and comparisons to previous runs are logged automatically.

Your LLM Framework Isn't Enough for Production

LLM frameworks help create prototypes,
but real-world systems need:
Automated, reproducible pipelines that handle everything from data ingestion to model evaluation.
Tracking and versioning of your hyperparameters, output artifacts and metadata for each run of your pipelines.
A central control plane where you can compare different versions and assign stages to your project versions.
Speed

Iterate at warp speed

Local to cloud seamlessly. Jupyter to production pipelines in minutes. Smart caching accelerates iterations everywhere. Rapidly experiment with ML and GenAI models.
Learn More
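ZenML's smart caching skips a step when its code and inputs have not changed since the last run. The fingerprint-and-skip idea behind it can be shown with a small stdlib-only sketch (this is not ZenML's actual implementation; `cached_step` and its hash key are illustrative):

```python
import hashlib
import json

_cache: dict[str, object] = {}
calls = {"n": 0}  # counts real executions, so cache hits are visible

def cached_step(fn):
    """Re-execute a step only when its bytecode or its inputs change."""
    def wrapper(*args, **kwargs):
        key = hashlib.sha256(
            fn.__code__.co_code
            + json.dumps([args, kwargs], sort_keys=True, default=str).encode()
        ).hexdigest()
        if key not in _cache:
            _cache[key] = fn(*args, **kwargs)
        return _cache[key]
    return wrapper

@cached_step
def preprocess(rows):
    calls["n"] += 1
    return [r.strip().lower() for r in rows]

preprocess([" Foo", "BAR "])  # executes: calls["n"] == 1
preprocess([" Foo", "BAR "])  # identical code and inputs: served from cache
```

Because unchanged steps are skipped, only the parts of a pipeline you actually modified re-run, which is what makes local iteration fast.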
Observability

Auto-track everything

Automatic logging of code, data, and LLM prompts. Version control for ML and GenAI workflows. Focus on innovation, not bookkeeping.
Learn More
You can track all your metadata, data, and model versions with ZenML out of the box.
Scale

Limitless Scaling

Scale to major clouds or K8s effortlessly. 50+ MLOps and LLMOps integrations. From small models to large language models, grow seamlessly.
Flexibility

Backend flexibility, zero lock-in

Switch backends freely. Deploy classical ML or LLMs with equal ease. Adapt your LLMOps stack as needs evolve.
Learn More
Reusability

Shared ML building blocks

Team-wide templates for steps and pipelines. Collective expertise, accelerated development.
Learn More
Optimization

Streamline cloud expenses

Stop overpaying on cloud compute. Clear view of resource usage across ML and GenAI projects.
Learn More
Governance

Built-in compliance & security

Comply with the EU AI Act. One-view ML infrastructure oversight. Built-in security best practices.
Learn More
Whitepaper

ZenML for your Enterprise-Grade MLOps Platform

We have distilled our expertise in building production-ready, scalable MLOps platforms, drawing on insights from our top customers.

Customer Stories

Learn how teams are using ZenML to save time and simplify their MLOps.
ZenML offers the capability to build end-to-end ML workflows that seamlessly integrate with various components of the ML stack, such as different providers, data stores, and orchestrators. This enables teams to accelerate their time to market by bridging the gap between data scientists and engineers, while ensuring consistent implementation regardless of the underlying technology.
Harold Giménez
SVP R&D at HashiCorp
Using ZenML

It's extremely simple to plug in ZenML

Just add Python decorators to your existing code and see the magic happen
Automatically track experiments in your experiment tracker
Return Pythonic objects and have them versioned automatically
Track model metadata and lineage
Define data dependencies and modularize your entire codebase
@step
def load_rag_documents() -> dict:
    # Load and chunk documents for RAG pipeline
    documents = extract_web_content(url="https://www.zenml.io/")
    return {"chunks": chunk_documents(documents)}

@step(experiment_tracker="mlflow")
def generate_embeddings(data: dict) -> dict:
    # Generate embeddings for RAG pipeline
    embeddings = embed_documents(data["chunks"])
    return {"embeddings": embeddings}

@step(
    settings={"resources": ResourceSettings(memory="2Gi")},
    model=Model(name="my_model"),
)
def index_generator(
    embeddings: dict,
) -> str:
    # Generate index for RAG pipeline
    creds = read_data(client.get_secret("vector_store_credentials"))
    index = create_index(embeddings, creds)
    return index.id

@pipeline(
    active_stack="my_stack",
    on_failure=on_failure_hook,
)
def rag_pipeline() -> str:
    documents = load_rag_documents()
    embeddings = generate_embeddings(documents)
    index = index_generator(embeddings)
    return index
Remove sensitive information from your code
Choose resources abstracted from infrastructure
Easily define alerts for observability
Switch easily between local and cloud orchestration
No compliance headaches

Your VPC, your data

ZenML is a metadata layer on top of your existing infrastructure, meaning all data and compute stay on your side.
ZenML only has access to metadata; your data remains in your VPC.
ZenML is SOC2 and ISO 27001 Compliant

We Take Security Seriously

ZenML is SOC2 and ISO 27001 compliant, validating our adherence to industry-leading standards for data security, availability, and confidentiality in our ongoing commitment to protecting your ML workflows and data.

Looking to Get Ahead in MLOps & LLMOps?

Subscribe to the ZenML newsletter and receive regular product updates, tutorials, examples, and more.
We care about your data; see our privacy policy.
Support

Frequently asked questions

Everything you need to know about the product.
What is the difference between ZenML and other machine learning orchestrators?
Unlike other machine learning pipeline frameworks, ZenML does not take an opinion on the orchestration layer. You start writing locally, and then deploy your pipeline on an orchestrator defined in your MLOps stack. ZenML supports many orchestrators natively, and can be easily extended to other orchestrators. Read more about why you might want to write your machine learning pipelines in a platform-agnostic way here.
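The "write once, run on any orchestrator" idea can be sketched with a toy example. `LocalRunner` and `ThreadRunner` are hypothetical stand-ins for ZenML's real orchestrator integrations (local, Kubernetes, Airflow, and so on), which all receive the same pipeline definition:

```python
from concurrent.futures import ThreadPoolExecutor

def run_pipeline(steps, data):
    """A pipeline is just an ordered list of steps; where and how
    they execute is the orchestrator's concern, not the author's."""
    for step in steps:
        data = step(data)
    return data

class LocalRunner:
    """Runs steps in the current process, like a local development stack."""
    def run(self, steps, data):
        return run_pipeline(steps, data)

class ThreadRunner:
    """Hypothetical 'remote' backend: same pipeline, different engine."""
    def run(self, steps, data):
        with ThreadPoolExecutor(max_workers=1) as pool:
            return pool.submit(run_pipeline, steps, data).result()

steps = [str.strip, str.lower]
# Swapping the backend changes neither the pipeline definition nor its result.
assert LocalRunner().run(steps, "  RAG Docs ") == ThreadRunner().run(steps, "  RAG Docs ")
```

In ZenML the backend swap is a stack change, not a code change: the pipeline definition stays identical while the stack decides where it runs.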
Does ZenML integrate with my MLOps stack (cloud, ML libraries, other tools etc.)?
As long as you're working in Python, you can leverage the entire ecosystem. In terms of machine learning infrastructure, ZenML pipelines can already be deployed on Kubernetes, AWS SageMaker, GCP Vertex AI, Kubeflow, Apache Airflow, and many more. Artifact, secrets, and container storage are also supported for all major cloud providers.
Does ZenML help in GenAI / LLMOps use-cases?
Yes! ZenML is fully compatible with, and intended for, productionizing LLM applications. There are examples in the ZenML projects repository that showcase our integrations with LlamaIndex, OpenAI, and LangChain. Check them out here!
How can I build my MLOps/LLMOps platform using ZenML?
The best way is to start simple. The user guides walk you through how to build a minimal cloud MLOps stack. You can then extend it with numerous other components such as an experiment tracker, model deployers, model registries, and more!
What is the difference between the open source and Pro product?
ZenML is and always will be open-source at its heart. The core framework is freely available on GitHub and you can run and manage it in-house without using the Pro product. On the other hand, ZenML Pro offers one of the best experiences to use ZenML, and includes a managed version of the OSS product, including some Pro-only features that create the best collaborative experience for many companies that are scaling their ML efforts. You can see a more detailed comparison here.
Still not clear?
Ask us on Slack

Start Your Free Trial Now

No new paradigms - Bring your own tools and infrastructure
No data leaves your servers, we only track metadata
Free trial included - no strings attached, cancel anytime