[Docker][DevOps][AI]
July 28, 2025

Docker Compose Adds LLM ‘models’ Support and Cloud Offload for Agentic Apps

LLM ‘models’ Configuration in Compose

Docker Compose 2.38.0 adds a new top-level models element, enabling developers to define large language models directly within the Compose file. Services can now specify model settings—such as model name, version, and resource requirements—alongside containers, simplifying the local development and deployment of agentic applications without separate ML orchestration tooling (docs.docker.com).
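To make this concrete, a minimal Compose file using the new element could look like the sketch below. The service name, image, and context_size value are illustrative rather than taken from the announcement; per the Compose documentation, the short syntax injects the model's endpoint and name into the service as environment variables (here LLM_URL and LLM_MODEL, derived from the model's key).

  services:
    chat:
      image: my-chat-app          # your application container (illustrative)
      models:
        - llm                     # short syntax: Compose injects LLM_URL and LLM_MODEL

  models:
    llm:
      model: ai/smollm2           # model pulled and served by Docker Model Runner
      context_size: 4096          # optional tuning attribute (illustrative value)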

Unified AI Agent Workflows

Developers can declare agents and Model Context Protocol (MCP)–compatible tools in docker-compose.yml, then launch entire agentic stacks with a single docker compose up. Compose integrates out of the box with popular AI frameworks—LangGraph, Embabel, Vercel AI SDK, Spring AI, CrewAI, Google’s ADK, and Agno—making it straightforward to prototype and iterate on intelligent workflows using familiar YAML definitions (sdtimes.com).
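As a sketch of what such a stack could look like, the Compose file below wires an agent container to both a model (via the long models syntax, which lets you choose the environment variable names your framework reads) and an MCP tool gateway. The service names, gateway image, model reference, and variable names are illustrative assumptions, not details from the announcement; consult the gateway's documentation for transport and port configuration.

  services:
    agent:
      build: .                        # your agent code (LangGraph, CrewAI, ...)
      depends_on:
        - mcp-gateway                 # tool server reachable over the Compose network
      models:
        llm:
          endpoint_var: OPENAI_BASE_URL   # long syntax: name the env vars your
          model_var: OPENAI_MODEL         # OpenAI-compatible client expects

    mcp-gateway:
      image: docker/mcp-gateway       # Docker's MCP gateway (config omitted here)

  models:
    llm:
      model: ai/qwen3                 # placeholder model reference

A single docker compose up then starts the agent, its tools, and the model endpoint together.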

Docker Offload for Cloud-Scale Execution

Alongside local support, Docker Offload lets teams shift compute-intensive AI workloads to cloud environments (e.g., Google Cloud Run, Azure Container Apps) using the same Compose CLI. This hybrid model enables seamless scaling: developers build and test locally, then draw on managed cloud resources for production inference, all without changing their Compose definitions (sdtimes.com).
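In practice the hybrid workflow is meant to be a mode switch rather than a file edit. A rough sketch, assuming the offload subcommands described in the beta documentation (command names may change):

  docker offload start     # opt in: subsequent builds and runs execute in the cloud
  docker compose up        # the unchanged Compose file, now running remotely
  docker offload stop      # switch back to local execution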

Source: Docker Compose gets new features for building and running agents - SD Times
