Preparing Container Images
Prepare, build, and push an agent container image for hosted deployments.
Use the guidelines in this topic for preparing Docker images.
Supported Container Image Architecture
For images, the Generative AI service supports 64-bit x86 architecture, code name amd64. When building container images, use linux/amd64 as the platform type.
For example,
docker buildx build --platform linux/amd64 -t myimage:latest
Code Preparation
The Docker container must meet the following requirements to run in the Hosted Deployment environment.
- Host and port
-
- The container must listen on host 0.0.0.0.
- The container must listen on port 8080.
- HTTP response content type
-
The container must expose an HTTP-based service that implements REST-style request and response endpoints using methods such as GET, PUT, POST, DELETE, and PATCH with user-defined paths.
The platform determines whether a request expects a streaming response by inspecting the Accept header.
- If the Accept header includes text/event-stream, the endpoint must return a Server-Sent Events (SSE) response with content type text/event-stream.
- Otherwise, the endpoint must return a standard JSON response with content type application/json.
This setup supports an endpoint for both streaming and non-streaming interactions in a consistent and backward-compatible manner.
- Readiness Endpoint
-
The Docker container must expose a readiness endpoint to verify that the application is fully initialized and ready to handle requests.
- Path: /ready
- Purpose: Indicates whether the container is ready to receive traffic
- Response Format: HTTP status code only
- Content Type: application/json
- Success Status Code: 200 OK (application is ready)
- Liveness Endpoint
-
The Docker container must expose a liveness endpoint to verify that the application is running correctly and does not require a restart.
- Path: /health
- Purpose: Detects whether the container is alive and functioning
- Response Format: HTTP status code only
- Content Type: application/json
- Success Status Code: 200 OK (application is healthy)
- Image Architecture
-
The service supports amd64.
Note: We recommend that you use base images provided by Oracle Container Registry. The images on that site pass vulnerability scanning.
- Reserved environment variables
-
The following environment variables are reserved for system use. Don't define them in the container code:
PORT
K_SERVICE
K_CONFIGURATION
K_REVISION
OCI_RESOURCE_PRINCIPAL_VERSION
OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM
OCI_RESOURCE_PRINCIPAL_RPST
KUBERNETES_*
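To guard against accidentally clobbering the reserved variables listed above, a container's startup code can check its own configuration before applying it. The helper names below are illustrative, not a platform API; KUBERNETES_* is treated as a prefix pattern:

```python
# Reserved names from the list above; KUBERNETES_* is matched as a prefix.
RESERVED_NAMES = {
    "PORT",
    "K_SERVICE",
    "K_CONFIGURATION",
    "K_REVISION",
    "OCI_RESOURCE_PRINCIPAL_VERSION",
    "OCI_RESOURCE_PRINCIPAL_PRIVATE_PEM",
    "OCI_RESOURCE_PRINCIPAL_RPST",
}
RESERVED_PREFIXES = ("KUBERNETES_",)


def is_reserved(name: str) -> bool:
    """Return True when an environment variable name is reserved for system use."""
    return name in RESERVED_NAMES or name.startswith(RESERVED_PREFIXES)


def find_reserved(env: dict) -> list:
    """List any reserved names that an application's own config would define."""
    return sorted(name for name in env if is_reserved(name))
```

Running `find_reserved` over the variables your Dockerfile or application sets lets you fail fast at build or startup time instead of debugging conflicts in the deployed container.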
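The Accept-header rule from the HTTP response content type requirement earlier in this section can be sketched in plain Python. This is a minimal sketch of the negotiation and SSE framing only; function names are illustrative:

```python
def wants_streaming(accept_header) -> bool:
    # Per the contract above: if the client's Accept header includes
    # text/event-stream, the endpoint must answer with an SSE response.
    return "text/event-stream" in (accept_header or "")


def format_sse_event(data: str, event=None) -> str:
    # Minimal Server-Sent Events framing: an optional "event:" field,
    # a "data:" field, and a blank line that terminates the event.
    lines = []
    if event:
        lines.append(f"event: {event}")
    lines.append(f"data: {data}")
    return "\n".join(lines) + "\n\n"
```

An endpoint would call `wants_streaming(request.headers.get("accept"))` and either stream `format_sse_event(...)` chunks with content type text/event-stream or return a JSON body with content type application/json.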
Image Architecture
The image must be built for the linux/amd64 platform, as shown in the following example.
docker buildx build --platform linux/amd64 -t myimage:latest
File access
The container file system is read-only, except for the /tmp directory, which is writable. If your application needs to write files locally, write them to /tmp.
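Because only /tmp is writable, any local file output should be created there. A minimal sketch using the standard library (the helper name is illustrative):

```python
import os
import tempfile

# Only /tmp is writable in the hosted container; the rest of the
# file system is read-only.
SCRATCH_DIR = "/tmp"


def write_scratch(data: bytes, suffix: str = ".bin") -> str:
    """Write data to a unique file under /tmp and return its path."""
    fd, path = tempfile.mkstemp(dir=SCRATCH_DIR, suffix=suffix)
    with os.fdopen(fd, "wb") as f:
        f.write(data)
    return path
```

Note that files written to /tmp are not preserved across redeployments or node replacement, so treat them as scratch space only.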
Vulnerability scanning
Before deployment, the image is scanned using OCI Vulnerability Scanning Service. Deployment fails if any critical vulnerabilities are detected.
Oracle recommends using base images from the OCI Container Registry:
https://container-registry.oracle.com/, where images have already passed vulnerability scanning.
Other restrictions
- Custom entry point commands are not supported. Define the entry command in the Dockerfile by using CMD or ENTRYPOINT.
- Volume mapping is not supported. Containers must be stateless, because local file data is not preserved during redeployment or node replacement.
Example code
The following example shows a simple agent developed in LangGraph and wrapped with FastAPI.
from contextlib import asynccontextmanager
import os
import sys
from typing import Any, Dict

from dotenv import load_dotenv
from fastapi import FastAPI
from langchain_core.messages import HumanMessage
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver
from langgraph.prebuilt import create_react_agent

load_dotenv()

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
OPENAI_MODEL = os.getenv("OPENAI_MODEL", "gpt-4o-mini")
TEMPERATURE = float(os.getenv("OPENAI_TEMPERATURE", "0.7"))

OPENAI_MOCK_MODE_ENV = os.getenv("OPENAI_MOCK_MODE")
if OPENAI_MOCK_MODE_ENV is None:
    OPENAI_MOCK_MODE = not bool(OPENAI_API_KEY)
else:
    OPENAI_MOCK_MODE = OPENAI_MOCK_MODE_ENV.strip().lower() in (
        "1",
        "true",
        "yes",
        "on",
    )

app_graph = None


@asynccontextmanager
async def lifespan(app: FastAPI):
    global app_graph
    if OPENAI_MOCK_MODE and not OPENAI_API_KEY:
        print(
            "OPENAI_API_KEY is not set; running in OPENAI_MOCK_MODE.",
            file=sys.stderr,
        )
        yield
        return
    if not OPENAI_API_KEY:
        raise RuntimeError("OPENAI_API_KEY is not set in environment.")
    model = ChatOpenAI(
        model=OPENAI_MODEL,
        temperature=TEMPERATURE,
        api_key=OPENAI_API_KEY,
        streaming=True,
    )
    app_graph = create_react_agent(
        model=model,
        tools=[],
        checkpointer=MemorySaver(),
    )
    yield


app = FastAPI(lifespan=lifespan)


@app.post("/chat")
async def chat(body: Dict[str, Any]):
    thread_id = body["thread_id"]
    msg = body["message"]
    if OPENAI_MOCK_MODE and not OPENAI_API_KEY:
        return {
            "reply": f"[MOCK] OPENAI_API_KEY missing. Echo: {msg}",
            "thread_id": thread_id,
        }
    response = await app_graph.ainvoke(
        {"messages": [HumanMessage(content=msg)]},
        config={"configurable": {"thread_id": thread_id}},
    )
    if "messages" in response and response["messages"]:
        last_message = response["messages"][-1]
        ai_content = getattr(last_message, "content", str(last_message))
    else:
        ai_content = "I'm not sure how to respond to that."
    return {"reply": ai_content}


@app.get("/health")
async def health():
    return {
        "status": "Healthy",
        "mode": "mock" if (OPENAI_MOCK_MODE and not OPENAI_API_KEY) else "normal",
    }


@app.get("/ready")
async def ready():
    if OPENAI_MOCK_MODE and not OPENAI_API_KEY:
        return {"status": "Ready", "mode": "mock"}
    return {"status": "Ready"}


if __name__ == "__main__":
    import uvicorn

    # Serve on the host and port required by the hosted deployment environment.
    uvicorn.run(app, host="0.0.0.0", port=8080)
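When testing the container locally, you may want to wait for the /ready endpoint before sending requests. A minimal polling helper is sketched below; the `check` callable is a stand-in for an HTTP GET to http://localhost:8080/ready that returns True on a 200 OK response (names are illustrative):

```python
import time


def wait_until_ready(check, timeout: float = 30.0, interval: float = 0.5) -> bool:
    """Poll `check` until it returns True or the timeout expires.

    In practice, `check` would issue an HTTP GET to the container's
    /ready endpoint and return True when it answers 200 OK.
    """
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
```

Keeping the timeout generous gives the lifespan handler time to finish initializing the agent graph before traffic arrives.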
Project Structure
project_directory/
├── agent_example.py # Your main agent code
├── pyproject.toml # Dependencies for your agent
├── Dockerfile # docker file for building image
├── uv.lock # auto generated by uv
└── __init__.py # Makes the directory a Python package
Build Container Image
The following is an example Dockerfile in a project directory.
FROM python:3.11-slim
# Set working directory
WORKDIR /app
# Install uv
COPY --from=ghcr.io/astral-sh/uv:latest /uv /bin/uv
# Copy project files
COPY pyproject.toml uv.lock ./
COPY *.py ./
# Install dependencies using uv
RUN uv sync --frozen
# Expose port
EXPOSE 8080
# Run the application using uv
CMD ["uv", "run", "python", "agent_example.py"]
Build the Docker image for the amd64 architecture:
docker buildx build --platform linux/amd64 -t my_agent:v1 .
Push Image to Registry
Create a container registry. See Overview of Container Registry.
Use the Docker CLI to push images to the container registry.
Step 1: Sign in to the container registry. Example code:
docker login kix.ocir.io
Step 2: Tag the image using the container registry URL and namespace. Example code:
docker tag my_agent:v1 ap-osaka-1.ocir.io/<your_tenancy_namespace>/my_agent:v1
Step 3: Push the image. Example code:
docker push ap-osaka-1.ocir.io/<your_tenancy_namespace>/my_agent:v1
Scan Images for Vulnerabilities (Optional but Recommended)
It is not uncommon for the operating system packages included in images to have vulnerabilities. Managing these vulnerabilities enables you to strengthen the security posture of your system, and respond quickly when new vulnerabilities are discovered.
See Scanning Images for Vulnerabilities to create a scan recipe and a scan target.