A Docker image is a read-only package that contains everything needed to run an application: the base operating system files, runtime, libraries, dependencies, and your code. A container is what you get when you actually run that image. The image is the artifact sitting on disk; the container is the live process with its own filesystem, network interface, and process tree.
The confusion between the two usually comes from the fact that you interact with both through similar commands and they share the same underlying files. But the distinction matters as soon as you start asking questions like "why did my changes disappear when I restarted the container?" or "why is this image 2GB?" Understanding the layer mechanics behind both concepts makes everything else in Docker click.
How Docker images work
An image is not a single file. It is a stack of read-only filesystem layers, where each layer represents a set of file changes produced by a single instruction in a Dockerfile.
Take this Dockerfile as an example:
```dockerfile
FROM python:3.14-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "main.py"]
```
Each instruction contributes to the image. Strictly, only instructions that change the filesystem (COPY, RUN, ADD) produce filesystem layers; WORKDIR and CMD just record metadata in the image config. The FROM line pulls the base image (which is itself multiple layers). COPY requirements.txt . adds a layer containing just that file. RUN pip install adds a layer with all the installed packages. COPY . . adds a layer with your application code.
These layers are stacked using a union filesystem that merges them into a single coherent view. When you look at the filesystem inside a container, you see one directory tree, but Docker is actually compositing multiple read-only layers underneath.
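That compositing can be sketched with a toy model (plain Python dicts standing in for layers; real Docker uses a union filesystem such as overlayfs, not anything like this code):

```python
# Toy model of a union filesystem view: each layer is a dict of
# path -> content, and upper layers shadow lower ones.

def union_view(layers):
    """Merge layers bottom-to-top into one coherent filesystem view."""
    view = {}
    for layer in layers:   # bottom layer first
        view.update(layer) # later (upper) layers shadow earlier (lower) ones
    return view

# Hypothetical layer contents, loosely matching the Dockerfile above
base = {"/usr/bin/python": "interpreter", "/etc/os-release": "debian"}
deps = {"/app/requirements.txt": "flask==3.0"}
app  = {"/app/main.py": "print('hi')"}

fs = union_view([base, deps, app])
print(sorted(fs))  # one directory tree, composited from three layers
```

The container sees a single tree (`fs`), but each path is still served from whichever layer last touched it.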
You can inspect the layers yourself:
```shell
docker image inspect python:3.14-slim \
  --format '{{json .RootFS.Layers}}' | jq .
```
The command prints a JSON array of layer digests for python:3.14-slim.
Each entry is a content-addressable layer identified by its SHA256 digest. This
is why Docker is efficient with disk space: if two images share the same base
layers (for instance, both use python:3.14-slim), those layers are stored only
once on disk and shared between them.
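The content-addressing idea can be sketched in a few lines of Python (an illustration of the scheme, not Docker's actual storage code):

```python
# Content-addressable layer store sketch: a layer is keyed by the
# sha256 of its content, so identical layers are stored exactly once.
import hashlib

store = {}  # digest -> layer content

def add_layer(content: bytes) -> str:
    digest = "sha256:" + hashlib.sha256(content).hexdigest()
    store[digest] = content  # re-adding identical content is a no-op
    return digest

base = add_layer(b"python:3.14-slim base layer")

# Two different images built FROM the same base:
image_a = [base, add_layer(b"app A code")]
image_b = [base, add_layer(b"app B code")]

print(image_a[0] == image_b[0])  # True: both reference the same digest
print(len(store))                # 3 layers on disk, not 4
```

Pulling a second image that shares base layers with one you already have only downloads the digests you are missing, for the same reason.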
How containers work
When you run an image with docker run, Docker creates a container by adding a
thin, writable layer on top of the image's read-only layers. This writable layer
is where all filesystem changes that happen at runtime are recorded: log files
written by the application, temp files, anything modified or created during
execution.
```
┌─────────────────────────────┐
│ Writable container layer    │ ← your runtime changes
├─────────────────────────────┤
│ App code layer (COPY . .)   │ ← read-only
├─────────────────────────────┤
│ pip install layer           │ ← read-only
├─────────────────────────────┤
│ requirements.txt layer      │ ← read-only
├─────────────────────────────┤
│ python:3.14-slim base       │ ← read-only
└─────────────────────────────┘
```
This is called the copy-on-write mechanism. When a running container reads a file, Docker serves it directly from whichever read-only layer contains it. No copying happens. But when the container modifies a file that exists in a lower layer, Docker copies that file into the writable layer first, then applies the change. The original file in the read-only layer remains untouched, so other containers using the same image are completely unaffected.
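Here is a toy copy-on-write model in Python (illustrative only; real Docker implements this at the filesystem level, not in application code):

```python
# Copy-on-write sketch: reads fall through to the shared image layers,
# writes land in the container's private writable layer.

class Container:
    def __init__(self, image_layers):
        self.image = image_layers  # shared, read-only
        self.writable = {}         # private to this container

    def read(self, path):
        if path in self.writable:            # runtime changes win
            return self.writable[path]
        for layer in reversed(self.image):   # then top image layer wins
            if path in layer:
                return layer[path]           # served directly, no copying
        raise FileNotFoundError(path)

    def write(self, path, content):
        self.writable[path] = content  # "copy up": change stays in this layer

image = [{"/etc/nginx.conf": "worker_processes 1;"}]
web, web2 = Container(image), Container(image)

web.write("/etc/nginx.conf", "worker_processes 4;")
print(web.read("/etc/nginx.conf"))   # worker_processes 4;
print(web2.read("/etc/nginx.conf"))  # worker_processes 1; — unaffected
```

The shared `image` list is never mutated: `web`'s change exists only in its own writable layer, which is exactly why `web2` still sees the original file.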
This design has a practical consequence that catches people off guard: when you stop and remove a container, the writable layer is deleted along with it. Any files you created or modified inside the container are gone. This is by design (containers are meant to be ephemeral), but it's also why Docker volumes exist for data that needs to persist.
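That lifecycle can be sketched the same way (Python dicts as stand-ins; this illustrates the model, not how Docker implements volumes):

```python
# Volumes live outside any container, so they survive container removal;
# the writable layer does not.

volumes = {"appdata": {}}  # named volume, independent lifecycle

class ContainerFS:
    def __init__(self, volume=None):
        self.writable = {}   # discarded when the container is removed
        self.volume = volume # a mount point into the shared volume

c1 = ContainerFS(volume=volumes["appdata"])
c1.writable["/tmp/cache"] = "scratch data"
c1.volume["/data/records.db"] = "important data"
del c1  # remove the container: the writable layer is gone for good

c2 = ContainerFS(volume=volumes["appdata"])
print(c2.volume["/data/records.db"])  # important data — the volume persisted
```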
Seeing the difference in practice
The following commands illustrate the relationship between the two:
```shell
# Pull an image (the read-only artifact)
docker pull nginx:latest

# List images on your system
docker images
```
```
REPOSITORY   TAG      IMAGE ID       CREATED       SIZE
nginx        latest   3b25b682ea82   2 weeks ago   192MB
```
At this point nothing is running. The image is just bytes on disk. Now start a container from that image:
```shell
# Run a container (the live instance)
docker run -d --name web nginx:latest

# List running containers
docker ps
```
```
CONTAINER ID   IMAGE          COMMAND                  STATUS         NAMES
a4f8c9e12345   nginx:latest   "/docker-entrypoint.…"   Up 3 seconds   web
```
The image still exists unchanged. The container is a separate thing: a running process with its own writable filesystem layer, network stack, and process namespace. You can run ten containers from the same image and each one gets its own writable layer, while they all share the same underlying read-only layers.
```shell
# Create a file inside the running container
docker exec web sh -c 'echo "hello" > /tmp/test.txt'

# The file exists in this container
docker exec web cat /tmp/test.txt
```
```
hello
```
```shell
# Start a second container from the same image
docker run -d --name web2 nginx:latest

# The file doesn't exist in the second container
docker exec web2 cat /tmp/test.txt
```
```
cat: /tmp/test.txt: No such file or directory
```
This confirms that containers are isolated from each other, even when they're started from the same image.
When this distinction matters
Understanding the image/container boundary becomes especially important in a few situations.
- Building CI/CD pipelines. Images are what you version, tag, and push to a registry; containers are what your orchestrator runs in production. Your pipeline should produce an image artifact, push it to a registry, and let your deployment system create containers from it. If you're making changes by SSH-ing into a running container, you've broken this model and those changes will disappear on the next deploy.
- Debugging unexpected image size. Every Dockerfile instruction creates a layer, and layers are additive. If you install a package in one RUN instruction and delete it in a later one, the package still exists in the earlier layer, even though it's invisible in the final filesystem. This is why multi-stage builds and careful layer ordering matter for production images.
- Diagnosing disappeared data. If you wrote data inside a container and it vanished after the container was removed and recreated, you now know why: the writable layer was discarded. For any data that needs to outlive the container (databases, uploaded files, application state), you need a Docker volume that exists independently of the container lifecycle.
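The image-size point is easiest to see in a Dockerfile. A sketch (the package choices are illustrative): splitting install and cleanup across RUN instructions leaves the packages baked into the earlier layer, while doing both in a single RUN keeps them out of the image entirely.

```dockerfile
# Anti-pattern: build-essential still exists in the first RUN's layer,
# so the image stays large even though the files are hidden.
RUN apt-get update && apt-get install -y build-essential
RUN pip install --no-cache-dir -r requirements.txt
RUN apt-get purge -y build-essential && rm -rf /var/lib/apt/lists/*

# Better: install, use, and remove within one layer.
RUN apt-get update && apt-get install -y build-essential \
    && pip install --no-cache-dir -r requirements.txt \
    && apt-get purge -y build-essential \
    && rm -rf /var/lib/apt/lists/*
```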
Common pitfalls
The biggest misconception is that docker commit is a reasonable workflow for
creating images. It works technically (it snapshots a container's writable layer
into a new image), but it produces opaque, unreproducible images with no clear
lineage. Always use a Dockerfile instead, which makes your image definition
version-controlled, auditable, and rebuildable from scratch.
Another common mistake is treating containers as lightweight VMs. If you find
yourself SSH-ing into a container to install packages, edit config files, and
restart services, you're working against the model rather than with it.
Containers are meant to be disposable. Make your changes in the Dockerfile or
configuration layer, rebuild the image, and replace the container.
Where to go from here
Once your containers are running in production, the next challenge is understanding what's happening inside them. Docker logs are ephemeral by default, resource usage is invisible without instrumentation, and a misbehaving container in a cluster of dozens is hard to find without proper observability.
Dash0's infrastructure monitoring tracks container resource usage alongside real-time logs and distributed traces, giving you full visibility into your containerized workloads from a single place.
Start a free trial today to monitor your containers, pods, and clusters in one view.