You have a Docker container you don't need anymore, and docker rm either errors out or leaves you wondering if you also need to clean up volumes, networks, or the image. Removing one container is a single command. Removing dozens of dead ones from a long-running dev machine is a different question, and force removing a running container has consequences for any data inside it.
The commands below handle every version of this: stopped containers, running ones, bulk cleanup with docker container prune, filter-based deletion, and the volume cleanup most guides skip.
Remove a single container
If the container is already stopped, this is the whole story:
docker rm my-container
my-container
docker rm and docker container rm are aliases. They do exactly the same thing. The longer form exists because Docker's CLI was reorganized into management commands (docker container, docker image, docker network, etc.) and the short forms were kept for backwards compatibility. Use whichever you prefer.
You can pass the container name or its ID, and you can list multiple containers in one call:
docker rm web-1 web-2 worker-3
If you don't know the name or ID, list every container on the host with docker ps -a (the -a flag includes stopped containers, which docker ps alone hides):
docker ps -a
CONTAINER ID   IMAGE          STATUS                   NAMES
a4f8c9e12345   nginx:latest   Exited (0) 2 hours ago   web-1
b2c1d8e23456   redis:7        Up 5 minutes             cache
Remove a running container
Try to remove a container that's still running and Docker refuses:
docker rm cache
Error response from daemon: cannot remove container "/cache": container is running: stop the container before removing or force remove
You have two options. The clean approach is to stop it first, then remove it:
docker stop cache
docker rm cache
docker stop sends SIGTERM to the main process, waits 10 seconds (configurable with -t), then sends SIGKILL if the process hasn't exited. This gives the application a chance to flush buffers, close connections, and shut down cleanly. For anything stateful (databases, message brokers, queue workers), use this path.
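If ten seconds isn't enough for your service to shut down, raise the timeout rather than reaching for a force removal. A small sketch, assuming a container named cache:

```shell
# Allow up to 30 seconds for a clean exit before Docker sends SIGKILL
docker stop -t 30 cache
docker rm cache
```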
The faster option is -f, which force removes the container without waiting:
docker rm -f cache
This sends SIGKILL directly. No grace period. The container is gone in a fraction of a second. Fine for stateless containers, ephemeral test environments, or anything where you don't care about clean shutdown. Don't use it on a database container that's mid-write unless you enjoy fsck.
Remove the volumes too
By default, docker rm leaves anonymous volumes behind. These are volumes the container created automatically (typically through VOLUME instructions in the Dockerfile) without a name you assigned. Over months, they accumulate as orphaned data in /var/lib/docker/volumes/ that nothing references and nothing cleans up.
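You can check how much of this has piled up before removing anything. A dangling volume is one no container references anymore:

```shell
# List orphaned (dangling) volumes
docker volume ls --filter dangling=true

# Remove all of them, with a confirmation prompt
docker volume prune
```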
Add -v to delete anonymous volumes along with the container:
docker rm -v my-container
Named volumes (created with docker volume create or specified by name in docker run -v myvolume:/data) are never removed by docker rm, even with -v. That's deliberate. Named volumes are explicitly managed resources, and Docker assumes you want them to outlive any specific container. Remove them separately with docker volume rm once you're sure nothing needs them.
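To make the distinction concrete, here is the two-step cleanup for a container that used both kinds of volumes (my-container and myvolume are placeholder names):

```shell
# Removes the container and its anonymous volumes only
docker rm -v my-container

# The named volume has to be removed explicitly
docker volume rm myvolume
```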
Remove all stopped containers at once
When you've been iterating in development for a while, docker ps -a looks like a graveyard. The fastest way to clean it up is docker container prune:
docker container prune
WARNING! This will remove all stopped containers.
Are you sure you want to continue? [y/N] y
Deleted Containers:
4a7f7eebae0f63178aff7eb0aa39cd3f0627a203ab2df258c1a00b456cf20063
f98f9c2aa1eaf727e4ec9c0283bc7d4aa4762fbdba7f26191f26c97f64090360

Total reclaimed space: 212 B
Skip the prompt with -f:
docker container prune -f
prune only touches stopped containers. Anything currently running is safe.
Selective bulk removal with filters
When you want to remove a subset of containers, docker container prune accepts a --filter flag. The two supported filter keys are until and label.
Remove containers created more than 24 hours ago:
docker container prune -f --filter "until=24h"
until accepts Go duration strings (10m, 1h30m, 24h), Unix timestamps, and RFC3339 dates. There's a trap worth knowing: the filter matches the container's creation time, not when it stopped. A container created three days ago and stopped five minutes ago will still get pruned by --filter "until=24h". If you specifically need to preserve recently stopped containers for debugging, prune isn't the right tool. Use docker ps -a --filter "status=exited" --format '{{.ID}} {{.Names}} {{.Status}}' and pick what to remove manually.
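The value formats are interchangeable. These two commands prune the same containers (the date is a placeholder; 1705276800 is the Unix timestamp for the same instant):

```shell
# RFC3339 date, explicit UTC
docker container prune -f --filter "until=2024-01-15T00:00:00Z"

# The same cutoff as a Unix timestamp
docker container prune -f --filter "until=1705276800"
```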
Remove containers by label, which is useful when you tag containers in your CI pipeline or compose files:
docker container prune -f --filter "label=environment=staging"
Or invert the filter to remove everything that isn't tagged with a "keep" label:
docker container prune -f --filter "label!=keep"
For removal that doesn't fit the prune filters (for example, by name pattern or status), combine docker ps and docker rm:
docker rm $(docker ps -aq --filter "status=exited" --filter "name=test-")
docker ps -aq returns just the container IDs matching the filters, and command substitution feeds them into docker rm. This works the same way in bash, zsh, and PowerShell.
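One edge case: if nothing matches the filters, the substitution expands to nothing and docker rm fails with "requires at least 1 argument". Piping through xargs sidesteps that (-r is the GNU xargs flag for skipping empty input; BSD/macOS xargs already behaves that way by default):

```shell
# Runs docker rm only when there is at least one matching ID
docker ps -aq --filter "status=exited" --filter "name=test-" | xargs -r docker rm
```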
Auto-remove containers when they exit
If you find yourself running docker rm after every docker run, you're doing manual work Docker can do for you. The --rm flag tells Docker to delete the container as soon as the main process exits:
docker run --rm -it ubuntu:24.04 bash
When you exit the shell, the container is gone. No docker ps -a clutter. This is the right default for one-shot containers: builds, test runs, debug shells. It doesn't fit long-running services, where you usually want the container to stick around if it crashes so you can inspect it.
Common pitfalls
Removing a container does not remove its image. docker rm only deletes the container, which is a thin writable layer on top of an image. The image itself stays on disk and shows up in docker images. To remove that, see our separate guide on removing Docker images.
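If you do want the disk space back, image removal is its own step (nginx:latest is a placeholder tag):

```shell
# Remove one specific image
docker rmi nginx:latest

# Or remove all dangling (untagged) images
docker image prune
```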
Force removing a container mid-write can corrupt data. docker rm -f doesn't unmount cleanly. Bind mounts are usually fine because the host filesystem handles the writes, but anonymous volumes backing a database (Postgres, MySQL, MongoDB) can end up in an inconsistent state if you -f while a write is in flight. I've seen Postgres containers come back up needing manual recovery after this. Stop the container, confirm it exited cleanly, then remove.
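A safer sequence for stateful containers, sketched with a placeholder name db:

```shell
docker stop db

# Check the exit code; 0 means the process shut down cleanly
docker inspect --format '{{.State.ExitCode}}' db

docker rm -v db
```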
Compose containers come back from the dead. If you ran the container with docker compose up and you remove it with docker rm, the next docker compose up recreates it. To genuinely shut down a Compose project, use docker compose down, which stops and removes all containers, networks, and (with -v) the volumes defined in the compose file. If you're trying to debug what a container did before you tore it down, our guide to Docker Compose logs covers the log-collection side of this.
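The two tear-down variants side by side (run from the directory containing the compose file):

```shell
# Stop and remove the project's containers and networks
docker compose down

# Also remove the volumes the compose file defines
docker compose down -v
```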
Swarm services restart removed containers. Swarm doesn't manage individual containers; it manages services. Remove a container that belongs to one and the orchestrator immediately spins up a replacement. To actually stop it, scale the service to zero (docker service scale my-service=0) or remove the service entirely (docker service rm my-service).
Final thoughts
The commands above cover the day-to-day cases: single containers, force stops, bulk pruning, and the volume cleanup that keeps /var/lib/docker from growing without bound. Most of removing containers is straightforward. The parts that bite — orphaned anonymous volumes, the until filter using creation time, Swarm restarting what you just deleted — only show up once you're past your first dozen containers.
Once you're running containers in production, the question stops being "how do I remove this container" and becomes "why did this one exit, and what was it doing before it did?" That's a much harder thing to answer once the container is gone — which is why having container resource metrics and Docker logs collected somewhere outside the container matters before you start pruning.
Dash0's infrastructure monitoring ties container resource metrics to real-time logs and distributed traces, so you can trace a removed container back to whatever caused it to fail.
Start a free trial to monitor your container fleet in one view. No credit card required.