On Linux-based systems, aliases in the .bashrc file can speed up common Docker workflows and reduce human error. Here, I’ll cover some of the aliases I’ve set up for managing the Docker containers I run.
Intro
I’ve been an avid user of Docker containers on Debian-based (Ubuntu) servers to run a variety of services. While administering these services, I often find myself making typos in long docker-compose, run, stop, or rm-style commands. As I learned more about Linux system administration, I came across the ~/.bashrc file and the preferences and macros that can be set up there.
If you’re not familiar with them, aliases let you define short commands, or shortcuts, that expand into longer or more complex commands. They’re really helpful for reducing human error and taming common commands you need to run often.
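For example, a trivial alias in ~/.bashrc might look like this (dps is just an illustrative name, not one of my actual aliases):
# Shorten "docker ps" to "dps"
alias dps='docker ps'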
.bashrc aliases for Docker
Here are my most commonly used aliases for Docker workflows, grouped by similar commands:
Stopping and removing containers
# Stop all containers
dstop() { docker stop $(docker ps -a -q); }
# Remove all (already stopped) containers
drm() { docker rm $(docker ps -a -q); }
# Stop and remove all containers
alias drmf='docker stop $(docker ps -a -q) && docker rm $(docker ps -a -q)'
The three shortcuts above let me stop all containers, remove all stopped containers, or do both with a single command, respectively. Of the three, I most often run drmf, mainly before performing Dockerfile maintenance or upgrading to newer images.
Script to stop/remove containers, download new images, restart containers with new images, and prune unused images and volumes
# Stop and remove all containers, download latest images, restart containers (in current directory), and prune unused images and volumes
alias updatecontainers="~/scripts/updatecontainers.sh"
This alias references a relatively simple bash script called updatecontainers.sh. The script runs the commands below in succession: it stops a stack of Docker containers, removes the containers, pulls new images, brings the stack back up, and prunes any dangling images and unused volumes that are no longer needed.
echo "Stopping containers"
docker stop $(docker ps -a -q)
echo "Containers stopped"
echo "Removing containers"
docker rm $(docker ps -a -q)
echo "Containers removed"
echo "All containers stopped and removed"
echo "Pulling new images"
docker-compose pull
echo "Pulled new images"
echo "Bringing up containers"
docker-compose up -d
echo "Brought up containers"
echo "Removing old containers and their volumes"
docker system prune -f --volumes
echo "Removed old containers and their volumes"
I’m torn between two other potential solutions to this problem, though. Right now, I’m testing Watchtower in a Docker container to handle automated image updates, but I often feel like a lazy home sysadmin for pulling new images nightly instead of pinning stable versions of the various images.
Script to print the public (WAN) IP address of running containers
# Print the external IP address of each container
alias dipaddress="~/scripts/ipcheckcontainers.sh"
The above alias references a bash script I found on GitHub (citation needed). The script runs these commands in succession:
for container in $(docker ps -q); do
  echo -ne '\n'
  # print the name of the container
  docker inspect --format='{{.Name}}' "$container"
  # query an external service for the container's public IP
  # (assumes curl is available inside the container)
  docker exec "$container" curl -s ifconfig.me
done
I like this script/alias quite a bit. It has each running container print its public IP address, and I often use it to verify that certain containers report the same IP address as the VPN client container they should be routing their traffic through (and to spot when that routing is broken).
I would love to build some automation, monitoring, and alerting around this: automatically fix the issue if possible, or, if that’s not feasible, monitor for the breakage, automatically stop containers when the VPN client connection isn’t there, send me an alert, and point me to a playbook entry I wrote for how to fix it.
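As a starting point, here’s a minimal sketch of that kill-switch idea. The container names (vpn, torrent-client) are hypothetical placeholders, and it assumes curl is available inside each container:
#!/usr/bin/env bash
# Hypothetical sketch: stop any container whose public IP doesn't
# match the VPN client's. "vpn" and "torrent-client" are placeholder
# names -- substitute your own containers.
vpn_ip=$(docker exec vpn curl -s ifconfig.me)
for c in torrent-client; do
  c_ip=$(docker exec "$c" curl -s ifconfig.me)
  if [ "$c_ip" != "$vpn_ip" ]; then
    echo "ALERT: $c reports $c_ip, expected VPN IP $vpn_ip; stopping it"
    docker stop "$c"
  fi
done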
Bring up multiple docker-compose files at once
# Detached Docker Compose from directory
# Change the directory here to where your docker-compose file is located
alias dcud="docker-compose -f /srv/downloadstack/docker-compose.yml up -d && docker-compose -f /srv/utilities/docker-compose.yml up -d && docker-compose -f /srv/automation/docker-compose.yml up -d"
# Detached Docker Compose from directory and build
# Change the directory here to where your docker-compose file is located
alias dcudb="docker-compose -f /srv/downloadstack/docker-compose.yml up -d --build && docker-compose -f /srv/utilities/docker-compose.yml up -d --build && docker-compose -f /srv/automation/docker-compose.yml up -d --build"
Finally, the aliases above are the ones I use most often in conjunction with drmf: after maintenance or updates, I run them to bring the services back up. I found that passing each docker-compose file’s path with -f is a neat way to bring up multiple docker-compose files at once. If I add a new docker-compose file that needs to run on my server, I append another command to the alias so that new file gets brought up too.
I would love to find a more elegant way to do this, but for now I’m quite happy with this simple solution.
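One slightly tidier variant, sketched here with the same paths as above, replaces the chained alias with a shell function that loops over the stack directories:
# Sketch: loop over stack directories instead of chaining commands.
# Paths match the aliases above; adjust them to your own layout.
dcud() {
  for stack in /srv/downloadstack /srv/utilities /srv/automation; do
    docker-compose -f "$stack/docker-compose.yml" up -d
  done
}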
Future Ideas
I’d love to split my aliases out into a separate file and reserve .bashrc for preferences/config only. I would also like to find a more elegant way to bring multiple docker-compose services up and down at once. And I’m always looking for new aliases, or, even better, automations, to make managing servers less time-consuming.
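The first idea is easy to get started on: Ubuntu’s default .bashrc already sources a ~/.bash_aliases file if one exists, so the aliases can simply move there.
# In ~/.bashrc (Ubuntu's stock .bashrc already includes this block):
if [ -f ~/.bash_aliases ]; then
    . ~/.bash_aliases
fi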