How to notify a VPS from a CI/CD pipeline to pull a newly released Docker image, and perform a zero-downtime upgrade
Hey there! You've already nailed the core CI/CD steps—pulling code, setting up environments, building multi-stage Docker images, and pushing to your registry. Let's tackle those two final pieces to get your VPS auto-updating smoothly.
1. Automating VPS Notifications for New Images
There are three solid approaches to let your VPS know when a new image is ready, each with tradeoffs depending on your needs:
Webhook from GitHub Actions
Add a step to your GitHub Actions workflow right after pushing the image that sends a POST request to a listener on your VPS. Here's how:
- In your GitHub Actions workflow, add this after your `docker push` step:

  ```yaml
  - name: Trigger VPS update
    uses: fjogeleit/http-request-action@v1
    with:
      url: 'http://your-vps-ip:8080/update'
      method: 'POST'
      customHeaders: '{"Authorization": "Bearer YOUR_SECRET_TOKEN"}'
  ```

- On your VPS, set up a simple listener (a tiny bash script with `nc`, or a lightweight Node.js/Flask app) that runs your update script when it receives a valid POST request. For example, a basic bash listener:

  ```bash
  while true; do
    nc -l -p 8080 | while read line; do
      if echo "$line" | grep -q "POST /update"; then
        # Run your update script here
        /path/to/your/update-script.sh
      fi
    done
  done
  ```

Don't forget to add a secret token to authenticate the request so random people can't trigger updates!
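Whichever listener you choose, it helps to keep that token check in one small function. A minimal sketch, assuming the same token string you put in the Actions step (the function name `is_authorized` is made up for illustration):

```shell
#!/bin/bash
# Hypothetical helper: validate the Authorization header before touching Docker.
EXPECTED_TOKEN="YOUR_SECRET_TOKEN"   # assumed to match the token sent by GitHub Actions

is_authorized() {
  # $1 = raw Authorization header value, e.g. "Bearer YOUR_SECRET_TOKEN"
  [ "$1" = "Bearer $EXPECTED_TOKEN" ]
}

is_authorized "Bearer YOUR_SECRET_TOKEN" && echo "authorized"
is_authorized "Bearer wrong-token"       || echo "rejected"
```

Your listener would call this on the header it reads from the request and run the update script only when it returns success.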
Registry Webhooks
Most container registries (like Docker Hub, GitHub Container Registry) have built-in webhook support. Go to your registry's settings, add your VPS's webhook URL, and it will automatically send a notification every time a new image is pushed. This is cleaner than adding the step to Actions since the registry handles the trigger.
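For example, Docker Hub's push webhook delivers a JSON payload that includes the pushed tag under `push_data.tag`. A deliberately naive sketch of pulling the tag out in shell (the payload fragment below is an assumption based on Docker Hub's documented shape; a real listener should use a proper JSON parser such as `jq`):

```shell
#!/bin/bash
# Hypothetical Docker Hub push-webhook payload fragment
PAYLOAD='{"push_data":{"tag":"v1.2.3"},"repository":{"repo_name":"you/your-image"}}'

# Naive extraction with sed; fine for a sketch, fragile for production
TAG=$(echo "$PAYLOAD" | sed -n 's/.*"tag":"\([^"]*\)".*/\1/p')
echo "new tag pushed: $TAG"
```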
Polling (Simpler but Less Real-Time)
If you don't want to set up webhooks, you can have a cron job on your VPS that periodically checks for new images. For example, a cron task running every 5 minutes:
```bash
*/5 * * * * /path/to/check-for-update.sh
```
The script can pull the tag and compare the image ID before and after the pull (a `docker pull` is a cheap no-op when nothing has changed):

```bash
#!/bin/bash
BEFORE=$(docker image inspect --format='{{.Id}}' your-image:tag)
docker pull -q your-image:tag
AFTER=$(docker image inspect --format='{{.Id}}' your-image:tag)
if [ "$BEFORE" != "$AFTER" ]; then
  /path/to/update-script.sh
fi
```
This is easy to set up but has a delay between image push and update.
2. Full Upgrade Flow & Zero-Downtime Updates
Do You Need to Stop the Existing Container First?
If you do a naive update (stop → remove → pull → start), yes, you'll have downtime. But with Docker's features, you can avoid this entirely. Let's cover both approaches:
Basic Upgrade Flow (With Minor Downtime)
If downtime is acceptable (e.g., a personal project), the simple flow works:
- Stop the running container: `docker stop your-container`
- Remove the stopped container: `docker rm your-container`
- Pull the latest image: `docker pull your-image:tag`
- Start a new container with the same config: `docker run -d --name your-container -p 80:80 your-image:tag`
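If you script those four steps, it's worth guarding them so you can preview what will run. A hedged sketch (image and container names are placeholders; it defaults to a dry run, set `DRY_RUN=0` on the VPS to actually execute):

```shell
#!/bin/bash
set -e
IMAGE="your-image:tag"
NAME="your-container"

# Echo instead of executing unless DRY_RUN=0, so the script can be previewed safely
run() { if [ "${DRY_RUN:-1}" = "1" ]; then echo "would run: $*"; else "$@"; fi; }

run docker stop "$NAME"
run docker rm "$NAME"
run docker pull "$IMAGE"
run docker run -d --name "$NAME" -p 80:80 "$IMAGE"
```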
Zero-Downtime Smooth Upgrade (Recommended)
For production or services that can't go down, use one of these methods:
Docker Compose/Swarm Rolling Updates
One caveat first: the `deploy.update_config` keys below are only honored by Docker Swarm (`docker stack deploy`); plain `docker compose up -d` ignores them and simply recreates the container. With that in mind, configure rolling updates in your docker-compose.yml:

```yaml
services:
  your-service:
    image: your-image:tag
    ports:
      - "80:80"
    deploy:
      replicas: 2                 # Run at least 2 copies to avoid downtime
      update_config:
        parallelism: 1            # Update one container at a time
        delay: 10s                # Wait 10s between updates
        order: start-first        # Bring the new container up before stopping the old one
        failure_action: rollback  # Roll back if the update fails
        monitor: 30s              # Monitor health for 30s after each update
      restart_policy:
        condition: on-failure
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/health"]
      interval: 10s
      timeout: 5s
      retries: 3
```

Then, when you want to update (triggered by your webhook/cron), run:

```bash
docker stack deploy -c docker-compose.yml your-stack
```

Swarm will automatically:
- Start a new container with the latest image
- Wait for it to pass the health check
- Stop the old container
- Repeat for the remaining replicas
This ensures at least one container is always running.
Blue-Green Deployment
This is even safer—you run two identical environments (blue = old, green = new), switch traffic to green once you confirm it's working, then delete blue.
- Start the green (new) container on a different port or with a different name: `docker run -d --name your-service-green -p 8080:80 your-image:new-tag`
- Test the green service to make sure it's working: `curl http://localhost:8080`
- Update your reverse proxy (e.g., Nginx) to point traffic from port 80 to 8080
- Once confirmed, stop and remove the blue (old) container: `docker stop your-service-blue && docker rm your-service-blue`
- Rename the green container to the original name if needed, and point the proxy back at port 80 if you want.
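The proxy switch in step 3 can be a one-line edit plus a reload. A sketch, assuming an Nginx `proxy_pass` line; the file path and ports are placeholders (a real config lives under `/etc/nginx/`):

```shell
#!/bin/bash
# Hedged sketch: flip an Nginx proxy_pass target from the blue port to the green port.
CONF=/tmp/your-service.conf                                            # assumed path
printf 'location / { proxy_pass http://127.0.0.1:80; }\n' > "$CONF"    # current (blue)

# Point the proxy at the green container on 8080
sed -i 's|proxy_pass http://127.0.0.1:80;|proxy_pass http://127.0.0.1:8080;|' "$CONF"
grep proxy_pass "$CONF"

# On the real box you would then run: nginx -t && nginx -s reload
```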
Manual Rolling Update (Without Compose/Swarm)
If you're not using orchestration tools, you can mimic rolling updates manually:
- Start a new container on a temporary port: `docker run -d --name your-service-temp -p 8080:80 your-image:new-tag`
- Wait for it to be ready
- Update your reverse proxy to add the new container to the upstream pool
- Remove the old container from the proxy pool
- Stop and remove the old container: `docker stop your-service-old && docker rm your-service-old`
- Rename the temp container to the original name and adjust ports/proxy if needed.
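Step 2 ("wait for it to be ready") is worth making explicit rather than sleeping a fixed amount. A small retry helper, sketched with a hypothetical health-check URL:

```shell
#!/bin/bash
# Retry a command until it succeeds or the attempt budget runs out
wait_for() {
  tries=$1; shift
  i=0
  until "$@"; do
    i=$((i + 1))
    [ "$i" -ge "$tries" ] && return 1
    sleep 1
  done
}

# Usage on the VPS (the URL is an assumption about your app's health endpoint):
# wait_for 30 curl -fsS http://localhost:8080/health
```

Only add the container to the proxy pool once `wait_for` returns success.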
Full Automated Zero-Downtime Flow
Combine the webhook trigger with Docker Compose rolling updates for a fully automated pipeline:
- GitHub Actions builds and pushes the new image to your registry
- Actions sends a webhook to your VPS
- VPS listener runs the update script:

  ```bash
  #!/bin/bash
  cd /path/to/your/compose/dir
  docker compose pull your-service
  docker compose up -d --no-deps your-service
  # Clean up old images to save space
  docker image prune -f
  ```

- Docker Compose applies the update (pair this with the rolling-update or blue-green setup above if you need true zero downtime)
This question originally appeared on Stack Exchange, asked by Hossein Fallah.




