
Docker & Kubernetes: Our Continuous Deployment Workflow

Wapiki Team
January 5, 2026
10 min read
Docker · Kubernetes · CI/CD · DevOps · GitHub Actions

Our DevOps workflow

At Wapiki, we deploy to production more than 20 times a week. Our automated workflow lets us ship fast while maintaining quality.

CI/CD architecture

1. GitHub Actions for CI

```yaml
# .github/workflows/ci.yml
name: CI Pipeline

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '18'
          cache: 'npm'

      - name: Install dependencies
        run: npm ci

      - name: Run linter
        run: npm run lint

      - name: Run tests
        run: npm run test:ci

      - name: Build
        run: npm run build

  build-and-push:
    needs: test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Login to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKER_USERNAME }}
          password: ${{ secrets.DOCKER_PASSWORD }}

      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: wapiki/keneya-frontend:${{ github.sha }},wapiki/keneya-frontend:latest
```

2. Optimized multi-stage Dockerfile

```dockerfile
# Build stage
FROM node:18-alpine AS builder

WORKDIR /app

# Install all dependencies: devDependencies are needed by `npm run build`
COPY package*.json ./
RUN npm ci

COPY . .
RUN npm run build

# Production stage
FROM node:18-alpine

WORKDIR /app

# Install production dependencies only
# (`--omit=dev` is the current form of the deprecated `--only=production`)
COPY package*.json ./
RUN npm ci --omit=dev

COPY --from=builder /app/dist ./dist

ENV NODE_ENV=production
EXPOSE 3000

CMD ["node", "dist/main.js"]
```

Result: Docker image size cut from 1.2 GB to 180 MB
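Image size is only half the story: the build context sent to the Docker daemon matters too. A minimal `.dockerignore` sketch (assumed, not shown in the article) that keeps local artifacts out of `COPY . .`:

```
# Keep heavy or host-specific files out of the build context
node_modules
dist
.git
*.log
.env
```

Excluding `node_modules` also prevents host-installed binaries from leaking into the Alpine-based image.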

Kubernetes deployment

Cluster architecture

  • **3 nodes**: 1 master, 2 workers
  • **Ingress**: NGINX Ingress Controller
  • **Load balancer**: MetalLB
  • **Storage**: Persistent Volumes (SSD)

Deployment configuration

```yaml
# k8s/deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keneya-frontend
  namespace: production
spec:
  replicas: 3
  selector:
    matchLabels:
      app: keneya-frontend
  template:
    metadata:
      labels:
        app: keneya-frontend
    spec:
      containers:
      - name: frontend
        image: wapiki/keneya-frontend:latest
        ports:
        - containerPort: 3000
        env:
        - name: NODE_ENV
          value: "production"
        - name: API_URL
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: api_url
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
        livenessProbe:
          httpGet:
            path: /health
            port: 3000
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready
            port: 3000
          initialDelaySeconds: 5
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: keneya-frontend
  namespace: production
spec:
  selector:
    app: keneya-frontend
  ports:
  - port: 80
    targetPort: 3000
  type: ClusterIP
```
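The deployment reads `API_URL` through a `configMapKeyRef` named `app-config`, which the article never shows. A hypothetical sketch of what it might look like (the `api_url` value is a placeholder, not a real endpoint):

```yaml
# Hypothetical ConfigMap backing the configMapKeyRef above (not from the article)
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: production
data:
  api_url: "https://api.example.com"   # placeholder value
```

Keeping environment-specific values in a ConfigMap lets the same image run unchanged across namespaces.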

Ingress with SSL

```yaml
# k8s/ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keneya-ingress
  namespace: production
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - keneya.com
    secretName: keneya-tls
  rules:
  - host: keneya.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: keneya-frontend
            port:
              number: 80
```
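The `cert-manager.io/cluster-issuer: "letsencrypt-prod"` annotation assumes a matching `ClusterIssuer` exists; the article does not show it. A plausible sketch using cert-manager's ACME HTTP-01 solver (the email is a placeholder you must replace):

```yaml
# Hypothetical ClusterIssuer referenced by the Ingress annotation above
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # Let's Encrypt production ACME endpoint
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com   # placeholder contact address
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
    - http01:
        ingress:
          class: nginx
```

With this in place, cert-manager provisions and renews the `keneya-tls` certificate automatically.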

Auto-scaling

```yaml
# k8s/hpa.yml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: keneya-frontend-hpa
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: keneya-frontend
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```
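A CPU-based HPA only works if the cluster exposes resource metrics, typically via metrics-server (an assumption the article does not state). A quick sketch for verifying the autoscaler is live:

```bash
# Current vs. target CPU utilization and replica count
kubectl get hpa keneya-frontend-hpa -n production

# Watch the autoscaler react in real time, e.g. during a load test
kubectl get hpa keneya-frontend-hpa -n production --watch
```

If the TARGETS column shows `<unknown>/70%`, metrics are not being collected and no scaling will happen.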

Monitoring with Prometheus & Grafana

Prometheus configuration

```yaml
# prometheus/prometheus.yml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'kubernetes-pods'
    kubernetes_sd_configs:
    - role: pod
    relabel_configs:
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
```
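The `keep` rule above means only annotated pods are scraped; the pod template must opt in. A sketch of the annotations this implies (assumed, not shown in the article's Deployment; a fuller setup usually adds relabel rules for the port and path annotations too):

```yaml
# Hypothetical pod-template metadata matching the relabel rule above
template:
  metadata:
    labels:
      app: keneya-frontend
    annotations:
      prometheus.io/scrape: "true"
      prometheus.io/port: "3000"
      prometheus.io/path: "/metrics"
```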

Grafana dashboards

We monitor:

  • 📊 **Traffic**: requests/s, P95 and P99 latency
  • 💾 **Resources**: CPU, RAM, disk
  • 🐛 **Errors**: error rate, stack traces
  • 📱 **Business**: sign-ups, consultations, payments

Automated deployment

```bash
#!/bin/bash
# deploy.sh
set -euo pipefail

# Fail fast if no version tag was provided
: "${VERSION:?VERSION must be set, e.g. VERSION=abc123 ./deploy.sh}"

# 1. Build and push image
docker build -t wapiki/keneya-frontend:$VERSION .
docker push wapiki/keneya-frontend:$VERSION

# 2. Update Kubernetes
kubectl set image deployment/keneya-frontend \
  frontend=wapiki/keneya-frontend:$VERSION \
  -n production

# 3. Wait for rollout
kubectl rollout status deployment/keneya-frontend -n production

# 4. Health check (no -it: this runs non-interactively)
kubectl exec deployment/keneya-frontend -n production -- curl -fsS localhost:3000/health
```
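The fast rollbacks quoted below rely on Kubernetes keeping the previous ReplicaSets around. A hypothetical companion script (not from the article) sketching that path:

```bash
#!/bin/bash
# rollback.sh — hypothetical companion to deploy.sh
set -euo pipefail

# Inspect rollout history to identify the revision to return to
kubectl rollout history deployment/keneya-frontend -n production

# Undo the last rollout (add --to-revision=N to target a specific one)
kubectl rollout undo deployment/keneya-frontend -n production

# Block until the rolled-back pods are ready
kubectl rollout status deployment/keneya-frontend -n production
```

Because the old ReplicaSet's pods are recreated rather than rebuilt, this completes in seconds.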

Results

  • 🚀 **Deployments**: 20+ per week
  • ⚡ **Rollout time**: <2 minutes
  • 🔄 **Rollback**: <30 seconds
  • 📈 **Uptime**: 99.9%
  • 🐛 **Zero-downtime deployments**: ✅

Conclusion

A good DevOps workflow transforms a team's velocity. The initial investment in automation quickly pays for itself.


*Need help with your infrastructure? [Contact us](/contact).*

Enjoyed this article?

Share it with your network!