
Debugging: Traefik 404 on Minikube (Rancher Desktop Port Conflict)

Ravi Singh
Software engineer with 15+ years building backend systems and cloud platforms across fintech, automotive, and academia. I write about the things I build, debug, and learn — so I don’t forget them.
Learning ArgoCD - This article is part of a series.
Part 3: This Article


The Symptom

After bootstrapping the eu-prod-minikube environment and running minikube tunnel, all services were Synced and Healthy in ArgoCD. TLS certificates were issued. Ingresses showed ADDRESS: 127.0.0.1. Yet every request returned 404.

curl -sk https://svc1.eu-prod-minikube.ravikrs.local -o /dev/null -w "%{http_code}"
# 404

Debugging Layer by Layer

Layer 1 - Is it an ArgoCD sync problem?

First check: are all applications actually healthy, or is something not synced?

kubectl --context minikube get applications -n argocd
NAME                          SYNC STATUS   HEALTH STATUS
argocd-config                 Synced        Healthy
cert-manager                  Synced        Healthy
cert-manager-config           Synced        Healthy
alpha-svc1-eu-prod-minikube   Synced        Healthy
alpha-svc2-eu-prod-minikube   Synced        Healthy
reloader                      Synced        Healthy
root-eu-prod-minikube         Synced        Healthy
traefik                       Synced        Healthy

All healthy. Not a sync problem.


Layer 2 - Do the Ingress and Certificate resources exist?

kubectl --context minikube get ingress -A
kubectl --context minikube get certificates -A
NAMESPACE    NAME            CLASS     HOSTS                                   ADDRESS     PORTS
argocd       argocd-server   traefik   argocd.eu-prod-minikube.ravikrs.local   127.0.0.1   80, 443
alpha-prod   svc1            traefik   svc1.eu-prod-minikube.ravikrs.local     127.0.0.1   80, 443
alpha-prod   svc2            traefik   svc2.eu-prod-minikube.ravikrs.local     127.0.0.1   80, 443

NAME                          READY   SECRET
argocd-eu-prod-minikube-tls   True    argocd-eu-prod-minikube-tls
svc1-eu-prod-minikube-tls     True    svc1-eu-prod-minikube-tls
svc2-eu-prod-minikube-tls     True    svc2-eu-prod-minikube-tls

Ingresses have ADDRESS: 127.0.0.1. Certificates are READY: True. The cluster resources look completely healthy.


Layer 3 - Is the connection actually reaching Traefik?

A plain 404 could mean:

  • Nothing is listening on 127.0.0.1:443 (connection refused)
  • Something is listening but has no matching route (Traefik’s own 404)

Use verbose curl to see what happens at the TLS layer:

curl -vsk https://svc1.eu-prod-minikube.ravikrs.local 2>&1 | grep -E "Connected|SSL|HTTP|title"
* Connected to svc1.eu-prod-minikube.ravikrs.local (127.0.0.1) port 443
* SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256
< HTTP/2 404

Key finding: TLS handshake succeeds and HTTP/2 404 is returned - something IS listening and responding. The 404 is from an HTTP server, not a network error. This rules out the LoadBalancer/routing being broken at the network level.

Check the response body to identify who is returning the 404:

curl -sk https://svc1.eu-prod-minikube.ravikrs.local
# 404 page not found

The body is 404 page not found, served as content-type: text/plain; charset=utf-8 with x-content-type-options: nosniff. This is Traefik's own 404 response: Traefik is receiving the request but has no matching router.
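To make that fingerprint concrete, the full response headers can be dumped (a sketch using the same hostname as above; the exact header set may vary by Traefik version):

```shell
# Print the status line and all response headers, discarding the body.
# A bare text/plain "404 page not found" with x-content-type-options:
# nosniff is Traefik's (Go's) built-in not-found response; an nginx
# ingress, by contrast, would serve an HTML error page.
curl -sk -D - https://svc1.eu-prod-minikube.ravikrs.local -o /dev/null
```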

Layer 4 - Is Traefik reading the Ingresses?

Check Traefik logs for startup errors:

kubectl --context minikube logs -n ingress deployment/traefik --tail=20
ERR Error configuring TLS: "secret alpha-prod/svc1-eu-prod-minikube-tls does not exist"
ERR Error configuring TLS: "secret argocd/argocd-eu-prod-minikube-tls does not exist"
INF Updated ingress status ingress=svc1 namespace=alpha-prod
INF Updated ingress status ingress=svc2 namespace=alpha-prod

Two things to notice:

  1. The TLS errors are from startup (before cert-manager finished issuing). After the certs appeared, Traefik logged Updated ingress status - it IS watching and reading the Ingresses.
  2. Traefik is updating the Ingress .status.loadBalancer.ingress field - this confirms it has RBAC and is processing the Ingress objects.

Restart Traefik to force a clean start (certs are now ready):

kubectl --context minikube rollout restart deployment/traefik -n ingress
sleep 15
curl -sk https://svc1.eu-prod-minikube.ravikrs.local -o /dev/null -w "%{http_code}"
# still 404

Layer 5 - Is IngressClass configured correctly?

kubectl --context minikube get ingressclass
NAME      CONTROLLER                      PARAMETERS   AGE
traefik   traefik.io/ingress-controller   <none>       11m

IngressClass exists and has the correct controller. The Ingresses reference ingressClassName: traefik. RBAC check:

kubectl --context minikube auth can-i list ingresses --namespace alpha-prod \
  --as=system:serviceaccount:ingress:traefik
# yes

RBAC is fine. Traefik can read Ingresses from all namespaces.


Layer 6 - Does the Traefik CRD provider work?

Standard Kubernetes Ingress uses one provider; Traefik’s own IngressRoute CRD uses another. Test whether the CRD provider routes correctly:

kubectl --context minikube apply -f - <<'EOF'
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: svc1-test
  namespace: alpha-prod
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`svc1.eu-prod-minikube.ravikrs.local`)
      kind: Rule
      services:
        - name: svc1
          port: 80
  tls:
    secretName: svc1-eu-prod-minikube-tls
EOF

sleep 3
curl -sk https://svc1.eu-prod-minikube.ravikrs.local -o /dev/null -w "%{http_code}"
# 404

Even Traefik’s own CRD doesn’t route. This is significant - it means the issue is not specific to the Kubernetes Ingress provider. Traefik is receiving requests but matching no routes at all, regardless of how they are defined.

Try HTTP to rule out TLS-specific problems:

kubectl --context minikube apply -f - <<'EOF'
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: svc1-test-http
  namespace: alpha-prod
spec:
  entryPoints:
    - web
  routes:
    - match: Host(`svc1.eu-prod-minikube.ravikrs.local`)
      kind: Rule
      services:
        - name: svc1
          port: 80
EOF

curl -s http://svc1.eu-prod-minikube.ravikrs.local -o /dev/null -w "%{http_code}"
# 404

Same 404 on HTTP/port 80. Traefik is not routing ANY request for any prod hostname.


Layer 7 - Is Traefik actually receiving these requests?

Check what is listening on ports 80 and 443 on the Mac:

lsof -i :80 -i :443 | grep LISTEN
ssh   41648  ravisingh  41u  IPv4  TCP *:http (LISTEN)
ssh   41648  ravisingh  42u  IPv4  TCP *:https (LISTEN)

There is an SSH process listening on ALL interfaces (*) for both port 80 and 443. This is not Traefik - this is a system SSH process.

When curl connects to 127.0.0.1:443, it hits this SSH listener, NOT Minikube’s Traefik. That’s why the TLS handshake works (the tunnel forwards the raw bytes to a backend that does terminate TLS) but every route returns 404 (that backend has no routes for *.eu-prod-minikube.ravikrs.local).

Identify what this SSH process is:

ps -p 41648 -o command=
ssh: /Users/ravisingh/Library/Application Support/rancher-desktop/lima/0/ssh.sock [mux]

It’s Rancher Desktop’s Lima SSH connection. Rancher Desktop maintains a persistent SSH tunnel to its Lima VM (which runs k3s). That SSH tunnel holds the port 80/443 bindings - forwarding traffic to Rancher Desktop’s own k3s Traefik.
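Another way to confirm who is answering (a sketch; assumes openssl is installed) is to look at the certificate actually served on 127.0.0.1:443. A Traefik instance with no TLS configuration for the requested SNI name falls back to its self-signed "TRAEFIK DEFAULT CERT", which the Minikube Traefik holding the cert-manager-issued certificates would never present:

```shell
# Inspect the subject and issuer of the certificate presented for this
# SNI name. If it is Traefik's default self-signed cert rather than the
# cert-manager-issued one, the responder is the wrong Traefik.
echo | openssl s_client -connect 127.0.0.1:443 \
    -servername svc1.eu-prod-minikube.ravikrs.local 2>/dev/null \
  | openssl x509 -noout -subject -issuer
```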


Layer 8 - Why does minikube tunnel assign 127.0.0.1?

minikube profile list
PROFILE   DRIVER   RUNTIME   IP             VERSION
minikube  docker   docker    192.168.49.2   v1.35.1

Minikube is using the Docker driver. On macOS, the Docker driver runs Minikube inside Rancher Desktop’s Docker daemon. minikube tunnel with the Docker driver assigns 127.0.0.1 as the external IP for LoadBalancer services, expecting Docker’s port forwarding to handle the routing. But Rancher Desktop’s SSH process already owns those ports.
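This sharing is easy to see from the host (container naming assumed from the docker driver's defaults): the Minikube "node" is just a container inside the one local Docker daemon, which Rancher Desktop provides:

```shell
# The docker driver creates the Minikube node as a container in the
# local dockerd - here, Rancher Desktop's. If this lists a container,
# Minikube and Rancher Desktop share one Docker environment.
docker ps --filter name=minikube \
  --format '{{.Names}}\t{{.Image}}\t{{.Ports}}'
```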

Confirm minikube tunnel is running but has no effect on port bindings:

ps aux | grep "minikube tunnel"
# minikube tunnel is running (PID 59336)

lsof -i :443 | grep LISTEN
# Only ssh PID 41648 - minikube tunnel created no new listener

Root cause confirmed: 127.0.0.1:443 → Rancher Desktop SSH → Rancher Desktop Traefik → no route for *.eu-prod-minikube.ravikrs.local → 404.


Layer 9 - Verify Traefik works when bypassing the LoadBalancer

Use kubectl port-forward to create a direct tunnel to the Traefik pod through the Kubernetes API, completely bypassing Docker networking and the LoadBalancer:

kubectl --context minikube port-forward -n ingress svc/traefik 8443:443 8080:80 &
sleep 3
curl -sk https://svc1.eu-prod-minikube.ravikrs.local:8443 -o /dev/null -w "%{http_code}"
# 200

200. Traefik was correctly configured all along. The routing was working inside the cluster. The problem was entirely in how traffic from the Mac reached Traefik.
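For completeness, the same conclusion can be reached from inside the cluster, with no host networking in the path at all (a sketch; assumes the public curlimages/curl image and that the Traefik Service is named traefik in the ingress namespace, as in this setup):

```shell
# Curl the Traefik Service directly from a throwaway pod, supplying the
# Host header the router matches on. Success here shows routing works
# inside the cluster regardless of how the Mac reaches it.
kubectl --context minikube run curl-test --rm -i --restart=Never \
  --image=curlimages/curl -- \
  -sk https://traefik.ingress.svc.cluster.local \
  -H "Host: svc1.eu-prod-minikube.ravikrs.local" \
  -o /dev/null -w "%{http_code}\n"
```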


Root Cause Summary

curl https://svc1.eu-prod-minikube.ravikrs.local
  → resolves to 127.0.0.1 (/etc/hosts)
  → connects to 127.0.0.1:443
  → hits Rancher Desktop's SSH process (*:443 LISTEN)
  → forwarded to Rancher Desktop's k3s Traefik
  → no route for eu-prod-minikube hostname
  → 404

Expected path (via port-forward):
curl https://svc1.eu-prod-minikube.ravikrs.local:8443
  → connects to 127.0.0.1:8443 (kubectl port-forward listener)
  → tunnelled through kube-apiserver to Traefik pod directly
  → Traefik has the route
  → 200

The conflict:

  • Minikube uses Docker driver → assigns 127.0.0.1 as LoadBalancer IP
  • Rancher Desktop holds *:80 and *:443 via its Lima SSH tunnel
  • Both live in the same Docker environment on the same Mac

Fix

# Keep this running in a terminal whenever you need to access the prod cluster
kubectl --context minikube port-forward -n ingress svc/traefik 8443:443 8080:80

# Access prod services
open https://svc1.eu-prod-minikube.ravikrs.local:8443
open https://svc2.eu-prod-minikube.ravikrs.local:8443
open https://argocd.eu-prod-minikube.ravikrs.local:8443

kubectl port-forward routes: Mac → kube-apiserver → Traefik pod. It bypasses the LoadBalancer, Docker networking, and the port conflict entirely.

Permanent Fix (if standard ports are required)

Switch Minikube to a VM-based driver that does not share Rancher Desktop’s Docker daemon:

minikube delete
minikube start --driver=qemu2

With qemu2, Minikube runs in its own VM with its own network stack. minikube tunnel creates a proper kernel route to the VM, and port 443 is not contested.
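After recreating the cluster, the difference should be visible in the LoadBalancer status (a sketch; service name and namespace follow this article's setup, and minikube tunnel may prompt for sudo to install the route):

```shell
# With a VM driver, `minikube tunnel` installs a route to the VM's
# service network instead of relying on Docker port forwarding, so the
# Traefik LoadBalancer gets an IP that is not the contested 127.0.0.1.
minikube tunnel &
sleep 5
kubectl --context minikube get svc -n ingress traefik \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```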


Debugging Instincts Built Here

Step                                 What we checked                    What it told us
get applications                     ArgoCD sync state                  Not a GitOps/sync problem
get ingress, get certificates        Resource existence                 Cluster config is correct
curl -v                              TLS handshake                      Something IS listening and responding
Response body "404 page not found"   Who is returning the 404           Traefik's own 404, not a network error
Traefik logs                         Startup errors, status updates     Traefik IS reading ingresses
IngressRoute CRD test                Is it provider-specific?           No - CRD routes also fail
HTTP IngressRoute test               Is it TLS-specific?                No - HTTP also fails
lsof -i :443                         What owns port 443                 SSH process, not Traefik
ps -p <pid>                          Which SSH process                  Rancher Desktop's Lima tunnel
minikube profile list                Minikube driver                    Docker driver - shares Rancher Desktop
port-forward test                    Does Traefik route correctly?      Yes - config was correct all along