Why Can Docker Containers Ping by Name but HTTP Requests Fail?

Tags: docker networking, docker compose, connection refused, iptables

Docker containers ping by name but HTTP requests fail because DNS resolution and layer-3 connectivity are both working, but the target container's process binds to 127.0.0.1 (localhost) instead of 0.0.0.0, or you're hitting the wrong port. Ping uses ICMP, which proves only that the network route exists. It tells you nothing about whether a TCP port is open and accepting connections. To fix this, make your application bind to 0.0.0.0 inside the container and ensure that the port you're requesting matches the port the process actually listens on, not the host-mapped port.

The Most Common Cause: Binding to Localhost

Many frameworks default to binding on 127.0.0.1 in development mode. Inside a container, 127.0.0.1 refers to the container's own loopback, so no other container can reach it. You must bind to 0.0.0.0. The following Flask example reproduces the bug and shows the fix.

# Broken: Flask's development server defaults to binding 127.0.0.1.
from flask import Flask
app = Flask(__name__)

if __name__ == "__main__":
    # This only accepts connections from inside this container.
    app.run(port=5000)

# Fixed: Bind to all interfaces so other containers can connect.
from flask import Flask
app = Flask(__name__)

if __name__ == "__main__":
    # 0.0.0.0 makes the server reachable from the Docker network.
    app.run(host="0.0.0.0", port=5000)

Diagnosing from Inside the Calling Container

After you suspect the issue, confirm it by exec-ing into the container that makes the request. Use curl or wget (not ping) because you need to test TCP, not ICMP. If curl isn't installed, you can install it or run a minimal TCP test with bash's /dev/tcp feature. Note that /dev/tcp is a bash-ism: it won't work in plain sh or BusyBox.

# From the calling container, test TCP connectivity on the actual port.
docker exec -it caller_container sh -c \
  "curl -v http://api_service:5000/health"

# If curl is not available and the image includes bash, test with a raw TCP
# connection via /dev/tcp (a bash feature; plain sh will not understand it).
docker exec -it caller_container bash -c \
  "(echo > /dev/tcp/api_service/5000) && echo 'PORT OPEN' || echo 'PORT CLOSED'"
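If the calling image has neither curl nor bash but does include Python, the same TCP check can be scripted with nothing but the standard library. The helper below is a sketch; the script name check_port.py and the container/service names in the usage comment follow the examples above and are assumptions.

```python
# check_port.py — minimal TCP reachability test using only the Python stdlib.
#
# Usage from the host (names are this article's examples):
#   docker exec -it caller_container python3 check_port.py api_service 5000
import socket
import sys


def check_port(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        # create_connection resolves the name and completes the TCP handshake,
        # which is exactly what ping cannot prove.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers connection refused, timeouts, and DNS resolution failures.
        return False


if __name__ == "__main__" and len(sys.argv) == 3:
    print("PORT OPEN" if check_port(sys.argv[1], int(sys.argv[2])) else "PORT CLOSED")
```

Unlike the /dev/tcp trick, this also distinguishes a refused connection from a timeout if you extend it to print the exception.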

Confirm What the Target Container Actually Listens On

Run netstat or ss inside the target container. If you see 127.0.0.1:5000 instead of 0.0.0.0:5000, that's your problem. You can also inspect from the host with docker inspect to verify port mappings and the container's IP address.

# Check which address the process is bound to inside the target.
docker exec -it api_service sh -c "ss -tlnp"

# Expected output for a working setup:
# LISTEN  0  128  0.0.0.0:5000  0.0.0.0:*  users:(("python",pid=1,fd=3))

# Broken output — bound to loopback only:
# LISTEN  0  128  127.0.0.1:5000  0.0.0.0:*  users:(("python",pid=1,fd=3))
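The host-side inspection mentioned above can be done without entering the container at all. These commands assume the container is named api_service, as in the earlier examples.

```shell
# Show published port mappings for the container (host side).
docker port api_service

# Print the container's IP address on each network it is attached to.
docker inspect api_service \
  --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}'
```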

The Container Port vs. Host Port Gotcha

When containers talk to each other on the same Docker network, they use the container's internal port, not the host-mapped port. A ports: "8080:5000" mapping means the host reaches the service at 8080, but another container on the same network must still use port 5000. This trips people up constantly. The EXPOSE directive in a Dockerfile is purely documentation; it doesn't open or publish anything. Only the ports key in Compose (or -p in docker run) creates a host mapping, and even that is irrelevant for container-to-container traffic.

# docker-compose.yml demonstrating correct container-to-container calls.
services:
  api:
    build: ./api
    # Host can reach this at localhost:8080.
    # Other containers reach it at api:5000.
    ports:
      - "8080:5000"

  worker:
    build: ./worker
    environment:
      # Use the internal port, not the host-mapped port.
      API_URL: "http://api:5000"

Network Isolation: Are They on the Same Docker Network?

If you have multiple Compose files or manually created networks, containers might sit on different bridge networks and can't reach each other at all. In that case ping would fail too, but it's worth verifying. You can inspect which networks a container is attached to and place containers on a shared network explicitly.

# List networks a container is attached to.
docker inspect api_service --format '{{json .NetworkSettings.Networks}}' | python3 -m json.tool

# If containers are on different networks, attach one to the other's network.
docker network connect my_shared_network worker_container

# Or define a shared network in docker-compose.yml.
# Both services referencing the same network will be able to resolve each other.
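A minimal Compose sketch of that shared-network setup follows; the network name shared_net is an example, not a required name.

```yaml
# docker-compose.yml fragment: both services join the same named network,
# so each can resolve the other by its service name.
services:
  api:
    build: ./api
    networks:
      - shared_net

  worker:
    build: ./worker
    networks:
      - shared_net

networks:
  shared_net:
    driver: bridge
```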

Host Firewall and iptables Rules

On Linux hosts, Docker manages its own iptables chains to route container traffic. A host firewall like ufw or firewalld can interfere with Docker's forwarding rules, especially after a firewall restart or rule reload that flushes the chains Docker created. If everything else checks out and you're on Linux, inspect the filter and nat tables.

# Check that Docker's iptables chains exist and allow forwarding.
sudo iptables -L DOCKER -n -v
sudo iptables -L DOCKER-ISOLATION-STAGE-1 -n -v

# If chains are missing, restart Docker to recreate them.
sudo systemctl restart docker

# For UFW users: ensure forwarding is enabled in /etc/default/ufw.
# DEFAULT_FORWARD_POLICY="ACCEPT"
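Two further host-side checks are worth making on Linux: kernel IP forwarding, and the DOCKER-USER chain, which Docker evaluates before its own rules and which host firewall tooling sometimes populates with DROP rules.

```shell
# IP forwarding must be enabled, or the host will not route container traffic.
sysctl net.ipv4.ip_forward    # should report: net.ipv4.ip_forward = 1

# Inspect DOCKER-USER for DROP/REJECT rules inserted by a host firewall.
sudo iptables -L DOCKER-USER -n -v
```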

Summary Debugging Checklist for Docker HTTP Failures

When containers can ping by name but HTTP fails, work through this sequence. First, verify that the target process binds to 0.0.0.0, not 127.0.0.1; in practice this is by far the most common cause. Second, confirm that you're using the container's internal port, not the host-mapped port. Third, ensure that both containers are on the same Docker network. Fourth, on Linux, check that iptables forwarding chains haven't been clobbered by a host firewall. Finally, remember that EXPOSE in a Dockerfile does nothing at runtime; it's metadata, not a network command. If you're using Node.js, Rails, Django, or any framework with a development server, search its docs for the bind address option. Nearly all of them default to localhost and need an explicit directive to listen on all interfaces inside a container.
