How to wait for a container to get ready?

Chamseddine Benhamed
4 min read · Dec 14, 2020

You can control the order of service startup and shutdown with the depends_on option: Compose always starts and stops containers in dependency order, where dependencies are determined by depends_on, links, volumes_from, and network_mode. However, on startup Compose does not wait until a container is “ready” (whatever that means for your particular application), only until it is running!

In this article I’ll propose a solution to this kind of problem. Suppose we have a Docker Compose file with several services (hello-world1 and hello-world2 in our case study), each of which generates logs that should be collected and centralized. For that purpose we use the famous ELK stack:

  • elasticsearch 6.8.6
  • logstash 6.8.6
  • kibana 6.8.6

docker-compose.yml :

version: '3'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.6
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - discovery.type=single-node

  kibana:
    image: docker.elastic.co/kibana/kibana:6.8.6
    ports:
      - 5601:5601
    depends_on:
      - elasticsearch

  logstash:
    image: docker.elastic.co/logstash/logstash:6.8.6
    volumes:
      - ./pipelines.yml:/usr/share/logstash/config/pipelines.yml:Z
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf:Z
    depends_on:
      - elasticsearch

  hello-world1:
    image: hello-world

  hello-world2:
    image: hello-world

pipelines.yml :

# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
# https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html

- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline/logstash.conf"

logstash.conf :

input {
  udp { port => 5000 }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}

Now our ELK stack is set up. Next we add logspout (https://github.com/gliderlabs/logspout) to collect all the logs generated by our services (the hello-world services) and push them to Logstash over the UDP protocol.

In a nutshell, logspout is a log router for Docker containers that runs inside Docker. It attaches to all containers on a host, then routes their logs wherever you want. It also has an extensible module system.

Note: Setting LOGSPOUT=ignore as an environment variable on a service tells logspout to ignore that service’s logs, so they will not be collected.

version: '3'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.6
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - discovery.type=single-node
      - LOGSPOUT=ignore

  kibana:
    image: docker.elastic.co/kibana/kibana:6.8.6
    ports:
      - 5601:5601
    environment:
      - LOGSPOUT=ignore
    depends_on:
      - elasticsearch

  logstash:
    image: docker.elastic.co/logstash/logstash:6.8.6
    volumes:
      - ./pipelines.yml:/usr/share/logstash/config/pipelines.yml:Z
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf:Z
    environment:
      - LOGSPOUT=ignore
    depends_on:
      - elasticsearch

  logspout:
    image: gliderlabs/logspout:v3.2.12
    command: 'udp://logstash:5000?filter.name=hello*'
    environment:
      - LOGSPOUT=ignore
      - "RAW_FORMAT={ \"container\": {{ toJSON .Container.Name }}, \"message\": {{ toJSON .Data }} }\n"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - logstash

  hello-world1:
    image: hello-world
    depends_on:
      - logspout

  hello-world2:
    image: hello-world
    depends_on:
      - logspout

Now if we start our stack with docker-compose up and open the Kibana dashboard (localhost:5601), we notice that some logs are lost!

Why is that? And how can we get the startup logs?

The reason is that when we launch docker-compose, Logstash takes a moment to start; meanwhile logspout begins collecting log data but cannot deliver it to Logstash, which is not ready yet. docker-compose up launches the containers in dependency order, but it does not guarantee that they become ready in that same order.

Some solutions on the internet propose adding a command that tests a container’s readiness before launching the actual executable (you can wrap your service’s command with wait-for-it.sh or wait-for). Those solutions work, but they don’t scale well: with many services you have to add the script to every one of them, which is pure boilerplate.
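To make concrete what those wrapper scripts do, here is a minimal Python sketch of the same idea (the function name wait_for_port is mine, not part of wait-for-it.sh): poll a TCP port until it accepts a connection, then let the real work begin.

```python
import socket
import time

def wait_for_port(host, port, timeout=60.0, interval=1.0):
    """Block until a TCP connection to host:port succeeds, or raise TimeoutError.

    A successful connect is the same (crude) readiness check that
    wait-for-it-style scripts perform before exec'ing the real command.
    """
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError(f"{host}:{port} not ready after {timeout}s")
            time.sleep(interval)
```

The drawback the article points out applies here too: every dependent service needs this check baked into its entrypoint.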

Proposed solution :

The solution I propose is to use socat (https://www.redhat.com/sysadmin/getting-started-socat). The socat utility is a relay for bidirectional data transfer between two independent data channels.

version: '3'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:6.8.6
    ports:
      - 9200:9200
      - 9300:9300
    environment:
      - discovery.type=single-node
      - LOGSPOUT=ignore

  kibana:
    image: docker.elastic.co/kibana/kibana:6.8.6
    ports:
      - 5601:5601
    environment:
      - LOGSPOUT=ignore
    depends_on:
      - elasticsearch

  logstash:
    image: docker.elastic.co/logstash/logstash:6.8.6
    volumes:
      - ./pipelines.yml:/usr/share/logstash/config/pipelines.yml:Z
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf:Z
    environment:
      - LOGSPOUT=ignore
    depends_on:
      - elasticsearch

  socat:
    image: alpine/socat
    command: 'TCP-LISTEN:5000,reuseaddr,fork TCP:logstash:5000,forever,interval=5'

  logspout:
    image: gliderlabs/logspout:v3.2.13
    command: 'tcp://socat:5000?filter.name=hello*'
    environment:
      - LOGSPOUT=ignore
      - BACKLOG=false
      - "RAW_FORMAT={ \"container\": {{ toJSON .Container.Name }}, \"message\": {{ toJSON .Data }}, \"@timestamp\": {{ toJSON .Time }} }\n"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    depends_on:
      - socat

  hello-world1:
    image: hello-world
    depends_on:
      - logspout

  hello-world2:
    image: hello-world
    depends_on:
      - logspout

logstash.conf :

input {
  tcp { port => 5000 }
}

filter {
  json {
    source => "message"
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
  }
}

So now logspout sends the collected logs to socat, which routes them to Logstash once it is ready.

command explanation :

command: 'TCP-LISTEN:5000,reuseaddr,fork TCP:logstash:5000,forever,interval=5'

That command says: listen on TCP port 5000 for connections from any source, and for each one try to connect to logstash:5000, retrying every 5 seconds (forever,interval=5). While the destination is not yet ready, the received data (our logs) sits buffered; as soon as logstash:5000 accepts, socat forwards it.

Now when we start docker-compose, socat holds the logs emitted by logspout and relays them to Logstash once it becomes ready.
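To make this retry-and-forward behaviour concrete, here is a small Python sketch of a one-shot relay in the same spirit (an illustration of the idea, not socat itself; the function name relay_once and all parameters are mine): accept a client, then keep retrying the upstream until it accepts, and only then forward the client's data.

```python
import socket
import time

def relay_once(listen_port, upstream_host, upstream_port,
               interval=0.1, retries=50):
    """Accept one client, then retry the upstream until it is reachable;
    only then forward the client's data downstream. Mirrors the idea of
    socat's TCP:...,forever,interval=5 retry option."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    client, _ = srv.accept()
    data = client.recv(65536)          # the client's data waits here...
    for _ in range(retries):           # ...while we poll the upstream
        try:
            up = socket.create_connection((upstream_host, upstream_port),
                                          timeout=1)
            break
        except OSError:
            time.sleep(interval)
    else:
        raise TimeoutError("upstream never became ready")
    up.sendall(data)                   # flush the held data downstream
    up.close()
    client.close()
    srv.close()
```

In the compose file above, socat plays this role permanently (fork handles each logspout connection), so no individual service needs its own readiness check.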

Thanks for reading my article, if you have a question do not hesitate to contact me at https://www.linkedin.com/in/chamseddinebenhamed/

#Docker-compose #ELK #LogSpout #Socat
