
Docker - Docker Compose

By Jamie in Lessons / Programming - General  2.20.19  
Summary - When you run more than one container in Docker, you use the Docker Compose tool to build and connect all the different containers. In this case we will use Postgres and explain how to extend the postgres image with a custom Dockerfile, adding a seed folder so mock data is already included in the database.
Docker Compose Quickstart from github
When joining a new team, you first fork and then clone the GitHub project. Then you open up the project in Sublime. For the front end you simply run npm install to install all of the dependencies of the frontend.

For the backend, if the team is using Docker, all you have to do is run...

docker-compose up --build

this will set up your entire environment: the api server, the database (perhaps with some mock data), as well as all the other containers that were defined in the docker project.

Docker Compose

Docker Compose is a tool that uses a .yml file to define and create containers. So, instead of each container being created by hand from an image, it is created from the docker-compose.yml file

// in the root directory, create another file called docker-compose.yml
// in your shell, instead of creating the container with docker run -it -p 3000:3000 name-of-project-container, use docker-compose build and Docker will read the docker-compose.yml file instead of building straight from the Image

docker-compose.yml - the file
version: '3.7'

services:
    backend:
        container_name: backend
        # image: node v#
        build: ./
        command: npm start
        working_dir: /usr/src/local-directory-name
        ports:
            - "3000:3000"
        volumes:
            - ./:/usr/src/local-directory-name

# go to the Docker Compose documentation to get the latest file format version and put it here
# in the first line we are declaring the version of the docker-compose file format we are using

# then we create 'services'. services is the object in the Compose file you use to compose your Docker containers. The first step is to name the service you are creating. In this instance we are creating 1 container and nicknaming it 'backend', then we will give commands to be included within that container.

# we could create the docker container straight from an image (a specific node version, say), but that is not likely to be the case when using docker-compose, because we will likely need to customize the image for each container, so instead...

# instead of using the node image directly, we will build from the root directory.
# This tells docker-compose to jump to the root directory, look for the Dockerfile there, and use whatever commands are in the Dockerfile to create the container. When it is done, it will continue on with the commands in the docker-compose.yml file.

# volumes - tells Docker to listen for changes to files on the local machine
# the syntax reads like this: ./ (the project root on the local machine) is mounted to /usr/src/local-directory-name inside the container
# now the 'backend' docker container will see changes made on the local machine, and because we have nodemon installed, those changes will automatically be picked up in Docker so that we don't have to rebuild after every change
# think of volumes as 'mounting' what is on our computer into the container

Docker Compose - Basic Commands

docker-compose build
# don't use this on its own; it only builds the images and doesn't work as well as...

docker-compose up --build
# this will build from docker-compose.yml file and build the containers
# after the build it will also run the built containers
# when you make changes to your files you will need to run this again

docker-compose run name-of-container
# run starts a one-off container for a single named service, so it is closer to regular docker usage when you are not composing multiple services. It is not very useful with docker-compose
# run does NOT create any of the ports: for the services. It does this in order to avoid any port collisions with other services
# you can use a specific port with the run command by using this...

docker-compose run -p 3000:3000 name-of-container
# however, the proper command for docker-compose to run containers is...

docker-compose up
# this will run the services containers and connect all the ports

docker-compose down
# this will bring down all the containers so that you are working with a clean slate

docker-compose.yml - Add postgres service

version: '3.7'

services:
    # Backend API
    backend:
        container_name: backend
        build: ./
        command: npm start
        working_dir: /usr/src/name-of-local-project-api-directory
        environment:
            POSTGRES_URI: postgres://sally:secret@postgres:5432/database-1
            # variable name: service://username:password@host:port/database
            # environment: creates environment variables that can be read with process.env inside this container (the backend in this case)
            # the URI above points at the postgres service defined below
            # links, below, tells Docker to connect this container to the postgres container
        links:
            - postgres
            # links: is now deprecated. Do not use this anymore, just delete it. Docker has been updated so that any service can reach another service simply by using its service name
        ports:
            - "3000:3000"
        volumes:
            - ./:/usr/src/name-of-local-project-api-directory

    # Postgres
    postgres:
        environment:
            POSTGRES_USER: sally
            POSTGRES_PASSWORD: secret
            POSTGRES_DB: database-1
            POSTGRES_HOST: postgres
        # environment variables can be used by any linked container
        build: ./postgres
        # this tells Docker to look in the postgres folder
        # it will find the Dockerfile there and build the container from that
        ports:
            - "5432:5432"
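The POSTGRES_URI format noted above (service://username:password@host:port/database) can be pulled apart with Node's built-in URL class. This hypothetical helper is not part of the project; it just shows what each piece of the string means:

```javascript
// Hypothetical helper (not part of the project) that splits a
// POSTGRES_URI like the one in docker-compose.yml into its parts,
// using Node's built-in WHATWG URL class.
function parsePostgresUri(uri) {
  const u = new URL(uri);
  return {
    user: u.username,
    password: u.password,
    host: u.hostname,              // 'postgres' -- the service name, not localhost
    port: Number(u.port),
    database: u.pathname.slice(1), // drop the leading '/'
  };
}

const parts = parsePostgresUri('postgres://sally:secret@postgres:5432/database-1');
console.log(parts.host);     // 'postgres'
console.log(parts.database); // 'database-1'
```

Note that the host is 'postgres', the service name from docker-compose.yml, which is exactly how one container reaches another.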

The Postgres Folder


FROM postgres:10.7
# what image from Docker Hub will be used?

ADD ./tables/ /docker-entrypoint-initdb.d/tables/
# copy the tables folder from the local machine into the container's /docker-entrypoint-initdb.d/tables/ folder

ADD ./seed/ /docker-entrypoint-initdb.d/seed/

ADD deploy_schemas.sql /docker-entrypoint-initdb.d/
# copy deploy_schemas.sql into the Docker container
# any .sql file placed in /docker-entrypoint-initdb.d/ is executed automatically by the postgres image the first time the database is initialized, so the commands inside deploy_schemas.sql will run on startup

deploy_schemas.sql

\i '/docker-entrypoint-initdb.d/tables/users.sql'
\i '/docker-entrypoint-initdb.d/tables/login.sql'
\i '/docker-entrypoint-initdb.d/seed/seed.sql'

#   \i   is a psql command that executes the queries in a file
# in this case, it will first run the users.sql file in the tables folder that was copied over when the image created the container

# in that file it will see a sql command that creates a table in the database, so that is what it will do
# then it will run the next query, the creation of the login table from login.sql

/tables/ folder - users.sql & login.sql

users.sql


CREATE TABLE users (
    id serial PRIMARY KEY,
    name VARCHAR(100),
    email text UNIQUE NOT NULL,
    entries BIGINT DEFAULT 0,
    joined TIMESTAMP NOT NULL
);


/seed/ folder - seed.sql

INSERT into users (name, email, entries, joined) values ('jamie', '', 3, '2018-01-01');
INSERT into login (hash, email) values ('lasdflkjasdf', '');



1. The sql commands are executed using the database connection details from the environment variables created in the docker-compose.yml file in the root directory

2. In order to use the pSequel app to view the Docker database on your local machine, you have to stop the Postgres that is running locally first, because both want port 5432. So, bring the containers down:

docker-compose down

stop the local Postgres:

brew services stop postgresql

then bring everything back up:

docker-compose up --build
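Tying point 1 back to the backend code: inside the container, the POSTGRES_URI set in docker-compose.yml shows up on process.env. A dependency-free sketch, with the pg Pool lines (an assumed npm dependency) left commented out:

```javascript
// Read the connection string docker-compose injected, with a local
// fallback for running the API outside Docker.
const connectionString =
  process.env.POSTGRES_URI ||
  'postgres://sally:secret@localhost:5432/database-1';

// const { Pool } = require('pg');              // assumed npm dependency
// const pool = new Pool({ connectionString }); // what the API would query with

console.log(connectionString);
```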