Dev
Dockerfile
In the Django dev environment we should run our project from under /code, where the project directory is bind-mounted (the .:/code volume in docker-compose.yml below). Otherwise, changes we make to the code will not be reflected in the running container.
If our application is lightweight, we can use an Alpine-based Python image. If it uses Pillow, pandas or numpy, we should use a slim image instead; there is a good write-up explaining this trade-off. Installing Pillow, pandas and numpy on Alpine can be troublesome, and we may need to do some digging on Stack Overflow. For Postgres, Nginx, Redis and Celery we can use Alpine images.
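If in doubt, pulling both variants and comparing their sizes makes the difference concrete (these tags match the ones used in this guide and are assumed to still be available):

docker pull python:3.8.1-slim
docker pull python:3.8.1-alpine
docker images python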
# pull official base image
FROM python:3.8.1-slim
# set work directory
WORKDIR /code
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# netcat is needed by the nc check in entrypoint.sh
RUN apt-get update && apt-get install -qq -y \
build-essential libpq-dev libffi-dev --no-install-recommends git netcat \
python3-dev python3-setuptools
# install dependencies
RUN pip install --upgrade pip
COPY ./req_dev.txt /code/req_dev.txt
RUN pip install -r req_dev.txt
# copy entrypoint.sh
COPY ./entrypoint.sh /code/
# copy project
COPY . /code/
# run entrypoint.sh
ENTRYPOINT ["/code/entrypoint.sh"]
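req_dev.txt itself is not shown here; a hypothetical minimal version matching the stack used in this guide (Django, Postgres via psycopg2, Celery backed by Redis) could look like this:

# hypothetical contents; pin the versions your project actually uses
Django
psycopg2-binary
celery
redis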
entrypoint.sh
To be able to run the file, we must make it executable: chmod +x entrypoint.sh
#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi
#python manage.py flush --no-input
python manage.py migrate
#python manage.py collectstatic --no-input --clear
exec "$@"
docker-compose.yml
version: '3.7'

services:
  web:
    build:
      context: .
    command: ./manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - redis
  redis:
    image: "redis:alpine"
  celery:
    build: .
    command: celery -A bahar worker -l info
    volumes:
      - .:/code
    depends_on:
      - redis
    env_file:
      - ./.env.dev
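With these files in place, the dev stack is built and started with the standard docker-compose commands (service names match the file above):

docker-compose up -d --build            # build the images and start web, redis and celery
docker-compose logs -f web              # follow the runserver output
docker-compose exec web python manage.py createsuperuser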
.env.dev
SECRET_KEY=14xhos6yj!kdtq#4t-p)hjni+ln8^)0xbeii5)usf17b4lsfep
DEBUG=True
#POSTGRES
DATABASES_ENGINE=django.db.backends.postgresql_psycopg2
DATABASES_NAME=dbismi
DATABASES_USER=postgres
DATABASES_PASSWORD=park.123
DATABASES_HOST=10.200.10.1
DATABASES_PORT=5432
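Note that entrypoint.sh reads DATABASE, SQL_HOST and SQL_PORT, which do not appear in this file; if the Postgres wait loop is meant to run in dev, something along these lines has to be added as well (values taken from the database settings above):

DATABASE=postgres
SQL_HOST=10.200.10.1
SQL_PORT=5432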
Prod
Dockerfile.prod
# pull official base image
FROM python:3.8.0-slim as builder
# set work directory
WORKDIR /usr/src/domain.com
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apt-get update && apt-get install -qq -y \
build-essential libpq-dev libffi-dev --no-install-recommends git \
python3-dev python3-setuptools
# install dependencies
RUN pip install --upgrade pip
COPY ./req_prod.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/domain.com/wheels -r req_prod.txt
#########
# FINAL #
#########
# pull official base image
FROM python:3.8.0-slim
# create directory for the app user
RUN mkdir -p /home/domain.com
# create the app user
RUN groupadd -r app && useradd -r app -g app
# create the appropriate directories
ENV HOME=/home/domain.com
ENV APP_HOME=/home/domain.com/web
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
# install dependencies
RUN apt-get update && apt-get install -qq -y \
build-essential libpq-dev libffi-dev --no-install-recommends git \
python3-dev python3-setuptools
COPY --from=builder /usr/src/domain.com/wheels /wheels
COPY --from=builder /usr/src/domain.com/req_prod.txt .
RUN pip install --upgrade pip
RUN pip install --no-cache /wheels/*
# copy entrypoint-prod.sh
COPY ./entrypoint.prod.sh $APP_HOME
# copy project
COPY . $APP_HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
# change to the app user
USER app
# run entrypoint.prod.sh
ENTRYPOINT ["/home/domain.com/web/entrypoint.prod.sh"]
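entrypoint.prod.sh itself is not shown in this guide. A minimal sketch, assuming it mirrors the dev entrypoint.sh but also collects static files for nginx (it needs chmod +x too, and netcat must be installed in the final image if the nc wait loop is kept):

#!/bin/sh

if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi

python manage.py migrate
python manage.py collectstatic --no-input --clear

exec "$@"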
For slim images:
RUN groupadd -r app && useradd -r app -g app
For Alpine images:
RUN addgroup -S app && adduser -S app -G app
And on Alpine, instead of apt-get we should use the following:
# install psycopg2 dependencies
RUN apk update \
&& apk add postgresql-dev gcc python3-dev musl-dev
# required for installing Pillow
RUN apk add --no-cache jpeg-dev zlib-dev
RUN apk add --no-cache --virtual .build-deps build-base linux-headers
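For context, a sketch of how these pieces fit together at the top of an Alpine-based Dockerfile (the python:3.8.1-alpine tag is an assumption; the rest mirrors the slim Dockerfiles above):

FROM python:3.8.1-alpine

ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# install psycopg2 dependencies
RUN apk update \
    && apk add postgresql-dev gcc python3-dev musl-dev

# required for installing Pillow
RUN apk add --no-cache jpeg-dev zlib-dev
RUN apk add --no-cache --virtual .build-deps build-base linux-headers

# create the app user, the Alpine way
RUN addgroup -S app && adduser -S app -G app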
docker-compose.prod.yml
Here we run Postgres inside a container as well.
version: '3.7'

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.prod
    command: gunicorn bahar.wsgi:application --bind 0.0.0.0:8000
    volumes:
      # STATIC_ROOT in settings.py must point at /home/domain.com/web/staticfiles
      - static_volume:/home/domain.com/web/staticfiles
    expose:
      - 8000
    env_file:
      - ./.env.prod
    depends_on:
      - db
  db:
    image: postgres:12.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    env_file:
      - ./.env.prod.db
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/domain.com/web/staticfiles
    # ports:        # for local testing
    #   - 1337:80
    depends_on:
      - web
    expose:
      - "80"
    environment:
      - VIRTUAL_HOST=domain.com,www.domain.com
      - VIRTUAL_NETWORK=nginx-proxy
      - VIRTUAL_PORT=80
      - LETSENCRYPT_HOST=domain.com,www.domain.com
      - LETSENCRYPT_EMAIL=mail@mail.com

networks:
  default:
    external:
      name: nginx-proxy

volumes:
  postgres_data:
  static_volume:
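The production stack is built and started explicitly against this compose file; afterwards migrations and collectstatic can be run inside the web container, mirroring the lines commented out in the entrypoint:

docker-compose -f docker-compose.prod.yml up -d --build
docker-compose -f docker-compose.prod.yml exec web python manage.py migrate --no-input
docker-compose -f docker-compose.prod.yml exec web python manage.py collectstatic --no-input --clear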
If we want to test the site from our own machine at http://localhost:1337, we should enable the ports mapping and remove expose; the networks and environment blocks should be disabled as well, as sketched below.
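A sketch of the nginx service for that local test (everything else in the file stays as above):

  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/domain.com/web/staticfiles
    ports:
      - 1337:80
    depends_on:
      - web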
.env.prod
SECRET_KEY=14xhos6yj!kdtq#4t-p)hjni+ln8^)0xbeii5)usf17b4lsfep
DEBUG=False
#POSTGRES
DATABASES_ENGINE=django.db.backends.postgresql_psycopg2
DATABASES_NAME=domain_com
DATABASES_USER=domain_com
DATABASES_PASSWORD=domain_com
DATABASES_HOST=db
DATABASES_PORT=5432
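If entrypoint.prod.sh reuses the Postgres wait loop from the dev entrypoint.sh, the variables it reads also need to be present here (values assumed from the db service in docker-compose.prod.yml):

DATABASE=postgres
SQL_HOST=db
SQL_PORT=5432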
.env.prod.db
POSTGRES_USER=domain_com
POSTGRES_PASSWORD=domain_com
POSTGRES_DB=domain_com
Nginx
Under the /nginx directory:
Dockerfile
FROM nginx:1.17.4-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
nginx.conf
upstream domain {
    server web:8000;
}

server {
    server_name domain.com;
    server_tokens off;
    client_max_body_size 100m;

    location / {
        proxy_pass http://domain;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_redirect off;
    }

    location /static/ {
        # must match the static_volume mount path in docker-compose.prod.yml
        alias /home/domain.com/web/staticfiles/;
    }
}

server {
    server_name www.domain.com;
    return 301 $scheme://domain.com$request_uri;
}
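Once the stack is running, the configuration can be sanity-checked and reloaded from inside the nginx container:

docker-compose -f docker-compose.prod.yml exec nginx nginx -t
docker-compose -f docker-compose.prod.yml exec nginx nginx -s reload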