Application in a Container

Intro

What is a Container?
  • A lightweight, standalone, executable package of a piece of an application.
  • It has everything needed to run it: code, runtime, dependencies, system tools, settings and configuration.
  • When an application is running in a container, we call it a containerized application.
  • A containerized application works consistently across different environments (development, testing, production), on a local machine or on a cloud instance, with no need for rebuilding or retooling.
  • It enables continuous deployment with no downtime when new versions of the app are released several times per day.
  • It's a very popular solution in the age of cloud architecture and distributed computing environments.
When to use Docker:
  • We want to make an application portable.
  • We want to share the app with someone else so that they can run it on their local machine.
  • We want to deploy the app on a cloud server.
Dockerizing app workflow:
  1. Preparing a Dockerfile that contains a set of instructions.
  2. Running the build command to build a Docker image according to the instructions listed in the first step.
  3. Docker checks the cache to see if some of the instructions were executed earlier, so that it doesn't execute them again.
  4. When building for the first time, we import a base Docker image from the Docker registry.
  5. Then, on top of that, we append the remaining elements of the app's environment according to the Dockerfile's instructions.
  6. When the Docker image is ready, we can run as many Docker containers as we want using that built image (see the sketch below).
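  • A minimal sketch of steps 2 and 6 in the terminal (myapp is a hypothetical image name):
    # build an image named myapp from the Dockerfile in the current directory
    docker build -t myapp:latest .
    # run a container from that built image
    docker run -d --name myapp_container myapp:latest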
Virtual Machines vs Docker Containers:
  • Unlike Virtual Machines, Docker containers are not meant to host operating systems.
  • Containers are meant to run a specific task or process, like hosting an instance of a web server, an application or a database, or running a computation or analysis.
  • Once the task is complete, the container terminates. A container lives as long as the process inside it is alive.
source: geekflare

  Virtual Machines:
    - Don't share resources with the host operating system.
    - With a guest OS, you start with a full-fledged operating system and then strip out the things you don't need for an app.
  Docker Containers:
    - Share resources with the host operating system.
    - By sharing resources, you start with the basics and add up what you need for an app.
Docker Architecture:
  1. Docker daemon on a server - manages Docker objects: images, containers, networks. Runs on the host OS. It's built from the following components:
    - volumes for persisting data in a Docker container,
    - network interface for Docker containers,
    - container runtime for managing the container lifecycle - it runs and stops a container,
    - builder that builds images from a Dockerfile.
  2. REST API exposed by the Docker daemon as an interface for interacting with it, i.e. passing Docker commands (see the curl sketch below).
  3. Docker CLI (Command Line Interface) client for communicating with the Docker daemon.
source: geekflare

  • docker build command -> the Docker daemon looks for a Dockerfile to create an image.
  • docker pull command -> the Docker daemon pulls an image from the Docker Hub registry into the Docker host.
  • docker run command -> the Docker daemon runs an image in a Docker container within the Docker host.
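  • The CLI is not the only client: the daemon's REST API can also be called directly, e.g. over its Unix socket on Linux (a sketch; this endpoint lists running containers):
    curl --unix-socket /var/run/docker.sock http://localhost/containers/json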

Multi-container Apps

Docker Compose:
  • It is very helpful when we use different technologies and want to containerize each of them in its own isolated environment.
  • We can use Docker Compose to connect and orchestrate them in order to define and run multi-container Docker applications.
  • Each container needs its own Dockerfile that defines its running environment.
  • Each container should provide a single service: user login service, user registration service, database and so on.
  • All the necessary configuration describing which containers (services) make up the application is placed in the Compose file format: docker-compose.yml.
  • There are 3 steps to make it run:
    1. Writing a Dockerfile with the proper app environment settings - one Dockerfile per service that will be containerized.
    2. Defining all the necessary services that make up the application in the file docker-compose.yml in a parent directory.
    3. Running it with the command: docker-compose up.
  • That way we can define services and link them with each other.
  • Docker Compose is included in Docker Desktop for Windows.
  • Docker Compose can also be installed with pip: pip install docker-compose.
  • When containerizing, we can specify which files should be ignored by putting them into .dockerignore:
    - in there we just need to specify their directories:
    venv/
    __pycache__/
Kubernetes aka k8s:
  • Kubernetes supports different container runtimes.
  • Open-source platform for clustering, running, scaling and managing containerized applications.
  • It groups containers into logical services that build up the final application.
  • Key elements:
    - Node - a host that containers run on.
    - Pod - represents a runnable unit of work. It can hold one or more containers. It's connected via the network to the Kubernetes ecosystem and has its own unique IP address and storage namespace.
    - Replication controller - manages the number of pods. It contains a pod template for creating any number of pods. That allows managing the pod life cycle, including scaling up or down, rolling deployments and monitoring.
    - Service - tells the rest of the Kubernetes ecosystem, including other pods and replication controllers, what service your application provides. It stores the IP address and ports of your service.
    - Volume - location where containers access and store data.
    - Labels - nametags to identify things; we can query based on these labels. They can be used to indicate stability, roles and other important attributes.
    - Namespace - grouping mechanism that segments pods, replication controllers, volumes and services. It provides a degree of isolation.
  • As for the Kubernetes architecture, it has its own CLI, network interface and volumes, and it doesn't build images inside clusters, so the only thing it needs when clustering Docker containers is the Docker container runtime.
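  • A few kubectl commands for inspecting these objects on a running cluster (assuming kubectl is already configured against a cluster):
    kubectl get nodes
    kubectl get pods
    kubectl get services
    kubectl get namespaces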
Kubernetes Load Balancer:
  • Kubernetes is able to create multiple instances (pods) of the application so that traffic can be distributed evenly, preventing the app from crashing.
  • Users accessing the application will not hit its URL directly; they will call the load balancer, which distributes traffic over a number of app instances.
  • That way Kubernetes serves the application scalably - when one app instance crashes, it creates another, ensuring reliable and scalable app deployment (see the example Service below).
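  • A minimal sketch of such a load-balanced Service (the app label myapp and the port numbers are hypothetical placeholders):
    apiVersion: v1
    kind: Service
    metadata:
      name: myapp-service
    spec:
      # expose the service through an external load balancer
      type: LoadBalancer
      # forward traffic to pods carrying this label
      selector:
        app: myapp
      ports:
        - port: 80          # port exposed by the load balancer
          targetPort: 5000  # port the app listens on inside the pod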
Deployment on Kubernetes:
  • Checking if the kubectl client is installed:
    kubectl version --client
  • It requires setting up a config file: deployment.yaml (see the example below).
  • YAML originally stood for "Yet Another Markup Language"; nowadays it's a backronym for "YAML Ain't Markup Language".
  • Deploying applications with Kubernetes can be easily managed using the CLI tool called kubectl:
    kubectl apply -f deployment.yaml
  • Displaying the dashboard (when using minikube):
    minikube dashboard
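  • A minimal sketch of a deployment.yaml (the image name myapp:latest and the label app: myapp are hypothetical placeholders):
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp-deployment
    spec:
      # run two instances (pods) of the app
      replicas: 2
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
            - name: myapp
              image: myapp:latest
              ports:
                - containerPort: 5000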

Features

The app includes the following features:

  • Docker
  • Python

Demo

Dockerfile:
  • A text document that contains all the commands a user could call on the command prompt to assemble an image.
  • Simply put, it lists the steps to create a Docker image, i.e.:
    - pulling a base image from the Docker registry,
    - setting the working directory in a container,
    - copying all dependency files into the container working directory,
    - calling commands in order to install dependencies,
    - copying the source code into the container working directory,
    - calling commands in order to run the application.
  • Every Dockerfile step listed above is executed by the Docker builder in the given order: from top to bottom.
  • The Docker builder goes through a Dockerfile and for each instruction or command in there it generates an image layer and stacks it. The more instructions or commands a Dockerfile includes, the more layers are stacked upon each other.
  • Here comes the Docker image definition: a stack of different layers built according to what we put in the Dockerfile.
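  • The resulting layer stack of a built image can be inspected with the docker history command (image_name stands for any image built earlier):
    docker history image_name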
Docker image:
  • For every new Docker image that the builder creates, checks are made against cached images that were built before. To explain this mechanism, let's assume a scenario:
    - the Dockerfile that had been used for building images so far was changed by modifying one of its instructions,
    - images that were built from that Dockerfile before were cached,
    - the Docker builder goes through the instructions of the Dockerfile and detects the modified instruction,
    - the Docker builder recognizes which layer in the cached image the modified instruction affects,
    - the Docker builder rebuilds that image layer and all the ones coming after it.
  • The caching mechanism works best when we place the instructions that may undergo modifications after the ones that will stay stable and unchanged over time.
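  • To deliberately bypass the cache and rebuild every layer from scratch, the build command accepts a --no-cache flag:
    docker build --no-cache -t image_name:latest .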
Dockerfile example:
  • Here is an example of a Dockerfile:
    # set the base image downloaded from the Docker repository
    FROM python:3.7
    # set the working directory in the container
    WORKDIR /code
    # copy the file from the Dockerfile's directory to the current working directory of the container
    COPY requirements.txt .
    # run a command at image build time: install dependencies
    RUN pip install -r requirements.txt
    # copy the content of the local src directory to the working directory
    COPY src/ .
    # the app will run on port 5000 inside the docker container
    EXPOSE 5000
    # the command executed when the container starts
    CMD [ "python", "./main.py" ]
  • Because the application's dependencies change less frequently than the Python code, the instruction installing dependencies precedes the instruction copying the app's source code into the container.
  • That way, we have the source code layer on top of the dependencies layer, and any changes to the code won't affect the dependencies layer.
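  • For completeness, here is a minimal, hypothetical main.py that the Dockerfile above could run (it assumes Flask is listed in requirements.txt):
    # src/main.py - a minimal Flask app served on port 5000
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def index():
        return "Hello from a container!"

    if __name__ == "__main__":
        # bind to 0.0.0.0 so the app is reachable from outside the container
        app.run(host="0.0.0.0", port=5000)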
Running an example Dockerfile in terminal:
  • Building an image and running a container:
    - changing directory to the Dockerfile's one:
    cd ./docker_test
    - building an image, giving it an image_name:tag, from the current directory (dot):
    docker build -t image_name:latest .
    - listing all images:
    docker images
    - running a container, which runs the application on port 5000:
    docker run -it -d -p 5000:5000 --name container_name image_name
    - checking all the docker containers:
    docker ps
    - mapping a local directory to a container directory in order to get outputs from the container into the local directory:
    docker run -v C:\...\docker_test:/code image_name
    - this maps the local dir C:\...\docker_test to the container dir /code,
    - as long as the container is running, the flask app is available on port 5000.
Other Docker commands:
  • Here are some useful docker cmd commands:
    - checking docker after installation:
    docker -v
    docker run hello-world
    - listing all containers:
    docker container ls -a
    docker ps
    - removing a container by name:
    docker container rm container_name
    - listing all images:
    docker images
    - removing an image by name:
    docker image rm xxx
    - pulling an image from Docker Hub (it doesn't create a container):
    docker pull python
    docker images python
    - stopping a container by name:
    docker container stop container_name
    - entering a container:
    docker container exec -it container_name bash
  • Getting all dependencies for a Python app:
    - activating the virtual environment:
    env\Scripts\activate
    - listing all installed dependencies:
    pip list
    - writing all dependencies with their versions into a txt file:
    pip freeze > requirements.txt
Pulling image from Docker Hub:
  • Cmd:
    - pulling the mongo image and preparing a working directory:
    docker pull mongo
    mkdir mongodb_test
    cd mongodb_test
    - when running an image, we map a volume with -v and ports with -p:
    docker run -it -v mongodata:/data/db -p 27017:27017 --name mongodb -d mongo
    - entering the container's bash:
    docker exec -it mongodb bash
    - entering the mongo shell:
    mongo
    show dbs
    use test
    db.user.insert({"name":"Artur"})
    db.user.find()
    - exiting the mongo shell:
    exit
    - exiting the container:
    exit
Pushing image to Docker Hub:
  • Terminal:
    - at first we need to log in to Docker Hub:
    docker login
    - then we need to rename (tag) the image and push it to Docker Hub:
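    - a sketch of those two commands, with your_dockerhub_username as a hypothetical placeholder for your own account:
    docker tag image_name your_dockerhub_username/image_name:latest
    docker push your_dockerhub_username/image_name:latest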
Docker-compose.yml example:
  • Code:
    # declaring the version of the Compose file format
    version: "3.7"
    services:
        flask:
           # building the flask service (container) using the Dockerfile in the flask directory
           build: ./flask
           # giving the flask container a name
           container_name: flask
           # instructing Docker to always restart the service
           restart: always
           # setting environment variables we want to pass into the container
           environment:
              - APP_NAME=MyFlaskApp
              - DB_USERNAME=user_name
           # listing ports for internal services on the same network
           expose:
              - 8080
  • Cmd:
    - building all containers:
    docker-compose build
    - running all containers:
    docker-compose up
    - after code changes we need to rebuild the containers:
    docker-compose up --build
    - stopping containers:
    docker-compose down
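  • A sketch of how a second service could be linked in the same docker-compose.yml (the mongodb service and the depends_on relation are hypothetical additions, not taken from the app above):
    services:
        flask:
           build: ./flask
           # start the database service before the flask service
           depends_on:
              - mongodb
        mongodb:
           # use the official mongo image instead of building from a Dockerfile
           image: mongo
           container_name: mongodb
           restart: always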

Setup

The following installation steps are required:

  • Docker installation from https://www.docker.com/
  • Enable Kubernetes in Docker Desktop settings.

  • Download kubectl.exe from:
    https://kubernetes.io/docs/tasks/tools/install-kubectl/
    - put it on the C hard drive: C:\kubectl,
    - add this path to System Variables in Environment Variables under Path.

Source Code

You can view the source code: HERE