Feb 25, 2017
Docker Swarm a native clustering system

Docker Swarm

Docker Swarm is the native Docker container orchestration system. In simple terms, that means you can have multiple docker machines (hosts) running your multiple docker containers (replicas). It is best to work with Docker Engine v1.12 or above, as the docker engine includes docker swarm natively.

Docker Swarm logo:
docker-swarm.png

In not-so-simple terms: docker instances (engines) running on multiple machines (nodes), communicating together (over VXLAN) as a cluster (swarm).

Nodes

To begin with, we need to create our docker machines. One of the nodes must be the manager and the others will run as workers. For testing purposes I will run three (3) docker engines:

  • Manager Docker Node: myengine0
  • Worker Docker Node 1: myengine1
  • Worker Docker Node 2: myengine2

Drivers

A docker node is actually a machine that runs the docker engine in swarm mode. The machine can be physical, virtual, a virtualbox VM, a cloud instance, a VPS, an AWS instance, etc.

At the time of this blog post, docker officially supports the drivers below natively:

  • Amazon Web Services
  • Microsoft Azure
  • Digital Ocean
  • Exoscale
  • Google Compute Engine
  • Generic
  • Microsoft Hyper-V
  • OpenStack
  • Rackspace
  • IBM Softlayer
  • Oracle VirtualBox
  • VMware vCloud Air
  • VMware Fusion
  • VMware vSphere

but there are unofficial drivers as well.

QEMU - KVM

I will use the qemu-kvm driver from this github repository: https://github.com/dhiltgen/docker-machine-kvm

The simplest way to add the kvm driver is this:


> cd /usr/local/bin/
> sudo -s
# wget -c https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.7.0/docker-machine-driver-kvm
# chmod 0750 docker-machine-driver-kvm

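Once docker-machine itself is installed (see the next section), a quick sanity check is to ask for the create flags of the new driver; if the binary was picked up, the kvm-specific options should be listed (the exact output depends on the driver version):


$ docker-machine create -d kvm --help
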
Docker Machines

The next thing we need to do is create our docker machines. First, install docker-machine from your distro’s repositories:

# yes | pacman -S docker-machine

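You can verify the installation with (the reported version will vary with your distro):


$ docker-machine version
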
Manager


$ docker-machine create -d kvm myengine0

Running pre-create checks...
Creating machine...
(myengine0) Image cache directory does not exist, creating it at /home/ebal/.docker/machine/cache...
(myengine0) No default Boot2Docker ISO found locally, downloading the latest release...
(myengine0) Latest release for github.com/boot2docker/boot2docker is v1.13.1
(myengine0) Downloading /home/ebal/.docker/machine/cache/boot2docker.iso from https://github.com/boot2docker/boot2docker/releases/download/v1.13.1/boot2docker.iso...
(myengine0) 0%....10%....20%....30%....40%....50%....60%....70%....80%....90%....100%
(myengine0) Copying /home/ebal/.docker/machine/cache/boot2docker.iso to /home/ebal/.docker/machine/machines/myengine0/boot2docker.iso...

Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env myengine0

Worker 1


$ docker-machine create -d kvm myengine1
Running pre-create checks...
Creating machine...
(myengine1) Copying /home/ebal/.docker/machine/cache/boot2docker.iso to /home/ebal/.docker/machine/machines/myengine1/boot2docker.iso...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env myengine1

Worker 2

$ docker-machine create -d kvm myengine2
Running pre-create checks...
Creating machine...
(myengine2) Copying /home/ebal/.docker/machine/cache/boot2docker.iso to /home/ebal/.docker/machine/machines/myengine2/boot2docker.iso...
Waiting for machine to be running, this may take a few minutes...
Detecting operating system of created instance...
Waiting for SSH to be available...
Detecting the provisioner...
Provisioning with boot2docker...
Copying certs to the local machine directory...
Copying certs to the remote machine...
Setting Docker configuration on the remote daemon...
Checking connection to Docker...
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env myengine2

List your Machines


$ docker-machine env myengine0
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.42.126:2376"
export DOCKER_CERT_PATH="/home/ebal/.docker/machine/machines/myengine0"
export DOCKER_MACHINE_NAME="myengine0"
# Run this command to configure your shell:
# eval $(docker-machine env myengine0)

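Running the suggested eval makes every docker command in the current shell talk to the engine inside that machine; a minimal sketch:


$ eval $(docker-machine env myengine0)
$ docker info | grep ^Name     # should now print: Name: myengine0
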
$ docker-machine ls

NAME        ACTIVE   DRIVER   STATE     URL                         SWARM   DOCKER    ERRORS
myengine0   -        kvm      Running   tcp://192.168.42.126:2376           v1.13.1
myengine1   -        kvm      Running   tcp://192.168.42.51:2376            v1.13.1
myengine2   -        kvm      Running   tcp://192.168.42.251:2376           v1.13.1

Inspect

You can get the IP of your machines with:


$ docker-machine ip myengine0
192.168.42.126

$ docker-machine ip myengine1
192.168.42.51

$ docker-machine ip myengine2
192.168.42.251

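If you want all three IPs in one go, a small shell loop does the trick:


$ for machine in myengine0 myengine1 myengine2; do docker-machine ip ${machine}; done
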
You can also see the IPs with ls, as shown above, or use the inspect parameter for a full list of information about your machines in json format:


$ docker-machine inspect myengine0

If you have jq installed, you can filter out some info:


$ docker-machine inspect myengine0 | jq '.Driver.DiskPath'

"/home/ebal/.docker/machine/machines/myengine0/myengine0.img"

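Adding jq’s -r (raw output) flag strips the surrounding quotes, which is handy in scripts:


$ docker-machine inspect myengine0 | jq -r '.Driver.DiskPath'
/home/ebal/.docker/machine/machines/myengine0/myengine0.img
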
SSH

To get inside the kvm docker machines, you can use ssh:

Manager


$ docker-machine ssh myengine0 

                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o          __/
             \    \        __/
              \____\______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.13.1, build HEAD : b7f6033 - Wed Feb  8 20:31:48 UTC 2017
Docker version 1.13.1, build 092cba3

Worker 1


$ docker-machine ssh myengine1 

                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o          __/
             \    \        __/
              \____\______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.13.1, build HEAD : b7f6033 - Wed Feb  8 20:31:48 UTC 2017
Docker version 1.13.1, build 092cba3

Worker 2


$ docker-machine ssh myengine2

                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o          __/
             \    \        __/
              \____\______/
 _                 _   ____     _            _
| |__   ___   ___ | |_|___ \ __| | ___   ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__|   <  __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.13.1, build HEAD : b7f6033 - Wed Feb  8 20:31:48 UTC 2017
Docker version 1.13.1, build 092cba3

Swarm Cluster

Now it’s time to build a swarm of docker machines!

Initialize the manager

docker@myengine0:~$  docker swarm init --advertise-addr 192.168.42.126

Swarm initialized: current node (jwyrvepkz29ogpcx18lgs8qhx) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-4vpiktzp68omwayfs4c3j5mrdrsdavwnewx5834g9cp6p1koeo-bgcwtrz6srt45qdxswnneb6i9 \
    192.168.42.126:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

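If you lose this output, the manager can re-print the worker join command at any time:


docker@myengine0:~$ docker swarm join-token worker
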
Join Worker 1

docker@myengine1:~$ docker swarm join \
>     --token SWMTKN-1-4vpiktzp68omwayfs4c3j5mrdrsdavwnewx5834g9cp6p1koeo-bgcwtrz6srt45qdxswnneb6i9 \
>     192.168.42.126:2377

This node joined a swarm as a worker.

Join Worker 2

docker@myengine2:~$ docker swarm join \
>     --token SWMTKN-1-4vpiktzp68omwayfs4c3j5mrdrsdavwnewx5834g9cp6p1koeo-bgcwtrz6srt45qdxswnneb6i9 \
>     192.168.42.126:2377

This node joined a swarm as a worker.

From the manager


docker@myengine0:~$  docker node ls

ID                           HOSTNAME   STATUS  AVAILABILITY  MANAGER STATUS
jwyrvepkz29ogpcx18lgs8qhx *  myengine0  Ready   Active        Leader
m5akhw7j60fru2d0an4lnsgr3    myengine2  Ready   Active
sfau3r42bqbhtz1c6v9hnld67    myengine1  Ready   Active

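For more details on any node, run docker node inspect from the manager; the --pretty flag prints a human-readable summary instead of json:


docker@myengine0:~$ docker node inspect --pretty myengine1
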
Info

We can find more information about the docker machines by running the docker info command once we have ssh-ed into the nodes;

e.g. the swarm section:

manager


Swarm: active
 NodeID: jwyrvepkz29ogpcx18lgs8qhx
 Is Manager: true
 ClusterID: 8fjv5fzp0wtq9hibl7w2v65cs
 Managers: 1
 Nodes: 3
 Orchestration:
  Task History Retention Limit: 5
 Raft:
  Snapshot Interval: 10000
  Number of Old Snapshots to Retain: 0
  Heartbeat Tick: 1
  Election Tick: 3
 Dispatcher:
  Heartbeat Period: 5 seconds
 CA Configuration:
  Expiry Duration: 3 months
 Node Address: 192.168.42.126
 Manager Addresses:
  192.168.42.126:2377

worker1

Swarm: active
 NodeID: sfau3r42bqbhtz1c6v9hnld67
 Is Manager: false
 Node Address: 192.168.42.51
 Manager Addresses:
  192.168.42.126:2377

worker 2

Swarm: active
 NodeID: m5akhw7j60fru2d0an4lnsgr3
 Is Manager: false
 Node Address: 192.168.42.251
 Manager Addresses:
  192.168.42.126:2377

Services

Now it’s time to test our docker swarm by running a container service across our entire fleet!

For testing purposes we chose 6 replicas of an nginx container:


docker@myengine0:~$ docker service create --replicas 6 -p 80:80 --name web nginx

ql6iogo587ibji7e154m7npal

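You can review the service definition (image, replicas, published ports) at any time with:


docker@myengine0:~$ docker service inspect --pretty web
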
List images

docker@myengine0:~$  docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
nginx               <none>              db079554b4d2        9 days ago          182 MB

List of services

Depending on your docker registry and your internet connection, it will take a few moments until we see all the replicas running:


docker@myengine0:~$ docker service ls
ID            NAME  MODE        REPLICAS  IMAGE
ql6iogo587ib  web   replicated  0/6       nginx:latest

docker@myengine0:~$ docker service ls
ID            NAME  MODE        REPLICAS  IMAGE
ql6iogo587ib  web   replicated  2/6       nginx:latest

docker@myengine0:~$ docker service ls
ID            NAME  MODE        REPLICAS  IMAGE
ql6iogo587ib  web   replicated  3/6       nginx:latest

docker@myengine0:~$ docker service ls
ID            NAME  MODE        REPLICAS  IMAGE
ql6iogo587ib  web   replicated  6/6       nginx:latest
docker@myengine0:~$  docker service ps web

ID            NAME   IMAGE         NODE       DESIRED STATE  CURRENT STATE           ERROR  PORTS
t3v855enecgv  web.1  nginx:latest  myengine1  Running        Running 17 minutes ago
xgwi91plvq00  web.2  nginx:latest  myengine2  Running        Running 17 minutes ago
0l6h6a0va2fy  web.3  nginx:latest  myengine0  Running        Running 16 minutes ago
qchj744k0e45  web.4  nginx:latest  myengine1  Running        Running 17 minutes ago
udimh2bokl8k  web.5  nginx:latest  myengine2  Running        Running 17 minutes ago
t50yhhtngbac  web.6  nginx:latest  myengine0  Running        Running 16 minutes ago

Browser

To verify that our replicas are running as they should:

docker-swarm-nginx.png

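If you prefer the terminal over a browser, the swarm routing mesh means that every node answers on the published port, so a curl against any machine IP should return the nginx status line:


$ curl -sI http://$(docker-machine ip myengine0) | head -1
HTTP/1.1 200 OK
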
Scaling a service

It’s really interesting that we can scale our replicas up or down on the fly!

from the manager


docker@myengine0:~$  docker service ls
ID            NAME  MODE        REPLICAS  IMAGE
ql6iogo587ib  web   replicated  6/6       nginx:latest

docker@myengine0:~$ docker service ps web
ID            NAME   IMAGE         NODE       DESIRED STATE  CURRENT STATE       ERROR  PORTS
t3v855enecgv  web.1  nginx:latest  myengine1  Running        Running 3 days ago
xgwi91plvq00  web.2  nginx:latest  myengine2  Running        Running 3 days ago
0l6h6a0va2fy  web.3  nginx:latest  myengine0  Running        Running 3 days ago
qchj744k0e45  web.4  nginx:latest  myengine1  Running        Running 3 days ago
udimh2bokl8k  web.5  nginx:latest  myengine2  Running        Running 3 days ago
t50yhhtngbac  web.6  nginx:latest  myengine0  Running        Running 3 days ago

Scale Down

from the manager


$ docker service scale web=3
web scaled to 3

docker@myengine0:~$ docker service ls
ID            NAME  MODE        REPLICAS  IMAGE
ql6iogo587ib  web   replicated  3/3       nginx:latest

docker@myengine0:~$ docker service ps web
ID            NAME   IMAGE         NODE       DESIRED STATE  CURRENT STATE       ERROR  PORTS
0l6h6a0va2fy  web.3  nginx:latest  myengine0  Running        Running 3 days ago
qchj744k0e45  web.4  nginx:latest  myengine1  Running        Running 3 days ago
udimh2bokl8k  web.5  nginx:latest  myengine2  Running        Running 3 days ago

Scale Up

from the manager


docker@myengine0:~$ docker service scale web=8
web scaled to 8
docker@myengine0:~$
docker@myengine0:~$ docker service ls
ID            NAME  MODE        REPLICAS  IMAGE
ql6iogo587ib  web   replicated  3/8       nginx:latest
docker@myengine0:~$
docker@myengine0:~$ docker service ls
ID            NAME  MODE        REPLICAS  IMAGE
ql6iogo587ib  web   replicated  4/8       nginx:latest
docker@myengine0:~$
docker@myengine0:~$ docker service ls
ID            NAME  MODE        REPLICAS  IMAGE
ql6iogo587ib  web   replicated  8/8       nginx:latest
docker@myengine0:~$
docker@myengine0:~$
docker@myengine0:~$ docker service ps web
ID            NAME   IMAGE         NODE       DESIRED STATE  CURRENT STATE           ERROR  PORTS
lyhoyseg8844  web.1  nginx:latest  myengine1  Running        Running 7 seconds ago
w3j9bhcn9f6e  web.2  nginx:latest  myengine2  Running        Running 8 seconds ago
0l6h6a0va2fy  web.3  nginx:latest  myengine0  Running        Running 3 days ago
qchj744k0e45  web.4  nginx:latest  myengine1  Running        Running 3 days ago
udimh2bokl8k  web.5  nginx:latest  myengine2  Running        Running 3 days ago
vr8jhbum8tlg  web.6  nginx:latest  myengine1  Running        Running 7 seconds ago
m4jzati4ddpp  web.7  nginx:latest  myengine2  Running        Running 8 seconds ago
7jek2zvuz6fs  web.8  nginx:latest  myengine0  Running        Running 11 seconds ago

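When you are done experimenting, a single command on the manager removes the service and all of its replicas from every node:


docker@myengine0:~$ docker service rm web
web
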
Aug 03, 2016
How to dockerize a live system

I need to run some ansible playbooks on a running (live) machine.
But, of course, I can’t use a production server for testing purposes!!

So here comes docker!
I have ssh access from my docker-server to this production server:



[docker-server] ssh livebox tar --one-file-system --sparse -C / -cf -  | docker import - centos6:livebox 

Then run the new docker image:



[docker-server]  docker run -t -i --rm -p 2222:22 centos6:livebox bash                                                  

[root@40b2bab2f306 /]# /usr/sbin/sshd -D                                                                             

Create a new entry in your hosts inventory file that uses ssh port 2222,
or create a separate inventory file.

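A minimal separate inventory file could look like the sketch below; the dockerlivebox alias is the one used in the ping test, and the ansible_ssh_* variable names are the ones from the Ansible 1.x releases of that era:


# cat hosts.docker

dockerlivebox ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=root
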
Then test it with the ansible ping module:


# ansible -m ping -i hosts.docker dockerlivebox

dockerlivebox | success >> {
    "changed": false,
    "ping": "pong"
}

Tag(s): docker
Jun 19, 2016
vagrant docker ansible

Recently, I had the opportunity to see a presentation on the subject by Alexandros Kosiaris.

I was never a fan of vagrant (or even virtualbox) but I gave it a try, and below are my personal notes on the matter.
All my notes are based on Archlinux, as it is my primary distribution, but I think you can try them with any GNU/Linux OS.

Vagrant

So what is Vagrant ?

Vagrant is a wrapper, an abstraction layer that deals with several virtualization solutions, like virtualbox, Vmware, hyper-v, docker, aws, etc.
With a few lines you can describe what you want to do, and then use vagrant to create your environment of virtual boxes to work with.

Just for the fun of it, I used docker.

Docker

We first need to create and build a proper Docker image!

The Dockerfile below assumes that we already have an archlinux:latest docker image.
You can use your own dockerfile or docker image.

You need to have an ssh connection to this docker image and, of course, an ssh password or an ssh authorized key for root built into the image. If you are using sudo (even better), don’t forget to add the user to sudoers!



# vim Dockerfile 

# sshd on archlinux
#
# VERSION               0.0.2

FROM     archlinux:latest
MAINTAINER  Evaggelos Balaskas < evaggelos _AT_ balaskas _DOT_ gr >

# Update the repositories
RUN  pacman -Syy && pacman -S --noconfirm openssh python2

# Generate host keys
RUN  /usr/bin/ssh-keygen -A

# Add password to root user
RUN  echo 'root:roottoor' | chpasswd

# Fix sshd
RUN  sed -i -e 's/^UsePAM yes/UsePAM no/g' /etc/ssh/sshd_config && echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config

# Expose tcp port
EXPOSE   22

# Run openssh daemon
CMD  ["/usr/sbin/sshd", "-D"]

Again, you don’t need to follow this step by the book!
It is just an example, to show that you need a proper docker image that you can ssh into.

Build the docker image:



# docker build -t archlinux:sshd . 

On my PC:



# docker images 

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
archlinux           sshd                1b074ffe98be        7 days ago          636.2 MB
archlinux           latest              c0c56d24b865        7 days ago          534 MB
archlinux           devel               e66b5b8af509        2 weeks ago         607 MB
centos6             powerdns            daf76074f848        3 months ago        893 MB
centos6             newdnsps            642462a8dfb4        3 months ago        546.6 MB
centos7             cloudstack          b5e696e65c50        6 months ago        1.463 GB
centos7             latest              d96affc2f996        6 months ago        500.2 MB
centos6             latest              4ba27f5a1189        6 months ago        489.8 MB

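A quick way to test the image before involving vagrant (assuming nothing else listens on port 2222) is to run it detached and ssh in with the root password from the Dockerfile:


# docker run -d -p 2222:22 archlinux:sshd
# ssh -p 2222 root@localhost
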
Environment

We can define docker as our default provider with:


# export VAGRANT_DEFAULT_PROVIDER=docker

It is not necessary to define the default provider, as you will see below,
but it is a good idea nonetheless, in case you forget to declare your vagrant provider later.

Before we start with vagrant, let us create a new folder:



# mkdir -pv vagrant
# cd vagrant 

Initialization

We are ready to initialize our environment for vagrant:


# vagrant init

A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.

Initial Vagrantfile

A typical vagrant configuration file looks something like this:



# cat Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com.

  # Every Vagrant development environment requires a box. You can search for
  # boxes at https://atlas.hashicorp.com/search.
  config.vm.box = "base"

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a forwarded port mapping which allows access to a specific port
  # within the machine from a port on the host machine. In the example below,
  # accessing "localhost:8080" will access port 80 on the guest machine.
  # config.vm.network "forwarded_port", guest: 80, host: 8080

  # Create a private network, which allows host-only access to the machine
  # using a specific IP.
  # config.vm.network "private_network", ip: "192.168.33.10"

  # Create a public network, which generally matched to bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  # config.vm.network "public_network"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  # config.vm.synced_folder "../data", "/vagrant_data"

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  # config.vm.provider "virtualbox" do |vb|
  #   # Display the VirtualBox GUI when booting the machine
  #   vb.gui = true
  #
  #   # Customize the amount of memory on the VM:
  #   vb.memory = "1024"
  # end
  #
  # View the documentation for the provider you are using for more
  # information on available options.

  # Define a Vagrant Push strategy for pushing to Atlas. Other push strategies
  # such as FTP and Heroku are also available. See the documentation at
  # https://docs.vagrantup.com/v2/push/atlas.html for more information.
  # config.push.define "atlas" do |push|
  #   push.app = "YOUR_ATLAS_USERNAME/YOUR_APPLICATION_NAME"
  # end

  # Enable provisioning with a shell script. Additional provisioners such as
  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
  # documentation for more information about their specific syntax and use.
  # config.vm.provision "shell", inline: <<-SHELL
  #   apt-get update
  #   apt-get install -y apache2
  # SHELL
end

If you try to run this Vagrant configuration file with the docker provider,
it will try to boot up the base image (the Vagrant default box):



# vagrant up --provider=docker

Bringing machine 'default' up with 'docker' provider...
==> default: Box 'base' could not be found. Attempting to find and install...
    default: Box Provider: docker
    default: Box Version: >= 0
==> default: Box file was not detected as metadata. Adding it directly...
==> default: Adding box 'base' (v0) for provider: docker
    default: Downloading: base
An error occurred while downloading the remote file. The error
message, if any, is reproduced below. Please fix this error and try
again.

Couldn't open file /ebal/Desktop/vagrant/base

Vagrantfile

Put the initial vagrantfile aside and create the Vagrant configuration file below:


Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.image = "archlinux:sshd"
  end
end

That translates to:

Vagrant Provider: docker
Docker Image: archlinux:sshd

Basic commands

Run vagrant to create our virtual box:


#  vagrant up

Bringing machine 'default' up with 'docker' provider...
==> default: Creating the container...
    default:   Name: vagrant_default_1466368592
    default:  Image: archlinux:sshd
    default: Volume: /home/ebal/Desktop/vagrant:/vagrant
    default:
    default: Container created: 4cf4649b47615469
==> default: Starting container...
==> default: Provisioners will not be run since container doesn't support SSH.

ok, we haven’t yet configured vagrant to use ssh,

but we have a running docker instance:



# vagrant status

Current machine states:

default                   running (docker)

The container is created and running. You can stop it using
`vagrant halt`, see logs with `vagrant docker-logs`, and
kill/destroy it with `vagrant destroy`.

which we can verify with docker ps:


#  docker ps -a

CONTAINER ID        IMAGE               COMMAND               CREATED              STATUS              PORTS               NAMES
4cf4649b4761        archlinux:sshd      "/usr/sbin/sshd -D"   About a minute ago   Up About a minute   22/tcp              vagrant_default_1466368592

Destroy

We need to destroy this instance:



#  vagrant destroy

    default: Are you sure you want to destroy the 'default' VM? [y/N] y
==> default: Stopping container...
==> default: Deleting the container...

Vagrant ssh

We need to edit the Vagrantfile to add ssh support to our docker image:



# vim Vagrantfile

Vagrant.configure("2") do |config|

    config.vm.provider "docker" do |d|
        d.image = "archlinux:sshd"
        d.has_ssh = true
    end

end

and re-up our vagrant box:


#  vagrant up

Bringing machine 'default' up with 'docker' provider...
==> default: Creating the container...
    default:   Name: vagrant_default_1466368917
    default:  Image: archlinux:sshd
    default: Volume: /home/ebal/Desktop/vagrant:/vagrant
    default:   Port: 127.0.0.1:2222:22
    default:
    default: Container created: b4fce563a9f9042c
==> default: Starting container...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 172.17.0.2:22
    default: SSH username: vagrant
    default: SSH auth method: private key
    default: Warning: Authentication failure. Retrying...
    default: Warning: Authentication failure. Retrying...

Vagrant will try to connect to our docker instance with the user vagrant and a key.
But our docker image only has a root user and a root password!!


# vagrant status

Current machine states:

default                   running (docker)

The container is created and running. You can stop it using
`vagrant halt`, see logs with `vagrant docker-logs`, and
kill/destroy it with `vagrant destroy`.

#  vagrant destroy

    default: Are you sure you want to destroy the 'default' VM? [y/N] y
==> default: Stopping container...
==> default: Deleting the container...

Vagrant ssh - the Correct way !

This time, we need to edit the Vagrantfile properly:



# vim Vagrantfile

Vagrant.configure("2") do |config|

    config.ssh.username = 'root'
    config.ssh.password = 'roottoor'

    config.vm.provider "docker" do |d|
        d.image = "archlinux:sshd"
        d.has_ssh = true
    end

end


# vagrant up

Bringing machine 'default' up with 'docker' provider...
==> default: Creating the container...
    default:   Name: vagrant_default_1466369126
    default:  Image: archlinux:sshd
    default: Volume: /home/ebal/Desktop/vagrant:/vagrant
    default:   Port: 127.0.0.1:2222:22
    default:
    default: Container created: 7fef0efc8905bb3a
==> default: Starting container...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 172.17.0.2:22
    default: SSH username: root
    default: SSH auth method: password
    default: Warning: Connection refused. Retrying...
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!

# vagrant status

Current machine states:

default                   running (docker)

The container is created and running. You can stop it using
`vagrant halt`, see logs with `vagrant docker-logs`, and
kill/destroy it with `vagrant destroy`.

# vagrant ssh-config

Host default
  HostName 172.17.0.2
  User root
  Port 22
  UserKnownHostsFile /dev/null
  StrictHostKeyChecking no
  PasswordAuthentication no
  IdentityFile /tmp/vagrant/.vagrant/machines/default/docker/private_key
  IdentitiesOnly yes
  LogLevel FATAL

# vagrant ssh

[root@7fef0efc8905 ~]# uptime
 20:45:48 up 11:33,  0 users,  load average: 0.53, 0.42, 0.28
[root@7fef0efc8905 ~]#
[root@7fef0efc8905 ~]#
[root@7fef0efc8905 ~]#
[root@7fef0efc8905 ~]# exit
logout
Connection to 172.17.0.2 closed.

Ansible

It is time to add ansible to the mix!

Ansible Playbook

We need to create a basic ansible playbook:



# cat playbook.yml 

---
- hosts: all

  vars:
      ansible_python_interpreter: "/usr/bin/env python2"

  gather_facts: no

  tasks:

    # Install package vim
    - pacman: name=vim state=present

The above playbook is going to install vim via pacman (the archlinux PACkage MANager)!
Archlinux comes with python3 by default, so with ansible_python_interpreter you declare that python2 should be used!

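Before wiring the playbook into vagrant, you can sanity-check its syntax with a throwaway inline inventory (the trailing comma makes ansible treat the argument as a host list):


# ansible-playbook -i localhost, --syntax-check playbook.yml
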
Vagrantfile with Ansible



# cat Vagrantfile

Vagrant.configure("2") do |config|

    config.ssh.username = 'root'
    config.ssh.password = 'roottoor'

    config.vm.provider "docker" do |d|
        d.image = "archlinux:sshd"
        d.has_ssh = true
    end

    config.vm.provision "ansible" do |ansible|
        ansible.verbose = "v"
        ansible.playbook = "playbook.yml"
    end

end

Vagrant Docker Ansible



# vagrant up 

Bringing machine 'default' up with 'docker' provider...
==> default: Creating the container...
    default:   Name: vagrant_default_1466370194
    default:  Image: archlinux:sshd
    default: Volume: /home/ebal/Desktop/vagrant:/vagrant
    default:   Port: 127.0.0.1:2222:22
    default:
    default: Container created: 8909eee7007b8d4f
==> default: Starting container...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 172.17.0.2:22
    default: SSH username: root
    default: SSH auth method: password
    default: Warning: Connection refused. Retrying...
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!

==> default: Running provisioner: ansible...
    default: Running ansible-playbook...
PYTHONUNBUFFERED=1 ANSIBLE_FORCE_COLOR=true ANSIBLE_HOST_KEY_CHECKING=false ANSIBLE_SSH_ARGS='-o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s' ansible-playbook --connection=ssh --timeout=30 --limit="default" --inventory-file=/mnt/VB0250EAVER/home/ebal/Desktop/vagrant/.vagrant/provisioners/ansible/inventory -v playbook.yml
Using /etc/ansible/ansible.cfg as config file

PLAY [all] *********************************************************************

TASK [pacman] ******************************************************************
changed: [default] => {"changed": true, "msg": "installed 1 package(s). "}

PLAY RECAP *********************************************************************
default                    : ok=1    changed=1    unreachable=0    failed=0   


# vagrant status

Current machine states:

default                   running (docker)

The container is created and running. You can stop it using
`vagrant halt`, see logs with `vagrant docker-logs`, and
kill/destroy it with `vagrant destroy`.



#  vagrant ssh 

[root@8909eee7007b ~]# vim --version
VIM - Vi IMproved 7.4 (2013 Aug 10, compiled Jun  9 2016 09:35:16)
Included patches: 1-1910
Compiled by Arch Linux

Vagrant Provisioning

The ansible step is called provisioning, as you may have already noticed.

If you make a few changes to this playbook, just type:


#  vagrant provision

and it will re-run the ansible part on this vagrant box!

Jun 05, 2016
Docker Notes

Personal Notes on this blog post.
[work in progress]

Why?

Why docker?

Docker is a management tool for administering containers.
Although it was initially based on lxc, it is now self-contained.

Containers are an isolated environment, something more than a
chroot (jail) and something less than a virtual machine.

We can bring up several linux operating systems, as long as they are of the same architecture.

They are used mainly for development, but by now large production
infrastructure runs on docker in big projects.

Docker wins because the docker image I have on my PC can run as-is
on any linux distribution (centos/fedora/debian/archlinux/whatever),
and it offers isolation between the running application and the operating system.
Performance is -by now- very close to that of the host system.

In production it is mainly used for continuous deployment,
since the images can be produced by developers, vendors or whoever,
and they will run on a commodity server with any operating system!
So with docker, “it works on my machine” now translates to
“it works on my machine too”!! in production.

Info

If docker is not running:


# systemctl restart docker

basic info on CentOS7 with devicemapper:


# docker info

Containers: 0
Images: 4
Server Version: 1.9.1
Storage Driver: devicemapper
 Pool Name: docker-8:1-10617750-pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 107.4 GB
 Backing Filesystem:
 Data file: /dev/loop0
 Metadata file: /dev/loop1
 Data Space Used: 1.654 GB
 Data Space Total: 107.4 GB
 Data Space Available: 105.7 GB
 Metadata Space Used: 1.642 MB
 Metadata Space Total: 2.147 GB
 Metadata Space Available: 2.146 GB
 Udev Sync Supported: true
 Deferred Removal Enabled: false
 Deferred Deletion Enabled: false
 Deferred Deleted Device Count: 0
 Data loop file: /var/lib/docker/devicemapper/devicemapper/data
 Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
 Library Version: 1.02.107-RHEL7 (2015-12-01)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-327.13.1.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 16
Total Memory: 15.66 GiB
Name: myserverpc
ID: DCO7:RO56:3EWH:ESM3:257C:TCA3:JPLD:QFLU:EHKL:QXKU:GJYI:SHY5

basic info on archlinux with btrfs:


# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 8
Server Version: 1.11.1
Storage Driver: btrfs
 Build Version: Btrfs v4.5.1
 Library Version: 101
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: null host bridge
Kernel Version: 4.4.11-1-lts
Operating System: Arch Linux
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.68 GiB
Name: myhomepc
ID: MSCX:LLTD:YKZS:E7UN:NIA4:QW3F:SRGC:RQTH:RKE2:26VS:GFB5:Y7CS
Docker Root Dir:  /var/lib/docker/
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/

Images



# docker images -a

REPOSITORY  TAG         IMAGE ID      CREATED       VIRTUAL SIZE
centos6     rpmbuild    ccb144691075  11 days ago       1.092 GB
<none>      <none>      6d8ff86f2749  11 days ago       1.092 GB
<none>      <none>      af92904a92b4  11 days ago       811.8 MB
<none>      <none>      8e429b38312b  11 days ago       392.7 MB

The none:none entries are built parent images that are used by the named docker images,
but which we did not save properly.

Understanding Images

Depending on the docker back-end, docker keeps the differences
between parent/child docker images as delta layers.

This makes life easier: in production we can keep big base docker images
and ship only small delta (child) docker images with the production service we want to run.

It also helps with updates.

Six months later, we build the update image on top of our initial image, and on top of that
we reload the application or service we want to run.

This way we can ship docker images that are small in size, and our various
services are built on top of them.

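You can see this layering with docker history, which lists the layers of an image along with their sizes:


# docker history centos6:rpmbuild
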
On myserverpc we saw via docker info that it runs with:


Storage Driver: devicemapper

and that it uses the file below to store the images:


Data loop file: /var/lib/docker/devicemapper/devicemapper/data

which in reality is:


# file data
data: SGI XFS filesystem data (blksz 4096, inosz 256, v2 dirs)

The most popular storage drivers are AUFS & btrfs.
Personally (ebal) I use btrfs, because it uses subvolumes
(separate copy-on-write volumes, so to speak) for every docker image
(parent or child).



# ls /var/lib/docker/btrfs/subvolumes
070dd70b48c86828463a7341609a7ee4924decd4d7fdd527e9fbaa70f7a0caf8
1fb7e53272a8f01801d9e413c823cbb8cbc83bfe1218436bdb9e658ea2e8b755
632cceadcc05f28dda37b39b8a9111bb004a9bdaeca59c0300196863ee44ad0a
8bfbbf03c00863bc19f46aa994d1458c0b5454529ad6af1203fb6e2599a35e91
93bb08f5730368de14719109142232249dc6b3a0571a0157b2a043d7fc94117a
a174a1b850ae50dfaf1c13ede7fe37cc0cb574498a2faa4b0e80a49194d0e115
d0e92b9a33b207c679e8a05da56ef3bf9a750bddb124955291967e0af33336fc
e9904ddda15030a210c7d741701cca55a44b89fd320e2253cfcb6e4a3f905669

Processes

Which docker processes are running:


# docker ps -a 

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

and while one is running:



CONTAINER ID  IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
62ef0ed8bc95  centos6:rpmbuild    "bash"              10 seconds ago      Up 9 seconds                            drunk_mietner

Pay attention to NAMES.

Docker randomly assigns a name made of two words for easier management;
otherwise we would have to use the full hashed name.

In the example above, that is:

62ef0ed8bc952d501f241dbc4ecda25d3a629880d27fbb7344b5429a44af985f

Inspect

How we get information out of a docker process:


# docker inspect drunk_mietner

[
{
    "Id": "62ef0ed8bc952d501f241dbc4ecda25d3a629880d27fbb7344b5429a44af985f",
    "Created": "2016-06-05T07:41:18.821123985Z",
    "Path": "bash",
    "Args": [],
    "State": {
        "Status": "running",
        "Running": true,
        "Paused": false,
        "Restarting": false,
        "OOMKilled": false,
        "Dead": false,
        "Pid": 23664,
        "ExitCode": 0,
        "Error": "",
        "StartedAt": "2016-06-05T07:41:19.558616976Z",
        "FinishedAt": "0001-01-01T00:00:00Z"
    },
    "Image": "ccb1446910754d6572976a6d36e5d0c8d1d029e4dc72133211670b28cf2f1d8f",
    "ResolvConfPath": "/var/lib/docker/containers/62ef0ed8bc952d501f241dbc4ecda25d3a629880d27fbb7344b5429a44af985f/resolv.conf",
    "HostnamePath": "/var/lib/docker/containers/62ef0ed8bc952d501f241dbc4ecda25d3a629880d27fbb7344b5429a44af985f/hostname",
    "HostsPath": "/var/lib/docker/containers/62ef0ed8bc952d501f241dbc4ecda25d3a629880d27fbb7344b5429a44af985f/hosts",
    "LogPath": "/var/lib/docker/containers/62ef0ed8bc952d501f241dbc4ecda25d3a629880d27fbb7344b5429a44af985f/62ef0ed8bc952d501f241dbc4ecda25d3a629880d27fbb7344b5429a44af985f-json.log",
    "Name": "/drunk_mietner",
    "RestartCount": 0,
    "Driver": "devicemapper",
    "ExecDriver": "native-0.2",
    "MountLabel": "system_u:object_r:svirt_sandbox_file_t:s0:c344,c750",
    "ProcessLabel": "system_u:system_r:svirt_lxc_net_t:s0:c344,c750",
    "AppArmorProfile": "",
    "ExecIDs": null,
    "HostConfig": {
        "Binds": null,
        "ContainerIDFile": "",
        "LxcConf": [],
        "Memory": 0,
        "MemoryReservation": 0,
        "MemorySwap": 0,
        "KernelMemory": 0,
        "CpuShares": 0,
        "CpuPeriod": 0,
        "CpusetCpus": "",
        "CpusetMems": "",
        "CpuQuota": 0,
        "BlkioWeight": 0,
        "OomKillDisable": false,
        "MemorySwappiness": -1,
        "Privileged": false,
        "PortBindings": {},
        "Links": null,
        "PublishAllPorts": false,
        "Dns": [],
        "DnsOptions": [],
        "DnsSearch": [],
        "ExtraHosts": null,
        "VolumesFrom": null,
        "Devices": [],
        "NetworkMode": "default",
        "IpcMode": "",
        "PidMode": "",
        "UTSMode": "",
        "CapAdd": null,
        "CapDrop": null,
        "GroupAdd": null,
        "RestartPolicy": {
            "Name": "no",
            "MaximumRetryCount": 0
        },
        "SecurityOpt": null,
        "ReadonlyRootfs": false,
        "Ulimits": null,
        "Sysctls": {},
        "LogConfig": {
            "Type": "json-file",
            "Config": {}
        },
        "CgroupParent": "",
        "ConsoleSize": [
            0,
            0
        ],
        "VolumeDriver": "",
        "ShmSize": 67108864
    },
    "GraphDriver": {
        "Name": "devicemapper",
        "Data": {
            "DeviceId": "13",
            "DeviceName": "docker-8:1-10617750-62ef0ed8bc952d501f241dbc4ecda25d3a629880d27fbb7344b5429a44af985f",
            "DeviceSize": "107374182400"
        }
    },
    "Mounts": [],
    "Config": {
        "Hostname": "62ef0ed8bc95",
        "Domainname": "",
        "User": "",
        "AttachStdin": true,
        "AttachStdout": true,
        "AttachStderr": true,
        "Tty": true,
        "OpenStdin": true,
        "StdinOnce": true,
        "Env": null,
        "Cmd": [
            "bash"
        ],
        "Image": "centos6:rpmbuild",
        "Volumes": null,
        "WorkingDir": "",
        "Entrypoint": null,
        "OnBuild": null,
        "Labels": {},
        "StopSignal": "SIGTERM"
    },
    "NetworkSettings": {
        "Bridge": "",
        "SandboxID": "992cf9db43c309484b8261904f46915a15eff3190026749841b93072847a14bc",
        "HairpinMode": false,
        "LinkLocalIPv6Address": "",
        "LinkLocalIPv6PrefixLen": 0,
        "Ports": {},
        "SandboxKey": "/var/run/docker/netns/992cf9db43c3",
        "SecondaryIPAddresses": null,
        "SecondaryIPv6Addresses": null,
        "EndpointID": "17b09b362d3b2be7d9c48377969049ac07cb821c482a9644970567fd5bb772f1",
        "Gateway": "172.17.0.1",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "IPAddress": "172.17.0.2",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "MacAddress": "02:42:ac:11:00:02",
        "Networks": {
            "bridge": {
                "EndpointID": "17b09b362d3b2be7d9c48377969049ac07cb821c482a9644970567fd5bb772f1",
                "Gateway": "172.17.0.1",
                "IPAddress": "172.17.0.2",
                "IPPrefixLen": 16,
                "IPv6Gateway": "",
                "GlobalIPv6Address": "",
                "GlobalIPv6PrefixLen": 0,
                "MacAddress": "02:42:ac:11:00:02"
            }
        }
    }
}
]

Output in json, which means easy provisioning!!!

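For example, with jq (if installed) you can pull out a single field, such as the container’s IP address:


# docker inspect drunk_mietner | jq -r '.[0].NetworkSettings.IPAddress'
172.17.0.2
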
Import

The easiest way is to take a tar archive
of the system we want and import it:


# docker import - centos6:latest < a.tar

Run

How we spin up a docker image:



# docker run -t -i --rm centos6:latest bash

This gives us an interactive process with bash as its entry point.
Also, because of --rm, as soon as we exit the container, ALL the changes we have made will disappear.

We need to delete such leftovers, so that we do not fill up with images that only have small differences between them.

We can also have docker processes without an interactive entry point.

These are the service-oriented containers whose entry point
is (usually) a TCP port, running the various services we want.
More on all that later.

Inside

Inside a docker image:



[root@62ef0ed8bc95 /]# hostname
62ef0ed8bc95

# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever

11: eth0@if12: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:acff:fe11:2/64 scope link
       valid_lft forever preferred_lft forever

# ip r

default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0  proto kernel  scope link  src 172.17.0.2 

A private network, 172.x.x.x,

which has been created by myserverpc:



10: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:24:5c:42:f6 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:24ff:fe5c:42f6/64 scope link
       valid_lft forever preferred_lft forever

12: veth524ea1d@if11: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master docker0 state UP
    link/ether 6e:39:ae:aa:ec:65 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::6c39:aeff:feaa:ec65/64 scope link
       valid_lft forever preferred_lft forever

# brctl show

bridge name bridge id       STP enabled interfaces
docker0     8000.0242245c42f6   no      veth524ea1d
virbr0      8000.525400990c9d   yes     virbr0-nic

Commit

OK, we have made our changes, or we have set up a base docker image
that we want to keep. How do we commit it?

From myserverpc (and not from inside the docker process):



# docker commit -p -m "centos6 rpmbuild test image" drunk_mietner centos6:rpmbuildtest

We can see that it has been created:


# docker images

REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
centos6             rpmbuildtest        95246c3b7b8b        3 seconds ago       1.092 GB
centos6             rpmbuild            ccb144691075        11 days ago         1.092 GB

Remove

Now we do not need it any more; we want to keep only centos6:rpmbuild.

If it has no child docker images and no docker process based on it is running:


# docker rmi centos6:rpmbuildtest 

Untagged: centos6:rpmbuildtest
Deleted: 95246c3b7b8b77e9f5c70f2fd7b8ea2c8ec1f72e846897c87cd60722f6caabef

# docker images

REPOSITORY  TAG         IMAGE ID      CREATED     VIRTUAL SIZE
centos6     rpmbuild    ccb144691075  11 days ago   1.092 GB
<none>      <none>      6d8ff86f2749  11 days ago   1.092 GB

Export

OK, we have built the perfect docker image on our computer
and we want to export it, in order to load it somewhere else:



# docker export drunk_mietner > CentOS68_rpmbuild.tar

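On another machine, the archive can be loaded back with docker import, exactly as in the Import section above (note that export flattens all layers into a single one):


# docker import - centos6:rpmbuild < CentOS68_rpmbuild.tar
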
Tag(s): docker
Jan 28, 2016
Create a debian docker image with debootstrap

debootstrap is a very powerful tool that most debian/ubuntu people already know about.

It’s really super easy to create your own basic debian docker image, even if you are not running debian.

I used the steps below on my archlinux box, but I believe they are generic and you can use them too, without any effort.

Step One:

Download and prepare debootstrap



# wget -c http://ftp.debian.org/debian/pool/main/d/debootstrap/debootstrap_1.0.77.tar.gz
# tar xf debootstrap_*.tar.gz
# cd debootstrap

# sed -i -e 's#/usr/share/debootstrap#.#' debootstrap

Step Two:

debootstrap a new sid (unstable) debian:


# mkdir sid

# ./debootstrap --arch amd64 --include=aptitude  sid sid/

Step Three:

Just to be safe, extract the debian packages with ar:


# cd sid

# for i in `ls -1 var/cache/apt/archives/*deb`; do ar p $i data.tar.xz | tar xJ ; done
# for i in `ls -1 var/cache/apt/archives/*deb`; do ar p $i data.tar.gz | tar xz ; done
# rm -rf var/cache/apt/archives/*deb

Step Four:

Prepare your debian unstable directory,
e.g. create the sources.list file:


# cat > etc/apt/sources.list << EOF
> deb http://ftp.gr.debian.org/debian unstable main contrib non-free
> deb http://ftp.debian.org/debian/ Sid-updates main contrib non-free
> deb http://security.debian.org/ Sid/updates main contrib non-free
> EOF

Step Five:

Dockerize your debian image:



# tar -c . | docker import - debian:sid
cdf6f22b76f23fa95ae2d5858cec4546086a2064b66cf34b937bc87c83f13c91

# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
debian              sid                 cdf6f22b76f2        5 seconds ago       291.3 MB

You are now ready to play with your new image:



# docker run -t -i --rm debian:sid bash
I have no name!@f3ee67226a07:/# 

Jan 28, 2016
Create an archlinux docker image from archlinux

Some time ago, I wrote this article: How to create an archlinux docker image from the latest bootstrap, but I think the approach below is even better.

Step 0

This step is optional.
If you want to reduce the size of the docker image:


# vi /etc/pacman.conf

and add the below lines:


NoExtract = usr/lib/firmware/*
NoExtract = usr/lib/modules/*
NoExtract = usr/share/locale/*
NoExtract = usr/share/man/*

Step 1

Create the latest archlinux in a temporary directory:


# mkdir -pv /tmp/latestarchlinux/var/lib/pacman
# pacman -Syy -r /tmp/latestarchlinux/
# pacman -S base -r /tmp/latestarchlinux/ --noconfirm

Step 2

dockerize the above directory:


# cd /tmp/latestarchlinux/
# tar -c . | docker import - archlinux:latest
99a9d7cd2e357f2463b4bb8f3ad1e8bea4bfc10531dfac1931004405727bf035

Step 3

Actually, you’re done!
Just play with it already.


# docker run -t -i --rm archlinux:latest bash
[root@de9b7a1d6058 /]#
Tag(s): docker, archlinux
Mar 30, 2015
How to create an archlinux docker image from the latest bootstrap

Docker is a wonderful application for creating development images quickly and not-so-dirty.

I am working -mostly- on archlinux so here are the steps:


[~]> wget -c ftp://ftp.otenet.gr/pub/linux/archlinux/iso/latest/archlinux-bootstrap-2015.03.01-x86_64.tar.gz
[~]> tar xf archlinux-bootstrap-2015.03.01-x86_64.tar.gz
[~]> cd root.x86_64
[~]> tar cf archlinux-bootstrap-2015.03.01-x86_64.tar .
[~]> docker import - archlinux:bootstrap < archlinux-bootstrap-2015.03.01-x86_64.tar

after that you should update the docker image:


$ docker run -t -i --rm archlinux:bootstrap bash
# echo 'Server = http://ftp.otenet.gr/linux/archlinux/$repo/os/$arch' > /etc/pacman.d/mirrorlist
# pacman-key --init
# pacman-key --populate archlinux
# pacman -Syuvw
# pacman -Suv

to save your changes, open a new terminal and:


[~]> docker commit -p -m "archlinux bootstrap latest" -a USERNAME DOCKER_ID archlinux:bootstrap

replace your username and your docker_id accordingly.

You can now exit from your docker image.

To help you even more, check out this video I’ve made:

archlinux docker bootstrap image from Evaggelos Balaskas on Vimeo.

Tag(s): archlinux, docker
Jun 08, 2014
Dockerfile to build a docker archlinux image with ssh

Today’s work: a dockerfile to build an archlinux image with sshd.

You can find my notes here: Dockerfile notes

Jun 07, 2014
Time at hackerspace

I am a very proud member of Athens Hackerspace.

I am enjoying the entire 3+ years of time (and money) that I’ve spent at this hackerspace. Love it.

Today was a very productive day.

With a good friend of mine, we are working to set up an ansible, docker, btrfs workshop!

We want to contribute back to the community, and we thought that this is a great opportunity.
We are not gurus or anything like that - no, we just want to share the knowledge we are getting by spending time at the hackerspace. Nothing more, nothing less. Just sharing our feedback with all the people that have helped us till now.

So, we are working together (collaborating) by making small steps towards building this workshop.
Today’s work: creating a tiny compressed archlinux docker image.

My instruction set is documented here: archlinux installation for docker.

Hopefully my next blog post will be about a simple ssh docker file.
We are trying to keep the notes simple, so that many people can read and use them.