how to use Kubernetes on CentOS Atomic Host

Reference
http://www.projectatomic.io/docs/gettingstarted/

Prepare 5 Atomic hosts
# virsh list --all | grep -i atomic
-     CentOS-Atomic-01               shut off
-     CentOS-Atomic-02               shut off
-     CentOS-Atomic-03               shut off
-     CentOS-Atomic-04               shut off
-     CentOS-Atomic-05               shut off

add storage for Docker.
(http://www.projectatomic.io/docs/quickstart/)

create a qcow2 image for CentOS-Atomic-01
# qemu-img create -f qcow2 /var/lib/libvirt/images/Atomic-01-qcow2.img 10G
Formatting '/var/lib/libvirt/images/Atomic-01-qcow2.img', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
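
to confirm the image looks right before attaching it, qemu-img can inspect it (a quick check of mine, not part of the original quickstart):
# qemu-img info /var/lib/libvirt/images/Atomic-01-qcow2.img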

start Atomic-01 VM
virsh # start CentOS-Atomic-01
Domain CentOS-Atomic-01 started

access the VM
virsh # console CentOS-Atomic-01
Connected to domain CentOS-Atomic-01
Escape character is ^]

CentOS Linux 7 (Core)
Kernel 3.10.0-327.4.5.el7.x86_64 on an x86_64

atomic01 login: centos
Password:
Last login: Mon Feb 15 08:34:15 on ttyS0
[centos@atomic01 ~]$ ls /dev/vd*
/dev/vda  /dev/vda1  /dev/vda2

add the storage
virsh # attach-disk CentOS-Atomic-01 /var/lib/libvirt/images/Atomic-01-qcow2.img vdb --driver qemu --type disk --subdriver qcow2 --persistent
Disk attached successfully

on the VM, /dev/vdb has been added on the fly:
[centos@atomic01 ~]$ ls /dev/vd*
/dev/vda  /dev/vda1  /dev/vda2  /dev/vdb

check the XML of the VM with virsh dumpxml.
virsh # dumpxml CentOS-Atomic-01
  </disk>
   <disk type='block' device='disk'>
     <driver name='qemu' type='qcow2'/>
     <source dev='/var/lib/libvirt/images/Atomic-01-qcow2.img'/>
     <target dev='vdb' bus='virtio'/>
     <alias name='virtio-disk1'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
   </disk>

configure the new drive for Docker storage.
on Atomic-01, edit /etc/sysconfig/docker-storage-setup.
-bash-4.2# egrep -v ^# /etc/sysconfig/docker-storage-setup
DEVS="/dev/vdb"

run docker-storage-setup
-bash-4.2# docker-storage-setup
Checking that no-one is using this disk right now ...
[  381.338799]  vdb: unknown partition table
OK

Disk /dev/vdb: 20805 cylinders, 16 heads, 63 sectors/track
sfdisk:  /dev/vdb: unrecognized partition table type

Old situation:
sfdisk: No partitions found

New situation:
Units: sectors of 512 bytes, counting from 0

  Device Boot    Start       End   #sectors  Id  System
/dev/vdb1          2048  20971519   20969472  8e  Linux LVM
/dev/vdb2             0         -          0   0  Empty
/dev/vdb3             0         -          0   0  Empty
/dev/vdb4             0         -          0   0  Empty
Warning: partition 1 does not start at a cylinder boundary
Warning: partition 1 does not end at a cylinder boundary
Warning: no primary partition is marked bootable (active)
This does not matter for LILO, but the DOS MBR will not boot this disk.
Successfully wrote the new partition table

Re-reading the partition table ...
[  381.471818]  vdb: vdb1

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
 Physical volume "/dev/vdb1" successfully created
 Volume group "atomicos" successfully extended
 Logical volume "docker-pool" changed.
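
to double-check where the new space went, LVM can show the extended volume group and the grown docker-pool (my check, not in the quickstart):
-bash-4.2# vgs atomicos
-bash-4.2# lvs atomicos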

see docker info
-bash-4.2# docker info
Containers: 0
Images: 0
Storage Driver: devicemapper
Pool Name: atomicos-docker--pool
Pool Blocksize: 524.3 kB
Backing Filesystem: xfs
Data file:
Metadata file:
Data Space Used: 62.39 MB
Data Space Total: 2.902 GB
Data Space Available: 2.84 GB
Metadata Space Used: 40.96 kB
Metadata Space Total: 12.58 MB
Metadata Space Available: 12.54 MB
Udev Sync Supported: true
Deferred Removal Enabled: true
Library Version: 1.02.107-RHEL7 (2015-10-14)
Execution Driver: native-0.2
Logging Driver: json-file
Kernel Version: 3.10.0-327.4.5.el7.x86_64
Operating System: CentOS Linux 7 (Core)
CPUs: 1
Total Memory: 993.1 MiB
Name: atomic01.example.org
ID: KC6X:4UEH:3LHO:AEEI:WUOD:Y5YB:F66K:Y52W:PX6Y:JCY3:GGS7:RWM6
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
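
the two bridge-nf warnings at the bottom can be silenced by enabling the corresponding sysctls (my note; the original walkthrough leaves them as-is):
-bash-4.2# sysctl net.bridge.bridge-nf-call-iptables=1
-bash-4.2# sysctl net.bridge.bridge-nf-call-ip6tables=1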

repeat the same steps on Atomic-02 through 05; the host-side part can be scripted as shown below.
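
a minimal sketch of the host-side loop, assuming the remaining guests are running as Atomic-01 was; docker-storage-setup still has to be run inside each VM afterwards:
# for i in 02 03 04 05; do
>   img=/var/lib/libvirt/images/Atomic-${i}-qcow2.img
>   qemu-img create -f qcow2 "$img" 10G
>   virsh attach-disk CentOS-Atomic-${i} "$img" vdb \
>     --driver qemu --type disk --subdriver qcow2 --persistent
> done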

[ configure cluster master ( CentOS-Atomic-01 ) ]

first, upgrade the Atomic host if any updates are available.
[centos@atomic01 ~]$ ip a s eth0 | grep inet | grep -v inet6
    inet 192.168.122.233/24 brd 192.168.122.255 scope global dynamic eth0

[centos@atomic01 ~]$ sudo atomic host upgrade
Updating from: centos-atomic-host:centos-atomic-host/7/x86_64/standard

1 metadata, 0 content objects fetched; 602 B transferred in 2 seconds
No upgrade available.
[centos@atomic01 ~]$ sudo systemctl reboot
        Stopping LVM2 PV scan on device 252:2...
        Unmounting RPC Pipe File System...


create a local Docker registry mirror.
$  sudo docker create -p 5000:5000 \
> -v /var/lib/local-registry:/srv/registry \
> -e STANDALONE=false \
> -e MIRROR_SOURCE=https://registry-1.docker.io \
> -e MIRROR_SOURCE_INDEX=https://index.docker.io \
> -e STORAGE_PATH=/srv/registry \
> --name=local-registry registry
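
docker create only creates the container; it is started by the systemd unit below. at this point it should show up as Created (a quick check of mine):
$ sudo docker ps -a | grep local-registry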

create a systemd unit file so that the local cache is always up. (%p in the unit expands to the unit's prefix name, local-registry here.)
[centos@atomic01 ~]$ sudo cat /etc/systemd/system/local-registry.service
[Unit]
Description=Local Docker Mirror registry cache
Requires=docker.service
After=docker.service

[Service]
Restart=on-failure
RestartSec=10
ExecStart=/usr/bin/docker start -a %p
ExecStop=-/usr/bin/docker stop -t 2 %p

[Install]
WantedBy=multi-user.target

[centos@atomic01 ~]$ sudo systemctl daemon-reload
[centos@atomic01 ~]$ sudo systemctl enable local-registry
Created symlink from /etc/systemd/system/multi-user.target.wants/local-registry.service to /etc/systemd/system/local-registry.service.
[centos@atomic01 ~]$ sudo systemctl start local-registry
[centos@atomic01 ~]$
[centos@atomic01 ~]$ sudo systemctl status local-registry
* local-registry.service - Local Docker Mirror registry cache
  Loaded: loaded (/etc/systemd/system/local-registry.service; enabled; vendor preset: disabled)
  Active: active (running) since Mon 2016-02-15 09:07:01 UTC; 7s ago
Main PID: 2725 (docker)
  Memory: 32.0K
  CGroup: /system.slice/local-registry.service
          `-2725 /usr/bin/docker start -a local-registry

change the SELinux context so containers are allowed to read and write /var/lib/local-registry (svirt_sandbox_file_t is the type containers can access). I cannot explain this part well, because I am not familiar with SELinux.
-bash-4.2$ sudo chcon -Rvt svirt_sandbox_file_t /var/lib/local-registry
changing security context of '/var/lib/local-registry'


[ configure kubernetes master ( CentOS-Atomic-01 ) ]

add ports 2379 and 4001 ( /etc/etcd/etcd.conf ). 4001 is the legacy etcd client port.
[centos@atomic01 ~]$ sudo egrep 'ETCD_LISTEN_CLIENT_URLS|ETCD_ADVERTISE_CLIENT_URLS' /etc/etcd/etcd.conf
#ETCD_LISTEN_CLIENT_URLS="http://localhost:2379"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_ADVERTISE_CLIENT_URLS="http://localhost:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"

edit the /etc/kubernetes/* files so that Atomic-01 acts as the master.

- /etc/kubernetes/config

set KUBE_MASTER and KUBE_ETCD_SERVERS to point at the master:
[centos@atomic01 ~]$ sudo egrep -v ^# /etc/kubernetes/config | grep -v ^$
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://192.168.122.233:8080"
KUBE_ETCD_SERVERS="--etcd_servers=http://192.168.122.233:2379"

- /etc/kubernetes/apiserver

remove ServiceAccount from KUBE_ADMISSION_CONTROL.
[centos@atomic01 ~]$ sudo egrep -v ^# /etc/kubernetes/apiserver | grep -v ^$
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"
KUBE_API_ARGS=""

start kubernetes services.

[centos@atomic01 ~]$ sudo systemctl enable etcd kube-apiserver kube-controller-manager kube-scheduler
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.

[centos@atomic01 ~]$ sudo systemctl start etcd kube-apiserver kube-controller-manager kube-scheduler
[centos@atomic01 ~]$

[centos@atomic01 ~]$ sudo systemctl status etcd kube-apiserver kube-controller-manager kube-scheduler | grep -i run
  Active: active (running) since Mon 2016-02-15 09:26:19 UTC; 25s ago
  Active: active (running) since Mon 2016-02-15 09:26:21 UTC; 23s ago
Feb 15 09:26:21 atomic01.example.org kube-apiserver[3200]: I0215 09:26:21.907775    3200 server.go:456] Using self-signed cert (/var/run/kubernetes/apiserver.crt, /var/run/kubernetes/apiserver.key)
  Active: active (running) since Mon 2016-02-15 09:26:19 UTC; 25s ago
  Active: active (running) since Mon 2016-02-15 09:26:19 UTC; 25s ago
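
as a quick sanity check of mine (not from the guide), etcdctl ships with etcd and the apiserver serves /version:
[centos@atomic01 ~]$ etcdctl cluster-health
[centos@atomic01 ~]$ curl -s http://127.0.0.1:8080/version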

[ configure flannel overlay network (CentOS-Atomic-01) ]

create a json file. flannel will carve a /24 subnet per node out of 172.16.0.0/12 and use VXLAN for the overlay.
[centos@atomic01 ~]$ cat flanneld-conf.json
{
  "Network": "172.16.0.0/12",
  "SubnetLen": 24,
  "Backend": {
    "Type": "vxlan"
  }
}
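
before pushing it into etcd, the JSON can be sanity-checked with the same pretty-printer used below:
[centos@atomic01 ~]$ python -m json.tool < flanneld-conf.json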

store the network configuration in etcd.
[centos@atomic01 ~]$ curl -L http://localhost:2379/v2/keys/atomic01/network/config -XPUT --data-urlencode value@flanneld-conf.json
{"action":"set","node":{"key":"/atomic01/network/config","value":"{\n  \"Network\": \"172.16.0.0/12\",\n  \"SubnetLen\": 24,\n  \"Backend\": {\n    \"Type\": \"vxlan\"\n  }\n}\n","modifiedIndex":12,"createdIndex":12}}

check network configuration.
[centos@atomic01 ~]$ curl -L http://localhost:2379/v2/keys/atomic01/network/config | python -m json.tool
 % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                Dload  Upload   Total   Spent    Left  Speed
100   218  100   218    0     0  12152      0 --:--:-- --:--:-- --:--:-- 12823
{
   "action": "get",
   "node": {
       "createdIndex": 12,
       "key": "/atomic01/network/config",
       "modifiedIndex": 12,
       "value": "{\n  \"Network\": \"172.16.0.0/12\",\n  \"SubnetLen\": 24,\n  \"Backend\": {\n    \"Type\": \"vxlan\"\n  }\n}\n"
   }
}

[ configure atomic nodes ( CentOS-Atomic-02 – 05 ) ]

- edit /etc/sysconfig/docker to configure Docker to use the cluster registry

[centos@atomic02 ~]$ sudo vi /etc/sysconfig/docker
OPTIONS='--registry-mirror=http://192.168.122.233:5000 --selinux-enabled'

- edit /etc/sysconfig/flanneld to point flannel at etcd on the master and at the overlay network configuration.

[centos@atomic02 ~]$ grep -v ^# /etc/sysconfig/flanneld  | grep -v ^$
FLANNEL_ETCD="http://192.168.122.233:2379"
FLANNEL_ETCD_KEY="/atomic01/network"
I am not sure about this part, because I am not familiar with flannel.. (FLANNEL_ETCD_KEY must match the path used in the curl PUT on the master; flannel reads the config key under it.)

[centos@atomic02 ~]$ sudo mkdir -p /etc/systemd/system/docker.service.d/

[centos@atomic02 ~]$  sudo cat /etc/systemd/system/docker.service.d/10-flanneld-network.conf
[Unit]
After=flanneld.service
Requires=flanneld.service

[Service]
EnvironmentFile=/run/flannel/subnet.env
ExecStartPre=-/usr/sbin/ip link del docker0
ExecStart=
ExecStart=/usr/bin/docker -d \
     --bip=${FLANNEL_SUBNET} \
     --mtu=${FLANNEL_MTU} \
     $OPTIONS \
     $DOCKER_STORAGE_OPTIONS \
     $DOCKER_NETWORK_OPTIONS \
     $INSECURE_REGISTRY
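
the empty ExecStart= line clears the ExecStart from the packaged docker.service, so the override replaces it instead of adding a second one. for the drop-in to take effect without a reboot, reload systemd and restart Docker:
[centos@atomic02 ~]$ sudo systemctl daemon-reload
[centos@atomic02 ~]$ sudo systemctl restart docker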

[ configure kubernetes nodes ( CentOS-Atomic-02 – 05 ) ]

- edit /etc/kubernetes/kubelet

[centos@atomic02 ~]$ ip a s eth0| grep inet | grep -v inet6
    inet 192.168.122.207/24 brd 192.168.122.255 scope global dynamic eth0

[centos@atomic02 ~]$ grep -v ^# /etc/kubernetes/kubelet | grep -v ^$
KUBELET_ADDRESS="--address=192.168.122.207"
KUBELET_HOSTNAME="--hostname_override=192.168.122.207"
KUBELET_API_SERVER="--api_servers=http://192.168.122.233:8080"
KUBELET_ARGS=""

- edit /etc/kubernetes/config

[centos@atomic02 ~]$ grep -v ^#  /etc/kubernetes/config | grep -v ^$
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow_privileged=false"
KUBE_MASTER="--master=http://192.168.122.233:8080"

make sure everything starts up on boot.
[centos@atomic02 ~]$ sudo systemctl daemon-reload
[centos@atomic02 ~]$ sudo systemctl enable flanneld kubelet kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.requires/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[centos@atomic02 ~]$ sudo systemctl reboot

[centos@atomic02 ~]$ sudo systemctl status flanneld kubelet kube-proxy | grep -i run
  Active: active (running) since Mon 2016-02-15 09:45:26 UTC; 37s ago
 Process: 846 ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker (code=exited, status=0/SUCCESS)
  Active: active (running) since Mon 2016-02-15 09:45:30 UTC; 33s ago
  Active: active (running) since Mon 2016-02-15 09:45:25 UTC; 37s ago

make sure you can see the flannel device, and that docker0 took its address from the flannel subnet (172.16.88.1/24 below).
[centos@atomic02 ~]$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
   link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
   inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
   inet6 ::1/128 scope host
      valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
   link/ether 52:54:00:29:90:67 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.207/24 brd 192.168.122.255 scope global dynamic eth0
      valid_lft 3510sec preferred_lft 3510sec
   inet6 fe80::5054:ff:fe29:9067/64 scope link
      valid_lft forever preferred_lft forever
3: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN
   link/ether fe:40:9f:27:51:51 brd ff:ff:ff:ff:ff:ff
   inet 172.16.88.0/12 scope global flannel.1
      valid_lft forever preferred_lft forever
   inet6 fe80::fc40:9fff:fe27:5151/64 scope link
      valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
   link/ether 02:42:38:ae:60:96 brd ff:ff:ff:ff:ff:ff
   inet 172.16.88.1/24 scope global docker0
      valid_lft forever preferred_lft forever

on the master.
[centos@atomic01 ~]$ kubectl get node
NAME              LABELS                                    STATUS
192.168.122.207   kubernetes.io/hostname=192.168.122.207   Ready

do the same operations on CentOS-Atomic-03 – 05.

Here is the output of kubectl on the master.
One master now manages four nodes.
[centos@atomic01 ~]$ kubectl get node
NAME              LABELS                                    STATUS
192.168.122.207   kubernetes.io/hostname=192.168.122.207   Ready
192.168.122.206   kubernetes.io/hostname=192.168.122.206   Ready
192.168.122.246   kubernetes.io/hostname=192.168.122.246   Ready
192.168.122.220   kubernetes.io/hostname=192.168.122.220   Ready

[centos@atomic01 ~]$ kubectl get service
NAME         LABELS                                    SELECTOR   IP(S)        PORT(S)
kubernetes   component=apiserver,provider=kubernetes   <none>     10.254.0.1   443/TCP

[ enjoy kubernetes ]

on the master, create a YAML file that defines a pod.
[centos@atomic01 ~]$ cat kube-nginx.yml
apiVersion: v1
kind: Pod
metadata:
  name: www
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080

start the pod.
[centos@atomic01 ~]$ kubectl create -f kube-nginx.yml
pods/www

kubernetes downloads the nginx image and runs it; after a few minutes the pod is Running.
[centos@atomic01 ~]$ kubectl get pod
NAME      READY     STATUS    RESTARTS   AGE
www       1/1       Running   0          3m

container www is running on node 192.168.122.207.
-bash-4.2$ kubectl get pod www
NAME      READY     STATUS    RESTARTS   AGE
www       1/1       Running   0          3m
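
since the pod asked for hostPort 8080, nginx should answer on port 8080 of that node; a quick check of mine from the master:
-bash-4.2$ curl -sI http://192.168.122.207:8080/ | head -1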


-bash-4.2$ kubectl describe
limitrange             persistentvolume       resourcequota
minion                 persistentvolumeclaim  secret
namespace              pod                    service
node                   replicationcontroller  serviceaccount
-bash-4.2$ kubectl describe pod www
Name: www
Namespace: default
Image(s): nginx
Node: 192.168.122.207/192.168.122.207
Labels: <none>
Status: Running
Reason:
Message:
IP: 172.16.100.2
Replication Controllers: <none>
Containers:
 nginx:
   Image: nginx
   State: Running
     Started: Mon, 15 Feb 2016 15:38:07 +0000
   Ready: True
   Restart Count: 0
Conditions:
 Type Status
 Ready True
Events:
 FirstSeen LastSeen Count From SubobjectPath Reason Message
 Mon, 15 Feb 2016 15:36:47 +0000 Mon, 15 Feb 2016 15:36:47 +0000 1 {scheduler } scheduled Successfully assigned www to 192.168.122.207
 Mon, 15 Feb 2016 15:36:59 +0000 Mon, 15 Feb 2016 15:36:59 +0000 1 {kubelet 192.168.122.207} implicitly required container POD pulled Successfully pulled Pod container image "gcr.io/google_containers/pause:0.8.0"
 Mon, 15 Feb 2016 15:37:01 +0000 Mon, 15 Feb 2016 15:37:01 +0000 1 {kubelet 192.168.122.207} implicitly required container POD created Created with docker id bd7c515b9cdf
 Mon, 15 Feb 2016 15:37:02 +0000 Mon, 15 Feb 2016 15:37:02 +0000 1 {kubelet 192.168.122.207} implicitly required container POD started Started with docker id bd7c515b9cdf
 Mon, 15 Feb 2016 15:38:05 +0000 Mon, 15 Feb 2016 15:38:05 +0000 1 {kubelet 192.168.122.207} spec.containers{nginx} pulled Successfully pulled image "nginx"
 Mon, 15 Feb 2016 15:38:07 +0000 Mon, 15 Feb 2016 15:38:07 +0000 1 {kubelet 192.168.122.207} spec.containers{nginx} created Created with docker id f9bd3bf6a69b
 Mon, 15 Feb 2016 15:38:07 +0000 Mon, 15 Feb 2016 15:38:07 +0000 1 {kubelet 192.168.122.207} spec.containers{nginx} started Started with docker id f9bd3bf6a69b

run one more pod.
-bash-4.2$ cat kube-nginx-02.yml
apiVersion: v1
kind: Pod
metadata:
  name: www02
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080
-bash-4.2$

-bash-4.2$ kubectl create -f kube-nginx-02.yml
pods/www02
-bash-4.2$

two pods are running. both request hostPort 8080, so the scheduler placed them on different nodes (www02 landed on 192.168.122.246, as the describe output below shows).
-bash-4.2$ kubectl get pod
NAME      READY     STATUS    RESTARTS   AGE
www       1/1       Running   0          11m
www02     1/1       Running   0          1m



-bash-4.2$ kubectl describe pod www02
Name: www02
Namespace: default
Image(s): nginx
Node: 192.168.122.246/192.168.122.246
Labels: <none>
Status: Running
Reason:
Message:
IP: 172.16.33.2
Replication Controllers: <none>
Containers:
 nginx:
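
when you are done playing, the pods can be removed with kubectl delete (standard kubectl usage, not part of the original post):
-bash-4.2$ kubectl delete pod www www02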
