
Install a Kubernetes cluster on CoreOS

Here are my logs from installing a Kubernetes cluster on CoreOS.
The CoreOS machines run as VMs within Ubuntu KVM.

KVM host : Ubuntu 14.04
install CoreOS (Kubernetes) as virtual machines within Ubuntu KVM

download Kubernetes
https://github.com/kubernetes/kubernetes
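
The directory layout used below (platforms/linux/amd64 with prebuilt binaries) comes from a release tarball rather than a plain git checkout. Something like this, where the version is just an example of a release current at the time:

$ wget https://github.com/kubernetes/kubernetes/releases/download/v1.3.5/kubernetes.tar.gz
$ tar xzf kubernetes.tar.gz
$ cd kubernetes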

before installing the CoreOS VMs, create a directory for them
$ sudo mkdir /var/lib/libvirt/images/kubernetes
$ sudo chown -R $USER:$USER /var/lib/libvirt/images/kubernetes/

run kube-up.sh
This script sets up four CoreOS VMs: one master and three nodes.
$ pwd
/home/hattori/kubernetes

$ export KUBERNETES_PROVIDER=libvirt-coreos

$ ./cluster/kube-up.sh
Nb ready nodes: 0 / 3
Nb ready nodes: 0 / 3
Nb ready nodes: 3 / 3
Kubernetes cluster is running. The master is running at:

 http://192.168.10.1:8080

You can control the Kubernetes cluster with: 'cluster/kubectl.sh'
You can connect on the master with: 'ssh core@192.168.10.1'
... calling validate-cluster
Found 3 node(s).
NAME           STATUS    AGE
192.168.10.2   Ready     1s
192.168.10.3   Ready     1s
192.168.10.4   Ready     1s
Validate output:
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
Cluster validation succeeded
Done, listing cluster services:

Kubernetes master is running at http://192.168.10.1:8080

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
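
The script builds 3 nodes by default. If you want a different cluster size, the libvirt-coreos provider reads the node count from the environment (NUM_NODES in releases from around this time; check cluster/libvirt-coreos/config-default.sh for the exact variable):

$ export NUM_NODES=2
$ ./cluster/kube-up.sh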

kubectl ( on Ubuntu host )
$ pwd
/home/hattori/kubernetes/platforms/linux/amd64

$ sudo cp kubectl /usr/local/bin/kubectl

$ which kubectl
/usr/local/bin/kubectl

$ kubectl get nodes
NAME           STATUS    AGE
192.168.10.2   Ready     54m
192.168.10.3   Ready     54m
192.168.10.4   Ready     54m
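
kubectl reads the master address from ~/.kube/config, which kube-up.sh wrote (see below). You can also point it at the API server explicitly with -s:

$ kubectl -s http://192.168.10.1:8080 get nodes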

master : 192.168.10.1
nodes : 192.168.10.[2-4]

on the Ubuntu host
hattori@ubuntu03:~/kubernetes$ ./cluster/kubectl.sh cluster-info
Kubernetes master is running at http://192.168.10.1:8080
KubeDNS is running at http://192.168.10.1:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

hattori@ubuntu03:~/kubernetes$ ./cluster/kubectl.sh get nodes
NAME           STATUS    AGE
192.168.10.2   Ready     31m
192.168.10.3   Ready     31m
192.168.10.4   Ready     31m
hattori@ubuntu03:~/kubernetes$

hattori@ubuntu03:~/kubernetes$ ./cluster/kubectl.sh get namespace
NAME          STATUS    AGE
default       Active    31m
kube-system   Active    31m

$ ls ~/.kube/config
/home/hattori/.kube/config
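
kubectl config view shows what kube-up.sh put in there, including the server address:

$ kubectl config view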

Two networks have been created.
$ virsh dumpxml kubernetes-master | grep network
   <interface type='network'>
     <source network='kubernetes_global'/>
   <interface type='network'>
     <source network='kubernetes_pods'/>

$ virsh dumpxml kubernetes-node-1 | grep network
   <interface type='network'>
     <source network='kubernetes_global'/>
   <interface type='network'>
     <source network='kubernetes_pods'/>
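
virsh net-list shows both networks as active:

$ virsh net-list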


$ virsh net-dumpxml kubernetes_global
<network connections='4'>
 <name>kubernetes_global</name>
 <uuid>182e5c56-53e0-4ca9-9451-720877fd6230</uuid>
 <forward mode='nat'>
   <nat>
     <port start='1024' end='65535'/>
   </nat>
 </forward>
 <bridge name='virbr_kub_gl' stp='off' delay='0'/>
 <mac address='52:54:00:78:bb:d8'/>
 <ip address='192.168.10.254' netmask='255.255.255.0'>
 </ip>
</network>


$ virsh net-dumpxml kubernetes_pods
<network connections='4'>
 <name>kubernetes_pods</name>
 <uuid>02f9691e-2dbf-4fd7-b82b-5ac0a0ce1191</uuid>
 <bridge name='virbr_kub_pods' stp='off' delay='0'/>
 <mac address='52:54:00:78:38:df'/>
 <ip address='10.10.0.100' netmask='255.255.0.0'>
 </ip>
</network>

Memory is a bit small: each VM gets only 512 MiB.
virsh # dumpxml 28
<domain type='kvm' id='28'>
 <name>kubernetes-node-2</name>
 <uuid>2ebfb4c3-19d0-412b-937a-6c500d6c86ce</uuid>
 <memory unit='KiB'>524288</memory>
 <currentMemory unit='KiB'>524288</currentMemory>
 <vcpu placement='static'>2</vcpu>
 <resource>

You can change the memory size by editing coreos.xml
$ pwd
/home/hattori/kubernetes/cluster/libvirt-coreos

hattori@ubuntu03:~/kubernetes/cluster/libvirt-coreos$ grep -i memory coreos.xml
 <memory unit='MiB'>512</memory>
 <currentMemory unit='MiB'>512</currentMemory>
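
For example, to give each VM 1 GiB, edit both lines before running kube-up.sh. A quick sed works since the value appears only on these two lines (double-check with grep afterwards):

$ sed -i 's/>512</>1024</g' coreos.xml
$ grep -i memory coreos.xml
 <memory unit='MiB'>1024</memory>
 <currentMemory unit='MiB'>1024</currentMemory>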

Access the master
$ ssh core@192.168.10.1
The authenticity of host '192.168.10.1 (192.168.10.1)' can't be established.

CoreOS alpha (1122.0.0)
Last login: Tue Aug 23 01:59:59 2016 from 192.168.10.254
Update Strategy: No Reboots
Failed Units: 1
 kube-addons.service
core@kubernetes-master ~ $
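
To see why kube-addons.service failed, check its journal:

core@kubernetes-master ~ $ journalctl -u kube-addons.service --no-pager | tail -n 20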

core@kubernetes-master ~ $ for i in {docker,flanneld,kube-apiserver.service,kube-controller-manager.service,kube-scheduler.service};do systemctl status $i;done

core@kubernetes-master ~ $ ls /opt/kubernetes/
addons  bin  certs  manifests

Access a node
$ ssh core@192.168.10.2

CoreOS alpha (1122.0.0)
Last login: Tue Aug 23 02:14:33 2016 from 192.168.10.254
Update Strategy: No Reboots
core@kubernetes-node-1 ~ $

core@kubernetes-node-1 ~ $ for i in {docker,flanneld,kubelet,kube-proxy};do systemctl status $i;done

core@kubernetes-node-1 ~ $ ls /opt/kubernetes/
addons  bin  certs  manifests

core@kubernetes-node-1 ~ $ ip -4 a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
   inet 127.0.0.1/8 scope host lo
      valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
   inet 192.168.10.2/24 brd 192.168.10.255 scope global eth0
      valid_lft forever preferred_lft forever
4: cbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
   inet 10.10.1.1/24 brd 10.10.1.255 scope global cbr0
      valid_lft forever preferred_lft forever
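
cbr0 carries this node's pod subnet: each node gets its own /24 out of the 10.10.0.0/16 pod network (10.10.1.0/24 here, which is why the nginx pod created later gets 10.10.1.2). The routing table shows how the other nodes' pod subnets are reached:

core@kubernetes-node-1 ~ $ ip route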

core@kubernetes-node-1 ~ $ cat /etc/os-release
NAME=CoreOS
ID=coreos
VERSION=1122.0.0
VERSION_ID=1122.0.0
BUILD_ID=2016-07-27-0739
PRETTY_NAME="CoreOS 1122.0.0 (MoreOS)"
ANSI_COLOR="1;32"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://github.com/coreos/bugs/issues"

Destroy the environment
hattori@ubuntu03:~/kubernetes$ ./cluster/kube-down.sh
Bringing down cluster using provider: libvirt-coreos
Domain kubernetes-node-1 destroyed

Domain kubernetes-node-2 destroyed

Domain kubernetes-node-3 destroyed

Domain kubernetes-master destroyed

Vol kubernetes deleted

Vol kubernetes-master.img deleted

Vol kubernetes-node-1.img deleted

Vol kubernetes-node-2.img deleted

Vol kubernetes-node-3.img deleted

Vol kubernetes_config_master deleted

Vol kubernetes_config_node-00 deleted

Vol kubernetes_config_node-01 deleted

Vol kubernetes_config_node-02 deleted

Network kubernetes_global destroyed

Network kubernetes_pods destroyed

Done

use the stable channel instead of alpha.
$ export COREOS_CHANNEL=stable

$ export KUBERNETES_PROVIDER=libvirt-coreos

$ ./cluster/kube-up.sh

You can use the beta channel too.
$ export COREOS_CHANNEL=beta


use a proxy when downloading Docker images.

add the following to user_data.yml (kubernetes/cluster/libvirt-coreos/user_data.yml)
coreos:
  units:
    - name: docker.service
      drop-ins:
        - name: 20-http-proxy.conf
          content: |
            [Service]
            Environment="HTTP_PROXY=http://proxy.example.com:8080"
      command: restart
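
After the VMs come up, you can check on a node that the drop-in took effect (systemctl show prints the unit's merged environment):

$ systemctl show docker --property=Environment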

create a pod
hattori@ubuntu03:~$ cd kubernetes/examples/
hattori@ubuntu03:~/kubernetes/examples$ kubectl create -f pod
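
-f pod points at the pod example in the examples tree. For reference, a minimal equivalent manifest (assuming the stock nginx image) can also be written by hand:

$ cat <<'EOF' > nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
EOF
$ kubectl create -f nginx-pod.yaml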

hattori@ubuntu03:~/kubernetes/examples$ kubectl get pod
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          4m

hattori@ubuntu03:~/kubernetes/examples$ kubectl get pod -o wide
NAME      READY     STATUS    RESTARTS   AGE       IP          NODE
nginx     1/1       Running   0          4m        10.10.1.2   192.168.10.2
hattori@ubuntu03:~/kubernetes/examples$

on the master
core@kubernetes-master ~ $ ping 10.10.1.2
PING 10.10.1.2 (10.10.1.2) 56(84) bytes of data.
64 bytes from 10.10.1.2: icmp_seq=1 ttl=63 time=0.611 ms
64 bytes from 10.10.1.2: icmp_seq=2 ttl=63 time=0.228 ms
^C

on the node
core@kubernetes-node-1 ~ $ ping 10.10.1.2 -c 3
PING 10.10.1.2 (10.10.1.2) 56(84) bytes of data.
64 bytes from 10.10.1.2: icmp_seq=1 ttl=64 time=0.050 ms
64 bytes from 10.10.1.2: icmp_seq=2 ttl=64 time=0.064 ms
^C

delete the pod
hattori@ubuntu03:~/kubernetes/examples$ kubectl delete pod nginx
pod "nginx" deleted