lost and found ( for me ? )

Ubuntu uvtool : how to use a template and user-data (cloud-init)

Here are some examples of how to use a template and user-data with uvt-kvm.
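
These examples assume uvtool is installed and a cloud image has already been synced; if not, something like the following is needed first (release and arch are just examples):
$ sudo apt install uvtool
$ uvt-simplestreams-libvirt sync release=xenial arch=amd64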

When you create a VM as below, the VM will be attached to the “default” network.
$ uvt-kvm create --password=hello test01

user name : ubuntu
password : hello
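
To log in, look up the VM's address and SSH in as the ubuntu user (uvt-kvm ssh test01 also works):
$ uvt-kvm ip test01
$ ssh ubuntu@<the address printed above>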

$ virsh dumpxml test01
   <interface type='network'>
     <mac address='52:54:00:fc:3c:56'/>
     <source network='default' bridge='virbr0'/>
     <target dev='vnet0'/>
     <model type='virtio'/>
     <alias name='net0'/>
     <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
   </interface>

When you create a VM without specifying a template, uvt-kvm builds it from the default “template.xml”:
$ pwd
/usr/share/uvtool/libvirt

$ ls
remote-wait.sh  template.xml
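
A custom template can start from a copy of this default (just a suggestion):
$ cp /usr/share/uvtool/libvirt/template.xml ./template02.xml
$ vi ./template02.xml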

If you want to customize which networks a VM is attached to, you can do so with a custom template.

$ cat template02.xml
<domain type='kvm'>
 <os>
   <type>hvm</type>
   <boot dev='hd'/>
 </os>
 <features>
   <acpi/>
   <apic/>
   <pae/>
 </features>
 <devices>
   <interface type='network'>
     <source network='default'/>
     <model type='virtio'/>
   </interface>
   <interface type='network'>
     <source network='a-net'/>
     <model type='virtio'/>
   </interface>
   <interface type='network'>
     <source network='b-net'/>
     <model type='virtio'/>
   </interface>
   <serial type='pty'>
     <source path='/dev/pts/3'/>
     <target port='0'/>
   </serial>
   <graphics type='vnc' autoport='yes' listen='127.0.0.1'>
     <listen type='address' address='127.0.0.1'/>
   </graphics>
   <video/>
 </devices>
</domain>
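
Note that the a-net and b-net networks referenced above must already exist in libvirt before the VM is created. A minimal sketch of defining one as an isolated network (the XML below, including the bridge name and address range, is an assumption; adjust it to your environment):

$ cat a-net.xml
<network>
  <name>a-net</name>
  <bridge name='virbr1'/>
  <ip address='192.168.130.1' netmask='255.255.255.0'/>
</network>

$ virsh net-define a-net.xml
$ virsh net-start a-net
$ virsh net-autostart a-net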

$ uvt-kvm create --password=hello --template=./template02.xml --cpu 1 --disk 5 test02

[ user-data ]

The user-data below assigns static IP addresses to the extra interfaces (ens4 and ens5).
$ cat user-data
#cloud-config
ssh_pwauth: yes
chpasswd:
 list: |
   ubuntu:hello
 expire: False

write_files:
#  - path: /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg
#    owner: root:root
#    permissions: '0644'
#    content: |
#      network: {config: disabled}
 - path: /etc/network/interfaces.d/50-cloud-init.cfg
   owner: root:root
   permissions: '0644'
   content: |
     auto ens3
     iface ens3 inet dhcp

     auto ens4
     iface ens4 inet static
     address 192.168.130.120
     netmask 255.255.255.0

     auto ens5
     iface ens5 inet static
     address 192.168.131.120
     netmask 255.255.255.0

power_state:
 mode: reboot

$ uvt-kvm create --user-data=./user-data --template=./template02.xml --cpu 1 --disk 5 test05

$ uvt-kvm ip test05
192.168.122.34
$ ssh ubuntu@192.168.122.34

ubuntu@ubuntu:~$ ip a s | grep 192
   inet 192.168.122.34/24 brd 192.168.122.255 scope global ens3
   inet 192.168.130.120/24 brd 192.168.130.255 scope global ens4
   inet 192.168.131.120/24 brd 192.168.131.255 scope global ens5
ubuntu@ubuntu:~$

Kubernetes : deploy a Ceph cluster for persistent volumes

Here are the logs from when I set up a Ceph cluster to provide persistent volumes.

This assumes you have already set up a Kubernetes cluster with Juju and MAAS.
http://lost-and-found-narihiro.blogspot.jp/2017/07/ubuntu-1604-deploy-kubernetes-cluster.html

MAAS : MAAS Version 1.9.5+bzr4599-0ubuntu1 (14.04.1)
Juju : 2.2.2-xenial-amd64

Before deploying Ceph (screenshots omitted): Juju GUI, and the K8s dashboard with no persistent volumes.

[ deploy Ceph clusters with Juju ]

https://jujucharms.com/ceph-mon/
https://jujucharms.com/ceph-osd/

- Ceph mon

# juju deploy cs:ceph-mon -n 3

# juju status ceph-mon --format short

- ceph-mon/3: 192.168.40.42 (agent:allocating, workload:waiting)
- ceph-mon/4: 192.168.40.40 (agent:allocating, workload:waiting)
- ceph-mon/5: 192.168.40.41 (agent:allocating, workload:waiting)

# juju status ceph-mon --format short

- ceph-mon/3: 192.168.40.42 (agent:idle, workload:active)
- ceph-mon/4: 192.168.40.40 (agent:idle, workload:active)
- ceph-mon/5: 192.168.40.41 (agent:idle, workload:active)

Juju GUI after deploying ceph-mon (screenshot omitted).

- Ceph osd

# cat ceph-osd-config.yaml
ceph-osd:
   osd-devices: /dev/vdb

# juju deploy cs:ceph-osd -n 3 --config ceph-osd-config.yaml
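
To double-check that the charm picked up the osd-devices setting (assuming Juju 2.x, where "juju config <application> <key>" prints the current value):
# juju config ceph-osd osd-devices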

# juju status ceph-osd --format short

- ceph-osd/0: 192.168.40.45 (agent:allocating, workload:waiting)
- ceph-osd/1: 192.168.40.43 (agent:allocating, workload:waiting)
- ceph-osd/2: 192.168.40.44 (agent:allocating, workload:waiting)

# juju add-relation ceph-mon ceph-osd

# juju status ceph-mon ceph-osd --format short

- ceph-mon/3: 192.168.40.42 (agent:executing, workload:active)
- ceph-mon/4: 192.168.40.40 (agent:executing, workload:active)
- ceph-mon/5: 192.168.40.41 (agent:executing, workload:active)
- ceph-osd/0: 192.168.40.45 (agent:executing, workload:active)
- ceph-osd/1: 192.168.40.43 (agent:executing, workload:active)
- ceph-osd/2: 192.168.40.44 (agent:executing, workload:active)



# juju add-relation kubernetes-master ceph-mon


# juju run-action kubernetes-master/0 create-rbd-pv name=test size=50
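
run-action only queues the action; its result can be inspected afterwards using the action id that was printed when it was queued (the id below is a placeholder):
# juju show-action-output <action-id>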

# juju ssh kubernetes-master/0


$ kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
test      50M        RWO           Retain          Available             rbd                      17s

$ kubectl get pvc
No resources found.

On the K8s dashboard (screenshot omitted).

Reference
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

Create a persistent volume claim. Although the claim below requests only 3M, it will bind to the 50M “test” PV, since a claim binds to any available PV large enough to satisfy its request.
ubuntu@m-node05:~$ cat pv-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
 name: test-pv-claim
spec:
 storageClassName: rbd
 accessModes:
   - ReadWriteOnce
 resources:
   requests:
     storage: 3M


ubuntu@m-node05:~$ kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
test      50M        RWO           Retain          Available             rbd                      20m

ubuntu@m-node05:~$ kubectl create -f pv-claim.yaml
persistentvolumeclaim "test-pv-claim" created

ubuntu@m-node05:~$ kubectl get pv
NAME      CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM                   STORAGECLASS   REASON    AGE
test      50M        RWO           Retain          Bound     default/test-pv-claim   rbd                      20m

ubuntu@m-node05:~$ kubectl get pvc
NAME            STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
test-pv-claim   Bound     test      50M        RWO           rbd            7s
ubuntu@m-node05:~$


Create a pod that mounts the PVC.
ubuntu@m-node05:~$ cat create-a-pod-with-pvc.yaml
kind: Pod
apiVersion: v1
metadata:
 name: task-pv-pod
spec:

 volumes:
   - name: task-pv-storage
     persistentVolumeClaim:
      claimName: test-pv-claim

 containers:
   - name: task-pv-container
     image: nginx
     ports:
       - containerPort: 80
         name: "http-server"
     volumeMounts:
     - mountPath: "/usr/share/nginx/html"
       name: task-pv-storage

ubuntu@m-node05:~$ kubectl create -f create-a-pod-with-pvc.yaml
pod "task-pv-pod" created

$ kubectl get pod task-pv-pod
NAME          READY     STATUS    RESTARTS   AGE
task-pv-pod   1/1       Running   0          48s

ubuntu@m-node05:~$ kubectl exec -it task-pv-pod -- /bin/bash

root@task-pv-pod:~# df -h | grep rbd
/dev/rbd0        46M  2.6M   44M   6% /usr/share/nginx/html

root@task-pv-pod:~# apt update;apt install curl -y

root@task-pv-pod:/# echo 'hello world' > /usr/share/nginx/html/index.html

root@task-pv-pod:/# curl http://127.0.0.1
hello world
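
Because index.html lives on the RBD-backed volume rather than inside the container, it should survive the pod itself. A quick way to check (a sketch; output omitted):
root@task-pv-pod:/# exit
ubuntu@m-node05:~$ kubectl delete pod task-pv-pod
ubuntu@m-node05:~$ kubectl create -f create-a-pod-with-pvc.yaml
ubuntu@m-node05:~$ kubectl exec task-pv-pod -- cat /usr/share/nginx/html/index.html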

Access a ceph-mon node.
$ juju ssh ceph-mon/3

ubuntu@m-node10:~$ sudo ceph health
HEALTH_OK

ubuntu@m-node10:~$ sudo ceph osd stat
    osdmap e15: 3 osds: 3 up, 3 in
           flags sortbitwise,require_jewel_osds

ubuntu@m-node10:~$ sudo ceph -s
   cluster 80009c18-729c-11e7-9d93-5254009250af
    health HEALTH_OK
    monmap e1: 3 mons at {m-node08=192.168.40.40:6789/0,m-node09=192.168.40.41:6789/0,m-node10=192.168.40.42:6789/0}
           election epoch 4, quorum 0,1,2 m-node08,m-node09,m-node10
    osdmap e14: 3 osds: 3 up, 3 in
           flags sortbitwise,require_jewel_osds
     pgmap v145: 64 pgs, 1 pools, 8836 kB data, 8 objects
           109 MB used, 58225 MB / 58334 MB avail
                 64 active+clean
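
The persistent volume created by the create-rbd-pv action is backed by an RBD image. Assuming it landed in the default "rbd" pool and the rbd CLI is available on the monitor node, it can be listed with:
ubuntu@m-node10:~$ sudo rbd ls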

ubuntu@m-node10:~$ sudo ceph
ceph> health
HEALTH_OK

ceph> status
   cluster 80009c18-729c-11e7-9d93-5254009250af
    health HEALTH_OK
    monmap e1: 3 mons at {m-node08=192.168.40.40:6789/0,m-node09=192.168.40.41:6789/0,m-node10=192.168.40.42:6789/0}
           election epoch 4, quorum 0,1,2 m-node08,m-node09,m-node10
    osdmap e14: 3 osds: 3 up, 3 in
           flags sortbitwise,require_jewel_osds
     pgmap v145: 64 pgs, 1 pools, 8836 kB data, 8 objects
           109 MB used, 58225 MB / 58334 MB avail
                 64 active+clean

ceph> exit