
Ubuntu 16.04: deploy Kubernetes cluster with Juju and MAAS

Here are the logs from when I set up a Kubernetes cluster with Juju and MAAS.

MAAS Version 1.9.5+bzr4599-0ubuntu1 (14.04.1)
Juju 2.2.2 ( 2.2.2-xenial-amd64 )

This assumes you have already set up Juju and MAAS (see my earlier post):
http://lost-and-found-narihiro.blogspot.jp/2016/11/ubuntu-1604-install-maas-within-ubuntu.html

[ deploy a Kubernetes cluster ]

Download a bundle from https://jujucharms.com/canonical-kubernetes/ and deploy K8s with Juju:
# juju deploy ./bundle01.yaml

Here is the bundle I used. I trimmed it down from the original: one etcd unit and two kubernetes-worker units for my lab.
# cat bundle01.yaml
series: xenial
description: 'A nine-machine Kubernetes cluster, appropriate for production. Includes
  a three-machine etcd cluster and three Kubernetes worker nodes.'
services:
  easyrsa:
    annotations:
      gui-x: '450'
      gui-y: '550'
    charm: cs:~containers/easyrsa-12
    num_units: 1
  etcd:
    annotations:
      gui-x: '800'
      gui-y: '550'
    charm: cs:~containers/etcd-40
    num_units: 1
  flannel:
    annotations:
      gui-x: '450'
      gui-y: '750'
    charm: cs:~containers/flannel-20
  kubeapi-load-balancer:
    annotations:
      gui-x: '450'
      gui-y: '250'
    charm: cs:~containers/kubeapi-load-balancer-16
    expose: true
    num_units: 1
  kubernetes-master:
    annotations:
      gui-x: '800'
      gui-y: '850'
    charm: cs:~containers/kubernetes-master-35
    num_units: 1
    options:
      channel: 1.7/stable
  kubernetes-worker:
    annotations:
      gui-x: '100'
      gui-y: '850'
    charm: cs:~containers/kubernetes-worker-40
    expose: true
    num_units: 2
    options:
      channel: 1.7/stable
relations:
- - kubernetes-master:kube-api-endpoint
  - kubeapi-load-balancer:apiserver
- - kubernetes-master:loadbalancer
  - kubeapi-load-balancer:loadbalancer
- - kubernetes-master:kube-control
  - kubernetes-worker:kube-control
- - kubernetes-master:certificates
  - easyrsa:client
- - etcd:certificates
  - easyrsa:client
- - kubernetes-master:etcd
  - etcd:db
- - kubernetes-worker:certificates
  - easyrsa:client
- - kubernetes-worker:kube-api-endpoint
  - kubeapi-load-balancer:website
- - kubeapi-load-balancer:certificates
  - easyrsa:client
- - flannel:etcd
  - etcd:db
- - flannel:cni
  - kubernetes-master:cni
- - flannel:cni
  - kubernetes-worker:cni
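
Deployment takes a while, as MAAS has to provision the machines and the charms have to bring up Kubernetes. One way to keep an eye on progress until every workload reports "active" (a hedged example, run on the Juju client):

# watch -c juju status --color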

After deploying K8s:
# juju status --format short

- easyrsa/0: 192.168.40.36 (agent:idle, workload:active)
- etcd/0: 192.168.40.32 (agent:idle, workload:active) 2379/tcp
- kubeapi-load-balancer/0: 192.168.40.37 (agent:idle, workload:active) 443/tcp
- kubernetes-master/0: 192.168.40.35 (agent:idle, workload:active) 6443/tcp
 - flannel/0: 192.168.40.35 (agent:idle, workload:active)
- kubernetes-worker/0: 192.168.40.38 (agent:idle, workload:active) 80/tcp, 443/tcp
 - flannel/2: 192.168.40.38 (agent:idle, workload:active)
- kubernetes-worker/1: 192.168.40.39 (agent:idle, workload:active) 80/tcp, 443/tcp
 - flannel/1: 192.168.40.39 (agent:idle, workload:active)

[ access to the dashboard ]

# juju config kubernetes-master enable-dashboard-addons=true
WARNING the configuration setting "enable-dashboard-addons" already has the value "true"

SSH to the kubernetes-master and look at the "config" file in the ubuntu user's home directory.
It contains the API server address (the load balancer), the username, and the password.
# juju ssh kubernetes-master/0

ubuntu@m-node05:~$ cat config
    server: https://192.168.40.37:443
users:
- name: admin
  user:
    password: credentials
    username: admin

ubuntu@m-node05:~$ kubectl cluster-info
Kubernetes master is running at http://localhost:8080
Heapster is running at http://localhost:8080/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kube-dns/proxy
kubernetes-dashboard is running at http://localhost:8080/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy
Grafana is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
InfluxDB is running at http://localhost:8080/api/v1/namespaces/kube-system/services/monitoring-influxdb/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
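
The dashboard addon runs as a pod in the kube-system namespace, so as a quick sanity check you can confirm it is up from the master before opening the UI:

ubuntu@m-node05:~$ kubectl get pods --namespace=kube-system

The kubernetes-dashboard pod should show a Running status.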

Access https://<IP>/ui (the kubeapi-load-balancer address) and enter the username and password from the config file above.
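
If you would rather run kubectl from the Juju client machine instead of on the master, you can copy the same config file down. A minimal sketch, assuming kubectl is installed on the client and ~/.kube/config is safe to overwrite:

# mkdir -p ~/.kube
# juju scp kubernetes-master/0:config ~/.kube/config
# kubectl get nodes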


Ubuntu 16.04 : enable core dump in LXD containers

Host OS : Ubuntu 16.04
Container OS : CentOS 7

1. on the host OS (Ubuntu 16.04), configure the core file pattern

kernel.core_pattern is a global, non-namespaced sysctl, so containers inherit whatever the host sets. Edit core_pattern:

from
$ cat /proc/sys/kernel/core_pattern
|/usr/share/apport/apport %p %s %c %P

to
$ sudo sh -c "echo /tmp/core.%e.%p.%t > /proc/sys/kernel/core_pattern"

$ cat /proc/sys/kernel/core_pattern
/tmp/core.%e.%p.%t

To make this permanent, add the following to a file under /etc/sysctl.d/ (e.g. 99-sysctl.conf):
$ tail -1 /etc/sysctl.d/99-sysctl.conf
kernel.core_pattern = /tmp/core.%e.%p.%t
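
To apply the sysctl.d files without a reboot, reload them by hand:

$ sudo sysctl --system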

2. on the host OS (Ubuntu 16.04), set the core file size

$ ulimit -c unlimited

$ ulimit -c
unlimited

This only affects the current shell. To make it persistent for login sessions, add the following lines to /etc/security/limits.conf:
* soft core unlimited
* hard core unlimited
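
limits.conf is read by PAM at login, so the new limit applies to sessions started after the change; a fresh login shell should report:

$ ulimit -c
unlimited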

3. launch a CentOS 7 container and verify core dumps work

$ lxc launch centos7 centos01

$ lxc exec centos01 bash

[root@centos01 ~]# ulimit -c
0

[root@centos01 ~]# cat /proc/sys/kernel/core_pattern
/tmp/core.%e.%p.%t

send SIGABRT to a running process (PID 298 in this example) to force a core dump

[root@centos01 ~]# kill -6 298

[root@centos01 ~]# ls /tmp/core*
/tmp/core.<program name>.298.1499911251
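
To check that a core file is actually usable, you can generate one from a throwaway process and open it with gdb. A rough sketch, assuming gdb is installed in the container (yum install -y gdb); raise the core limit in the container shell first, and note that PIDs and the core file name will differ on your system:

[root@centos01 ~]# ulimit -c unlimited
[root@centos01 ~]# sleep 300 &
[root@centos01 ~]# kill -6 $!
[root@centos01 ~]# gdb $(which sleep) /tmp/core.sleep.*

Inside gdb, "bt" shows the stack at the time of the abort (without debuginfo packages the symbols will be limited).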