Saturday, March 30, 2019

Docker and Kubernetes





A Dockerfile is a text file that defines a Docker image. 

A Docker image is created by building a Dockerfile with the docker build command.

Start containers with the docker run or docker-compose command.
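
A minimal end-to-end sketch of that flow (hypothetical names; assumes the alpine base image is pullable):

mkdir -p demo && cat > demo/Dockerfile <<'EOF'
FROM alpine:3.9
CMD ["echo", "hello from the container"]
EOF
docker build -t demo-image ./demo
docker run --rm demo-image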

You’ll usually start by searching for available Docker images on the Docker Store. You’ll also find images on GitHub bundled with a good number of repos (in the form of a Dockerfile), or you can share Docker images within your team or company by running your own Docker registry. 

You can see your available images using
docker images

Generate an image from the Dockerfile under the ./docker directory and give it the image name defined in $image:
docker build -t $image ./docker/

This will run the image defined in $image in detached mode and set the container name to $gvsunittest:
docker run -t -d --name $gvsunittest -v $PAN_BUILD_DIR/pkg/RPMS/noarch:/home/rpm $image gvs

General syntax for running an image in an isolated container:
$ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]


You can run it in detached mode:
docker run -d image_name:tag_name (or image ID)

docker run -d -p 8080:80/tcp image-id   (this also maps container port 80 to host port 8080)
Then you can check that your container is running using
docker ps

To display stopped containers:
docker ps -f "status=exited"
To restart a stopped container:
docker start container-id
docker ps gives you a container ID; you can use it to get a shell inside your container using:
docker exec -it container_id /bin/bash

You can stop it using docker stop container_id and remove it using docker rm container_id.
You can also run your container with the --rm flag so that when the container stops it is automatically removed.
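A quick way to see --rm in action (assuming ubuntu:16.04 is available locally):

docker run --rm -it ubuntu:16.04 bash    (exit the shell...)
docker ps -a                             (...and the container is already gone)
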
Run a container and use bash to work on it in one step (also skipping the entrypoint):
docker run -it --name google-app-nodejs5 --entrypoint=  {image_id} bash

Pull an image from a public registry:
docker pull {image}
Sample:
docker pull gcr.io/google-appengine/nodejs

Generate an image from a Docker container:
docker commit {container_id} {image_name}:{tag}
docker commit 3ec3c8fb5b4a actionservice-proxy:0.4
Save an image to the file system:
docker save -o {file_name} {image_name}:{tag}
docker save -o actionservice-proxy_0_4.docker actionservice-proxy:0.4
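To load such a saved image back (e.g. on another machine), the counterpart command is docker load:
docker load -i actionservice-proxy_0_4.docker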
Remove a Docker container:
docker rm -f {container_id}

To install common commands in a Docker container (apt-based image):
apt-get update
apt-get install iproute (ip, etc.)
apt-get install net-tools (ifconfig, netstat, etc.)
apt-get install vim (vi, etc.)
apt-get install iputils-ping (ping, etc.)

Access a service running on the host from a Docker container:
Use host.docker.internal as the hostname of the host machine.
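For example, if a service is listening on port 8000 on the host (hypothetical port), from inside a container:
wget -qO- http://host.docker.internal:8000
Note this name is resolved by Docker for Mac/Windows; on plain Linux Docker it may not exist.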

Publish a Docker image to docker-public.af.paaaaaa.local (a JFrog Artifactory instance):
(2018-11-26 14:17:44) jzeng@curium:~/dservices3/docker$ docker login docker-public.af.paaaaaa.local
Username: jzeng
Password:
WARNING! Your password will be stored unencrypted in /home/jzeng/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

(2018-11-26 14:27:49) jzeng@curium:~/dservices3/docker$ docker tag 9489640ce1d0 docker-public.af.paaaaaaa.local/directory-sync:v1

(2018-11-26 14:30:19) jzeng@curium:~/dservices3/docker$ docker push docker-public.af.paaaaaaa.local/directory-sync:v1
The push refers to repository [docker-public.af.paaaaaaa.local/directory-sync]
eb3e57f02354: Layer already exists
bd8ed7c540fb: Layer already exists
8b36a56228ba: Layer already exists
851945122429: Layer already exists
8dfd5f47de0b: Layer already exists
8de8c1ea8963: Layer already exists
d3547c379d00: Layer already exists
v1: digest: sha256:c3d4f616d2d25aabb1ca428950c77b578f46b8d8eb7375af544fc82bb52588ea size: 1779

Build a new image from an existing one
(2018-11-28 09:54:25) jzeng@curium:~$ docker images | grep dire
docker-public.af.paaaaaaa.local/directory-sync-build   v1   4354bb397010   18 hours ago   969MB
docker-public.af.paaaaaaa.local/directory-sync         v2   4354bb397010   18 hours ago   969MB
directory-sync-build                                   v1   4354bb397010   18 hours ago   969MB
docker-public.af.paaaaaaa.local/directory-sync         v1   9489640ce1d0   2 years ago    1.96GB

(2018-11-28 09:55:06) jzeng@curium:~$ docker run -t -d --name ds-build-py2 4354bb397010
12e7ac315122fdeed420226d482f2b7f963c91a222b5a6ef1c4ce08283d66f7d             

or: docker run -t -d --name ds-build-py2-jzeng -v /home/jzeng/dservices2:/home/jzeng/dservices2 {docker-image}

Example:

(2018-11-30 18:02:07) jzeng@curium:~/dservices2$ docker run -t -d --name ds-build-py2-jzeng -v /home/jzeng/dservices2:/home/jzeng/dservices2 829babe8f8c8

docker exec -it 489cced0e8c4 /bin/bash


(2018-11-28 10:16:49) jzeng@curium:~/docker$ docker ps
CONTAINER ID        IMAGE                              COMMAND                  CREATED             STATUS              PORTS               NAMES
12e7ac315122        4354bb397010                       "python2"                16 minutes ago      Up 15 minutes                           ds-build-py2

(2018-11-26 15:01:03) jzeng@curium:~/dservices3/docker$ docker exec -it 12e7ac315122 /bin/bash

Build a new image from a Dockerfile
(2018-11-27 16:09:27) jzeng@curium:~/docker$ cat Dockerfile
FROM docker-engtools.af.paaaaaaa.local/panwbase-python2:latest
RUN apt-get update && apt-get install -y bc rpm
RUN pip install pytest pytest-xdist pytest-cov
(2018-11-27 16:09:27) jzeng@curium:~/docker$ docker build -t directory-sync-build:v3 .
(2018-11-27 16:09:20) jzeng@curium:~/docker$ docker images | grep directory
directory-sync-build                                         v1                  4354bb397010        52 seconds ago      969MB

(2018-11-27 16:12:22) jzeng@curium:~/docker$ docker login docker-public.af.paaaaaaa.local
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /home/jzeng/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

(2018-11-27 16:14:43) jzeng@curium:~/docker$ docker tag 4354bb397010 docker-public.af.paaaaaaa.local/directory-sync:v3
(2018-11-27 16:16:23) jzeng@curium:~/docker$ docker push docker-public.af.paaaaaaa.local/directory-sync:v3

docker info

(2018-09-11 14:49:20) jzeng@bromine:~$ docker info
Containers: 2599
 Running: 3
 Paused: 0
 Stopped: 2596
Images: 4637
Server Version: 18.06.1-ce
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Backing Filesystem: extfs
 Dirs: 9980
 Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.4.0-134-generic
Operating System: Ubuntu 16.04.5 LTS
OSType: linux
Architecture: x86_64
CPUs: 32
Total Memory: 125.9GiB
Name: bromine
ID: GTIU:XVFJ:QOO7:Y2OT:5QMX:P6HC:UVJC:YCIO:XB5K:QAGX:RHQ6:763I
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 repo:5000
 127.0.0.0/8
Live Restore Enabled: false


Docker Networks:

Docker Compose sets up a single network for your application(s) by default, adding each container for a service to the default network. Containers on a single network can reach and discover every other container on the network.

docker network ls   (list all networks)
docker network inspect {network_id}   (detailed info about a network)

To add more containers to an existing network, add the following top-level entry to your docker-compose.yml file. The following example adds new containers to the existing network cp-all-in-one_default:

networks:
  default:
    external:
      name: cp-all-in-one_default
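
For instance, a minimal docker-compose.yml using this entry might look like (service name and image are hypothetical):

version: '3'
services:
  mytool:
    image: alpine:3.9
    command: tail -f /dev/null
networks:
  default:
    external:
      name: cp-all-in-one_default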







Docker Foundations:


Docker images are stored as series of read-only layers. When we start a container, Docker takes the read-only image and adds a read-write layer on top. If the running container modifies an existing file, the file is copied out of the underlying read-only layer and into the top-most read-write layer where the changes are applied. The version in the read-write layer hides the underlying file, but does not destroy it — it still exists in the underlying layer. When a Docker container is deleted, relaunching the image will start a fresh container without any of the changes made in the previously running container — those changes are lost. Docker calls this combination of read-only layers with a read-write layer on top a Union File System.
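
A quick sketch that demonstrates the lost read-write layer (any local image works; ubuntu:16.04 assumed here):

docker run --name demo1 ubuntu:16.04 bash -c 'echo modified > /tmp/marker'
docker rm demo1
docker run --rm ubuntu:16.04 cat /tmp/marker    (fails: fresh container, the change is gone)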

Volume:

In order to be able to save (persist) data and also to share data between containers, Docker came up with the concept of volumes. Quite simply, volumes are directories (or files) that are outside of the default Union File System and exist as normal directories and files on the host filesystem.

Since the point of volumes is to exist independent from containers, when a container is removed, a volume is not automatically removed at the same time. When a volume exists and is no longer connected to any containers, it's called a dangling volume. 


docker volume ls   (list all volumes)
docker volume ls -f dangling=true   (list all dangling volumes)
docker volume prune   (remove all dangling volumes)

If you start a container with a volume that does not yet exist, Docker creates the volume for you. The following example mounts the volume myvol2 into /app/ in the container.
The -v and --mount examples below produce the same result. You can’t run them both unless you remove the devtest container and the myvol2 volume after running the first one.
$ docker run -d \
  --name devtest \
  -v myvol2:/app \
  nginx:latest
$ docker run -d \
  --name devtest \
  --mount source=myvol2,target=/app \
  nginx:latest

--mount is preferred, and some contexts support only --mount, not -v.
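
Since volumes are just host-side directories, you can also mount an explicit host path with the same --mount flag (a bind mount; the source path here is hypothetical):

docker run -d \
  --name devtest-bind \
  --mount type=bind,source=/home/jzeng/app,target=/app \
  nginx:latest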



Container:

docker rm -v {container_name}   (remove a container and its volumes)
docker stop $(docker ps -a -q)   (stop all containers)
docker rm $(docker ps -a -q)   (remove all containers)
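A related cleanup command (standard Docker, noted here for completeness):
docker rmi $(docker images -q)   (remove all images)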



COE (Container Orchestration Engine): Kubernetes, Docker Swarm, etc.


MiniKube

Helm: Helm is the package manager (analogous to yum and apt) and Charts are packages (analogous to debs and rpms). The home for these Charts is the Kubernetes Charts repository which provides continuous integration for pull requests, as well as automated releases of Charts in the master branch.
There are two main folders where charts reside. The stable folder hosts applications which meet minimum requirements, such as proper documentation and inclusion of only beta or higher Kubernetes resources. The incubator folder provides a place for charts to be submitted and iterated on until they’re ready for promotion to stable, at which time they will automatically be pushed out to the default repository.


Pods
A pod is the smallest deployable entity in Kubernetes, and it is important to understand the main principles around pods:
- Containers always run inside a pod.
- A pod usually has 1 container but can have more.
- Containers in the same pod are guaranteed to be located on the same machine and share resources (see the sketch below).
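
A minimal two-container pod sketch (hypothetical names and images); because the containers share the pod's network namespace, the sidecar can reach nginx on localhost:

apiVersion: v1
kind: Pod
metadata:
  name: two-containers
spec:
  containers:
  - name: web
    image: nginx:latest
  - name: sidecar
    image: alpine:3.9
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 10; done"]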


Helm Chart Repositories:




Set up Kafka on Minikube:

$ helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
$ kubectl create ns kafka
$ helm install --name pan-kafka --namespace kafka incubator/kafka
$ kubectl create -f kafka-pod.yaml
pod/testclient created

where kafka-pod.yaml is:

apiVersion: v1
kind: Pod
metadata:
  name: testclient
  namespace: kafka
spec:
  containers:
  - name: kafka
    image: solsson/kafka:0.11.0.0
    command:
      - sh
      - -c
      - "exec tail -f /dev/null"

jzeng@cloud-dev-one:~$ kubectl --namespace kafka get pods
or
jzeng@cloud-dev-one:~$ kubectl -n kafka get pods
NAME                    READY     STATUS    RESTARTS   AGE
pan-kafka-0             1/1       Running   3          23h
pan-kafka-1             1/1       Running   0          23h
pan-kafka-2             1/1       Running   0          23h
pan-kafka-zookeeper-0   1/1       Running   0          23h
pan-kafka-zookeeper-1   1/1       Running   0          23h
pan-kafka-zookeeper-2   1/1       Running   0          23h
testclient              1/1       Running   0          2h

jzeng@cloud-dev-one:/$ kubectl -n kafka get svc
NAME                           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
pan-kafka                      ClusterIP   10.105.235.56   <none>        9092/TCP                     1d
pan-kafka-0-external           NodePort    10.103.252.6    <none>        19092:31090/TCP              1d
pan-kafka-1-external           NodePort    10.96.70.68     <none>        19092:31091/TCP              1d
pan-kafka-2-external           NodePort    10.97.12.156    <none>        19092:31092/TCP              1d
pan-kafka-headless             ClusterIP   None            <none>        9092/TCP                     1d
pan-kafka-zookeeper            ClusterIP   10.99.201.236   <none>        2181/TCP                     1d
pan-kafka-zookeeper-headless   ClusterIP   None            <none>        2181/TCP,3888/TCP,2888/TCP   1d

Or just: kubectl -n kafka get all

Get info about a pod, such as which containers are inside it:

jzeng@cloud-dev-one:~$ kubectl -n kafka  get pods pan-kafka-0 -o json
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "creationTimestamp": "2018-09-11T22:45:40Z",
        "generateName": "pan-kafka-",
        "labels": {
            "app": "kafka",
            "controller-revision-hash": "pan-kafka-7477d5d6db",
            "release": "pan-kafka",
            "statefulset.kubernetes.io/pod-name": "pan-kafka-0"
        },
        "name": "pan-kafka-0",
        "namespace": "kafka",
        "ownerReferences": [
            {
                "apiVersion": "apps/v1",
                "blockOwnerDeletion": true,
                "controller": true,
                "kind": "StatefulSet",
                "name": "pan-kafka",
                "uid": "68727fef-b614-11e8-a163-000c291cd135"
            }
        ],
        "resourceVersion": "173198",
        "selfLink": "/api/v1/namespaces/kafka/pods/pan-kafka-0",
        "uid": "68799894-b614-11e8-a163-000c291cd135"
    },
    "spec": {
        "containers": [
            {
                "command": [
                    "sh",
                    "-exc",
                    "unset KAFKA_PORT \u0026\u0026 \\\nexport KAFKA_BROKER_ID=${HOSTNAME##*-} \u0026\u0026 \\\nexport KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${POD_IP}:9092 \u0026\u0026 \\\nexec /etc/confluent/docker/run\n"
                ],
                "env": [
                    {
                        "name": "POD_IP",
                        "valueFrom": {
                            "fieldRef": {
                                "apiVersion": "v1",
                                "fieldPath": "status.podIP"
                            }
                        }
                    },
                    {
                        "name": "KAFKA_HEAP_OPTS",
                        "value": "-Xmx1G -Xms1G"
                    },
                    {
                        "name": "KAFKA_ZOOKEEPER_CONNECT",
                        "value": "pan-kafka-zookeeper:2181"
                    },
                    {
                        "name": "KAFKA_LOG_DIRS",
                        "value": "/opt/kafka/data/logs"
                    },
                    {
                        "name": "KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR",
                        "value": "3"
                    },
                    {
                        "name": "KAFKA_JMX_PORT",
                        "value": "5555"
                    }
                ],
                "image": "confluentinc/cp-kafka:4.1.2-2",
                "imagePullPolicy": "IfNotPresent",
                "livenessProbe": {
                    "exec": {
                        "command": [
                            "sh",
                            "-ec",
                            "/usr/bin/jps | /bin/grep -q SupportedKafka"
                        ]
                    },
                    "failureThreshold": 3,
                    "initialDelaySeconds": 30,
                    "periodSeconds": 10,
                    "successThreshold": 1,
                    "timeoutSeconds": 5
                },
                "name": "kafka-broker",
                "ports": [
                    {
                        "containerPort": 9092,
                        "name": "kafka",
                        "protocol": "TCP"
                    }
                ],
                "readinessProbe": {
                    "failureThreshold": 3,
                    "initialDelaySeconds": 30,
                    "periodSeconds": 10,
                    "successThreshold": 1,
                    "tcpSocket": {
                        "port": "kafka"
                    },
                    "timeoutSeconds": 5
                },
                "resources": {},
                "terminationMessagePath": "/dev/termination-log",
                "terminationMessagePolicy": "File",
                "volumeMounts": [
                    {
                        "mountPath": "/opt/kafka/data",
                        "name": "datadir"
                    },
                    {
                        "mountPath": "/var/run/secrets/kubernetes.io/serviceaccount",
                        "name": "default-token-x8gks",
                        "readOnly": true
                    }
                ]
            }
        ],
        "dnsPolicy": "ClusterFirst",
        "hostname": "pan-kafka-0",
        "nodeName": "minikube",
        "priority": 0,
        "restartPolicy": "Always",
        "schedulerName": "default-scheduler",
        "securityContext": {},
        "serviceAccount": "default",
        "serviceAccountName": "default",
        "subdomain": "pan-kafka-headless",
        "terminationGracePeriodSeconds": 60,
        "tolerations": [
            {
                "effect": "NoExecute",
                "key": "node.kubernetes.io/not-ready",
                "operator": "Exists",
                "tolerationSeconds": 300
            },
            {
                "effect": "NoExecute",
                "key": "node.kubernetes.io/unreachable",
                "operator": "Exists",
                "tolerationSeconds": 300
            }
        ],
        "volumes": [
            {
                "name": "datadir",
                "persistentVolumeClaim": {
                    "claimName": "datadir-pan-kafka-0"
                }
            },
            {
                "name": "default-token-x8gks",
                "secret": {
                    "defaultMode": 420,
                    "secretName": "default-token-x8gks"
                }
            }
        ]
    },
    "status": {
        "conditions": [
            {
                "lastProbeTime": null,
                "lastTransitionTime": "2018-09-11T22:45:40Z",
                "status": "True",
                "type": "Initialized"
            },
            {
                "lastProbeTime": null,
                "lastTransitionTime": "2018-09-11T22:49:32Z",
                "status": "True",
                "type": "Ready"
            },
            {
                "lastProbeTime": null,
                "lastTransitionTime": null,
                "status": "True",
                "type": "ContainersReady"
            },
            {
                "lastProbeTime": null,
                "lastTransitionTime": "2018-09-11T22:45:40Z",
                "status": "True",
                "type": "PodScheduled"
            }
        ],
        "containerStatuses": [
            {
                "containerID": "docker://ef3972240e0855aa219a285f3da19c704bfcdb8e4d4d7b91a0fe9f2597fca15d",
                "image": "confluentinc/cp-kafka:4.1.2-2",
                "imageID": "docker-pullable://confluentinc/cp-kafka@sha256:73dd49ced8a646c8f857d32bc87608114cbf4cffead32c7d4def950fce5b001a",
                "lastState": {
                    "terminated": {
                        "containerID": "docker://b8cd45d159d0496c1905dadd48c324b5e1373e3f5feb4409936e14615c4901d6",
                        "exitCode": 1,
                        "finishedAt": "2018-09-11T22:48:28Z",
                        "reason": "Error",
                        "startedAt": "2018-09-11T22:48:22Z"
                    }
                },
                "name": "kafka-broker",
                "ready": true,
                "restartCount": 3,
                "state": {
                    "running": {
                        "startedAt": "2018-09-11T22:49:00Z"
                    }
                }
            }
        ],
        "hostIP": "10.5.134.22",
        "phase": "Running",
        "podIP": "172.17.0.23",
        "qosClass": "BestEffort",
        "startTime": "2018-09-11T22:45:40Z"
    }
}

Or a similar command, with more human-readable output:

jzeng@cloud-dev-one:~$ kubectl -n kafka describe pods pan-kafka-0
Name:               pan-kafka-0
Namespace:          kafka
Priority:           0
PriorityClassName:  <none>
Node:               minikube/10.5.134.22
Start Time:         Tue, 11 Sep 2018 22:45:40 +0000
Labels:             app=kafka
                    controller-revision-hash=pan-kafka-7477d5d6db
                    pod=pan-kafka-0
                    release=pan-kafka
                    statefulset.kubernetes.io/pod-name=pan-kafka-0
Annotations:        <none>
Status:             Running
IP:                 172.17.0.23
Controlled By:      StatefulSet/pan-kafka
Containers:
  kafka-broker:
    Container ID:  docker://ef3972240e0855aa219a285f3da19c704bfcdb8e4d4d7b91a0fe9f2597fca15d
    Image:         confluentinc/cp-kafka:4.1.2-2
    Image ID:      docker-pullable://confluentinc/cp-kafka@sha256:73dd49ced8a646c8f857d32bc87608114cbf4cffead32c7d4def950fce5b001a
    Port:          9092/TCP
    Host Port:     0/TCP
    Command:
      sh
      -exc
      unset KAFKA_PORT && \
export KAFKA_BROKER_ID=${HOSTNAME##*-} && \
export KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${POD_IP}:9092 && \
exec /etc/confluent/docker/run

    State:          Running
      Started:      Tue, 11 Sep 2018 22:49:00 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Tue, 11 Sep 2018 22:48:22 +0000
      Finished:     Tue, 11 Sep 2018 22:48:28 +0000
    Ready:          True
    Restart Count:  3
    Liveness:       exec [sh -ec /usr/bin/jps | /bin/grep -q SupportedKafka] delay=30s timeout=5s period=10s #success=1 #failure=3
    Readiness:      tcp-socket :kafka delay=30s timeout=5s period=10s #success=1 #failure=3
    Environment:
      POD_IP:                                   (v1:status.podIP)
      KAFKA_HEAP_OPTS:                         -Xmx1G -Xms1G
      KAFKA_ZOOKEEPER_CONNECT:                 pan-kafka-zookeeper:2181
      KAFKA_LOG_DIRS:                          /opt/kafka/data/logs
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR:  3
      KAFKA_JMX_PORT:                          5555
    Mounts:
      /opt/kafka/data from datadir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-x8gks (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  datadir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  datadir-pan-kafka-0
    ReadOnly:   false
  default-token-x8gks:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-x8gks
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>



Get a shell to the running container:

jzeng@cloud-dev-one:~$ kubectl --namespace kafka exec -it pan-kafka-0 -- /bin/bash

Verify Kafka:

Once you have the testclient pod above running, you can list all kafka topics with:

  $ kubectl -n kafka exec testclient -- ./bin/kafka-topics.sh --zookeeper pan-kafka-zookeeper:2181 --list

  To create a new topic:

  $ kubectl -n kafka exec testclient -- ./bin/kafka-topics.sh --zookeeper pan-kafka-zookeeper:2181 --topic test1 --create --partitions 1 --replication-factor 1

  To listen for messages on a topic:

  $ kubectl -n kafka exec -ti testclient -- ./bin/kafka-console-consumer.sh --bootstrap-server pan-kafka:9092 --topic test1 --from-beginning

  To stop the listener session above press: Ctrl+C

  To start an interactive message producer session:

  $ kubectl -n kafka exec -ti testclient -- ./bin/kafka-console-producer.sh --broker-list pan-kafka-headless:9092 --topic test1

  To create a message in the above session, simply type the message and press "enter"
  To end the producer session try: Ctrl+C


Snap:

jzeng@cloud-dev-one:~$ snap list
Name     Version  Rev   Tracking  Publisher   Notes
core     16-2.35  5328  stable    canonical  core
kubectl  1.11.2   442   stable    canonical  classic
jzeng@cloud-dev-one:~$ snap info kubectl
name:      kubectl
summary:   kubectl controls the Kubernetes cluster manager.
publisher: Canonical
contact:   snaps@canonical.com
license:   unset



Kafka Helm Chart: an implementation of Kafka as a StatefulSet. 

jzeng@cloud-dev-one:/home$ helm list
NAME          REVISION   UPDATED                    STATUS     CHART            APP VERSION   NAMESPACE
hardy-heron   2          Tue Sep 11 09:30:31 2018   DEPLOYED   jenkins-0.18.0   2.121.3       jenkins
openfaas      2          Mon Sep 10 12:03:09 2018   DEPLOYED   openfaas-1.2.3                 openfaas
pan-kafka     2          Tue Sep 11 22:53:23 2018   DEPLOYED   kafka-0.9.5      4.1.2         kafka

jzeng@cloud-dev-one:/home$ helm repo list
NAME          URL                                                                
stable        https://kubernetes-charts.storage.googleapis.com                   
local         http://127.0.0.1:8879/charts                                       
incubator     http://storage.googleapis.com/kubernetes-charts-incubator          
confluentinc  https://raw.githubusercontent.com/confluentinc/cp-helm-charts/master

jzeng@cloud-dev-one:/home$ kubectl get namespaces
NAME          STATUS    AGE
default       Active    2d
jenkins       Active    2d
kafka         Active    18h
kong          Active    2d
kube-public   Active    2d
kube-system   Active    2d
openfaas      Active    2d
openfaas-fn   Active    2d

In Kubernetes Engine, a cluster consists of at least one cluster master and multiple worker machines called nodes. These master and node machines run the Kubernetes cluster orchestration system.
A cluster is the foundation of Kubernetes Engine: the Kubernetes objects that represent your containerized applications all run on top of a cluster.

jzeng@cloud-dev-one:/home$ kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
minikube   Ready     master    2d        v1.11.3

jzeng@cloud-dev-one:/home$ kubectl describe node minikube
Name:               minikube
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/hostname=minikube
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket=/var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl=0
                    volumes.kubernetes.io/controller-managed-attach-detach=true
CreationTimestamp:  Mon, 10 Sep 2018 04:01:26 +0000
Taints:             <none>
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  OutOfDisk        False   Wed, 12 Sep 2018 17:21:39 +0000   Mon, 10 Sep 2018 04:01:16 +0000   KubeletHasSufficientDisk     kubelet has sufficient disk space available
  MemoryPressure   False   Wed, 12 Sep 2018 17:21:39 +0000   Mon, 10 Sep 2018 04:01:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 12 Sep 2018 17:21:39 +0000   Mon, 10 Sep 2018 04:01:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 12 Sep 2018 17:21:39 +0000   Mon, 10 Sep 2018 04:01:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Wed, 12 Sep 2018 17:21:39 +0000   Mon, 10 Sep 2018 04:01:16 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  10.5.134.22
  Hostname:    minikube
Capacity:
 cpu:                8
 ephemeral-storage:  205372392Ki
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             65970404Ki
 pods:               110
Allocatable:
 cpu:                8
 ephemeral-storage:  189271196154
 hugepages-1Gi:      0
 hugepages-2Mi:      0
 memory:             65868004Ki
 pods:               110
System Info:
 Machine ID:                 432fd83d2db64486bd5f71e5b1b7fbb5
 System UUID:                D4DB4D56-E637-2811-718B-81B23F1CD135
 Boot ID:                    9d88f371-9ed2-4b05-91a1-98793ab215cc
 Kernel Version:             4.15.0-33-generic
 OS Image:                   Ubuntu 18.04.1 LTS
 Operating System:           linux
 Architecture:               amd64
 Container Runtime Version:  docker://18.3.1
 Kubelet Version:            v1.11.3
 Kube-Proxy Version:         v1.11.3
Non-terminated Pods:         (33 in total)
  Namespace                  Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits
  ---------                  ----                                     ------------  ----------  ---------------  -------------
  default                    busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)
  default                    flogo-7b9db9fbb9-mk2jq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)
  jenkins                    hardy-heron-jenkins-6b777f567d-ghpk5     50m (0%)      2 (25%)     256Mi (0%)       2Gi (3%)
  kafka                      pan-kafka-0                              0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kafka                      pan-kafka-1                              0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kafka                      pan-kafka-2                              0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kafka                      pan-kafka-zookeeper-0                    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kafka                      pan-kafka-zookeeper-1                    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kafka                      pan-kafka-zookeeper-2                    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kong                       kong-67fd577fb7-9pxxq                    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kong                       kong-67fd577fb7-qr7nc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kong                       kong-67fd577fb7-xnwl8                    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kong                       konga-58f646894-bv8kn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kong                       postgres-6b56c58d88-7nkxn                0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                coredns-78fcdf6894-t74d7                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)
  kube-system                coredns-78fcdf6894-xr2pd                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)
  kube-system                etcd-minikube                            0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-addon-manager-minikube              5m (0%)       0 (0%)      50Mi (0%)        0 (0%)
  kube-system                kube-apiserver-minikube                  250m (3%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-controller-manager-minikube         200m (2%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-proxy-9492m                         0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                kube-scheduler-minikube                  100m (1%)     0 (0%)      0 (0%)           0 (0%)
  kube-system                kubernetes-dashboard-6f66c7fc56-xc7qt    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)
  kube-system                tiller-deploy-64c9d747bd-vd4bj           0 (0%)        0 (0%)      0 (0%)           0 (0%)
  openfaas-fn                decode-instance-b6557896d-rvdjw          0 (0%)        0 (0%)      0 (0%)           0 (0%)
  openfaas-fn                hello-openfaas-py-7c89fc9564-dl647       0 (0%)        0 (0%)      0 (0%)           0 (0%)
  openfaas                   alertmanager-764bf95d45-8c687            0 (0%)        0 (0%)      0 (0%)           0 (0%)
  openfaas                   faas-idler-675b5d4576-b9rj2              0 (0%)        0 (0%)      0 (0%)           0 (0%)
  openfaas                   gateway-76bcdd8577-bnhmw                 0 (0%)        0 (0%)      0 (0%)           0 (0%)
  openfaas                   nats-74fc8944fb-bmzqh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)
  openfaas                   prometheus-6df4df55fc-fx964              0 (0%)        0 (0%)      0 (0%)           0 (0%)
  openfaas                   queue-worker-55d499fffb-gzbfk            0 (0%)        0 (0%)      0 (0%)           0 (0%)
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource  Requests    Limits
  --------  --------    ------
  cpu       805m (10%)  2 (25%)
  memory    446Mi (0%)  2388Mi (3%)
Events:     <none>


Search for Charts:

jzeng@cloud-dev-one:/home$ helm search
NAME                                              CHART VERSION   APP VERSION                      DESCRIPTION                                                
confluentinc/cp-helm-charts                       0.1.0           1.0                              A Helm chart for Confluent Open Source                     
incubator/artifactory                             5.2.0                                            Universal Repository Manager supporting all major packagi...
incubator/azuremonitor-containers                 0.2.0           2.0.0-3                          Helm chart for deploying Azure Monitor container monitori...
incubator/burrow                                  0.3.3           0.17.1                           Burrow is a permissionable smart contract machine          
incubator/cassandra                               0.5.3           3                                Apache Cassandra is a free and open-source distributed da...
incubator/chartmuseum                             1.1.1           0.5.1                            Helm Chart Repository with support for Amazon S3 and Goog...
incubator/check-mk                                0.2.1           1.4.0p26                         check_mk monitoring                                        
incubator/cockroachdb                             0.1.1                                            CockroachDB Helm chart for Kubernetes.                     
incubator/common                                  0.0.4           0.0.4                            Common chartbuilding components and helpers                 
incubator/consul                                  0.1.4                                            Highly available and distributed service discovery and ke...
incubator/couchdb                                 0.2.0           2.2.0                            A database featuring seamless multi-
...

Enable K8s Dashboard on Mac:

From Docker’s “Preferences”: enable Kubernetes.

Install Web UI:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml


(Most of the URLs found via Google search do not work, but the one above does.)

Check if it is running:

SJCMACJ15JHTD8:~ jzeng$ kubectl -n kube-system get pod
NAME                                         READY     STATUS    RESTARTS   AGE
etcd-docker-for-desktop                      1/1       Running   0          1d
kube-apiserver-docker-for-desktop            1/1       Running   0          1d
kube-controller-manager-docker-for-desktop   1/1       Running   0          1d
kube-dns-86f4d74b45-mj446                    3/3       Running   0          1d
kube-proxy-dd4lh                             1/1       Running   0          1d
kube-scheduler-docker-for-desktop            1/1       Running   0          1d
kubernetes-dashboard-669f9bbd46-xf7h7        1/1       Running   0          5m

Access Web UI:


You will likely get an error when first trying to access the dashboard:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "services \"https:kubernetes-dashboard:\" is forbidden: User \"system:anonymous\" cannot get services/proxy in the namespace \"kube-system\"",
  "reason": "Forbidden",
  "details": {
    "name": "https:kubernetes-dashboard:",
    "kind": "services"
  },
  "code": 403
}


Solutions are:

kubectl proxy
This access mode is not recommended as a way to publicly expose your dashboard; the proxy only allows HTTP connections.
To use this method you need to install kubectl on your computer and run the following command. The proxy will serve the dashboard on http://localhost:8001 by default.
kubectl proxy
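
With the proxy running, the dashboard deployed above is then reachable at the standard service-proxy path:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/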

API Server

This is the method I recommend for production systems as well as for dev and test. It is important to keep the same security mechanisms end to end and to get familiar with Kubernetes RBAC.
You need to export a single file (.p12) containing two items from your kubeconfig: the client-certificate-data and the client-key-data. My example runs the commands in /home/jzeng. The commands below use base64 -D (the macOS flag); on Linux, use base64 -d instead.
grep 'client-certificate-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -D >> kubecfg.crt

grep 'client-key-data' ~/.kube/config | head -n 1 | awk '{print $2}' | base64 -D >> kubecfg.key

openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"

certificate password:  welcome

Import the certificate (the .p12 file above) through Keychain Access:



Then do following:
1. Create a service account:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
EOF
2. Create a ClusterRoleBinding:
cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF
3. Get the bearer token. Run the following command and copy the token value; you will use it in the next step:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
token:
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWduemZsIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI0MzZkY2IwZC0yNWE3LTExZTktODNlZC0wMjUwMDAwMDAwMDEiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.USNILOmMb043wE_mk39l0Ozj73CAMGmaCb52rH0-iBjAIydVUSF0DiyWBiu7Na6dMNn_oGrOwKtYMrZl_taSYFjpFDzKxHQau62-FdZGzcM_YI1gCuFQKiL9IzGyo9LnjcWB059hIb2nuu0Id7EO-d5rzogJauxRMldw3_aiQ4x-qC5r8tI9Y-ioGxvsi6VjVfSbJnpeRxzqdXOnthjJXeMLIepNmJvz42WLIlb8JgDrVwsK_sfCCNTmNmQ65fLCpKgANXuPhSYycCamKuvFlNasy6Tk6QTrYTVeaLq6riwXdRs_Qavo_id0mx_MAl9LtFKTYCjNpx5aLpCCp-PvkQ
4. Come back to your browser and choose “Token” on the login page, then paste the token value you copied in the previous step.






SJCMACJ15JHTD8:~ jzeng$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://localhost:6443
  name: docker-for-desktop-cluster
contexts:
- context:
    cluster: docker-for-desktop-cluster
    user: docker-for-desktop
  name: docker-for-desktop
current-context: docker-for-desktop
kind: Config
preferences: {}
users:
- name: docker-for-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED


SJCMACJ15JHTD8:~ jzeng$ kubectl cluster-info
Kubernetes master is running at https://localhost:6443

KubeDNS is running at https://localhost:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy




Docker is based on 3 kernel technologies, each illustrated in the sketch after this list:

cgroups
namespaces
capabilities
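
Each shows up directly in docker run flags (a rough sketch; assumes cgroup v1 and an alpine image):

docker run --rm --memory=256m alpine cat /sys/fs/cgroup/memory/memory.limit_in_bytes   (cgroups: resource limits)
docker run --rm alpine ps                                                              (namespaces: only the container's own processes are visible)
docker run --rm --cap-drop=NET_RAW alpine ping -c 1 8.8.8.8                            (capabilities: ping fails without CAP_NET_RAW)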