Hands-on Intro to

Kubernetes & OpenShift

{ for JS Hackers }

bit.ly/nodejs-on-k8s

Recorded on Thursday 9am-10:50

presented by…

&

brought to you by

Introduction

Intro Survey / Who are you?

  1. do you have any experience using containers?
  2. have you completed all of the laptop setup tasks?
  3. do you have any experience using Kubernetes?
  4. do you consider yourself to be proficient with the oc or kubectl cli tools?
  5. can you name five basic Kubernetes primitives or resource types?
  6. do you have a plan for iterative web development using containers?

Workshop Agenda

# Kubernetes

* [is](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/): an ops tool; a collection of APIs for managing container-based workloads
* [is not](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/#what-kubernetes-is-not): a PaaS

# OpenShift

* includes, extends, & is a distribution of: Kubernetes
* adds: multi-tenant security, PaaS-style workflows, Service Catalog and Brokers, a container registry, distributed metrics and logs...

![octoverse_chart](https://d33wubrfki0l68.cloudfront.net/d5511b44cad712d8ba3e7139448c081366808fa0/27939/images/blog-logging/2018-04-24-open-source-charts-2017/most-discussed.png)

More Information

## Workshop Requirements

To run this workshop locally, you'll need:

1. [kubectl](#/kubectl)
2. [oc](#/oc)
3. [bash](#/bash)
4. [minishift](#/minishift)

↓

Install kubectl

For detailed installation notes, see the kubectl install doc

One line install for linux/amd64:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

One line install for macOS:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

To verify kubectl availability, try running:

kubectl help

Install oc

For detailed installation notes, see the oc installation doc

One line install for linux/amd64:

curl -Lo oc.tar.gz https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz && tar xvzf oc.tar.gz */oc && sudo mv $_ /usr/local/bin/ && rm -d openshift-origin-client-tools-* && rm oc.tar.gz

One line install for macOS:

curl -Lo oc.zip https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-mac.zip && unzip oc.zip oc && sudo mv oc /usr/local/bin/ && rm oc.zip

To verify oc availability, try running:

oc help

Bash for Windows

Windows users should install the Windows Subsystem for Linux and related command-line tools:

Enable Control Panel > Programs > Windows Features > Windows Subsystem for Linux

Install minishift

For detailed installation notes, see the minishift release notes

One line install for linux/amd64:

curl -Lo minishift.tgz https://github.com/minishift/minishift/releases/download/v1.25.0/minishift-1.25.0-linux-amd64.tgz && tar xvzf minishift.tgz */minishift && sudo mv $_ /usr/local/bin/ && rm -d minishift-* && rm minishift.tgz

One line install for macOS:

curl -Lo minishift.tgz https://github.com/minishift/minishift/releases/download/v1.25.0/minishift-1.25.0-darwin-amd64.tgz && tar xvzf minishift.tgz */minishift && sudo mv $_ /usr/local/bin/ && rm -d minishift-* && rm minishift.tgz

Optionally, customize your cluster's memory or cpu allocation:

minishift config set memory 4096
minishift config set cpus 2
minishift config set openshift-version latest

To verify minishift availability, try running:

minishift version

Virtualization Plugins

See the minishift installation guide for virt driver plugin requirements


If your minishift environment does not boot correctly:

  1. Minishift requires an OS virtualization solution. Most OSes already include one!
  2. Install the appropriate driver plugin for your system
  3. Use the --vm-driver flag to select specific plugins by name
minishift start --vm-driver=virtualbox

Minishift Basics

minishift provides an easy way to run OpenShift locally:

minishift start

When you are done, halt the VM to free up system resources:

minishift stop

Need a fresh start? Delete your VM instance with:

minishift delete

Ready?


Verify that your cli tools are configured to connect to your Kubernetes environment:

kubectl version

The output should include your kubectl version info and the release version of the Kubernetes API server (when available)

Let's Go!

Kubernetes Basics

↓

## etcd

Kubernetes uses etcd to keep track of the cluster's state

![etcd logo](https://raw.githubusercontent.com/coreos/etcd/master/logos/etcd-glyph-color.png)

* distributed key-value store
* implements the [RAFT](https://raft.github.io/raft.pdf) consensus protocol
* CAP theorem: [CAP twelve years later](https://www.infoq.com/articles/cap-twelve-years-later-how-the-rules-have-changed)
## Etcd cluster sizes

Fault tolerance sizing chart:

![etcd cluster sizing chart](http://cloudgeekz.com/wp-content/uploads/2016/10/etcd-fault-tolerance-table.png)
Kubernetes provides…

# An API

API object primitives include the following attributes:

```
kind
apiVersion
metadata
spec
status
```

*mostly true

Extended Kubernetes API Reference: http://k8s.io/docs/reference/generated/kubernetes-api/v1.12/
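To make those attributes concrete, here is a sketch of a minimal Pod manifest built as a plain JavaScript object; the name, labels, and image below are invented placeholders, not values from a real cluster:

```javascript
// A minimal Pod manifest as a plain JS object.
// "hello-k8s" and the image name are placeholders for illustration.
const pod = {
  kind: 'Pod',
  apiVersion: 'v1',
  metadata: { name: 'hello-k8s', labels: { run: 'hello-k8s' } },
  spec: {
    containers: [
      { name: 'hello-k8s', image: 'example/hello:latest', ports: [{ containerPort: 8080 }] },
    ],
  },
  // `status` is intentionally omitted: the platform populates it after scheduling.
};

console.log(JSON.stringify(pod, null, 2));
```

Serialized as JSON (or YAML), this is exactly the shape of document you POST to the API or pass to `kubectl create -f`.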
### Basic K8s Terminology

1. [node](#/node)
2. [pod](#/po)
3. [service](#/svc)
4. [deployment](#/deployment)
5. [replicaSet](#/rs)

Introduction borrowed from: [bit.ly/k8s-kubectl](http://bit.ly/k8s-kubectl)
### Nodes

A node is a host machine (physical or virtual) where containerized processes run. Node activity is managed via one or more Master instances.

Try using kubectl to list resources by type:

kubectl get nodes

Log in as an admin user (password "openshift")

minishift addon apply admin-user
oc login -u admin

Try to list nodes using admin credentials:

kubectl get nodes

Now try using curl to make the same request:

curl -k -H"Authorization: Bearer $(oc whoami -t)" https://$(minishift ip):8443/api/v1/nodes

We won't need admin privileges for the remaining content, so let's swap back to the "developer" user:

oc login -u developer
### Observations:

* Designed to exist on multiple machines (distributed system)
* built for high availability
* platform scale out
* The Kubernetes API checks auth credentials and restricts access to etcd, our platform's distributed consensus store
* Your JS runs on nodes!
### Pods

A group of one or more co-located containers. Pods represent your minimum increment of scale.

> "Pods Scale together, and they Fail together" @theSteve0

Try using kubectl to list resources by type:

kubectl get pods

Create a new resource from a json object specification:

curl -LO https://raw.githubusercontent.com/jankleinert/hello-workshop/master/pod.json
curl -k -H"Authorization: Bearer $(oc whoami -t)" -H'Content-Type: application/json' https://$(minishift ip):8443/api/v1/namespaces/myproject/pods -X POST --data-binary @pod.json

Attempt the same using kubectl:

kubectl create -f https://raw.githubusercontent.com/jankleinert/hello-workshop/master/pod.json

List pods by type using curl:

curl -k -H"Authorization: Bearer $(oc whoami -t)" https://$(minishift ip):8443/api/v1/namespaces/myproject/pods

Fetch an individual resource by type/id; output as json:

kubectl get pod hello-k8s -o json

Attempt the same using curl:

curl -k -H"Authorization: Bearer $(oc whoami -t)" https://$(minishift ip):8443/api/v1/namespaces/myproject/pods/hello-k8s

Notice any changes between the initial json podspec and the API response?

Request the same info, but output the results as structured yaml:

kubectl get pod hello-k8s -o yaml

Print human-readable API output:

kubectl describe pod/hello-k8s
### Observations:

* API resources provide declarative specifications with asynchronous fulfilment of requests
* you set the `spec`, the platform will populate the `status`
* automated health checking for PID1 in each container
* Pods are scheduled to be run on nodes
* The API supports both json and yaml
### Services

Services (svc) establish a single endpoint for a collection of replicated pods, distributing traffic based on label selectors.

In our K8s modeling language they represent a load balancer. Their implementation may vary per cloud provider.
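The selection logic itself is simple equality matching on labels. A sketch in JavaScript, with invented pod data, of how a Service's selector picks its endpoints:

```javascript
// Sketch of the label-selector logic a Service uses to pick endpoint pods.
// The pod list and labels below are invented for illustration.
const matches = (selector, labels) =>
  Object.entries(selector).every(([key, value]) => labels[key] === value);

const pods = [
  { name: 'hello-k8s-1', labels: { run: 'hello-k8s' } },
  { name: 'hello-k8s-2', labels: { run: 'hello-k8s' } },
  { name: 'other-app-1', labels: { run: 'other-app' } },
];

const serviceSelector = { run: 'hello-k8s' };
const endpoints = pods.filter((pod) => matches(serviceSelector, pod.labels));

console.log(endpoints.map((pod) => pod.name)); // only the hello-k8s pods
```

Because selection is re-evaluated as pods come and go, a Service keeps routing traffic to whatever healthy pods currently carry the matching labels.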

Contacting your App

Expose the pod by creating a new service (or "loadbalancer"):

kubectl expose pod/hello-k8s --port 8080 --type=NodePort

Take a look at the resulting {.spec.selector} attribute:

kubectl get svc/hello-k8s -o json

Try using a JSONpath selector to find the assigned port number:

kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort}
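That JSONPath expression is equivalent to plain property access on the Service object. A sketch, using a trimmed, invented Service response:

```javascript
// What `-o jsonpath={.spec.ports[0].nodePort}` extracts, in plain JavaScript.
// The object below is a trimmed, invented example of a Service API response.
const svc = {
  kind: 'Service',
  apiVersion: 'v1',
  metadata: { name: 'hello-k8s' },
  spec: {
    type: 'NodePort',
    selector: { run: 'hello-k8s' },
    ports: [{ port: 8080, targetPort: 8080, nodePort: 30123 }],
  },
};

const nodePort = svc.spec.ports[0].nodePort;
console.log(nodePort); // 30123
```

On a real cluster the nodePort value is assigned by the platform (by default from the 30000-32767 range), which is why you query for it rather than hard-coding it.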

Contact your newly-exposed pod via the exposed nodePort:

echo http://$(minishift ip):$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})
curl http://$(minishift ip):$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})

List, then schedule the deletion of, all pods matching the label run=hello-k8s:

kubectl get pods -l run=hello-k8s
kubectl delete pods -l run=hello-k8s

Contact the related service. What happens?:

curl $(minishift ip):$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})

Delete the service:

kubectl delete service hello-k8s
### Observations:

* *"service"* basically means *"loadbalancer"*
* Label selectors can be used to organize workloads and manage groups of related resources
* The Service resource uses label selectors to discover where traffic should be directed
* Pods and Services exist independently, have disjoint lifecycles
### Deployments

A `deployment` helps you specify container runtime requirements (in terms of pods)
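Structurally, a Deployment wraps a pod spec: the pod template lives under `spec.template`, alongside a replica count and a selector. A sketch with placeholder values:

```javascript
// Sketch: a Deployment spec nests a full pod template under spec.template.
// Field values are placeholders for illustration.
const deployment = {
  kind: 'Deployment',
  apiVersion: 'apps/v1',
  metadata: { name: 'hello-k8s' },
  spec: {
    replicas: 3,
    selector: { matchLabels: { run: 'hello-k8s' } },
    template: {
      // this is an embedded pod spec
      metadata: { labels: { run: 'hello-k8s' } },
      spec: {
        containers: [{ name: 'hello-k8s', image: 'jkleinert/nodejsint-workshop' }],
      },
    },
  },
};

console.log(deployment.spec.template.spec.containers[0].image);
```

Note that the template's labels must satisfy `spec.selector.matchLabels`, so the Deployment can find the pods it creates.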

Create a specification for your deployment:

kubectl run hello-k8s --image=jkleinert/nodejsint-workshop \
--dry-run -o json > deployment.json

View the generated deployment spec file:

cat deployment.json

Create a new deployment from your local spec file:

kubectl create -f deployment.json

Create a Service spec to direct traffic:

kubectl expose deploy/hello-k8s --type=NodePort --port=8080 --dry-run -o json > service.json

View the resulting spec file:

cat service.json

Create a new service from your local spec file:

kubectl create -f service.json

List multiple resources by type:

kubectl get po,svc,deploy

Connect to your new deployment via the associated service port:

curl $(minishift ip):$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})

Replication

Scale up the hello-k8s deployment to 3 replicas:

kubectl scale deploy/hello-k8s --replicas=3

List pods:

kubectl get po

Edit deploy/hello-k8s, setting spec.replicas to 5:

kubectl edit deploy/hello-k8s -o json

Save and quit. What happens?

kubectl get pods

AutoRecovery

Watch for changes to pod resources:

kubectl get pods --watch &

In another terminal, delete several pods by id:

kubectl delete pod $(kubectl get pods | grep ^hello-k8s | cut -f1 -s -d' ' | head -n 3 | tr '\n' ' ')

What happened? How many pods remain?

kubectl get pods

Close your backgrounded --watch processes by running fg, then sending a break signal (CTRL-c)

### Observations:

* Use the `--dry-run` flag to generate new resource specifications
* A deployment spec contains a pod spec in its "template" element
* The API provides `edit` and `watch` operations (in addition to `get`, `set`, and `list`)
### ReplicaSets

A `replicaset` provides replication and lifecycle management for a specific image release

View the current state of your deployment:

curl $(minishift ip):$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})

Watch deployments:

kubectl get deploy -w &

Rollouts

Update your deployment's image spec to rollout a new release:

kubectl set image deploy/hello-k8s hello-k8s=jkleinert/nodejsint-workshop:v1

View the current state of your deployment

curl $(minishift ip):$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})

Ask the API to list replicaSets

kubectl get rs

Rollbacks

View the list of previous rollouts:

kubectl rollout history deploy/hello-k8s

Rollback to the previous state:

kubectl rollout undo deployment hello-k8s

Reload your browser to view the state of your deployment

Cleanup

Cleanup all resources:

kubectl delete service,deployment hello-k8s

Close your remaining --watch listeners by running fg before sending a break signal (CTRL-c)


Verify that your namespace is clean:

kubectl get all
### Observations:

* ReplicaSets provide lifecycle management for pod resources
* Deployments create ReplicaSets to manage pod replication per rollout (per change in podspec: image:tag, environment vars)
* `Deployments` > `ReplicaSets` > `Pods`

From KubeCon NA 2017

"Developing locally with Minikube": youtu.be/_W6O_pfA00s

Hands-On with OpenShift

Build

Build and deploy container images

Introducing…

The OpenShift Web Console


Access the dashboard by running:

minishift dashboard

Web Workflow: Create

For this example, we will deploy a fork of the ryanj/http-base repo by clicking on "Add to Project" in the web console

Example repo source: http://github.com/ryanj/http-base

  1. Fork the ryanj/http-base repo on GitHub. This will allow you to configure your own GitHub webhooks in the upcoming Deploy section
  2. Return to the web console and click on "Add to Project"
  3. Next, select a nodejs base image, and name your webservice "http-base". Then enter the github url for your fork
  4. Review the options, then press the "Create" button when you're ready to proceed

Container Status

The web console uses a socket stream to report status changes as they occur throughout the cluster

After the build task has completed, find the NAME of the pod where your image has been deployed:

oc get pods

As with the core APIs, the CLI output is consistently formatted, following established patterns:

kubectl get pods

Source

to

Image

Combines source repos and operationally-maintained builder images to produce application images

Available as a standalone project, for use with Jenkins or other external builder processes: github.com/openshift/source-to-image

Automate

git push to deploy

Send in the Clones

Clone a local copy of your repo fork by adding your own github username to the following command:

git clone http://github.com/YOUR_GH_USERNAME/http-base
cd http-base

WebHook Build Automation

Set up a commit WebHook to automate image production

Explore the Build resources using the web console. Look for the GitHub Webhook settings. Copy the webhook url, and paste it into your repo's Webhook settings on GitHub

If you're running OpenShift locally in a VM, try using ultrahook to proxy webhook events to your laptop

ReBuild on Push

After configuring the webhook for your repo, add a small commit locally, then git push to deploy

git push

Or, use GitHub's web-based editor to make a minor change

If you don't have a working webhook to automate the build process, it can also be started manually:

oc start-build http-base

Iterate

Iterate using a fully containerized toolchain

Live Development

Make a minor edit to your local repo's index.html file,

then test your changes before you commit by synching content into your hosted container:

export PODNAME=$(oc get pods -l app=http-base | tail -n 1 | cut -f1 -d' ')
oc rsync -w --exclude='.git,node_modules' . $PODNAME:
## Terminal Access

* Available in the Web Console
* And on the CLI, with:

oc exec -it $PODNAME -- bash
curl http-base
## Configuration

[Environment Variables](https://docs.openshift.org/latest/dev_guide/environment_variables.html) are one way to add configuration settings to your images:

oc env dc/http-base KEY=VALUE

ConfigMaps and Secrets are also useful configuration abstractions
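On the application side, environment variables set this way arrive in `process.env`. A sketch of reading them with defaults (the variable names here are examples, not ones the workshop image defines):

```javascript
// Sketch: reading configuration injected via environment variables
// (e.g. set with `oc env dc/http-base KEY=VALUE`).
// PORT and GREETING are example names, not defined by the workshop image.
const config = {
  port: parseInt(process.env.PORT || '8080', 10),
  greeting: process.env.GREETING || 'Hello',
};

console.log(`serving "${config.greeting}" on port ${config.port}`);
```

Keeping configuration in the environment (rather than baked into the image) lets the same image run unchanged across projects and clusters.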
## Logs

Centralized logging and metrics

Deployment Strategies

Get more control of your container rollout and update processes by selecting appropriate deployment strategies for your fleet of managed containers

Collaborate

Share and replicate your success

Service Catalog & Brokers

Expose and provision services

www.openservicebrokerapi.org

### Everyone's Service Catalog

> "The Open Service Broker API project allows developers, ISVs, and SaaS vendors a single, simple, and elegant way to deliver services to applications running within cloud native platforms"

Works with: [Kubernetes](https://github.com/kubernetes-incubator/service-catalog), [OpenShift](https://docs.openshift.com/container-platform/3.6/architecture/service_catalog/template_service_broker.html), [Cloud Foundry](https://github.com/spring-cloud/spring-cloud-open-service-broker)
### Available Service Brokers

* [Template Broker](#)
* [Helm Chart Broker](#)
* [AWS Broker](https://aws.amazon.com/blogs/opensource/aws-service-operator-kubernetes-available/)
* [DIY Broker example](#)

Templates as Installers

Install a template into the current project, making it easier to reuse:

oc create -f template.json

Create an application from an installed template, from a file, or from a url:

oc new-app -f template.json

Multi-Service App Example

Nodejs and MongoDB multi-service application example:

oc create -f https://raw.githubusercontent.com/openshift-roadshow/nationalparks-js/master/nationalparks-js.json

github.com/ryanj/nationalparks-js

Review and install the above template content using oc create, then try launching it via the web-based Service Catalog.

When you're done, list all available API resources to review the contents of your project namespace:

oc get all

Advanced Extensibility

### Standardize your environments with custom base images

https://docs.okd.io/latest/using_images/s2i_images/nodejs.html

https://github.com/bucharest-gold/centos7-s2i-nodejs

"Operators"

Operators = Custom Resources + Custom Controllers

coreos.com/operators

github.com/operator-framework/awesome-operators

Wrap Up

Exit Survey

  1. Do you have experience using containers?
  2. Do you have experience using Kubernetes?
  3. Do you consider yourself to be basically proficient with the oc or kubectl command-line tools?
  4. Can you name five basic Kubernetes primitives or resource types?
  5. Ready to start standardizing your web development processes with containers?
## Resources

Free O'Reilly Ebook

Deploying to OpenShift

## Top Tools for K8s JS Hackers

* https://www.npmjs.com/package/openshift-rest-client
* https://www.npmjs.com/package/nodeshift
* https://github.com/bucharest-gold/centos7-s2i-nodejs
## K8s Terminology

1. [node](https://kubernetes.io/docs/concepts/architecture/nodes/)
2. [pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/)
3. [deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)
4. [service](https://kubernetes.io/docs/concepts/services-networking/service/)
5. [replicaSet (rs)](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/)
## OpenShift Terminology

1. [buildConfig (bc)](https://docs.openshift.org/latest/rest_api/apis-build.openshift.io/v1.BuildConfig.html)
2. [imageStream (is)](https://docs.openshift.org/latest/rest_api/apis-image.openshift.io/v1.ImageStream.html)
3. [deploymentConfig (dc)](https://docs.openshift.org/latest/rest_api/apis-apps.openshift.io/v1.DeploymentConfig.html)
4. [route](https://docs.openshift.org/latest/rest_api/apis-route.openshift.io/v1.Route.html)
5. [template](https://docs.openshift.org/latest/rest_api/oapi/v1.Template.html)
### More Ways to Try OpenShift

* [OpenShift Learning Portal](http://learn.openshift.com)
* [OpenShift Origin](https://github.com/openshift/origin) (and [minishift](https://github.com/minishift/minishift))
* [OpenShift Online (Starter and Pro plans available)](https://www.openshift.com/products/online/)
* [OpenShift Dedicated (operated on AWS, GCE, and Azure)](https://www.openshift.com/products/dedicated/)
* [OpenShift Container Platform (supported on RHEL, CoreOS)](https://www.openshift.com/products/container-platform/)

For a local laptop install see [minikube](http://bit.ly/k8s-minikube) and/or [minishift](http://bit.ly/k8s-minishift)

Q&A

Thank You!

@RyanJ / [email protected]

bit.ly/nodejs-on-k8s

Runs on Kubernetes Presented by: @ryanj