Hands-on Intro to

Kubernetes & OpenShift

{ For Beginners }

bit.ly/k8s-intro

presented by…

Principal OpenShift Technical Marketing Manager

Kubernetes, Containers, Microservices, and Cloud Native architecture


brought to you by

Introduction

Intro Survey / Who are you?

  1. Do you have any experience using containers?
  2. Do you have any experience using Kubernetes?
  3. Do you consider yourself to be proficient with the oc or kubectl CLI tools?
  4. Can you name five basic Kubernetes primitives or resource types?
  5. Do you have a plan for iterative web development using containers?
  6. Have you completed all of the laptop setup tasks?

Workshop Agenda

# Kubernetes

* [is](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/): an ops tool; a collection of APIs for managing container-based workloads
* [is not](https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/#what-kubernetes-is-not): a PaaS

# OpenShift

* includes, extends, & is a distribution of: Kubernetes
* adds: multi-tenant security, PaaS-style workflows, Service Catalog and Brokers, a container registry, distributed metrics and logs, ...

![octoverse_chart](https://d33wubrfki0l68.cloudfront.net/d5511b44cad712d8ba3e7139448c081366808fa0/27939/images/blog-logging/2018-04-24-open-source-charts-2017/most-discussed.png)

More Information

## Workshop Requirements

To run this workshop locally, you'll need:

1. [bash](#/bash)
2. [kubectl](#/kubectl)
3. [oc](#/oc)
4. [browser](#/browser)

↓

Bash for Windows

Windows users should install one of the following:

  • Windows Subsystem for Linux and related command-line tools:
    enable it via Control Panel > Programs > Windows Features > Windows Subsystem for Linux

  • git-bash, which can be installed using the Git for Windows installer

Install kubectl

For detailed installation notes, see the kubectl install doc

One line install for linux/amd64:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

One line install for macOS:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

To verify kubectl availability, try running:

kubectl help

Install oc

For detailed installation notes, see the oc installation doc

One line install for linux/amd64:

curl -Lo oc.tar.gz https://mirror.openshift.com/pub/openshift-v3/clients/3.11.115/linux/oc.tar.gz && tar xvzf oc.tar.gz && sudo mv oc /usr/local/bin/ && rm oc.tar.gz

One line install for macOS:

curl -Lo oc.tar.gz https://mirror.openshift.com/pub/openshift-v3/clients/3.11.115/macosx/oc.tar.gz && tar xvzf oc.tar.gz && sudo mv oc /usr/local/bin/ && rm oc.tar.gz

To verify oc availability, try running:

oc help

Browser

Most modern, up-to-date browsers will work. We recommend the following:

  • Firefox
  • Chrome
  • Opera

But I don't want to do all that...

You may use our browser-based console if you don't want to install anything


Ready?


Verify that your cli tools are configured to connect to your Kubernetes environment:

kubectl version

The output should include your kubectl version info and the release version of the Kubernetes API server (when available)

Let's Go!

Use the oc command to login to the cluster:

oc login https://master.sjc-c866.openshiftworkshop.com

If you're on the browser-based console, you may already be logged in

Kubernetes Basics

↓

## etcd

Kubernetes uses etcd to keep track of the cluster's state

![etcd logo](https://raw.githubusercontent.com/coreos/etcd/master/logos/etcd-glyph-color.png)

* distributed key-value store
* implements the [Raft](https://raft.github.io/raft.pdf) consensus protocol
* CAP theorem: [CAP twelve years later](https://www.infoq.com/articles/cap-twelve-years-later-how-the-rules-have-changed)
## etcd cluster sizes

Fault tolerance sizing chart:

![etcd cluster sizing chart](http://cloudgeekz.com/wp-content/uploads/2016/10/etcd-fault-tolerance-table.png)
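If the chart image doesn't load, the numbers behind it follow a simple rule: a cluster of N members needs a majority (quorum) to make progress, so it tolerates the loss of N minus quorum members:

| Cluster size | Majority (quorum) | Failure tolerance |
|--------------|-------------------|-------------------|
| 1            | 1                 | 0                 |
| 3            | 2                 | 1                 |
| 5            | 3                 | 2                 |
| 7            | 4                 | 3                 |

Even-sized clusters add no extra fault tolerance over the next-smaller odd size.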
Kubernetes provides…

# An API

API object primitives include the following attributes:

```
kind
apiVersion
metadata
spec
status
```

*mostly true

Extended Kubernetes API Reference: http://k8s.io/docs/reference/generated/kubernetes-api/v1.12/
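To make those attributes concrete, here's a minimal sketch of a Pod manifest (written to a file rather than created, so it won't interfere with the workshop steps; the name and image are illustrative, not part of the workshop materials):

```
# A minimal object spec showing kind, apiVersion, metadata, and spec;
# "status" is populated by the platform after creation, so you never write it yourself.
cat > minimal-pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: hello-minimal        # illustrative name
spec:
  containers:
  - name: web
    image: nginx             # illustrative image
    ports:
    - containerPort: 80
EOF
```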
### Basic K8s Terminology

1. [node](#/node)
2. [pod](#/po)
3. [service](#/svc)
4. [deployment](#/deployment)
5. [replicaSet](#/rs)

Introduction borrowed from: [bit.ly/k8s-kubectl](http://bit.ly/k8s-kubectl)
### Nodes

A node is a host machine (physical or virtual) where containerized processes run. Node activity is managed via one or more Master instances.

Try using kubectl to list resources by type:

kubectl get nodes

Did it work?

Only an admin user can list nodes (just watch for now)

oc login -u admin

With admin credentials:

kubectl get nodes

We won't need admin privileges for this workshop, but you can see who you're logged in as:

oc whoami
### Observations:

* Designed to exist on multiple machines (distributed system)
* built for high availability
* platform scale-out
* The Kubernetes API checks auth credentials and restricts access to etcd, our platform's distributed consensus store
* You may or may not be able to do things depending on your access
### Pods

A group of one or more co-located containers. Pods represent your minimum increment of scale.

> "Pods Scale together, and they Fail together" @theSteve0

Try using kubectl to list resources by type:

kubectl get pods

Create a new resource from a json object specification:

curl -O https://raw.githubusercontent.com/jankleinert/hello-workshop/master/pod.json
curl -k -H"Authorization: Bearer $(oc whoami -t)" -H'Content-Type: application/json' https://master.sjc-c866.openshiftworkshop.com/api/v1/namespaces/myproject/pods -X POST --data-binary @pod.json

Attempt the same using kubectl:

kubectl create -f https://raw.githubusercontent.com/jankleinert/hello-workshop/master/pod.json

List pods by type using curl:

curl -k -H"Authorization: Bearer $(oc whoami -t)" https://master.sjc-c866.openshiftworkshop.com/api/v1/namespaces/myproject/pods

Fetch an individual resource by type/id; output as json:

kubectl get pod hello-k8s -o json

Attempt the same using curl:

curl -k -H"Authorization: Bearer $(oc whoami -t)" https://master.sjc-c866.openshiftworkshop.com/api/v1/namespaces/myproject/pods/hello-k8s

Notice any changes between the initial json podspec and the API response?

Request the same info, but output the results as structured yaml:

kubectl get pod hello-k8s -o yaml

Print human-readable API output:

kubectl describe pod/hello-k8s
### Observations:

* API resources provide declarative specifications with asynchronous fulfillment of requests
* you set the `spec`, the platform will populate the `status`
* automated health checking for PID 1 in each container
* Pods are scheduled to run on nodes
* The API ambidextrously supports both json and yaml
### Services

Services (svc) establish a single endpoint for a collection of replicated pods, distributing traffic based on label selectors.

In our K8s modeling language they represent a load balancer. Their implementation may vary per cloud provider.
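For reference, here's a sketch of roughly what the `kubectl expose` command in the next step generates. The `run=hello-k8s` selector assumes the label carried by the pod created from pod.json (confirm with `kubectl get pod hello-k8s --show-labels`); the manifest is only written to a file so it won't conflict with the next step:

```
cat > hello-k8s-svc.yaml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: hello-k8s
spec:
  type: NodePort
  selector:
    run: hello-k8s        # traffic is sent to pods carrying this label
  ports:
  - port: 8080            # the Service port
    targetPort: 8080      # the container port on the selected pods
EOF
```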

Contacting your App

Expose the pod by creating a new service (or "loadbalancer"):

kubectl expose pod/hello-k8s --port 8080 --type=NodePort

Take a look at the resulting {.spec.selector} attribute:

kubectl get svc/hello-k8s -o json

Try using a JSONpath selector to find the assigned port number:

kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort}

Contact your newly-exposed pod via the exposed nodePort:

echo http://master.sjc-c866.openshiftworkshop.com:$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})
curl http://master.sjc-c866.openshiftworkshop.com:$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})

Schedule the deletion of all pods that are labeled with:

kubectl get pods -l run=hello-k8s
kubectl delete pods -l run=hello-k8s

Contact the related service. What happens?:

curl master.sjc-c866.openshiftworkshop.com:$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})

Delete the service:

kubectl delete service hello-k8s
### Observations:

* *"service"* basically means *"loadbalancer"*
* Label selectors can be used to organize workloads and manage groups of related resources
* The Service resource uses label selectors to discover where traffic should be directed
* Pods and Services exist independently and have disjoint lifecycles
### Deployments

A `deployment` helps you specify container runtime requirements (in terms of pods)

Create a specification for your deployment:

kubectl create deployment hello-k8s --image=jkleinert/nodejsint-workshop \
    --dry-run -o json > deployment.json

View the generated deployment spec file:

cat deployment.json

Create a new deployment from your local spec file:

kubectl create -f deployment.json

Create a Service spec to direct traffic:

kubectl expose deploy/hello-k8s --type=NodePort --port=8080 --dry-run -o json > service.json

View the resulting spec file:

cat service.json

Create a new service from your local spec file:

kubectl create -f service.json

List multiple resources by type:

kubectl get po,svc,deploy

Connect to your new deployment via the associated service port:

curl master.sjc-c866.openshiftworkshop.com:$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})

Replication

Scale up the hello-k8s deployment to 3 replicas:

kubectl scale deploy/hello-k8s --replicas=3

List pods:

kubectl get po

Edit deploy/hello-k8s, setting spec.replicas to 5:

kubectl edit deploy/hello-k8s -o json

Save and quit. What happens?

kubectl get pods

AutoRecovery

Watch for changes to pod resources:

watch kubectl get pods

In another terminal, delete several pods by id:

kubectl delete pod $(kubectl get pods | grep ^hello-k8s | cut -f1 -s -d' ' | head -n 3 | tr '\n' ' ')

What happened? How many pods remain?

### Observations:

* Use the `--dry-run` flag to generate new resource specifications
* A deployment spec contains a pod spec in its "template" element
* The API provides `edit` and `watch` operations (in addition to `get`, `set`, and `list`)
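You can see that nesting on the live object using jsonpath, as earlier. The `app=hello-k8s` label is what `kubectl create deployment` applies by default; treat it as an assumption and check with `--show-labels` if unsure:

```
# The pod template (and its container image) lives under .spec.template:
kubectl get deploy/hello-k8s -o jsonpath='{.spec.template.spec.containers[0].image}'
# Compare with the image the running pods actually report:
kubectl get pods -l app=hello-k8s -o jsonpath='{.items[*].spec.containers[0].image}'
```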
### ReplicaSets

A `replicaset` provides replication and lifecycle management for a specific image release

View the current state of your deployment:

curl master.sjc-c866.openshiftworkshop.com:$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})

Watch deployments:

watch kubectl get deploy

Rollouts

Update your deployment's image spec to rollout a new release:

kubectl set image deploy/hello-k8s nodejsint-workshop=jkleinert/nodejsint-workshop:v1

View the current state of your deployment

curl master.sjc-c866.openshiftworkshop.com:$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})

Ask the API to list replicaSets

kubectl get rs
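You should see one replicaSet per image release. To see which deployment owns each of them (a jsonpath sketch, using the same syntax as earlier):

```
# Print each replicaSet alongside the deployment recorded in its ownerReferences:
kubectl get rs -o jsonpath='{range .items[*]}{.metadata.name}{" <- owned by "}{.metadata.ownerReferences[0].name}{"\n"}{end}'
```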

Rollbacks

View the list of previous rollouts:

kubectl rollout history deploy/hello-k8s

Rollback to the previous state:

kubectl rollout undo deployment hello-k8s

Reload your browser to view the state of your deployment

Cleanup

Cleanup all resources:

kubectl delete service,deployment hello-k8s

Close any remaining `watch` listeners: bring backgrounded jobs to the foreground with fg, then send a break signal (CTRL-C)


Verify that your namespace is clean:

kubectl get all
### Observations:

* ReplicaSets provide lifecycle management for pod resources
* Deployments create ReplicaSets to manage pod replication per rollout (per change in podspec: image:tag, environment vars)
* `Deployments` > `ReplicaSets` > `Pods`

Hands-On with OpenShift

Build

Build and deploy container images

Build Prerequisites

If you don't want to use your github account:

You can use http://gogs-roadshow.apps.sjc-c866.openshiftworkshop.com/

Introducing…

The OpenShift Web Console


Access the dashboard by visiting: https://master.sjc-c866.openshiftworkshop.com

Web Workflow: Create

For this example, we will deploy a fork of the ryanj/http-base repo by clicking on "Add to Project" in the web console

Example repo source: http://github.com/ryanj/http-base

  1. Fork the ryanj/http-base repo on GitHub. This will allow you to configure your own GitHub webhooks in the upcoming Deploy section
  2. Return to the web console and click on "Add to Project"
  3. Next, select a Node.js base image and name your web service "http-base". Then enter the GitHub URL for your fork
  4. Review the options, then press the "Create" button when you're ready to proceed

Container Status

The web console uses a socket stream to report status changes as they occur throughout the cluster

After the build task has completed, find the NAME of the pod where your image has been deployed:

oc get pods

As with the core APIs, the CLI output is consistently formatted, following established patterns:

kubectl get pods

Source to Image

Combines source repos and operationally-maintained builder images to produce application images

Available as a standalone project, for use with Jenkins or other external builder processes: github.com/openshift/source-to-image
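A sketch of what standalone use looks like with the s2i CLI (the builder image name is only an example; any s2i-enabled builder works, and you need a local container runtime such as Docker):

```
# Combine the workshop source repo with a Node.js builder image to produce a runnable app image:
s2i build https://github.com/ryanj/http-base centos/nodejs-10-centos7 http-base:local
# Run the result locally (OpenShift Node.js builders listen on 8080 by default):
docker run -p 8080:8080 http-base:local
```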

Automate

git push to deploy

Send in the Clones

Clone a local copy of your repo fork by adding your own GitHub username to the following command:

git clone http://github.com/YOUR_GH_USERNAME/http-base
cd http-base

WebHook Build Automation

Set up a commit WebHook to automate image production

Explore the Build resources using the web console. Look for the GitHub webhook settings. Copy the webhook URL and paste it into your repo's webhook settings on GitHub
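If you prefer the CLI, the same webhook URL is visible on the BuildConfig (output formatting varies by oc version):

```
# Look for the "Webhook GitHub" URL in the output; substitute the secret value
# from the BuildConfig when pasting the URL into GitHub.
oc describe bc/http-base
```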

If you're running OpenShift locally in a VM and do not want to use Gogs, try using ultrahook to proxy webhook events to your laptop

ReBuild on Push

After configuring the webhook for your repo, add a small commit locally, then git push to deploy

git push

Or, use GitHub's web-based editor to make a minor change

If you don't have a working webhook to automate the build process, it can also be started manually:

oc start-build http-base
Tip!

Modify http-base/.git/config so you don't have to enter a password each time
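One way to do that, if you're comfortable caching credentials on this machine (an illustration; adjust to your own credential setup):

```
# Cache HTTPS credentials in memory for an hour so each push doesn't prompt:
git config credential.helper 'cache --timeout=3600'
# Or embed your username in the remote URL (this edits .git/config for you):
git remote set-url origin https://YOUR_GH_USERNAME@github.com/YOUR_GH_USERNAME/http-base
```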

Iterate

Iterate using a fully containerized toolchain

Live Development

Make a minor edit to your local repo's index.html file,

then test your changes before you commit by syncing content into your hosted container:

export PODNAME=$(oc get pods -l app=http-base | tail -n 1 | cut -f1 -d' ')
oc rsync -w --exclude='.git,node_modules' . $PODNAME:
## Terminal Access

* Available in the Web Console
* And on the CLI, with:

oc exec -it $PODNAME -- bash

Then, from inside the container, try contacting the service:

curl http-base
## Configuration

[Environment Variables](https://docs.openshift.org/latest/dev_guide/environment_variables.html) are one way to add configuration settings to your images:

oc set env dc/http-base KEY=VALUE

ConfigMaps and Secrets are also useful configuration abstractions
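A sketch of the ConfigMap route (the map and key names here are examples, not part of the workshop app):

```
# Create a ConfigMap, then project its keys into the deploymentConfig as environment variables:
oc create configmap http-config --from-literal=GREETING=hello
oc set env dc/http-base --from=configmap/http-config
# Verify what the containers will see:
oc set env dc/http-base --list
```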
## Logs

Centralized logging and metrics

Deployment Strategies

Get more control of your container rollout and update processes by selecting appropriate deployment strategies for your fleet of managed containers
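For example, on the deploymentConfig created by the web workflow you can inspect the current strategy and switch between the built-in Rolling and Recreate types (a sketch; review the deployment strategy docs before changing anything shared):

```
# Which strategy is currently in effect?
oc get dc/http-base -o jsonpath='{.spec.strategy.type}'
# Switch between Rolling and Recreate by editing the strategy stanza:
oc edit dc/http-base
```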

Collaborate

Share and replicate your success

Service Catalog & Brokers

Expose and provision services

www.openservicebrokerapi.org

### Operators

> "To help make it easier to build Kubernetes applications, Red Hat and the Kubernetes open source community developed the Operator Framework — an open source toolkit designed to manage Kubernetes native applications, called Operators, in a more effective, automated, and scalable way."

More Information: [Operator Framework](https://www.redhat.com/en/blog/introducing-operator-framework-building-apps-kubernetes)
### Operator Hub

[Official and 3rd-party ISV Operators](https://operatorhub.io/)

Templates as Installers

Install a template into the current project, making it easier to reuse:

oc create -f template.json

Create an application from an installed template, from a file, or from a url:

oc new-app -f template.json
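Templates are parameterized; `oc process` fills in the parameters and emits plain resource specs, which is handy when you want to pipe them straight into `oc create` (a sketch using the same template.json placeholder as above):

```
# List the parameters a template accepts:
oc process --parameters -f template.json
# Render the template and create the resulting objects in one go:
oc process -f template.json | oc create -f -
```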

Multi-Service App Example

Nodejs and MongoDB multi-service application example:

oc create -f https://raw.githubusercontent.com/openshift-roadshow/nationalparks-js/master/nationalparks-js.json

github.com/ryanj/nationalparks-js

Review and install the above template content using oc create, then try launching it via the web-based Service Catalog.

When you're done, list all available API resources to review the contents of your project namespace:

oc get all

Advanced Extensibility

### Standardize your environments with custom base images

https://docs.okd.io/latest/using_images/s2i_images/nodejs.html

https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image

Wrap Up

Exit Survey

  1. Do you have experience using containers?
  2. Do you have experience using Kubernetes?
  3. Do you consider yourself to be basically proficient with the oc or kubectl command-line tools?
  4. Can you name five basic Kubernetes primitives or resource types?
  5. Are you ready to start standardizing your web development processes with containers?
## Resources

Free O'Reilly Ebook

Deploying to OpenShift

### Kubernetes SIGs

[Kubernetes Special Interest Groups (SIGs)](https://github.com/kubernetes/community/blob/master/sig-list.md)
## K8s Terminology

1. [node](https://kubernetes.io/docs/concepts/architecture/nodes/)
2. [pod](https://kubernetes.io/docs/concepts/workloads/pods/pod/)
3. [deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/)
4. [service](https://kubernetes.io/docs/concepts/services-networking/service/)
5. [replicaSet (rs)](https://kubernetes.io/docs/concepts/workloads/controllers/replicaset/)
## OpenShift Terminology

1. [buildConfig (bc)](https://docs.openshift.org/latest/rest_api/apis-build.openshift.io/v1.BuildConfig.html)
2. [imageStream (is)](https://docs.openshift.org/latest/rest_api/apis-image.openshift.io/v1.ImageStream.html)
3. [deploymentConfig (dc)](https://docs.openshift.org/latest/rest_api/apis-apps.openshift.io/v1.DeploymentConfig.html)
4. [route](https://docs.openshift.org/latest/rest_api/apis-route.openshift.io/v1.Route.html)
5. [template](https://docs.openshift.org/latest/rest_api/oapi/v1.Template.html)
### More Ways to Try OpenShift

* [OpenShift Learning Portal](http://learn.openshift.com)
* [OpenShift Origin](https://github.com/openshift/origin) (and [minishift](https://github.com/minishift/minishift))
* [OpenShift Online (Starter and Pro plans available)](https://www.openshift.com/products/online/)
* [OpenShift Dedicated (operated on AWS, GCE, and Azure)](https://www.openshift.com/products/dedicated/)
* [OpenShift Container Platform (supported on RHEL, CoreOS)](https://www.openshift.com/products/container-platform/)

For a local laptop install see [minikube](http://bit.ly/k8s-minikube) and/or [minishift](http://bit.ly/k8s-minishift)

Q&A

Thank You!

@christianh814 / christian@redhat.com

http://bit.ly/k8s-intro

Runs on Kubernetes Presented by: @christianh814