oc or kubectl cli tools?
🏆 Most-discussed on GitHub in 2017!
Kubernetes Community - Top of the Open Source Charts in 2017
kubectl
For detailed installation notes, see the kubectl install doc
One line install for linux/amd64:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
One line install for macOS:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
To verify kubectl availability, try running:
kubectl help
For detailed installation notes, see the oc installation doc
One line install for linux/amd64:
curl -Lo oc.tar.gz https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz && tar xvzf oc.tar.gz */oc && sudo mv $_ /usr/local/bin/ && rm -d openshift-origin-client-tools-* && rm oc.tar.gz
One line install for macOS:
curl -Lo oc.zip https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-mac.zip && tar xvzf oc.zip oc && sudo mv oc /usr/local/bin/ && rm oc.zip
To verify oc availability, try running:
oc help
Windows users should install the Windows Subsystem for Linux and related command-line tools:
Enable Control Panel > Programs > Windows Features > Windows Subsystem for Linux
minishift
For detailed installation notes, see the minishift release notes
One line install for linux/amd64:
curl -Lo minishift.tgz https://github.com/minishift/minishift/releases/download/v1.25.0/minishift-1.25.0-linux-amd64.tgz && tar xvzf minishift.tgz */minishift && sudo mv $_ /usr/local/bin/ && rm -d minishift-* && rm minishift.tgz
One line install for macOS:
curl -Lo minishift.tgz https://github.com/minishift/minishift/releases/download/v1.25.0/minishift-1.25.0-darwin-amd64.tgz && tar xvzf minishift.tgz */minishift && sudo mv $_ /usr/local/bin/ && rm -d minishift-* && rm minishift.tgz
Optionally, customize your cluster's memory or cpu allocation:
minishift config set memory 4096
minishift config set cpus 2
minishift config set openshift-version latest
To verify minishift availability, try running:
minishift version
See the minishift installation guide for virt driver plugin requirements.
If your minishift environment does not boot correctly, use the --vm-driver flag to select a specific plugin by name:
minishift start --vm-driver=virtualbox
minishift provides an easy way to run OpenShift locally:
minishift start
When you are done, halt the VM to free up system resources:
minishift stop
Need a fresh start? Delete your VM instance with:
minishift delete
Verify that your cli tools are configured to connect to your Kubernetes environment:
kubectl version
The output should include your kubectl version info, and the release version of the Kubernetes API server (when available)
Try using kubectl to list resources by type:
kubectl get nodes
Log in as an admin user (password "openshift")
minishift addon apply admin-user
oc login -u admin
Try to list nodes using admin credentials:
kubectl get nodes
Now try using curl to make the same request:
curl -k -H"Authorization: Bearer $(oc whoami -t)" https://$(minishift ip):8443/api/v1/nodes
We won't need admin privileges for the remaining content, so let's swap back to the "developer" user:
oc login -u developer
Try using kubectl to list resources by type:
kubectl get pods
Create a new resource from a json object specification:
curl -LO https://raw.githubusercontent.com/jankleinert/hello-workshop/master/pod.json
curl -k -H"Authorization: Bearer $(oc whoami -t)" -H'Content-Type: application/json' https://$(minishift ip):8443/api/v1/namespaces/myproject/pods -X POST --data-binary @pod.json
Attempt the same using kubectl:
kubectl create -f https://raw.githubusercontent.com/jankleinert/hello-workshop/master/pod.json
List pods by type using curl:
curl -k -H"Authorization: Bearer $(oc whoami -t)" https://$(minishift ip):8443/api/v1/namespaces/myproject/pods
Fetch an individual resource by type/id; output as json:
kubectl get pod hello-k8s -o json
Attempt the same using curl:
curl -k -H"Authorization: Bearer $(oc whoami -t)" https://$(minishift ip):8443/api/v1/namespaces/myproject/pods/hello-k8s
Notice any changes between the initial json podspec and the API response?
Request the same info, but output the results as structured yaml:
kubectl get pod hello-k8s -o yaml
Print human-readable API output:
kubectl describe pod/hello-k8s
Expose the pod by creating a new service (or "loadbalancer"):
kubectl expose pod/hello-k8s --port 8080 --type=NodePort
Take a look at the resulting {.spec.selector} attribute:
kubectl get svc/hello-k8s -o json
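The relevant parts of that output should resemble the sketch below (shown as YAML for readability; the nodePort value is illustrative, since your cluster assigns its own):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-k8s
spec:
  type: NodePort
  selector:
    run: hello-k8s        # matches the run= label on the pod we exposed
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30080       # cluster-assigned; this value is illustrative
```

The selector is how the service finds its backing pods: any pod carrying the run=hello-k8s label receives traffic.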
Try using a JSONpath selector to find the assigned port number:
kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort}
Contact your newly-exposed pod via the exposed nodePort:
echo http://$(minishift ip):$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})
curl http://$(minishift ip):$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})
Schedule the deletion of all pods that are labeled with run=hello-k8s:
kubectl get pods -l run=hello-k8s
kubectl delete pods -l run=hello-k8s
Contact the related service. What happens?
curl $(minishift ip):$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})
Delete the service:
kubectl delete service hello-k8s
Create a specification for your deployment:
kubectl run hello-k8s --image=jkleinert/nodejsint-workshop \
--dry-run -o json > deployment.json
View the generated deployment spec file:
cat deployment.json
Create a new deployment from your local spec file:
kubectl create -f deployment.json
Create a Service spec to direct traffic:
kubectl expose deploy/hello-k8s --type=NodePort --port=8080 --dry-run -o json > service.json
View the resulting spec file:
cat service.json
Create a new service from your local spec file:
kubectl create -f service.json
List multiple resources by type:
kubectl get po,svc,deploy
Connect to your new deployment via the associated service port:
curl $(minishift ip):$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})
Scale up the hello-k8s deployment to 3 replicas:
kubectl scale deploy/hello-k8s --replicas=3
List pods:
kubectl get po
Edit deploy/hello-k8s, setting spec.replicas to 5:
kubectl edit deploy/hello-k8s -o json
Save and quit. What happens?
kubectl get pods
Watch for changes to pod resources:
kubectl get pods --watch &
In another terminal, delete several pods by id:
kubectl delete pod $(kubectl get pods | grep ^hello-k8s | cut -f1 -s -d' ' | head -n 3 | tr '\n' ' ')
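The pipeline inside $(...) above can be exercised locally on simulated kubectl output (the pod names below are made up) to see which ids it selects:

```shell
# Fake `kubectl get pods` output: a header row plus space-separated columns
printf 'NAME READY STATUS
hello-k8s-1 1/1 Running
hello-k8s-2 1/1 Running
hello-k8s-3 1/1 Running
hello-k8s-4 1/1 Running
other-pod 1/1 Running
' |
  grep ^hello-k8s |   # keep only hello-k8s rows (this also drops the header)
  cut -f1 -s -d' ' |  # take the first space-separated field: the pod name
  head -n 3 |         # limit to the first three matches
  tr '\n' ' '         # join names onto one line for `kubectl delete pod`
# prints: hello-k8s-1 hello-k8s-2 hello-k8s-3
```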
What happened? How many pods remain?
kubectl get pods
Close your backgrounded --watch processes by running fg, then sending a break signal (CTRL-c)
View the current state of your deployment:
curl $(minishift ip):$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})
Watch deployments:
kubectl get deploy -w &
Update your deployment's image spec to rollout a new release:
kubectl set image deploy/hello-k8s hello-k8s=jkleinert/nodejsint-workshop:v1
View the current state of your deployment:
curl $(minishift ip):$(kubectl get svc/hello-k8s -o jsonpath={.spec.ports[0].nodePort})
Ask the API to list replicaSets:
kubectl get rs
View the list of previous rollouts:
kubectl rollout history deploy/hello-k8s
Rollback to the previous state:
kubectl rollout undo deployment hello-k8s
Reload your browser to view the state of your deployment
Cleanup all resources:
kubectl delete service,deployment hello-k8s
Close your remaining --watch listeners by running fg before sending a break signal (CTRL-c)
Verify that your namespace is clean:
kubectl get all
Build and deploy container images
Access the dashboard by running:
minishift dashboard
For this example, we will deploy a fork of the ryanj/http-base repo by clicking on "Add to Project" in the web console
Example repo source: http://github.com/ryanj/http-base
Fork the ryanj/http-base repo on GitHub. This will allow you to configure your own GitHub webhooks in the upcoming Deploy section
Select the nodejs base image, and name your webservice "http-base". Then enter the github url for your fork
The web console uses a socket stream to report status changes as they occur throughout the cluster
After the build task has completed, find the NAME of the pod where your image has been deployed:
oc get pods
As with the core APIs, the CLI output is consistently formatted, following established patterns:
kubectl get pods
Source to Image (s2i)
Combines source repos and operationally-maintained builder images to produce application images
Available as a standalone project, for use with Jenkins or other external builder processes: github.com/openshift/source-to-image
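As a rough sketch (the resource names and image tag here are illustrative), an OpenShift BuildConfig is what ties a source repo to a builder image:

```yaml
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: http-base
spec:
  source:                      # where the application source comes from
    git:
      uri: https://github.com/ryanj/http-base
  strategy:
    sourceStrategy:            # s2i: layer the source onto a builder image
      from:
        kind: ImageStreamTag
        name: nodejs:latest
  output:
    to:
      kind: ImageStreamTag     # where the resulting application image lands
      name: http-base:latest
```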
git push to deploy
Clone a local copy of your repo fork by adding your own github username to the following command:
git clone http://github.com/YOUR_GH_USERNAME/http-base
cd http-base
Set up a commit WebHook to automate image production
Explore the Build resources using the web console. Look for the GitHub Webhook settings. Copy the webhook url, and paste it into your repo's Webhook settings on GitHub
If you're running OpenShift locally in a VM, try using ultrahook to proxy webhook events to your laptop
After configuring the webhook for your repo, add a small commit locally, then git push to deploy:
git push
Or, use GitHub's web-based editor to make a minor change
If you don't have a working webhook to automate the build process, it can also be started manually:
oc start-build http-base
Iterate using a fully containerized toolchain
Make a minor edit to your local repo's index.html file, then test your changes before you commit by syncing content into your hosted container:
export PODNAME=$(oc get pods -l app=http-base | tail -n 1 | cut -f1 -d' ')
oc rsync -w --exclude='.git,node_modules' . $PODNAME:
Get more control of your container rollout and update processes by selecting appropriate deployment strategies for your fleet of managed containers
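For example (the values here are illustrative), a Deployment's rollout behavior is tuned via its strategy stanza:

```yaml
spec:
  strategy:
    type: RollingUpdate        # replace pods gradually (vs. Recreate)
    rollingUpdate:
      maxSurge: 1              # allow at most one extra pod during rollout
      maxUnavailable: 0        # never drop below the desired replica count
```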
Share and replicate your success
Expose and provision services
Install a template into the current project, making it easier to reuse:
oc create -f template.json
Create an application from an installed template, from a file, or from a url:
oc new-app -f template.json
Nodejs and MongoDB multi-service application example:
oc create -f https://raw.githubusercontent.com/openshift-roadshow/nationalparks-js/master/nationalparks-js.json
github.com/ryanj/nationalparks-js
Review and install the above template content using oc create, then try launching it via the web-based Service Catalog.
When you're done, list all available API resources to review the contents of your project namespace:
oc get all
Operators = Custom Resources + Custom Controllers
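A minimal sketch of the "custom resource" half (the group, names, and kind below are made up for illustration; a matching custom controller would watch these objects and reconcile the cluster toward the state they declare):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1   # CRD API group in this era of Kubernetes
kind: CustomResourceDefinition
metadata:
  name: hellos.example.com                 # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: hellos
    singular: hello
    kind: Hello
```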
oc or kubectl command-line tools?