
Kubernetes Tutorial for Swift on the Server

In this tutorial, you’ll learn how to use Kubernetes to deploy a Kitura server that’s resilient, with crash recovery and replicas. You’ll start by using the kubectl CLI, then use Helm to combine it all into one command.


Version

  • Swift 5, macOS 10.14, Xcode 10

As you learn more about developing apps on the server-side, you’ll encounter multiple situations where you require tooling to handle processes outside of your source code. You’ll need to handle things like:

  • Deployment
  • Dependency management
  • Logging
  • Performance monitoring

While Swift on the server continues to mature, you can draw from a collection of tools that are considered “Cloud Native” to accomplish these things with a Swift API!

In this tutorial, you’ll:

  • Use Kubernetes to deploy and keep your app in flight.
  • Use Kubernetes to replicate your app for high availability.
  • Use Helm to combine all of the previous work you did with Kubernetes into one command.

This tutorial uses Kitura to build the API you’ll be working with, which is called RazeKube. Behold — the KUBE!

Swift Bird in a Cube

While the Kube is many things (all seeing, all knowing), there is one thing that the Kube isn’t: Resilient! You are going to use Cloud Native tools to make it so!

In this tutorial, you will use the following:

  • Kitura 2.7 or higher
  • macOS 10.14 or higher
  • Swift 5.0 or higher
  • Xcode 10.2 or higher
  • Terminal

Cloud Native Development and Swift

Here’s a short intro to what the Swift Server work group (SSWG) is working on and what Cloud Native Development entails.

Swift on the server draws inspiration from many different ecosystems. Vapor, Kitura and Perfect all draw their architecture from different projects in different programming languages. The concept of Futures and Promises in SwiftNIO isn’t native to Swift, although SwiftNIO is developed with the goal of standing on its own as a standard.

The Swift Server work group meets bi-weekly to discuss advancements in the ecosystem. The group has a few goals, but this one sticks out as relevant to this tutorial:

The Swift Server work group will … define and run an incubation process for these efforts to reduce duplication of effort, increase compatibility and promote best practices.

Regardless of how you may feel about adopting third-party libraries, the concept of reducing repeated code is important and worth pursuing.

Cloud Native Computing Foundation

Another collective in pursuit of this goal is the Cloud Native Computing Foundation (CNCF). The primary charter of the CNCF is as follows:

The Foundation’s mission is to make cloud native computing ubiquitous. The CNCF Cloud Native Definition v1.0 says: Cloud native technologies empower organizations to build and run scalable apps in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

Swift and Kubernetes

Swift developers who have focused their efforts on mobile devices haven’t worried too much about standardization with other platforms. Apple has a reputation for forming iOS-centric guidelines for design, and there are a number of tools to accomplish similar goals, but all on one ecosystem — iOS.

In the world of server computing, multiple languages can solve almost any problem. There are plenty of times when one programming language makes more sense than another. Rather than get into a “holy-war” discussion about when Swift makes more sense than other languages, you’ll focus on using Swift as a means to an end. You’ll get a taste of the tools that can help you solve problems while you do it!

Kubernetes is the first tool you’re going to use. While Kubernetes is an important tool for deployment in current-day server-side development, it serves a number of other purposes as well, and you’ll explore those in this tutorial! You’ll dive right in after you make sure your app is working the way it needs to.

Getting Started

Click the Download Materials button at the top or bottom of this tutorial to get the project files you’ll use to build the sample Kitura app.

Next, you need to install Docker Desktop and the Kitura CLI to proceed with this tutorial.

Note: There are two things I would like to point out before diving into this tutorial:
  1. Audrey Tam has written an absolutely brilliant tutorial about how to use Docker here. Docker is discussed in this tutorial as a building block for other components, and I recommend giving Audrey’s tutorial a read before proceeding. She has also written a tutorial on how to deploy a Kitura app with Kubernetes here, which is worth a read to familiarize yourself with some of the basic concepts used here too!
  2. Docker Desktop seems to be the ideal way to use Kubernetes on your Mac lately, but you do have alternatives! You can try Minikube or a cloud provider to set up an online Kubernetes service, but Docker Desktop includes support for Kubernetes as well as other things that will prove helpful throughout this tutorial.

Installing Docker and Kubernetes

If you’ve already installed Docker, start it up, then skip down to the next section: Enabling Kubernetes in Docker.

In a web browser, open https://www.docker.com/products/docker-desktop, and click the Download Desktop for Mac and Windows button:

Docker Desktop Installation

On the next page, sign in or create a free Docker Hub account, if you don’t already have one. Then, proceed to download the installer by clicking the Download Docker Desktop for Mac button:

Docker Desktop Download

You’ll download a file called Docker.dmg. Once the download completes, double-click the file. You’ll see a dialog asking you to drag the Docker whale into your Applications folder.

Dragging Docker to the Applications Folder

When the installer appears, you’ll have to allow privileged access for your Mac. Make sure you install the Stable version — previous versions of Docker Desktop only included Kubernetes in the Edge version.

Enabling Kubernetes in Docker

Once your installation has finished, open the Docker whale menu in the top toolbar of your Mac, and select Preferences. In the Kubernetes tab, check Enable Kubernetes, then click Apply:

Enabling Kubernetes within Docker Preferences

You might have to restart Docker for this change to take effect. If you do, open Preferences again, and make sure the bottom of the window says that both Docker and Kubernetes are running.

Verifying Kubernetes is running within Docker

To double-check that Docker is installed, open Terminal, and enter docker version — you should see output like this:

Client: Docker Engine - Community
 Version:           18.09.2
 API version:       1.39
 Go version:        go1.10.8
 Git commit:        6247962
 Built:             Sun Feb 10 04:12:39 2019
 OS/Arch:           darwin/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          18.09.2
  API version:      1.39 (minimum version 1.12)
  Go version:       go1.10.6
  Git commit:       6247962
  Built:            Sun Feb 10 04:13:06 2019
  OS/Arch:          linux/amd64
  Experimental:     false

Additionally, to ensure Kubernetes is running, enter kubectl get all, and you should see that one service is running:

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   16h

Creating Your RazeKube

You’ve set the stage — now it’s time to create your RazeKube API. First, you’ll install the Kitura CLI!

Installing the Kitura CLI

Note: If you have already done this in a different tutorial, confirm Kitura is installed by entering kitura --version in Terminal. If you see a version number, you can skip to the next section: Running RazeKube.

The easiest way to install the Kitura CLI (command-line interface) is via Homebrew. Follow the instructions to install Homebrew, then enter the following commands, one at a time, to install the Kitura CLI:

brew tap ibm-swift/kitura
brew install kitura
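
Once Homebrew finishes, confirm the install by checking the version. You should see a version number printed:

kitura --version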

Installing the Kitura CLI not only gives you the ability to generate starter projects from the command line, it also gives you the built-in capability to build and run your app in a Docker container with kitura build and kitura run! This will come in handy later.

Running RazeKube

Now you’ll build and run the starter app, before diving into Kubernetes.

Navigate to your starter project root directory in Terminal. To check, enter the command ls, and ensure you see Package.swift in the resulting output.

Enter swift build to ensure that everything builds OK, then enter swift run. Your output should be similar to the following:

[2019-07-10T15:26:56.591-05:00] [WARNING] [ConfigurationManager.swift:394 load(url:deserializerName:)] Unable to load data from URL /Users/davidokunibm/RayWenderlich/rw-cloud-native/final/RazeKube/config/mappings.json
[Wed Jul 10 15:26:56 2019] com.ibm.diagnostics.healthcenter.loader INFO: Swift Application Metrics
[2019-07-10T15:26:56.642-05:00] [INFO] [Metrics.swift:52 initializeMetrics(router:)] Initialized metrics.
[2019-07-10T15:26:56.648-05:00] [INFO] [HTTPServer.swift:237 listen(_:)] Listening on port 8080

Click Allow if you see this dialog asking if you want your app to accept incoming network connections:

Allow incoming connections dialog

Now, in a web browser, open localhost:8080 — you should see this home page:

Kitura HomePage up and running on your localhost

Lastly, check to make sure your all-knowing Kube is still … all-knowing: Navigate to localhost:8080/kubed?number=5 in your web browser. You should see the following result:

Showing the result of 5 cubed in the browser

If you see this, good work! Your starter project works as you want it to. Now, you’re going to deliberately sabotage the Kube. Don’t worry — the all-powerful Kube will forgive you and show you the light eventually.

Crashing Your RazeKube

Note: You’re going to create a .xcodeproj file for the starter project, so you can open it in Xcode. If you are using Xcode 11 beta, I cannot guarantee that this entire tutorial will work, but you should be able to open the project in Xcode 11 beta by double-clicking RazeKube.xcodeproj.

Or, to ensure the xed command opens Xcode 10, enter this command:

sudo xcode-select -s /Applications/Xcode.app/Contents/Developer

In Terminal, press Control-C to stop the server, then enter these commands:

swift package generate-xcodeproj
xed .

In Xcode, open Sources/Application/Routes/KubeRoutes.swift. This is a good time to take a look at the sheer power of the Kube by examining the kubeHandler function!

After you catch your breath, add the following code to the end of initializeKubedRoutes(app:):

app.router.get("/uhoh", handler: fatalHandler)

Here, you are declaring that any GET requests made to the /uhoh path should be handled by the function fatalHandler.

Right now, fatalHandler doesn’t exist, so to fix the resulting error, add the following function at the bottom of this file:

func fatalHandler(request: RouterRequest, response: RouterResponse, 
    next: () -> Void) {
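  // Deliberately crash the process to simulate an unrecoverable bug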
  fatalError()
}

Save your file and close Xcode. Although you could build and run this in Xcode if you wanted to, for the rest of this tutorial, you’ll be working almost exclusively in Terminal and a web browser!

In Terminal, enter these two commands:

swift build
swift run

Open a web browser, and confirm that localhost:8080 loads your home page. Now for the fun part — navigate to localhost:8080/uhoh in your browser. Yikes! Your Terminal process should freak out and tell you something similar to the following:

Fatal error: file /Users/davidokunibm/RayWenderlich/rw-cloud-native/final/RazeKube/Sources/Application/Routes/KubeRoutes.swift, line 52
[1]    42560 illegal hardware instruction  swift run

And your web browser doesn’t look any better:

Browser not being able to display the UHOH route since application crashed

For all the work that Apple has done to make Swift a safe language that doesn’t crash often, it’s important to remember that crashes still do happen, and as a developer, you have to mitigate them. This is where Kubernetes can help by auto-restarting your app!

Kubernetes and the State Machine

The heart of Kubernetes is the concept of managing state, and how that state is defined. In fact, it’s OK to think of the core of Kubernetes as one big database — you wouldn’t be wrong!

That database is managed by something called etcd. This is, in-and-of-itself, a tool that’s also backed by the Cloud Native Computing Foundation. Operating Kubernetes is a matter of simply dictating state to etcd through the use of a command line interface called kubectl. You can use .yaml or .json files to dictate state for an app, or you can embed specific instructions inside a command via kubectl. You’re going to do a little bit of both.
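
For example, these two commands dictate the same kind of state. The filename deployment.yaml below is just a placeholder for a manifest like the one you’ll see shortly:

kubectl apply -f deployment.yaml
kubectl create deployment razekube --image=razekube-swift-run:1.0.0

The first command applies a state definition from a file; the second embeds the desired state directly in the command.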

Note: Your RazeKube app uses something called Helm charts to manage your app inside a Kubernetes cluster. You’ll learn what this does in a little bit!

Here’s what a YAML file might look like to describe your deployment of RazeKube:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: razekube
  labels:
    app: razekube
    version: "1.0.0"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: razekube
  template:
    metadata:
      labels:
        app: razekube
    spec:
      containers:
      - name: razekube-swift-run
        image: razekube-swift-run
        ports:
        - name: http-server
          containerPort: 8080

Notice the specification for containers towards the bottom — this means that you’re going to have to create a container image for your app first!

Building and Running Your RazeKube Docker Image

In Terminal, make sure that you’re in the root directory of your app. Enter the command kitura build, and go pour yourself a cup of coffee — this might take a few minutes.

Note: You may see an error stating “failed to run IBM Cloud Developer Tools”. If you do, run kitura idt to install the IBM Cloud Developer Tools, then run kitura build again.

When the build finishes, you should see output like this:
Validating Docker image name
OK
Checking if Docker container razekube-swift-tools is running
OK
Deleting the container named 'razekube-swift-tools' ...
OK
Checking Docker image history to see if image already exists
OK
Creating image razekube-swift-tools based on Dockerfile-tools ...
Image will have user davidokunibm with id 501 added

Executing docker image build --file Dockerfile-tools --tag razekube-swift-tools --rm --pull
--build-arg bx_dev_userid=501 --build-arg bx_dev_user=davidokunibm .

OK
Creating a container named 'razekube-swift-tools' from that image...
OK
Starting the 'razekube-swift-tools' container...
OK
OK
Stopping the 'razekube-swift-tools' container...
OK

The Kitura CLI makes your life easier, while showing you the Docker commands it runs to build this image.

Next, enter the command kitura run — after about 30 seconds, you should see this output:

The run-cmd option was not specified
Stopping the 'razekube-swift-run' container...
OK
The 'razekube-swift-run' container is already stopped
Validating Docker image name
Binding IP and ports for Docker image.
OK
Checking if Docker container razekube-swift-run is running
OK
Deleting the container named 'razekube-swift-run' ...
OK
Checking Docker image history to see if image already exists
OK
Creating image razekube-swift-run based on Dockerfile ...

Executing docker image build --file Dockerfile --tag razekube-swift-run --rm --pull .
OK
Creating a container named 'razekube-swift-run' from that image...
OK
Starting the 'razekube-swift-run' container...
OK
Logs for the razekube-swift-run container:
[2019-07-10T21:06:23.250Z] [WARNING] [ConfigurationManager.swift:394 load(url:deserializerName:)] Unable to load data from URL /swift-project/config/mappings.json
[Wed Jul 10 21:06:23 2019] com.ibm.diagnostics.healthcenter.loader INFO: Swift Application Metrics
[2019-07-10T21:06:23.450Z] [INFO] [Metrics.swift:52 initializeMetrics(router:)] Initialized metrics.
[2019-07-10T21:06:23.456Z] [INFO] [HTTPServer.swift:237 listen(_:)] Listening on port 8080

These logs should look familiar — your API is now running in a Linux container via Docker!
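
If you’d like to double-check from another Terminal window, docker ps lists running containers. You should see razekube-swift-run in the output:

docker ps --filter name=razekube-swift-run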

Tagging Your RazeKube Docker Image

Open a web browser and navigate to localhost:8080 to make sure you can see the home page. Next, press Control-C in your Terminal to stop the container.

Now, enter the command docker image ls — your output should look like this:

REPOSITORY            TAG     IMAGE ID      CREATED         SIZE
razekube-swift-run    latest  eb85ef44e45f  2 minutes ago   598MB
razekube-swift-tools  latest  2008ae41e316  3 minutes ago   1.97GB

The Kitura CLI configures your app to use one container — razekube-swift-tools — to compile your app, and a separate one — razekube-swift-run — to ultimately run it, all in the name of saving space in your runtime image.

If you think that this is still a bit large for a container, you aren’t alone – “slim” images and multi-stage Dockerfiles are in the works as you read this!

Lastly, tag your image like so:

docker tag razekube-swift-run razekube-swift-run:1.0.0

Type docker image ls again to make sure your razekube-swift-run tag was created:

REPOSITORY            TAG     IMAGE ID       CREATED         SIZE
razekube-swift-run    1.0.0   eb85ef44e45f   3 minutes ago   598MB
razekube-swift-run    latest  eb85ef44e45f   3 minutes ago   598MB
razekube-swift-tools  latest  2008ae41e316   4 minutes ago   1.97GB

All right, next you’ll put this inside your Kubernetes cluster!

Deploying RazeKube to Kubernetes

First, type kubectl get all and kubectl get pods, and check that the output looks like so:

➜ kubectl get all
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   19h
➜ kubectl get pods
No resources found.

In Kubernetes, a pod is the smallest deployable unit — just a set of co-located containers. For your purposes, observing a pod is essentially observing the app you deploy.

Create a deployment for RazeKube, which in turn creates a pod, by entering the following command in Terminal:

kubectl create deployment razekube --image=razekube-swift-run:1.0.0

Confirm that your app deployed by running kubectl get pods, and check that your output looks similar to this:

NAME                        READY     STATUS    RESTARTS   AGE
razekube-6dfd6844f7-74j7f   1/1       Running   0          26s

Kubernetes creates a unique identifier for each pod as it runs, unless you specify otherwise. While this is great to see that your app is running, you haven’t yet configured a way to access it!
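
If you’re curious about the details of a pod, such as its IP address, container image and recent events, kubectl describe shows you everything Kubernetes knows about it. Substitute a pod name from your own kubectl get pods output:

kubectl describe pod razekube-6dfd6844f7-74j7f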

Creating a RazeKube Service

This is where Kubernetes begins to shine: Rather than taking control away from you, Kubernetes gives you complete control over how your end users access each deployment via a service.

Add a point of access for your app by creating a service like so:

kubectl expose deployment razekube --type="NodePort" --port=8080

Now type kubectl get svc to get a list of exposed services currently in flight on Kubernetes, and you should see output like so:

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          20h
razekube     NodePort    10.105.98.111   <none>        8080:32612/TCP   1m

Notice the PORT(S) column — Kubernetes has mapped port 8080 on your app to a randomly assigned port. This port will be different every time, so make sure you note which port Kubernetes opened for you. Open a web browser, and navigate to that address, which would be localhost:32612 in my case. If you see the home page, ask the almighty Kube to demonstrate its power by navigating to localhost:32612/kubed?number=4 — you should see this:

4 cubed running within Kubernetes

Nice! You are now running a Swift app on Kubernetes!!!

The sun with some sun glasses on
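
By the way, rather than reading the randomly assigned port out of the table each time, you can ask for it directly with a JSONPath query. This one-liner assumes your service is named razekube, as above:

kubectl get svc razekube -o jsonpath='{.spec.ports[0].nodePort}'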

Recovering From a Crash

Now you’re going to test out how Kubernetes keeps things working for you. First, type kubectl get all in Terminal, and you should see the following output:

NAME                            READY     STATUS    RESTARTS   AGE
pod/razekube-6dfd6844f7-74j7f   1/1       Running   0          11m

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          20h
service/razekube     NodePort    10.105.98.111   <none>        8080:32612/TCP   8m

NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/razekube   1         1         1            1           11m

NAME                                  DESIRED   CURRENT   READY     AGE
replicaset.apps/razekube-6dfd6844f7   1         1         1         11m

Notice how every component of your state is enumerated for you.

Next, type the command kubectl get pods, but don’t press Return just yet. In a moment, what you’re going to do is:

  • Navigate to localhost:32612/uhoh in your browser, which will deliberately crash your app.
  • Press Return in Terminal, and run the same kubectl get pods command repeatedly until you see that your STATUS is Running. Hint: Press the Up Arrow to redisplay the previous command.
  • Navigate to localhost:32612 in your browser.

As you keep entering your command in Terminal, you will see your pod state evolve like so:

NAME                        READY     STATUS    RESTARTS   AGE
razekube-6dfd6844f7-74j7f   0/1       Error     0          17m

NAME                        READY     STATUS             RESTARTS   AGE
razekube-6dfd6844f7-74j7f   0/1       CrashLoopBackOff   0          17m

NAME                        READY     STATUS                RESTARTS   AGE
razekube-6dfd6844f7-74j7f   0/1       ContainerCreating     1          17m

NAME                        READY     STATUS    RESTARTS   AGE
razekube-6dfd6844f7-74j7f   1/1       Running   1          17m

As Kubernetes scans the state of everything in your cluster, it reconciles how things are — crashed — with how they should be — the desired state recorded in etcd. If there is a mismatch, Kubernetes works to resolve the difference!

You have dictated that there should be a functioning deployment called razekube, but by triggering the /uhoh route, that deployment is no longer functioning. When Kubernetes picks up that the non-functional state doesn’t match the desired functional state in etcd, it redeploys the container to bring it back to a functional state. After your deployment is running again, you then access your app to see that you’re back in business!
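
Rather than pressing Up Arrow repeatedly, you can also ask kubectl to stream these state transitions as they happen. Press Control-C when you’re done watching:

kubectl get pods --watch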

Deploying Replicas

Running/not-running isn’t the only state Kubernetes can manage. Consider a scenario where a bunch of people have heard about the almighty Kube, and they want to check out its power. You’ll need more than one instance of your app running concurrently to handle all that traffic!

In Terminal, enter the following command:

kubectl scale --replicas=5 deployment razekube

Typically, with heavier apps, you could enter this command to watch this happen in real time:

kubectl rollout status deployment razekube

But this is a fairly lightweight app, so the change will happen immediately.

Enter kubectl get pods and kubectl get deployments to check out the new app state:

➜ kubectl get pods
NAME                        READY     STATUS    RESTARTS   AGE
razekube-6dfd6844f7-74j7f   1/1       Running   4          32m
razekube-6dfd6844f7-88wr7   1/1       Running   0          1m
razekube-6dfd6844f7-b4snx   1/1       Running   0          1m
razekube-6dfd6844f7-tn6mr   1/1       Running   0          1m
razekube-6dfd6844f7-vnr7w   1/1       Running   0          1m
➜ kubectl get deployments
NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
razekube   5         5         5            5           33m

In this case, you’ve told etcd that the desired state of your cluster includes five replicas of your razekube deployment.

Hit your /uhoh route a couple of times, and type kubectl get pods over and over in Terminal to observe your pods as Kubernetes works to maintain their dictated state!

Kubernetes can manage so much more than just these two examples. You can do things like:

  • Manage TLS certificate secrets for encrypted traffic.
  • Create an Ingress controller to handle where certain traffic goes into your cluster.
  • Handle a load balancer so that deployments inside your cluster receive equal amounts of traffic.

And because you worked with a Docker container this whole time, none of this is specific to Swift — it works for any app that you can put into Docker ;].
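
As a taste of the first item on that list, storing a TLS certificate as a Kubernetes secret is a single command. Here, cert.pem and key.pem are placeholder file names for a certificate and private key you’d supply yourself:

kubectl create secret tls razekube-tls --cert=cert.pem --key=key.pem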

Cleaning Up

Rather than dive deeper into more of those capabilities, you’re going to learn how to consolidate all of the steps you’ve run above with Helm! Before proceeding to work with Helm, use kubectl to clean up your cluster like so:

kubectl delete service razekube
kubectl delete deployment razekube

When this is done, type kubectl get pods to ensure that you have no resources in flight. You should once again see No resources found.

Helm: The Kubernetes Package Manager

Helm is a package manager designed to simplify deploying simple or complex apps to Kubernetes. Helm has two components that you need to know about:

  • The client, referred to as helm, which runs on your command line and dictates deployment commands to your Kubernetes cluster.
  • The server, referred to as tiller, which takes commands from helm and forwards them to Kubernetes.

Helm uses YAML and JSON files, called charts, to manage deployments to Kubernetes. One benefit of using the Kitura CLI is that the app generator makes these chart files for you!
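
If you’d like a peek ahead, a chart is just a directory with a predictable layout, roughly like this in your generated project:

chart/razekube/
  Chart.yaml      # The chart's name and version
  values.yaml     # Configurable values, like the replica count and image tag
  templates/      # Kubernetes manifests with templated placeholders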

What’s in a Chart?

In Terminal, make sure you are in the root directory of your RazeKube app, and type the following command:

cat chart/razekube/values.yaml

Notice the format of this document, particularly the top component:

replicaCount: 1
revisionHistoryLimit: 1
image:
  repository: razekube-swift-run
  tag: 1.0.0
  pullPolicy: Always
  resources:
    requests:
      cpu: 200m
      memory: 300Mi
livenessProbe:
  initialDelaySeconds: 30
  periodSeconds: 10
service:
  name: swift
  type: NodePort
  servicePort: 8080

In this one file, you are defining:

  • The number of replicas you want to have for your deployment.
  • The Docker image for the deployment you want to make.
  • The service and port you want to create to expose the deployment.

Remember how you had to configure each of those things individually with kubectl commands? This file makes it possible to do all of these with one swift command!

Now you’re going to configure Helm to work with your Kubernetes cluster, and make quick work of your deployment commands!

Setting Up Helm and Tiller

Good news — Helm is already technically installed, thanks to the Kitura CLI! However, your Kubernetes cluster isn’t yet set up to receive commands from Helm, which means you need to set up Tiller.

In Terminal, enter the following command:

helm init

If you see output that ends with “Happy Helming!”, then you’re ready to go. Type helm version and make sure that your client and server versions match like so:

Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}

Deploying RazeKube With Helm

Next, you’re going to make two changes to your chart for RazeKube: Navigate to chart/razekube and open values.yaml in a text editor of your choice.

Note: It is critical that you make sure your spacing for text in these YAML documents is perfectly aligned. YAML can be frustrating to work with due to this need, but the hierarchy of components in a Helm chart is easy to see this way.

Update the replicaCount and pullPolicy values in this file so that they look like so:

replicaCount: 5
revisionHistoryLimit: 1
image:
  repository: razekube-swift-run
  tag: 1.0.0
  pullPolicy: IfNotPresent

Here’s what you just updated:

  • Rather than deploying one replica of your app at first, then scaling to five, you are declaring up front that your desired state should contain five replicas of your deployment.
  • Setting pullPolicy to IfNotPresent tells Kubernetes to pull the image from a remote container registry only if it is not already present in your Docker file system. You could point this at any remote image you have access to, but since this image is available locally, you are choosing to use what is present.

Save this file, and navigate back to the root directory of your app in Terminal. Enter the following command to do everything at once:

helm install -n razekube-app chart/razekube/

Behold Your Charted RazeKube!

After you run this command, Helm gives you output that should look very similar to what you get when using kubectl to check your app status:

NAME:   razekube-app
LAST DEPLOYED: Wed Jul 10 17:29:15 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME                          TYPE      CLUSTER-IP    EXTERNAL-IP  PORT(S)         AGE
razekube-application-service  NodePort  10.105.48.55  <none>       8080:32086/TCP  1s

==> v1beta1/Deployment
NAME                 DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
razekube-deployment  5        0        0           0          1s

==> v1/Pod(related)
NAME                                  READY  STATUS             RESTARTS  AGE
razekube-deployment-7f5694f847-9qnzc  0/1    Pending            0         0s
razekube-deployment-7f5694f847-9zfb8  0/1    Pending            0         0s
razekube-deployment-7f5694f847-dfp9v  0/1    ContainerCreating  0         0s
razekube-deployment-7f5694f847-pxn67  0/1    Pending            0         0s
razekube-deployment-7f5694f847-v5bq2  0/1    Pending            0         0s

Look at you! That was quite a bit easier than all those kubectl commands, wasn’t it? It’s important to know how kubectl works, but it’s equally important to know that you can combine all of the work those commands do into a Helm chart.

In my example, the port assigned to the service is 32086, which means my app should be available at localhost:32086. Open a web browser and navigate to the app at whichever port is open on your service:

Nice work! Now, just like before, access the /uhoh route for your port, and notice how the app crashes. Then access your homepage or the /kubed?number=4 route again, and notice that your app is back up and running!

In Terminal, enter the command helm list — your output should look like this:

NAME          REVISION  UPDATED                   STATUS    CHART           APP VERSION  NAMESPACE
razekube-app  1         Wed Jul 10 17:29:15 2019  DEPLOYED  razekube-1.0.0               default

This shows you the status of your deployments with Helm.

Now, run kubectl get all to look at your output:

NAME                                       READY     STATUS    RESTARTS   AGE
pod/razekube-deployment-7f5694f847-9qnzc   1/1       Running   3          7m
pod/razekube-deployment-7f5694f847-9zfb8   1/1       Running   2          7m
pod/razekube-deployment-7f5694f847-dfp9v   1/1       Running   2          7m
pod/razekube-deployment-7f5694f847-pxn67   1/1       Running   2          7m
pod/razekube-deployment-7f5694f847-v5bq2   1/1       Running   3          7m

NAME                                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/kubernetes                     ClusterIP   10.96.0.1      <none>        443/TCP          21h
service/razekube-application-service   NodePort    10.105.48.55   <none>        8080:32086/TCP   7m

NAME                                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/razekube-deployment   5         5         5            5           7m

NAME                                             DESIRED   CURRENT   READY     AGE
replicaset.apps/razekube-deployment-7f5694f847   5         5         5         7m

Helm gives you a powerful tool that makes deploying and managing your apps much easier than if you only had access to kubectl. It’s still important to have a working understanding of kubectl, though, so you can configure individual components of your app. Better yet, you can use what you’ve learned from those commands to automate your deployments with Helm too!
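
For example, after you change values.yaml again, you can roll the update out to your running release without tearing anything down. This uses the Helm 2 syntax that matches the client above:

helm upgrade razekube-app chart/razekube/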

To clean up, type helm delete razekube-app, and use either helm list or kubectl get all to check the status of everything after it’s been cleaned up.

Where to Go From Here?

You can download the final project using the Download Materials button at the top or bottom of this tutorial.

Thankfully, both inside and outside of the Swift community, you have a plethora of resources at your fingertips to learn more about how you can manage these tools with your Swift REST APIs.

You’ve probably heard about these books by now, but both our Vapor and Kitura books talk about using industry-standard tools like Docker. The Kitura book specifically touches on using Nginx as an Ingress controller, and Prometheus and Grafana for performance monitoring. Also, tools like Appsody exist to make integrating these tools easy! Additionally, you can try another Kitura tutorial on GitHub to learn how to deploy your own PostgreSQL database into Kubernetes, as well as an API that works with it.

Please write to us in the forums below if you have more questions, or want to ask about other tools that exist in this space!
