Version: 2.4.2

Installing WKP on EKS

Install the dependencies#

On the computer that will be used for the installation, you need to install:

  • git
  • kubectl
  • The wk binary. You can verify it is on your PATH by running wk version, as in the quick check below.
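
A quick way to confirm all three tools are installed and on your PATH (the exact version output will vary):

git --version
kubectl version --client
wk version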

Entitlements#

Ensure that wk can load a valid entitlements file.

Install WKP on an EKS cluster#

First, create a directory which will contain the cluster management scripts and binaries.

mkdir wkp-eks-cluster && cd wkp-eks-cluster
wk setup install --entitlements=/path/to/my/entitlements

The main configuration file will be unpacked at setup/config.yaml.
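
To confirm the configuration file is in place before editing it, you can list the setup directory (the other generated files may vary):

ls setup/
# config.yaml should be listed among the unpacked files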

The required values are your git provider organization or user, your Docker Hub user, and an absolute path to a file containing your Docker Hub password:

mkdir -p ~/.wks
echo 'my-dockerhub-password' > ~/.wks/dockerhub-password
chmod 600 ~/.wks/dockerhub-password

Enter your gitProvider, gitProviderOrg, dockerIOUser, and dockerIOPasswordFile values in your setup/config.yaml. (See Git Config Repository for details about the git parameters.)

Set the track field to eks, and optionally, set the clusterName, clusterRegion, and kubernetesVersion fields.

The WKP UI is not publicly accessible by default. If you want to expose it via an Application Load Balancer, set the uiALBIngress field to true.

Finally, enter any node group configuration you may require:

vim setup/config.yaml

Example snippet of config.yaml:

track: eks
clusterName: my-cluster
gitProvider: gitlab
gitUrl: git@git.acme.org:app-team/dev-cluster.git
dockerIOUser: my-docker-user
dockerIOPasswordFile: /home/my-user/.wks/my-dockerhub-password
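
The snippet above covers only the required values. A sketch of the optional fields mentioned earlier (clusterRegion, kubernetesVersion, and uiALBIngress) is shown below; the values are placeholders, and the exact nesting under eksConfig should be checked against your generated setup/config.yaml:

eksConfig:
  clusterRegion: us-east-2    # placeholder region
  kubernetesVersion: "1.16"   # placeholder; use a version supported by EKS
  uiALBIngress: true          # expose the WKP UI via an Application Load Balancer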

WKP uses a personal access token to create the cluster repository on GitHub. The token needs permissions in the repo scope; see the GitHub documentation on creating a personal access token. Once you have created one, set the environment variable for it:

export GITHUB_TOKEN=my-token

Finally, make sure your AWS CLI credentials are configured properly.
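
One way to verify the AWS CLI credentials before starting the install (standard AWS CLI commands, not WKP-specific):

aws configure                 # set credentials and a default region if you have not already
aws sts get-caller-identity   # should return the AWS account and IAM identity you expect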

Now we are ready to install the cluster:

wk setup run
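
Once the run completes, and assuming your kubeconfig now points at the new cluster, a couple of standard kubectl commands give a quick sanity check (node names and counts will differ):

kubectl get nodes
kubectl get pods --all-namespaces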

Access the WKP UI#

via wk ui command#

To expose the WKP UI via the wk ui command, run:

wk ui

You should now be able to view it at http://localhost:8090

To expose the WKP UI on a port other than the default, run:

wk ui --port 8081

via Application Load Balancer#

Ensure that the uiALBIngress field is set to true:

eksConfig:
  uiALBIngress: true

To access the WKP UI via its assigned ingress, get the allocated address:

kubectl get ingress --namespace wkp-ui wkp-ui-alb-ingress
NAME                 HOSTS   ADDRESS                        PORTS   AGE
wkp-ui-alb-ingress   *       my-wkp-cluster.mycompany.com   80      7m5s

and navigate to it from your browser.

In this example the address is my-wkp-cluster.mycompany.com.
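
If you want to script this lookup, the hostname can also be read straight from the ingress status with a standard kubectl jsonpath query (the field path assumes a typical ALB-backed ingress):

kubectl get ingress --namespace wkp-ui wkp-ui-alb-ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'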

Specifications of managed nodegroups#

The managed nodegroups of the cluster can be specified in a YAML file.

An example file can be seen below:

managedNodeGroups:
  - name: managed-1
    instanceType: m5.large
    minSize: 2
    desiredCapacity: 3
    maxSize: 4
    availabilityZones: ['us-east-2a', 'us-east-2b']
    volumeSize: 20
    ssh:
      allow: true
      publicKeyPath: ~/.ssh/id_rsa.pub
    labels: { role: worker }
    tags:
      nodegroup-role: worker
    iam:
      withAddonPolicies:
        externalDNS: true
        certManager: true

Once created, save it inside the cluster/platform directory and set its path, either relative to cluster/platform or absolute, in your setup/config.yaml.

eksConfig:
  nodeGroups: []
  managedNodeGroupFile: managedNodeGroups.yaml
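
As a concrete sequence, saving the example spec and checking the resulting nodes might look like this (file name and paths are illustrative, and the label check matches the role: worker label from the example above):

# copy the spec into the cluster repository
cp managedNodeGroups.yaml cluster/platform/managedNodeGroups.yaml

# after the node group has joined the cluster, its label should be visible
kubectl get nodes --show-labels | grep 'role=worker'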

Node Requirements#

Clusters can run on a single node or on multiple nodes, depending on the processing requirements. The default node group that WKP deploys on EKS uses the m5.large instance type. A recommended minimum per node is 2 CPU cores and 2 GB of RAM.

If you are building a large cluster, the Kubernetes documentation on building large clusters covers the relevant specifications.

Recommended instance types for AWS:

  • 1-5 nodes: m3.medium
  • 6-10 nodes: m3.large
  • 11-100 nodes: m3.xlarge
  • 101-250 nodes: m3.2xlarge
  • 251-500 nodes: c4.4xlarge
  • more than 500 nodes: c4.8xlarge
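
If you want to size an unmanaged node group to match these recommendations, a sketch of an entry under eksConfig.nodeGroups might look as follows; the field names mirror the managed node group example above and are assumptions to verify against your generated setup/config.yaml:

eksConfig:
  nodeGroups:
    - name: workers-1
      instanceType: m3.xlarge   # e.g. for an 11-100 node cluster
      desiredCapacity: 20       # illustrative node count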

Delete a WKP cluster#

You can use the cleanup.sh script:

./setup/cleanup.sh