# Cluster creation on EKS
## Install the dependencies

On the computer that will be used for the installation, you need to install:
## Entitlements

Ensure that `wk` can load a valid entitlements file.
## Install WKP on an EKS cluster

First, create a directory which will contain the cluster management scripts and binaries.
The main configuration file will be unpacked at `setup/config.yaml`.
The required values are your git provider organization or user, your Docker Hub user, and an absolute path to a file containing your Docker Hub password:
Enter your `gitProvider`, `gitProviderOrg`, `dockerIOUser`, and `dockerIOPasswordFile` in your `setup/config.yaml`. (See Git Config Repository for details about the git parameters.)
Set the `track` field to `eks`, and optionally set the `clusterName`, `clusterRegion`, and `kubernetesVersion` fields.
You can provide a path to an eksctl config file directly to configure any of the available options, or set some of the commonly used configuration in the `setup/config.yaml` file.
This version of WKP is compatible with eksctl's `ClusterConfig`, `apiVersion: eksctl.io/v1alpha5`. Also note that if an eksctl config file path is provided, it will override any other fields set in the `eksConfig` section of `setup/config.yaml`.
A sample eksctl config file is provided in the cluster repository at `setup/eksctl-config.yaml`; for the documentation of its schema, please refer to the eksctl docs. The config file used will be copied to `setup/eksctl-config.yaml` and committed to the cluster repository.
The WKP UI is not publicly accessible by default. If you want to expose it via an Application Load Balancer, set the `uiALBIngress` field to `true`.
Finally, enter any node group configuration you may require.
Example snippet of `config.yaml`:
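A sketch of such a snippet, assembled from the fields named above. The nesting of the EKS fields under `eksConfig` and the node group keys (`nodeGroups`, `instanceType`, `desiredCapacity`) are assumptions; consult the unpacked `setup/config.yaml` for the exact schema:

```yaml
track: eks
gitProvider: github
gitProviderOrg: my-org                          # illustrative value
dockerIOUser: my-docker-user                    # illustrative value
dockerIOPasswordFile: /home/me/.docker-password # must be an absolute path
eksConfig:
  clusterName: my-wkp-cluster                   # illustrative value
  clusterRegion: eu-west-1                      # illustrative value
  kubernetesVersion: "1.16"                     # illustrative value
  uiALBIngress: true
  # The node group keys below are illustrative assumptions, not a
  # confirmed WKP schema; check your generated setup/config.yaml.
  nodeGroups:
    - instanceType: m5.large
      desiredCapacity: 3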
WKP uses a personal access token to create the cluster repository on GitHub. The token needs to have permissions in
the `repo` scope. The GitHub documentation on how to create one can be found on this page. Once you have created one,
set the environment variable for it:
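For example — the variable name `GITHUB_TOKEN` is an assumption based on common GitHub tooling conventions, so check your setup scripts for the exact name:

```shell
# Assumed variable name; substitute the token you generated on GitHub.
export GITHUB_TOKEN="<your-personal-access-token>"
```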
Finally, make sure your AWS CLI credentials are configured properly.
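One quick way to verify this is to ask AWS which identity your configured credentials resolve to:

```shell
# Prints the account ID and ARN of the active credentials;
# exits non-zero if no valid credentials are configured.
aws sts get-caller-identity
```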
Now we are ready to install the cluster:
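The installation is driven by the `wk` binary. The exact subcommand below is an assumption based on typical WKP setups; check the README or scripts in the cluster directory you created:

```shell
# Assumed invocation; run from the cluster directory created earlier.
wk setup run
```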
## Access the WKP UI

### via wk ui command

To expose the WKP UI via the `wk ui` command, run:
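That is, the command named above:

```shell
# Forwards the WKP UI to the default local port.
wk ui
```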
You should now be able to view it at http://localhost:8090.
To expose the WKP UI on a port other than the default, run:
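A sketch, assuming the port is selected with a `--port` flag (verify against `wk ui --help`):

```shell
# --port is an assumed flag name; 8091 is an arbitrary example port.
wk ui --port 8091
```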
### via Application Load Balancer

Ensure that the `uiALBIngress` field is set to `true`:
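That is, in your `setup/config.yaml`:

```yaml
uiALBIngress: true
```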
To access the WKP UI via its assigned ingress, get the allocated address:
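One way to list it with kubectl — the namespace and ingress name vary per installation, so the all-namespaces form is used here:

```shell
# The ADDRESS column shows the hostname allocated to each ingress.
kubectl get ingress --all-namespaces
```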
and navigate to it from your browser.
In this example the address is `my-wkp-cluster.mycompany.com`.
## Specifications of managed nodegroups

The specifications of the managed nodegroups of the cluster can be specified in a YAML file.
An example file can be seen below:
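A hedged sketch of such a file, using the eksctl `ClusterConfig` schema (`apiVersion: eksctl.io/v1alpha5`) referenced earlier; the node group name and sizes are illustrative only:

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
managedNodeGroups:
  - name: managed-ng-1      # illustrative name
    instanceType: m5.large  # the default WKP instance type
    minSize: 2
    maxSize: 4
    desiredCapacity: 2
```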
Once created, save it inside the `cluster/platform` directory,
and set the path, either relative to `cluster/platform` or absolute, in your `setup/config.yaml`.
## Node Requirements

Clusters can run on a single node or on multiple nodes, depending on the processing requirements. The default node group WKP deploys on EKS uses the m5.large instance type. A recommended minimum per node is 2 CPU cores and 2 GB of RAM.
If you are building a large cluster, the Kubernetes docs cover the specifications.
Recommended instance types for AWS:
- 1-5 nodes: m3.medium
- 6-10 nodes: m3.large
- 11-100 nodes: m3.xlarge
- 101-250 nodes: m3.2xlarge
- 251-500 nodes: c4.4xlarge
- more than 500 nodes: c4.8xlarge
## Delete a WKP cluster

You can use the `cleanup.sh` script:
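For example — the script's location within the cluster directory is an assumption, so adjust the path to wherever your repository keeps it:

```shell
# Assumed path; the script tears down the EKS cluster and its resources.
./setup/cleanup.sh
```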