Provision OpenShift clusters on AWS, allowing the installer to provision the infrastructure (IPI).
- Install the AWS and `openshift-install` CLIs, and `jq`.
- Set AWS access key secrets as environment variables in `.env`.
- Get a pull secret for Red Hat's container registries from
  https://console.redhat.com/openshift/downloads#tool-pull-secret. Set this as
  `OPENSHIFT_PULL_SECRET` in `.env`.
- Set up a subdomain in AWS (see docs). Set its name as `BASE_DOMAIN` in `.env`.
    - See `create-dns-delegation.sh` for some hints.
- Set `CLUSTER_NAME` and `AWS_REGION` in `.env`, or rely on defaults. An example
  `.env` is sketched after this list.
- Review cluster config in `install-config.yaml.tpl`.
- Run `deploy.sh` to install the cluster.
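For reference, a sketch of a `.env` (the AWS variable names below are the standard AWS CLI environment variables and an assumption about what the scripts read; all values are placeholders):

```bash
# .env -- example values only
AWS_ACCESS_KEY_ID=AKIAEXAMPLE                  # assumed name: standard AWS CLI variable
AWS_SECRET_ACCESS_KEY=EXAMPLESECRETKEY         # assumed name: standard AWS CLI variable
OPENSHIFT_PULL_SECRET='{"auths": {}}'          # pull secret JSON from console.redhat.com
BASE_DOMAIN=aws.example.com                    # subdomain set up in AWS (see above)
CLUSTER_NAME=ipi                               # optional; defaults apply if unset
AWS_REGION=us-east-2                           # optional; defaults apply if unset
```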
Run `destroy.sh` to destroy the cluster and AWS resources.
Cluster state will be saved in this directory at `./_workdir/`. Rename or move this
directory to retain it so that you can later destroy the cluster or get
credentials for it. The installer uses Terraform and relies on its `tfstate`
file.
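For example, to set aside a finished install's state and destroy that cluster later (a sketch; it assumes `destroy.sh` reads state from `./_workdir`, and the renamed path is only illustrative):

```bash
mv ./_workdir ./_workdir-prod    # retain state under another name
# ...later, to tear that cluster down:
mv ./_workdir-prod ./_workdir    # restore it where destroy.sh expects it
./destroy.sh
```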
`deploy.sh` will exit if a workdir already exists. Configure it to overwrite the
current workdir by setting env var `OVERWRITE=1`; configure it to resume a
previous installation by setting env var `RESUME=1`.
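For example (assuming the scripts read these variables from the environment at invocation):

```bash
OVERWRITE=1 ./deploy.sh   # replace the existing workdir and install from scratch
RESUME=1 ./deploy.sh      # pick up a previously started installation
```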
Credentials for the cluster will be echoed to stdout at the end of the
installation. You can also find these credentials in the workdir at
`./_workdir/auth`.
To use the credentials from the workdir, set `export KUBECONFIG=$(pwd)/_workdir/auth/kubeconfig`
and then use `kubectl` or `oc` as usual.
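For example (the `oc` commands are just routine usage, not specific to these scripts):

```bash
export KUBECONFIG=$(pwd)/_workdir/auth/kubeconfig
oc get nodes               # or: kubectl get nodes
oc whoami --show-console   # prints the web console URL
```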
The cluster's web console will be accessible at
`https://console-openshift-console.apps.${CLUSTER_NAME}.${BASE_DOMAIN}`, e.g.
https://console-openshift-console.apps.ipi.aws.joshgav.com/.
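If `CLUSTER_NAME` and `BASE_DOMAIN` are both set in `.env` (rather than left to defaults), you can print the URL from a shell (a sketch, assuming `.env` contains plain `KEY=value` lines):

```bash
source .env
echo "https://console-openshift-console.apps.${CLUSTER_NAME}.${BASE_DOMAIN}"
```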