- Install necessary tools and establish a connection
- Log in to an existing EKS cluster
- Create your Amazon EKS cluster and worker nodes
- Delete the EKS cluster
- Test whether your cluster is working
- Deploy the Kubernetes Web UI (Dashboard)
- Using Helm with Amazon EKS
- Kubernetes Metrics Server
- Deploy your own Docker image
- Deployments, Services, and Pods (incomplete)
- Cleaning up your Amazon ECS resources
- Linux-only logging
- Create a new node
(docs)
- Create an account to access AWS (Amazon Web Services).
- Install the AWS CLI, which creates the required client security token for cluster API server communication:
pip3 install awscli --upgrade --user
aws --version
- If you are unable to install version 1.16.156 or later of the AWS CLI on your system, you must ensure that the AWS IAM Authenticator for Kubernetes is installed. For more information, see Installing aws-iam-authenticator.
- Configure Your AWS CLI Credentials in your environment
aws configure
- AWS Access Key ID [None]: your access key ID
- AWS Secret Access Key [None]: your secret access key
- Default region name [None]: the region where you want to deploy; get the region codes here (e.g. Tokyo is ap-northeast-1)
- Default output format [None]: json
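To check that the credentials are set up correctly, you can ask STS for the identity in use and inspect the stored configuration (both are standard AWS CLI commands):
# Print the account and user the CLI is authenticating as
aws sts get-caller-identity
# Show where each configuration value comes from
aws configure list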
- Install the eksctl command line utility:
curl --silent --location "/~https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
- Install and configure kubectl for Amazon EKS (Linux only; see the kubernetes-docs for other platforms)
- latest:
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
- Make the kubectl binary executable:
chmod +x ./kubectl
- Move the binary into a directory in your PATH:
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version
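Optionally, enable shell completion so kubectl subcommands tab-complete (a small convenience; assumes bash):
# Load completion into the current shell
source <(kubectl completion bash)
# Persist it for future shells
echo 'source <(kubectl completion bash)' >> ~/.bashrc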
To log in to an existing cluster, a user who already has access rights has to add your user credentials or role to the cluster first!
Grant root access (system:masters) to a user (from the Amazon guide):
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::11122223333:role/EKS-Worker-NodeInstanceRole-1I00GBC9U4U7B
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::11122223333:user/designated_user
      username: designated_user
      groups:
        - system:masters
- Get the rolearn from the AWS console -> EKS -> cluster.
- Get the userarn on the other machine with aws sts get-caller-identity.
- designated_user is the username of the account you want to add.
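To apply the updated mapping, a minimal sketch (assumes you saved the manifest above as aws-auth.yaml; the file name is arbitrary):
# Apply the edited aws-auth ConfigMap
kubectl apply -f aws-auth.yaml
# Or edit the live ConfigMap in place instead
kubectl edit -n kube-system configmap/aws-auth
# Then verify on the designated_user machine that access works
kubectl get nodes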
(git-docs)
eksctl create cluster \
--name prod \
--version 1.12 \
--nodegroup-name standard-workers \
--node-type t3.medium \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4 \
--node-ami auto
flag | description |
---|---|
--name string | EKS cluster name (generated if unspecified, e.g. "unique-creature-1561094398") |
--version string | Kubernetes version (valid options: 1.10, 1.11, 1.12) (default "1.12") |
--nodegroup-name string | name of the nodegroup (generated if unspecified, e.g. "ng-80a14634") |
--node-type string | node instance type (default "m5.large") Amazon-docs |
--nodes int | total number of nodes (for a static ASG) (default 2) Amazon-docs |
--nodes-min int | minimum nodes in ASG (default 2) |
--nodes-max int | maximum nodes in ASG (default 2) |
--node-ami string | Advanced use cases only. If 'static' is supplied (default) then eksctl will use static AMIs; if 'auto' is supplied then eksctl will automatically set the AMI based on version/region/instance type; if any other value is supplied it will override the AMI to use for the nodes. Use with extreme care. (default "static") (Amazon-docs) |
If the following error appears, follow these instructions to install aws-iam-authenticator:
neither aws-iam-authenticator nor heptio-authenticator-aws are installed
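Cluster creation takes several minutes. Once it finishes, a quick sanity check is to list the cluster and its nodegroup (assumes the --name prod from the example above):
eksctl get cluster --name prod
eksctl get nodegroup --cluster prod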
Testing:
kubectl get svc
Example output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 21h
Deploy the sample guestbook application (all steps in one command):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-master-controller.json && kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-master-service.json && kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-slave-controller.json && kubectl rolling-update redis-slave --image=k8s.gcr.io/redis-slave:v2 --image-pull-policy=Always && kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-slave-service.json && kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/guestbook-controller.json && kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/guestbook-service.json && kubectl get services -o wide --watch
Step by step:
- Create the Redis master replication controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-master-controller.json
- Create the Redis master service
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-master-service.json
- Create the Redis slave replication controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-slave-controller.json
- Update the container image for the Redis slave replication controller:
kubectl rolling-update redis-slave --image=k8s.gcr.io/redis-slave:v2 --image-pull-policy=Always
- Create the Redis slave service
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/redis-slave-service.json
- Create the guestbook replication controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/guestbook-controller.json
- Create the guestbook service
kubectl apply -f https://raw.githubusercontent.com/kubernetes/examples/master/guestbook-go/guestbook-service.json
- Query the services in your cluster and wait until the External IP column for the guestbook service is populated
kubectl get services -o wide --watch
- After your external IP address is available, point a web browser to that address at port 3000 to view your guest book. For example, http://a7a95c2b9e69711e7b1a3022fdcfdf2e-1985673473.us-west-2.elb.amazonaws.com:3000
Clean up the guestbook resources when you are done:
kubectl delete rc/redis-master rc/redis-slave rc/guestbook svc/redis-master svc/redis-slave svc/guestbook
Deploy the dashboard, heapster, and influxdb (all steps in one command):
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml && kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml && kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml && kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
Create a file called eks-admin-service-account.yaml with the manifest below. It defines a service account and cluster role binding called eks-admin.
cat <<EOF > eks-admin-service-account.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: eks-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: eks-admin
  namespace: kube-system
EOF
Apply the service account and cluster role binding to your cluster:
kubectl apply -f eks-admin-service-account.yaml
Retrieve the authentication token and copy it from the output:
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')
Start server:
kubectl proxy
Open the Dashboard in your default browser at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/login and sign in with the token (full tutorial: https://docs.aws.amazon.com/eks/latest/userguide/dashboard-tutorial.html)
Use Helm and Tiller only on your local machine!
https://medium.com/faun/helm-basics-using-tillerless-dac28508151f
github: /~https://github.com/rimusz/helm-tiller
Install Helm (Linux via snap, or macOS via brew):
sudo snap install helm --classic
brew install kubernetes-helm
Initialize the Helm client and install the Tiller plugin:
helm init --client-only
helm plugin install /~https://github.com/rimusz/helm-tiller
Start a local Tiller with the plugin (replace my-tiller-namespace with a namespace of your choice):
helm tiller start my-tiller-namespace
Install a test chart:
helm tiller run my-tiller-namespace -- helm repo update
helm tiller run my-tiller-namespace -- helm install stable/mysql
Confirm that it deployed:
kubectl get deployments
Stop local Tiller with the plugin:
helm tiller stop
To run Helm commands through the plugin, use the following pattern (replace HELM_COMMANDS):
helm tiller run my-tiller-namespace -- HELM_COMMANDS
Example:
helm tiller run my-tiller-namespace -- helm list
helm tiller run my-tiller-namespace -- bash -c 'echo running helm; helm list'
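To remove the test chart again, list the releases to find its generated name, then delete it (a sketch; lucky-mouse is a hypothetical release name, yours will differ):
helm tiller run my-tiller-namespace -- helm list
helm tiller run my-tiller-namespace -- helm delete --purge lucky-mouse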
(Amazon-guide-metrics-server, Amazon-guide-Prometheus)
Navigate to a directory where you would like to download the latest metrics-server release.
mkdir metrics-server && cd metrics-server
Make sure you have these tools installed:
curl --version
tar --version
gzip --version
jq --version
Download and apply metrics-server:
DOWNLOAD_URL=$(curl --silent "https://api.github.com/repos/kubernetes-incubator/metrics-server/releases/latest" | jq -r .tarball_url)
DOWNLOAD_VERSION=$(grep -o '[^/v]*$' <<< $DOWNLOAD_URL)
curl -Ls $DOWNLOAD_URL -o metrics-server-$DOWNLOAD_VERSION.tar.gz
mkdir metrics-server-$DOWNLOAD_VERSION
tar -xzf metrics-server-$DOWNLOAD_VERSION.tar.gz --directory metrics-server-$DOWNLOAD_VERSION --strip-components 1
kubectl apply -f metrics-server-$DOWNLOAD_VERSION/deploy/1.8+/
Verify that the metrics-server deployment is running the desired number of pods with the following command:
kubectl get deployment metrics-server -n kube-system
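Once the deployment is ready, the kubectl top commands should start returning live metrics after a minute or two:
kubectl top nodes
kubectl top pods --all-namespaces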
To remove the metrics server again, run the following (replace $DOWNLOAD_VERSION with the downloaded folder's version if the variable is no longer set in your environment):
kubectl delete -f metrics-server-$DOWNLOAD_VERSION/deploy/1.8+/
!!!Start Helm with the Tiller plugin first!!!
Install Prometheus with the Helm plugin:
kubectl create namespace prometheus
helm tiller run my-tiller-namespace -- helm install stable/prometheus \
--name prometheus \
--namespace prometheus \
--set alertmanager.persistentVolume.storageClass="gp2",server.persistentVolume.storageClass="gp2"
Verify that all of the pods in the prometheus namespace are in the READY state:
kubectl get pods -n prometheus
Use kubectl to port forward the Prometheus console to your local machine:
kubectl --namespace=prometheus port-forward deploy/prometheus-server 9090
Open the Prometheus console in your default browser: http://localhost:9090
Clean up when you are done:
helm tiller run my-tiller-namespace -- helm delete prometheus && kubectl delete namespace prometheus
Create a repository (replace hello-repository and region):
aws ecr create-repository --repository-name hello-repository --region region
flag | description |
---|---|
--repository-name (string) | The name to use for the repository. The repository name may be specified on its own (such as nginx-web-app ) or it can be prepended with a namespace to group the repository into a category (such as project-a/nginx-web-app ) |
--region | the region where it will be created. e.g. Tokyo is ap-northeast-1 . get region here |
Output (note the repositoryUri value; it is used later to push Docker images):
{
    "repository": {
        "registryId": "aws_account_id",
        "repositoryName": "hello-repository",
        "repositoryArn": "arn:aws:ecr:region:aws_account_id:repository/hello-repository",
        "createdAt": 1505337806.0,
        "repositoryUri": "aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository"
    }
}
Tag your Docker image with the repositoryUri value:
Example:
docker-image = hello-world
repositoryUri = aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository
docker tag hello-world aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository
Run aws ecr get-login --no-include-email to get the docker login authentication command string for your registry (replace region with the region of your repository):
aws ecr get-login --no-include-email --region region
Run the docker login command that was returned in the previous step. The authorization token it provides is valid for 12 hours.
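A common shortcut is to execute the returned command directly in a subshell (same assumption: replace region with your repository's region):
$(aws ecr get-login --no-include-email --region region)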
Push the image to Amazon ECR, using the repositoryUri value from the earlier step (replace aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository with your repository URI):
docker push aws_account_id.dkr.ecr.region.amazonaws.com/hello-repository
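To confirm the push worked, you can list the images in the repository (assumes the hello-repository name and region from above):
aws ecr list-images --repository-name hello-repository --region region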
https://www.bogotobogo.com/DevOps/Docker/Docker_Kubernetes_NodePort_vs_LoadBalancer_vs_Ingress.php
https://kubernetes.io/docs/concepts/services-networking/connect-applications-service/
https://kubernetes.io/docs/concepts/services-networking/service/
https://kubernetes.io/docs/concepts/cluster-administration/networking/
https://kubernetes.io/docs/tasks/access-application-cluster/connecting-frontend-backend/
https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/
A Deployment is basically the recipe for, and the manager of, your pods: it creates the connections, replica sets, ports, the number of pods, and so on.
Example (creates 5 pods from a Docker image on ECR):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: hello
      tier: frontend
      track: stable
  replicas: 5
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: hello
        tier: frontend
        track: stable
    spec:
      containers:
        - name: hello
          image: "COPY_HERE_DOCKER_IMAGE_URL_FROM_ECR"
          ports:
            - containerPort: 5000
          resources:
            limits:
              memory: "128Mi"
              cpu: "400m"
          livenessProbe:
            httpGet:
              path: /
              port: 5000
            initialDelaySeconds: 30
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /
              port: 5000
            initialDelaySeconds: 15
            periodSeconds: 3
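To try it out, apply the manifest and watch the pods come up (a minimal sketch; frontend-deployment.yaml is an assumed file name):
kubectl apply -f frontend-deployment.yaml
kubectl get deployment frontend
kubectl get pods -l app=hello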
Create a Service so other pods can reach the deployment (to make it reachable from the outside world, add type: LoadBalancer under spec:):
apiVersion: v1
kind: Service
metadata:
  name: service-frontend
  labels:
    run: service-frontend
spec:
  selector:
    app: hello
    tier: frontend
  ports:
    - protocol: "TCP"
      port: 5000
      targetPort: 5000
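As with the Deployment, apply the manifest and check the result (frontend-service.yaml is an assumed file name):
kubectl apply -f frontend-service.yaml
kubectl get svc service-frontend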
Create a shell pod to test the connection:
kubectl apply -f https://k8s.io/examples/application/shell-demo.yaml
kubectl get pod shell-demo --watch
kubectl exec -it shell-demo -- /bin/bash
Inside the shell:
apt update
apt install curl --yes
printenv | grep SERVICE
Then curl the host and port shown by printenv; for the service-frontend service above the injected variables are:
curl -v $SERVICE_FRONTEND_SERVICE_HOST:$SERVICE_FRONTEND_SERVICE_PORT
Handy alias that prints an overview of the cluster's resources:
alias print_kubectl_res="echo $'kubectl get nodes:' && kubectl get nodes && echo $'\n\n' && echo $'kubectl get namespaces -A:' && kubectl get namespaces -A && echo $'\n\n' && echo $'kubectl get replicationcontroller -A:' && kubectl get replicationcontroller -A && echo $'\n\n' && echo $'kubectl get deployments -A:' && kubectl get deployments -A && echo $'\n\n' && echo $'kubectl get pods -o wide -A:' && kubectl get pods -o wide -A && echo $'\n\n' && echo $'kubectl get rc,services -A:' && kubectl get rc,services -A"
Follow the kubelet log (Linux only):
journalctl -f -u kubelet
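A couple of useful variations with standard journalctl flags:
# Only entries from the last ten minutes
journalctl -u kubelet --since "10 minutes ago"
# The last 100 lines, without the pager
journalctl -u kubelet -n 100 --no-pager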