2020/02/25


Setting up K8S on AWS

AWS's managed Kubernetes offering is called EKS. Its control plane is kept highly available by AWS: it runs at least two API server nodes and three etcd nodes spread across three Availability Zones within a Region. Amazon EKS automatically detects and replaces unhealthy control plane instances, restarting them across the Availability Zones in the Region as needed, and relies on the architecture of AWS Regions to maintain high availability.

You provision EC2 instances as worker nodes. The setup steps are as follows:

First, launch a clean EC2 instance.

Install pip

curl -O https://bootstrap.pypa.io/get-pip.py
python get-pip.py --user

Install the AWS CLI

pip install awscli --upgrade --user

Create a credential (access key) in IAM, then configure the CLI

aws configure


AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json

Install eksctl

curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version

Install kubectl

curl -o kubectl https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/kubectl
chmod +x ./kubectl
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
kubectl version --short --client

Create the cluster with eksctl. Pay attention to the cluster name and region:

eksctl create cluster \
--name prod \
--version 1.14 \
--region us-west-2 \
--nodegroup-name workers \
--node-type t3.medium \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4 \
--ssh-access \
--ssh-public-key my-public-key \
--managed

--ssh-public-key my-public-key references the key pair you used when purchasing your EC2 instances, so that after the cluster is created you can SSH into the worker nodes to diagnose problems.
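For example, once the nodes are up you can find a node's external IP and SSH in. A minimal sketch, assuming the matching private key is saved as ~/.ssh/my-public-key.pem and the node AMI's default user is ec2-user:

kubectl get nodes -o wide
ssh -i ~/.ssh/my-public-key.pem ec2-user@<node-external-ip>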

Then wait a while until the cluster shows ready:

eksctl version 0.11.1
[ℹ]  using region us-west-2
[ℹ]  setting availability zones to [us-west-2c us-west-2a us-west-2d]
[ℹ]  subnets for us-west-2c - public:192.168.0.0/19 private:192.168.96.0/19
[ℹ]  subnets for us-west-2a - public:192.168.32.0/19 private:192.168.128.0/19
[ℹ]  subnets for us-west-2d - public:192.168.64.0/19 private:192.168.160.0/19
[ℹ]  using SSH public key "my-public-key.pub" as "eksctl-prod-nodegroup-standard-workers-00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00"
[ℹ]  using Kubernetes version 1.14
[ℹ]  creating EKS cluster "prod" in "us-west-2" region with managed nodes
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-west-2 --cluster=prod'
[ℹ]  CloudWatch logging will not be enabled for cluster "prod" in "us-west-2"
[ℹ]  you can enable it with 'eksctl utils update-cluster-logging --region=us-west-2 --cluster=prod'
[ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "prod" in "us-west-2"
[ℹ]  2 sequential tasks: { create cluster control plane "prod", create managed nodegroup "standard-workers" }
[ℹ]  building cluster stack "eksctl-prod-cluster"
[ℹ]  deploying stack "eksctl-prod-cluster"
[ℹ]  building managed nodegroup stack "eksctl-prod-nodegroup-standard-workers"
[ℹ]  deploying stack "eksctl-prod-nodegroup-standard-workers"
[✔]  all EKS cluster resources for "prod" have been created
[✔]  saved kubeconfig as "/home/user/.kube/config"
[ℹ]  nodegroup "standard-workers" has 3 node(s)
[ℹ]  node "ip-192-168-26-79.us-west-2.compute.internal" is ready
[ℹ]  node "ip-192-168-51-105.us-west-2.compute.internal" is ready
[ℹ]  node "ip-192-168-80-87.us-west-2.compute.internal" is ready
[ℹ]  waiting for at least 1 node(s) to become ready in "standard-workers"
[ℹ]  nodegroup "standard-workers" has 3 node(s)
[ℹ]  node "ip-192-168-26-79.us-west-2.compute.internal" is ready
[ℹ]  node "ip-192-168-51-105.us-west-2.compute.internal" is ready
[ℹ]  node "ip-192-168-80-87.us-west-2.compute.internal" is ready
[ℹ]  kubectl command should work with "/home/user/.kube/config", try 'kubectl get nodes'
[✔]  EKS cluster "prod" in "us-west-2" region is ready

Once that completes, you can access the cluster with kubectl:

kubectl get svc

Load balancing with ALB

To meet production requirements we need a load balancer in front of the Pods; a layer-7 ALB is the usual choice.

There are two traffic modes:

Instance – registers the nodes in your cluster as targets of the ALB
IP – registers the Pods as targets of the ALB (see the annotation sketch below for how the mode is selected)
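A minimal sketch of selecting the mode, assuming the standard alb.ingress.kubernetes.io/target-type annotation on the ingress (instance is the default):

metadata:
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip    # "instance" (default) or "ip"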

Tag the subnets that should be used for load balancing (if you deployed the cluster with eksctl, the tags already exist):

kubernetes.io/cluster/<cluster-name> shared
kubernetes.io/role/elb 1  // public subnets
kubernetes.io/role/internal-elb 1 // private subnets
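If the tags are missing and need to be added by hand, a sketch using the AWS CLI (the subnet ID is a placeholder; the cluster name prod is from above):

aws ec2 create-tags --resources <subnet-id> \
    --tags Key=kubernetes.io/cluster/prod,Value=shared Key=kubernetes.io/role/elb,Value=1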

Install the ALB Ingress Controller

eksctl utils associate-iam-oidc-provider \
    --region us-east-1 \
    --cluster prod \
    --approve
aws iam create-policy \
    --policy-name ALBIngressControllerIAMPolicy \
    --policy-document https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/iam-policy.json

Note the returned ARN: "Arn": "arn:aws:iam::336029128397:policy/ALBIngressControllerIAMPolicy"

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/rbac-role.yaml
kubectl annotate serviceaccount -n kube-system alb-ingress-controller \
eks.amazonaws.com/role-arn=arn:aws:iam::111122223333:role/eks-alb-ingress-controller

Note: replace the role ARN with your own.
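The role referenced by the annotation must already exist and have ALBIngressControllerIAMPolicy attached. If it does not, one possible way to create it is via eksctl's IAM-roles-for-service-accounts support (a sketch; when created this way, eksctl annotates the service account itself, so the manual annotate step above is unnecessary):

eksctl create iamserviceaccount \
    --cluster prod \
    --namespace kube-system \
    --name alb-ingress-controller \
    --attach-policy-arn arn:aws:iam::336029128397:policy/ALBIngressControllerIAMPolicy \
    --override-existing-serviceaccounts \
    --approve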

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/alb-ingress-controller.yaml
kubectl edit deployment.apps/alb-ingress-controller -n kube-system
    spec:
      containers:
      - args:
        - --ingress-class=alb
        - --cluster-name=prod
        - --aws-vpc-id=vpc-03468a8157edca5bd
        - --aws-region=us-east-1
kubectl get pods -n kube-system
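If you are not sure which value to use for --aws-vpc-id, the cluster's VPC ID can be read from its description, for example:

aws eks describe-cluster --name prod --query 'cluster.resourcesVpcConfig.vpcId' --output text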

Deploy the 2048 game example

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/2048/2048-namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/2048/2048-deployment.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/2048/2048-service.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/2048/2048-ingress.yaml

kubectl get ingress/2048-ingress -n 2048-game

kubectl logs -n kube-system deployment.apps/alb-ingress-controller

Find the load balancer in the web console, copy its DNS name, and open it in a browser to reach the application.
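The DNS name can also be fetched without the console, e.g. from the ingress status or via the AWS CLI (a sketch; output formats may vary by version):

kubectl get ingress/2048-ingress -n 2048-game -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
aws elbv2 describe-load-balancers --query 'LoadBalancers[].{Name:LoadBalancerName,DNS:DNSName}' --output table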

Finally, create an A record in Route53 that points your domain at the ALB.

This completes the resolution from your own domain to k8s.
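A hedged sketch of that record as a Route53 alias via the AWS CLI; the hosted zone ID, domain name, ALB DNS name, and the ALB's canonical hosted zone ID are all placeholders (the last two can be read from aws elbv2 describe-load-balancers):

aws route53 change-resource-record-sets \
    --hosted-zone-id <your-hosted-zone-id> \
    --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "game.example.com",
      "Type": "A",
      "AliasTarget": {
        "HostedZoneId": "<alb-canonical-hosted-zone-id>",
        "DNSName": "<alb-dns-name>",
        "EvaluateTargetHealth": false
      }
    }
  }]
}'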

After the test succeeds, delete this deployment:

kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/2048/2048-ingress.yaml
kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/2048/2048-service.yaml
kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/2048/2048-deployment.yaml
kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/2048/2048-namespace.yaml

How to associate a certificate

The ingress configuration of the example above is as follows:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "2048-ingress"
  namespace: "2048-game"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
  labels:
    app: 2048-ingress
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: "service-2048"
              servicePort: 80

To support SSL, add the certificate ARN under annotations, for example:

    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:ap-xxxxxxxxxxxxxxxx
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'

After that, the Pods in k8s can be reached over HTTPS.
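The certificate must already exist in ACM in the same region as the ALB; its ARN can be listed with the AWS CLI, for example:

aws acm list-certificates --region <your-region>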

How to access K8s from another machine

Install kubectl

Install the AWS CLI

Use the same credential that was used when setting up EKS.

Install aws-iam-authenticator

curl -o aws-iam-authenticator https://amazon-eks.s3-us-west-2.amazonaws.com/1.14.6/2019-08-22/bin/linux/amd64/aws-iam-authenticator
chmod +x ./aws-iam-authenticator
mkdir -p $HOME/bin && cp ./aws-iam-authenticator $HOME/bin/aws-iam-authenticator && export PATH=$PATH:$HOME/bin
echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc
aws-iam-authenticator help

Create the kubeconfig

aws sts get-caller-identity
aws eks --region region update-kubeconfig --name cluster_name
kubectl get svc

When you create an Amazon EKS cluster, the IAM entity (user or role, e.g. a federated user) that created the cluster is automatically granted system:masters permissions in the cluster's RBAC configuration. To grant additional AWS users or roles the ability to interact with your cluster, you must edit the aws-auth ConfigMap within Kubernetes.

Authorizing other users to access k8s has to be done by the cluster creator:

kubectl describe configmap -n kube-system aws-auth
kubectl edit -n kube-system configmap/aws-auth

Add the IAM users, roles, or AWS accounts to the ConfigMap.

apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::555555555555:user/admin
      username: admin
      groups:
        - system:masters
    - userarn: arn:aws:iam::111122223333:user/ops-user
      username: ops-user
      groups:
        - system:masters