Saturday, December 21, 2019

EKS access from kubectl



1. Access EKS when using AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY

(for any user who has the access key and secret, this is easy)

$ export AWS_ACCESS_KEY_ID=
$ export AWS_SECRET_ACCESS_KEY=
$ export KUBECONFIG=~/.kube/config...
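
If the kubeconfig has not been generated yet, the AWS CLI can create it (a sketch; cluster name and region are placeholders):

$ aws eks update-kubeconfig --name {eks_cluster_name} --region us-west-2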


2. Access EKS without AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY

(for user 'xyz' who has no AWS key and secret, do this)

2.1. Ask admin to update aws-auth-cm.yaml to add the user 'xyz' to the aws-auth configmap:

  mapUsers: |
    - userarn: arn:aws:iam::226347999999:user/xyz
      username: xyz
      groups:
        - system:masters

2.2. Ask admin to run: kubectl apply -f aws-auth-cm.yaml
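
For reference, a complete aws-auth-cm.yaml is a plain ConfigMap. A minimal sketch (the node role ARN is a placeholder; mapUsers is the part added for 'xyz'):

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::226347999999:role/{node-instance-role}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::226347999999:user/xyz
      username: xyz
      groups:
        - system:masters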

2.3. In the command-line window of user 'xyz', do this:

$ export KUBECONFIG=~/.kube/config...

2.4. Now user 'xyz' should be able to run:

$ kubectl get pod


** To map an additional IAM role (instead of a user), add this to the configmap file under 'mapRoles':

    - rolearn: arn:aws:iam::853899999999:role/test-role
      username: aws
      groups:
        - system:masters

3. Useful commands

3.1. Get the client caller identity

$ aws sts get-caller-identity

Sample result:

{
    "Account": "226347999999",
    "UserId": "AIDATJM2ZIW2LLLLLLLLL",
    "Arn": "arn:aws:iam::226347999999:user/jzeng"
}

3.2. Find who can access EKS

$ kubectl describe configmap aws-auth -n kube-system

Sample result:

Name:         aws-auth
Namespace:    kube-system
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"v1","data":{"mapRoles":"- rolearn: arn:aws:iam::853844999999:role/co-ec-eks-node-iam-role-vpc-078a850cd7eeeeeee\n  username...

Data
====
mapUsers:
----
- userarn: arn:aws:iam::226347999999:user/jzeng
  username: jzeng
  groups:
    - system:masters

mapRoles:
----
- rolearn: arn:aws:iam::853844999999:role/co-ec-eks-node-iam-role-vpc-078a850cd7eeeeeee
  username: system:node:{{EC2PrivateDNSName}}
  groups:
    - system:bootstrappers
    - system:nodes

3.3. Get access token

$ aws-iam-authenticator token -i {eks_cluster_name}
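
This is the same token kubectl fetches on every call. A kubeconfig user entry that wires it in typically looks like this (a sketch using the client authentication API version current at the time; names are placeholders):

users:
- name: {eks_cluster_name}
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - token
        - -i
        - {eks_cluster_name}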


Reference:

https://aws.amazon.com/premiumsupport/knowledge-center/amazon-eks-cluster-access/


Sunday, December 1, 2019

Create secrets for Terraform code to access Docker images from a different AWS account


Set up a secret for accessing ECR and pulling a Docker image:

  • Copy this python snippet below into generate_secret_key.py.
#!/usr/bin/env python

import re
import subprocess

def execute_cmd(cmd):
  # universal_newlines=True keeps stdout/stderr as text on both Python 2 and 3
  proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE, universal_newlines=True)
  out, err = proc.communicate()

  if err != '':
    print(err.rstrip('\n'))
    exit(-1)

  return out

def generate_secret_key():
  # 'aws ecr get-login' (AWS CLI v1) prints a full 'docker login' command;
  # strip the flags to recover the username, password, and registry URL
  login_cmd = execute_cmd('aws ecr get-login').rstrip('\n')
  creds = re.sub(r"(-e none\ |docker login\ |-u\ |-p\ )", '', login_cmd).split(' ')
  generate_secret_cmd = "kubectl create secret docker-registry {0} --docker-username={1} --docker-password={2} --docker-server={3} --docker-email=YOUR_EMAIL_ADDRESS"
  execute_cmd(generate_secret_cmd.format('ecr.us-west-2', creds[0], creds[1], creds[2].replace('https://', '')))

if __name__ == "__main__":
  generate_secret_key()

NOTE: Remember to change YOUR_EMAIL_ADDRESS.
  • Change the file permission and execute it.
  • Make sure the right AWS account info is used by running 'aws ecr get-login'. (Use 'export AWS_PROFILE={profile_name_in_.aws_config}' to switch AWS accounts; make sure both kubeconfig and the AWS profile point to the same AWS account!!!)

The step above creates a secret called 'ecr.us-west-2' for Terraform to use to access ECR without permission issues.  The underlying ECR token expires after 12 hours, so the script needs to be re-run to regenerate the secret.  Run 'kubectl get secrets' to check whether 'ecr.us-west-2' is still there.

Put the following in the Terraform code (in the pod spec) so the deployment uses the secret when pulling the Docker image:

image_pull_secrets {
  name = "ecr.us-west-2"
}
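
For context, image_pull_secrets sits at the pod-spec level in the Terraform Kubernetes provider. A trimmed sketch of where it lands (resource and image names here are made up):

resource "kubernetes_deployment" "app" {
  metadata {
    name = "app"
  }
  spec {
    replicas = 1
    selector {
      match_labels = {
        app = "app"
      }
    }
    template {
      metadata {
        labels = {
          app = "app"
        }
      }
      spec {
        image_pull_secrets {
          name = "ecr.us-west-2"
        }
        container {
          name  = "app"
          image = "226347999999.dkr.ecr.us-west-2.amazonaws.com/app:latest"
        }
      }
    }
  }
}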


Set up a secret for accessing ECR in a different AWS account and pulling a Docker image:


1. Use 'export AWS_PROFILE={profile-name}' to switch to the account we will deploy the ECR image to.
2. Deploy EKS, DynamoDB, etc.
3. Add pull permission to the ECR repository in the source account for each image:

  "Statement": [
    {
      "Sid": "AllowPull",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::853844999999:user/terraform-project-development"
      },
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:GetDownloadUrlForLayer"
      ]
    }
  ]


4. Run the following Python script (same as above, but logging in to the source account's registry via --registry-ids):

#!/usr/bin/env python

import re
import subprocess

def execute_cmd(cmd):
  # universal_newlines=True keeps stdout/stderr as text on both Python 2 and 3
  proc = subprocess.Popen(cmd, shell=True, stdout=subprocess.PIPE,
                          stderr=subprocess.PIPE, universal_newlines=True)
  out, err = proc.communicate()

  if err != '':
    print(err.rstrip('\n'))
    exit(-1)

  return out

def generate_secret_key():
  # log in to the source account's registry (--registry-ids) from the current account
  login_cmd = execute_cmd('aws ecr get-login --registry-ids 226349999999 --region us-west-2').rstrip('\n')
  creds = re.sub(r"(-e none\ |docker login\ |-u\ |-p\ )", '', login_cmd).split(' ')
  generate_secret_cmd = "kubectl create secret docker-registry {0} --docker-username={1} --docker-password={2} --docker-server={3} --docker-email=john.lastname@company.com"
  execute_cmd(generate_secret_cmd.format('ecr.secret.226349999999.us-west-2', creds[0], creds[1], creds[2].replace('https://', '')))

if __name__ == "__main__":
  generate_secret_key()


5. Use "ecr.secret.226349999999.us-west-2" as the secret name in the Terraform code.


Access ECR images from different accounts without a secret


(Used for DTAP env) Use the following JSON to set up permissions on each source ECR repository. The four principals are, in order, the Dev, Test, Acceptance, and Production accounts (IAM policy JSON does not allow // comments, so they are listed here instead):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "dev account access",
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::874429999999:root",  //D
          "arn:aws:iam::853848888888:root",   //T
          "arn:aws:iam::527037777777:root",   //A
          "arn:aws:iam::387656666666:root"   //P
        ]
      },
      "Action": [
        "ecr:BatchCheckLayerAvailability",
        "ecr:BatchGetImage",
        "ecr:CompleteLayerUpload",
        "ecr:DescribeImages",
        "ecr:DescribeRepositories",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetLifecyclePolicy",
        "ecr:GetLifecyclePolicyPreview",
        "ecr:GetRepositoryPolicy",
        "ecr:InitiateLayerUpload",
        "ecr:ListImages",
        "ecr:PutImage",
        "ecr:PutLifecyclePolicy",
        "ecr:UploadLayerPart"
      ]
    }
  ]
}
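
The policy can be attached to each repository from the CLI (repository name and policy file path are placeholders):

$ aws ecr set-repository-policy --repository-name {repo_name} --policy-text file://ecr-policy.json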






Sunday, September 22, 2019

RxJava


Learning RxJava: a good book by Thomas Nield, published in 2017


Cold Observables: pull-based; replay the emissions to each Observer; most likely for finite datasets.

Hot Observables: broadcast the same emission to all Observers at the same time; most likely for infinite datasets.

ConnectableObservable is hot.  publish() or replay() will convert any Observable into a hot ConnectableObservable; connect() fires the emissions.

Using ConnectableObservable to force each emission to go to all Observers simultaneously is known as multicasting.
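
A minimal multicasting sketch with the RxJava 2 API (class and variable names here are made up):

import io.reactivex.Observable;
import io.reactivex.observables.ConnectableObservable;

public class HotObservableDemo {
  public static void main(String[] args) {
    // publish() turns the cold Observable into a hot ConnectableObservable
    ConnectableObservable<Integer> hot = Observable.range(1, 3).publish();

    // both Observers subscribe BEFORE connect(), so both see every emission
    hot.subscribe(i -> System.out.println("Observer 1: " + i));
    hot.subscribe(i -> System.out.println("Observer 2: " + i));

    // connect() fires the emissions to all current Observers simultaneously
    hot.connect();
  }
}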

Subject: is both an Observable and an Observer, acting as a proxy multicasting device (kind of like an event bus).  It is the "mutable variable of reactive programming".

The simplest Subject type is the PublishSubject, which, like all Subjects, hotly broadcasts to its downstream Observers.

Subjects act like magical devices that can bridge imperative programming with reactive programming.

You will most likely use Subjects for infinite, event-driven (that is, user-action-driven) Observables.
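
A minimal PublishSubject sketch (RxJava 2; names are made up): the Subject is subscribed to as an Observable, then driven imperatively as an Observer, like an event bus:

import io.reactivex.subjects.PublishSubject;
import io.reactivex.subjects.Subject;

public class SubjectDemo {
  public static void main(String[] args) {
    Subject<String> subject = PublishSubject.create();

    // subscribe first: PublishSubject is hot, so late subscribers miss earlier events
    subject.map(String::toUpperCase)
           .subscribe(s -> System.out.println("Received: " + s));

    // imperative side: fire events into the reactive pipeline
    subject.onNext("alpha");
    subject.onNext("beta");
    subject.onComplete();
  }
}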

The onSubscribe(), onNext(), onError(), and onComplete() calls are not thread-safe!

The following will safely sequentialize concurrent event calls so no train wrecks occur downstream:

Subject<String> subject = PublishSubject.<String>create().toSerialized();

Use subscribeOn(Schedulers.computation()) to move the emission work onto a computation thread pool.
Use observeOn() to intercept each emission and push it forward on a different Scheduler.
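
A small sketch of the two operators together (RxJava 2; the sleep just keeps the JVM alive for the async pipeline):

import io.reactivex.Observable;
import io.reactivex.schedulers.Schedulers;

public class SchedulerDemo {
  public static void main(String[] args) throws InterruptedException {
    Observable.just("a", "b", "c")
        // subscribeOn: the source emits on a computation-pool thread
        .subscribeOn(Schedulers.computation())
        .map(s -> s + " (mapped on " + Thread.currentThread().getName() + ")")
        // observeOn: everything downstream is pushed on an io-pool thread
        .observeOn(Schedulers.io())
        .subscribe(s -> System.out.println(s + " observed on " + Thread.currentThread().getName()));

    Thread.sleep(1000);
  }
}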

When you have a flow of 10,000 emissions or more, you will definitely want to use a Flowable (which supports backpressure) instead of an Observable; see the sketch after the operator list below.

Backpressure-related operators:

.buffer() and .window() for batching
.sample(period, TimeUnit.MILLISECONDS) for sampling
.throttleFirst() for throttling
.onBackpressureBuffer(16, () -> {}, BackpressureOverflow.ON_OVERFLOW_DROP_OLDEST) for a customized action when the buffer fills up
.onBackpressureDrop() for dropping
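
A minimal Flowable sketch (RxJava 2): observeOn() hops to another thread and requests emissions in bounded batches (128 by default), so the fast source cannot overwhelm the slow consumer:

import io.reactivex.Flowable;
import io.reactivex.schedulers.Schedulers;

public class FlowableDemo {
  public static void main(String[] args) throws InterruptedException {
    Flowable.range(1, 10000)          // 10,000+ emissions: use Flowable, not Observable
        .observeOn(Schedulers.io())   // backpressured hop to another thread
        .subscribe(i -> {
          Thread.sleep(1);            // deliberately slow consumer
          System.out.println(i);
        });

    Thread.sleep(5000);               // keep the JVM alive for the async pipeline
  }
}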