Saturday, December 15, 2018

HTTP to HTTPS Behind Elastic Load Balancer in AWS

In the most common configuration, when your web app runs behind an Elastic Load Balancer with Nginx or Apache on the instance, the ELB terminates SSL and forwards every request, including https:// ones, to your server as plain http://. Sometimes you may want to redirect all HTTP requests to HTTPS.

The Amazon Elastic Load Balancer (ELB) sets an HTTP header called X-Forwarded-Proto on the requests it forwards. All HTTPS requests passing through the ELB will have X-Forwarded-Proto set to https. For plain HTTP requests, you can force HTTPS by adding a simple rewrite rule, as follows:

1. Nginx

In your Nginx site config file, check whether the value of X-Forwarded-Proto is https; if it is not, redirect:


server {
    listen 80;
    ....
    location / {
        # The ELB sets X-Forwarded-Proto on every request it forwards;
        # anything that is not https gets a permanent redirect to HTTPS.
        if ($http_x_forwarded_proto != 'https') {
            rewrite ^ https://$host$request_uri? permanent;
        }
        ....
    }
}
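
A quick way to check the rule from the instance itself is to send requests with the header the ELB would normally add (localhost here is just a placeholder for wherever Nginx is listening):

# Simulate a plain-HTTP request forwarded by the ELB
curl -I -H "X-Forwarded-Proto: http" http://localhost/
# Expect a 301 response with a Location: https://... header

# Simulate a request the ELB already terminated as HTTPS
curl -I -H "X-Forwarded-Proto: https" http://localhost/
# Expect the normal response, no redirect
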
2. Apache

The same goes for Apache; add this rewrite rule to your site’s config file:


<VirtualHost *:80>
...
RewriteEngine On
# Redirect only when the ELB reports that the original request was not HTTPS
RewriteCond %{HTTP:X-Forwarded-Proto} !https
RewriteRule ^.*$ https://%{SERVER_NAME}%{REQUEST_URI} [L,R=301]
...
</VirtualHost>
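
The rule above relies on mod_rewrite; if it is not already enabled on a Debian/Ubuntu-style Apache install, something like the following should do it (package and service names differ on other distributions):

sudo a2enmod rewrite
sudo systemctl reload apache2
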
3. IIS

Install the IIS URL Rewrite module, then add these settings using the configuration GUI (or directly in web.config):

<rewrite xdt:Transform="Insert">
  <rules>
    <rule name="HTTPS rewrite behind ELB rule" stopProcessing="true">
      <match url="^(.*)$" ignoreCase="false" />
      <conditions>
        <add input="{HTTP_X_FORWARDED_PROTO}" pattern="^http$" ignoreCase="false" />
      </conditions>
      <action type="Redirect" redirectType="Found" url="https://{SERVER_NAME}{URL}" />
    </rule>
  </rules>
</rewrite>

4. HAProxy

If HAProxy is the load balancer terminating SSL itself, have it add the X-Forwarded-Proto header to the requests it forwards, so the backends behind it can apply the same rules:

frontend node1-https
        bind 192.168.20.19:443 ssl crt /etc/ssl/cert.pem
        mode http
        maxconn 50000
        option httpclose
        option forwardfor
        # tell the backends that the original request arrived over HTTPS
        reqadd X-Forwarded-Proto:\ https

Minikube installation on Ubuntu 16.04

Before you begin
Enable Virtualization in BIOS
VT-x or AMD-V virtualization must be enabled in your computer’s BIOS.

Install a Hypervisor
If you do not already have a hypervisor installed, install the appropriate one for your OS now:

macOS: VirtualBox, VMware Fusion, or HyperKit.

Linux: VirtualBox or KVM.

Note: Minikube also supports a --vm-driver=none option that runs the Kubernetes components on the host and not in a VM. Using this driver requires Docker and a Linux environment, but not a hypervisor.
Windows: VirtualBox or Hyper-V.

Install kubectl on Debian/Ubuntu

sudo apt-get update && sudo apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl


If you are on Ubuntu or another Linux distribution that supports the snap package manager, kubectl is also available as a snap application.

Switch to the snap user and run the installation command:

sudo snap install kubectl --classic
Test to ensure the version you installed is sufficiently up-to-date:

kubectl version


Check the kubectl configuration
Check that kubectl is properly configured by getting the cluster state:

kubectl cluster-info
If you see a URL response, kubectl is correctly configured to access your cluster.

If you see a message similar to the following, kubectl is not correctly configured or not able to connect to a Kubernetes cluster.

The connection to the server <server-name:port> was refused - did you specify the right host or port?

For example, if you intend to run a Kubernetes cluster on your laptop (locally), you will need a tool like Minikube installed first; then re-run the commands stated above.

If kubectl cluster-info returns the URL response but you can’t access your cluster, check whether it is configured properly with:
kubectl cluster-info dump

Enabling shell autocompletion
kubectl includes autocompletion support, which can save a lot of typing!

The completion script itself is generated by kubectl, so you typically just need to invoke it from your profile.

Common examples are provided here. For more details, consult kubectl completion -h.

On Linux, using bash
On CentOS Linux, you may need to install the bash-completion package which is not installed by default.

yum install bash-completion -y
To add kubectl autocompletion to your current shell, run source <(kubectl completion bash).

To add kubectl autocompletion to your profile, so that it is automatically loaded in future shells, run:

echo "source <(kubectl completion bash)" >> ~/.bashrc

Install Minikube

Go to https://github.com/kubernetes/minikube/releases

Download the latest .deb package and install it:

sudo dpkg -i minikube_0.30-0.deb

Start minikube

sudo minikube start

This downloads the required components and starts Minikube. Verify with:

sudo kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443
CoreDNS is running at https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
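
Optionally, you can sanity-check the new cluster with a throwaway deployment; the echoserver image below is just an example, any small HTTP image will do:

# Create and expose a test deployment
sudo kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.4
sudo kubectl expose deployment hello-minikube --type=NodePort --port=8080
# Wait for the pod to come up, then print the URL to reach it
sudo kubectl get pods
sudo minikube service hello-minikube --url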

Sunday, June 10, 2018

Setting up CloudWatch with your EC2 Instances on AWS with CloudWatch Agent

Requirement

  • IAM role for the instance to run with AmazonSSMFullAccess
  • AWS Systems Manager, with the AmazonEC2RoleforSSM policy attached to your user
  • Install or update the SSM Agent
  • AWS CloudWatch Agent
  • Your AWS instance must have internet access or direct access to CloudWatch so the data can be pushed to CloudWatch
Create IAM Roles to Use with CloudWatch Agent on Amazon EC2 Instances

The first procedure creates the IAM role that you need to attach to each Amazon EC2 instance that runs the CloudWatch agent. This role provides permissions for reading information from the instance and writing it to CloudWatch.

The second procedure creates the IAM role that you need to attach to the Amazon EC2 instance being used to create the CloudWatch agent configuration file, if you are going to store this file in Systems Manager Parameter Store so that other servers can use it. This role provides permissions for writing to Parameter Store, in addition to the permissions for reading information from the instance and writing it to CloudWatch. This role includes permissions sufficient to run the CloudWatch agent as well as to write to Parameter Store.


To create the IAM role necessary for each server to run CloudWatch agent

Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.

In the navigation pane on the left, choose Roles, Create role.

For Choose the service that will use this role, choose EC2 Allows EC2 instances to call AWS services on your behalf. Choose Next: Permissions.

In the list of policies, select the check box next to CloudWatchAgentServerPolicy. Use the search box to find the policy, if necessary.

If you will use SSM to install or configure the CloudWatch agent, select the check box next to AmazonEC2RoleforSSM. Use the search box to find the policy, if necessary. This policy is not necessary if you will start and configure the agent only through the command line.

Choose Next: Review

Confirm that CloudWatchAgentServerPolicy and optionally AmazonEC2RoleforSSM appear next to Policies. In Role name, type a name for the role, such as CloudWatchAgentServerRole. Optionally give it a description, and choose Create role.

The role is now created.


The following procedure creates the IAM role that can also write to Parameter Store. You need to use this role if you are going to store the agent configuration file in Parameter Store so that other servers can use it.

To create the IAM role necessary for an administrator to save an agent configuration file to Systems Manager Parameter Store

Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/.

In the navigation pane on the left, choose Roles, Create role.

For Choose the service that will use this role, choose EC2 Allows EC2 instances to call AWS services on your behalf. Choose Next: Permissions.

In the list of policies, select the check box next to CloudWatchAgentAdminPolicy. Use the search box to find the policy, if necessary.

If you will use SSM to install or configure the CloudWatch agent, select the check box next to AmazonEC2RoleforSSM. Use the search box to find the policy, if necessary. This policy is not necessary if you will start and configure the agent only through the command line.

Choose Next: Review

Confirm that CloudWatchAgentAdminPolicy and optionally AmazonEC2RoleforSSM appear next to Policies. In Role name, type a name for the role, such as CloudWatchAgentAdminRole. Optionally give it a description, and choose Create role.

The role is now created.

Install or Update the SSM Agent

Before you can use Systems Manager to install the CloudWatch agent, you must make sure that the instance is configured correctly for Systems Manager.

SSM Agent is installed by default on Amazon Linux base AMIs dated 2017.09 and later. SSM Agent is also installed by default on Amazon Linux 2 and Ubuntu Server 18.04 LTS AMIs. Here is the documentation to install the agent on various versions.
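
For example, on Ubuntu Server 16.04 one option (assuming snapd is available; other distributions use the .deb/.rpm packages from the documentation) is:

sudo snap install amazon-ssm-agent --classic
sudo snap start amazon-ssm-agent
snap services amazon-ssm-agent   # confirm the agent is running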

Attach an IAM Role to the Instance

An IAM role for the instance profile is required when you install the CloudWatch agent on an Amazon EC2 instance. This role enables the CloudWatch agent to perform actions on the instance. Use one of the roles you created earlier. 

If you are going to use this instance to create the CloudWatch agent configuration file and copy it to Systems Manager Parameter Store, use the role you created that has permissions to write to Parameter Store. This role may be called CloudWatchAgentAdminRole.

For all other instances, select the role that includes just the permissions needed to install and run the agent. This role may be called CloudWatchAgentServerRole.
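
You can attach the role from the console (Actions > Instance Settings > Attach/Replace IAM Role) or from the AWS CLI; a rough sketch, with a placeholder instance ID:

aws ec2 associate-iam-instance-profile \
    --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=CloudWatchAgentServerRole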

Installing CloudWatch Agent on your Linux Instances
  • Navigate to your EC2 section 
  • In the navigation pane, choose Run Command.
  • In the Command document list, choose AWS-ConfigureAWSPackage
  • In the Targets area, choose the instance or multiple instances on which to install the CloudWatch agent. If you do not see a specific instance, it might not be configured for Run Command.
  • In the Action list, choose Install.
  • In the Name field, type AmazonCloudWatchAgent.
  • Leave Version set to latest to install the latest version of the agent.
  • Choose Run.
  • Optionally, in the Targets and outputs areas, select the button next to an instance name and choose View output. Systems Manager should show that the agent was successfully installed. (A CLI equivalent of this procedure is sketched below.)
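
The same installation can be triggered from the AWS CLI instead of the console; a rough equivalent (the instance ID is a placeholder):

aws ssm send-command \
    --document-name "AWS-ConfigureAWSPackage" \
    --targets "Key=InstanceIds,Values=i-0123456789abcdef0" \
    --parameters '{"action":["Install"],"name":["AmazonCloudWatchAgent"]}'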

Optionally, you can use an Amazon S3 download link to download the CloudWatch agent package on an Amazon EC2 instance.

To use the command line to install the CloudWatch agent on an Amazon EC2 instance

Download the CloudWatch agent. For a Linux server, type the following:

wget https://s3.amazonaws.com/amazoncloudwatch-agent/linux/amd64/latest/AmazonCloudWatchAgent.zip

Unzip the package.

unzip AmazonCloudWatchAgent.zip

Install the package. On a Linux server, change to the directory containing the package and type:

sudo ./install.sh


Create the CloudWatch Agent Configuration File with the Wizard

Start the CloudWatch agent configuration wizard by typing the following:

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-config-wizard

If you are going to use Systems Manager to install and configure the agent, be sure to answer Yes when prompted whether to store the file in Systems Manager Parameter Store. You can also choose to store the file in Parameter Store even if you aren't using the SSM Agent to install the CloudWatch agent. To be able to store the file in Parameter Store, you must use an IAM role with sufficient permissions. For more information, see Create IAM Roles and Users for Use With CloudWatch Agent.
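
If the configuration is saved in Parameter Store, you can also fetch it and start the agent straight from the command line; this sketch assumes the parameter name CloudWatchLinux used later in this post:

sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl \
    -a fetch-config -m ec2 -c ssm:CloudWatchLinux -s
# Check that the agent started
sudo /opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -m ec2 -a status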


Deploying CloudWatch Configuration File

We will now deploy the CloudWatch configuration to the clients (the instances we need to monitor):
  • In the navigation pane, choose Run Command.
  • Click on Run Command once the page loads up 
  • In the Command document list, choose AmazonCloudWatch-ManageAgent
  • In the Targets area, choose the instance or multiple instances on which you want to deploy the CloudWatch configuration
  • Under Action, select configure
  • Under Mode leave it as ec2
  • Change the Optional Configuration Source to ssm
  • Under Optional Configuration Location enter the exact same name of the parameter you created in the Parameter Store (previous section). In this example, the parameter is called CloudWatchLinux
  • Optional Restart should be set to Yes (This will restart the CloudWatch agent, not the instance)
  • Now click on Run (a CLI equivalent of this step is sketched below)
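
For reference, a rough CLI equivalent of this Run Command step (the instance ID is a placeholder, and the parameter names are those of the AmazonCloudWatch-ManageAgent document):

aws ssm send-command \
    --document-name "AmazonCloudWatch-ManageAgent" \
    --targets "Key=InstanceIds,Values=i-0123456789abcdef0" \
    --parameters '{"action":["configure"],"mode":["ec2"],"optionalConfigurationSource":["ssm"],"optionalConfigurationLocation":["CloudWatchLinux"],"optionalRestart":["yes"]}'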

Now go to CloudWatch and you should start receiving the custom metrics you defined in the CloudWatch configuration.

Friday, March 9, 2018

About Kubernetes and its Architecture

Kubernetes, more frequently abbreviated as K8s, is an open source container orchestration platform designed for running distributed applications and services at scale. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.

Kubernetes components can be logically split up into these two categories:

  • Master (Control Plane): These components run on the master nodes of the cluster and form the control plane.
  • Node (Worker): These components run on the worker nodes, receive instructions, and run the application containers.
Master (Control Plane) has the following components, which are responsible for maintaining the state of the cluster:
  1. etcd
  2. API Server
  3. Controller Manager
  4. Scheduler
Every worker node consists of the following components, which are responsible for deploying and running the application containers:
  1. Kubelet
  2. Container Runtime (Docker etc.)
  3. Kube-proxy

Let’s discuss the Master components one by one.

etcd 

It is a simple key-value store used to hold the Kubernetes cluster data (such as the number of pods, their states, namespaces, etc.). In simple words, it is the database of Kubernetes. It is only accessible from the API server for security reasons. etcd enables notifications to the cluster about configuration changes with the help of watchers. Notifications are API requests on each etcd cluster node to trigger the update of information in the node’s storage.

kube-apiserver

Need to interact with your Kubernetes cluster? Talk to the API. The Kubernetes API is the front end of the Kubernetes control plane, handling internal and external requests. The kube-apiserver is responsible for validating all resource creation requests before the resources are actually generated and saved to the data store. Users can communicate with the API server via the kubectl command line client or through REST API calls.
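
For example, the same resource can be read either through the kubectl client or as a raw REST call that kubectl sends to the API server:

# Through the kubectl client
kubectl get namespaces

# The same data as a raw REST call against the API server
kubectl get --raw /api/v1/namespaces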

kube-controller-manager

Kubernetes manages applications through various controllers that operate on the general model of comparing the current status against a known spec. These controllers are control loops that continuously ensure that the current state of the cluster (the status) matches the desired state (the spec). They watch the current cluster state stored in etcd through the kube-apiserver and create, update, and delete resources as necessary. kube-controller-manager is shipped with many controllers such as:
  • Node Lifecycle controller
  • DaemonSet controller
  • Deployment controller
  • Namespace controller
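
A simple way to watch this reconciliation loop in action is to delete a pod owned by a Deployment and let the controllers restore the desired replica count (image and names below are just examples):

kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3
kubectl get pods                           # current state matches the desired 3 replicas
kubectl delete pod <one-of-the-web-pods>   # perturb the current state
kubectl get pods                           # a replacement pod is created automatically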

kube-scheduler

Since Kubernetes is an orchestration framework, it has built-in logic for managing the scheduling of pods. The kube-scheduler component is responsible for this. Scheduling decisions depend on a number of factors, such as:

  • Resource requirements of the app
  • Resource availability across nodes
  • Whether the pod spec has affinity labels requesting scheduling on particular nodes
  • Whether the nodes have certain taints/tolerations preventing them from being included in the scheduling process

The kube-scheduler takes all of these considerations into account when scheduling new workloads.
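
A few commands that surface these scheduling inputs and decisions (node and pod names are placeholders):

kubectl describe node <node-name> | grep -i -A3 taints   # taints that repel pods from this node
kubectl label nodes <node-name> disktype=ssd             # a label that a nodeSelector/affinity rule can target
kubectl get pod <pod-name> -o wide                       # shows which node the scheduler picked
kubectl describe pod <pod-name> | grep -i scheduled      # the scheduling event itself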

cloud-controller-manager

The cloud-controller-manager is responsible for managing the controllers associated with built-in cloud providers. These controllers are dedicated to abstracting resources offered by individual cloud providers and mapping them to Kubernetes objects.
Cloud controllers are the main way that Kubernetes is able to offer a relatively standardized experience across many different cloud providers.

Node Components

The node, or worker, machines are the primary workhorses of Kubernetes clusters. While the master components handle most of the organization and logic, the nodes are responsible for running containers, reporting health information back to the master servers, and managing access to the containers through network proxies.

Container Runtime (famously Docker)

The container runtime is the component responsible for running the containers within pods. Rather than being a specific technology as with the other components, the container runtime in this instance is a descriptor for any container runtime that is capable of running containers with the appropriate features. Docker, CRI-O and containerd are some of the popular options for container runtimes.

kubelet

The kubelet component is an agent that runs on every worker node of the cluster. It is responsible for managing all containers running in every pod in the cluster. It does this by ensuring that the current state of containers in a pod matches the spec for that pod stored in etcd.

kube-proxy

The kube-proxy component is a network proxy that runs on each node. It is responsible for forwarding requests. The proxy is somewhat flexible and can handle simple or round robin TCP, UDP or SCTP forwarding. Each node interacts with the Kubernetes service through kube-proxy.
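
For instance, exposing a Deployment as a Service gives kube-proxy a cluster IP (and, for NodePort, a port on every node) to forward to the backing pods; the names below reuse the example Deployment from the controller section:

kubectl expose deployment web --port=80 --type=NodePort
kubectl get service web       # the ClusterIP and NodePort that kube-proxy forwards
kubectl get endpoints web     # the pod IPs that traffic is actually proxied to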