Wednesday, October 1, 2014

How to Generate and Apply an SSL Certificate on Amazon ELB

To successfully install an SSL certificate you need the following:

CSR file
Private key
Certificate file
Certificate chain

I am assuming the following:

Domain Name: testssl.com
Certificate Provider: GoDaddy.com

1. Generate the CSR and private key

openssl req -new -newkey rsa:2048 -nodes -keyout testssl.key -out testssl.csr

After running this command you will be prompted for details about your organization and the type of certificate (wildcard or single subdomain) you are generating.
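
If you prefer to skip the interactive prompts, the same request can be generated non-interactively with -subj. This is only a sketch: the DN values below are placeholders, and the CN should be *.testssl.com for a wildcard certificate or a specific hostname such as www.testssl.com for a single subdomain.

openssl req -new -newkey rsa:2048 -nodes \
  -keyout testssl.key -out testssl.csr \
  -subj "/C=US/ST=MyState/L=MyCity/O=My Organization/CN=*.testssl.com"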

Now you have completed generating the private key and the CSR.

This creates the following files:
testssl.csr --> will be used to create the certificate with GoDaddy or another certificate authority such as VeriSign or DigiCert
testssl.key --> private key (needs to be converted to PEM format)
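
Before submitting the CSR, it is worth a quick sanity check that it contains what you expect. A minimal sketch using standard openssl subcommands:

# Inspect the CSR subject and key size
openssl req -noout -text -in testssl.csr | grep -E "Subject:|Public-Key"

# Verify the private key is intact
openssl rsa -check -noout -in testssl.key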

2. Go to GoDaddy and use the testssl.csr file to create the certificate. GoDaddy will issue the certificate, and you can download it from the console. You will be asked what type of server the certificate is for (Apache, IIS, Nginx, Plesk, Tomcat, etc.).

The download contains two files:

6eba0aaxxxx.crt --> the certificate file for your domain
gd_bundle-xxx.crt --> your certificate chain file

In all, you now need the following three files:

testssl.key --> the certificate private key file you generated as part of your certificate request
6eba0aaxxxx.crt --> the public certificate for your domain, provided by your certificate authority
gd_bundle-xxx.crt --> the certificate chain, an optional group of certificates used to validate your certificate
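
At this point you can confirm that the issued certificate actually matches your private key. The two commands below print an MD5 hash of each file's RSA modulus; the hashes should be identical:

openssl x509 -noout -modulus -in 6eba0aaxxxx.crt | openssl md5
openssl rsa -noout -modulus -in testssl.key | openssl md5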


3. Convert the files to PEM format

Amazon Web Services works with PEM files for certificates, and the files we received may not be in that format. Before using them, they have to be translated into a format Amazon will understand.

Private key

The private key is something you generated along with your certificate request. Hopefully you kept it safe, knowing that you would need it again one day. To get your key into the format Amazon supports:

openssl rsa -in testssl.key -outform PEM -out testssl.pem

This creates testssl.pem.

Public certificate

The public certificate is the domain-specific file that you receive. It must be in PEM format for Amazon to use (your certificate might already be in PEM format, in which case you can just open it in a text editor, copy the text, and paste it into the dialog). You can convert the certificate file into PEM format like this:

openssl x509 -inform PEM -in 6eba0aaxxxx.crt -out testssl-cert.pem

Certificate chain

The certificate chain is exactly what it sounds like: a series of certificates. For the AWS dialog, you need to include the intermediate certificate and the root certificate one after the other without any blank lines. Both certificates need to be in PEM format, so you need to go through the same steps as with the domain certificate.

openssl x509 -inform PEM -in gd_bundle-g2-g1.crt -out testssl-cert-chain.pem
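
Note that openssl x509 only emits the first certificate it finds, so on a multi-certificate bundle it will drop the rest. GoDaddy's bundle files are normally already PEM-encoded, so in practice you can often just copy or concatenate them. A sketch (the intermediate.crt and root.crt names are illustrative):

# If the bundle is already PEM, a copy is enough
cp gd_bundle-g2-g1.crt testssl-cert-chain.pem

# To build the chain by hand: intermediate first, then root, no blank lines
cat intermediate.crt root.crt > testssl-cert-chain.pem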

4. Now you have three files:

testssl.pem -> private key (Generated with CSR)
testssl-cert.pem -> public certificate (domain)
testssl-cert-chain.pem -> certificate chain


5. Go to your AWS console > ELB > select the desired ELB > click Edit to upload the new certificate

Certificate Name: any name (the name the certificate is stored under on Amazon, to help you remember it)
Private Key: open testssl.pem in an editor, copy, and paste it here
Public Key Certificate: open testssl-cert.pem in an editor, copy, and paste it here
Certificate Chain: open testssl-cert-chain.pem in an editor, copy, and paste it here
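
If you prefer the command line, the certificate can also be uploaded with the AWS CLI and then selected on the ELB listener. A sketch, assuming the CLI is installed and configured; testssl-cert is just a name of your choosing:

aws iam upload-server-certificate \
  --server-certificate-name testssl-cert \
  --certificate-body file://testssl-cert.pem \
  --private-key file://testssl.pem \
  --certificate-chain file://testssl-cert-chain.pem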

You now have the certificate on your ELB; open the site in a browser and check.

Tuesday, August 19, 2014

Installing Oracle Java 8 on CentOS/RHEL

Step 1: Download JAVA Archive

Download the latest Java SE Development Kit 8 release from its download page.

# cd /opt/
# wget --no-cookies --no-check-certificate --header "Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie" "http://download.oracle.com/otn-pub/java/jdk/8u5-b13/jdk-8u5-linux-i586.tar.gz"

Note: If the above wget command does not work for you, watch this screencast to download the JDK from the terminal.

Now extract the downloaded archive file:

# tar xzf jdk-8u5-linux-i586.tar.gz

Step 2: Install JAVA using Alternatives

After extracting the archive, use the alternatives command to install it. The alternatives command is available in the chkconfig package.

# cd /opt/jdk1.8.0_05/
# alternatives --install /usr/bin/java java /opt/jdk1.8.0_05/bin/java 2
# alternatives --config java


There are 3 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
*  1           /opt/jdk1.8.0/bin/java
 + 2           /opt/jdk1.7.0_55/bin/java
   3           /opt/jdk1.8.0_05/bin/java

Enter to keep the current selection[+], or type selection number: 3

At this point JAVA 8 has been successfully installed on your system.

Step 3: Check the Version of JAVA

Check the installed version of Java using the following command:

# java -version

java version "1.8.0_05"
Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
Java HotSpot(TM) Client VM (build 25.5-b02, mixed mode)

Step 4: Setup Environment Variables

Most Java-based applications use environment variables to work. Set the Java environment variables using the following commands:

Setup JAVA_HOME Variable
# export JAVA_HOME=/opt/jdk1.8.0_05
Setup JRE_HOME Variable
# export JRE_HOME=/opt/jdk1.8.0_05/jre
Setup PATH Variable
# export PATH=$PATH:/opt/jdk1.8.0_05/bin:/opt/jdk1.8.0_05/jre/bin
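
Note that export only affects the current shell session. To make the variables persist across logins, one common approach is to drop them into a file under /etc/profile.d (the file name below is my choice; the paths assume the install location used above):

# /etc/profile.d/java.sh
export JAVA_HOME=/opt/jdk1.8.0_05
export JRE_HOME=/opt/jdk1.8.0_05/jre
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin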

Installation of Oracle Java 8 (JDK8 and JRE8) in Ubuntu / Linux Mint

Applies to:
Ubuntu 14.04 LTS, 13.10, 12.10, 12.04 LTS, and 10.04, and Linux Mint systems, using the PPA. To install Java 8 on CentOS, Red Hat, or Fedora, see the previous section.

Step 1: Install Java 8 (JDK 8)

Add the webupd8team Java PPA repository to your system and install Oracle Java 8 using the following set of commands.

$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer
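
The installer stops and asks you to accept Oracle's license agreement. For unattended installs, the answer can be pre-seeded via debconf before running apt-get install; this is a commonly used trick with the webupd8 packages:

$ echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | sudo /usr/bin/debconf-set-selections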

Step 2: Verify JAVA Version

After successfully installing Oracle Java using the step above, verify the installed version using the following command:

$ java -version

java version "1.8.0_05"
Java(TM) SE Runtime Environment (build 1.8.0_05-b13)
Java HotSpot(TM) Client VM (build 25.5-b02, mixed mode)

Step 3: Setup JAVA Environment

Webupd8team provides a package that sets the environment variables. Install it using the following command:

$ sudo apt-get install oracle-java8-set-default

References:
https://launchpad.net/~webupd8team/+archive/java

Node.js on Ubuntu 12.04


Obtaining a recent version of Node or installing on older Ubuntu and other apt-based distributions may require a few extra steps. Example install:
sudo apt-get update
sudo apt-get install -y python-software-properties python g++ make
sudo add-apt-repository -y ppa:chris-lea/node.js
sudo apt-get update

sudo apt-get install nodejs

Check the node version:

node --version

Installation with Forever:

cd /usr/lib/node_modules

sudo npm install forever -g

npm http GET https://registry.npmjs.org/forever
npm http 200 https://registry.npmjs.org/forever
npm http GET https://registry.npmjs.org/forever/-/forever-0.10.11.tgz
npm http 200 https://registry.npmjs.org/forever/-/forever-0.10.11.tgz
npm http GET https://registry.npmjs.org/flatiron
npm http GET https://registry.npmjs.org/forever-monitor/1.2.3
npm http GET https://registry.npmjs.org/nconf
npm http GET https://registry.npmjs.org/nssocket
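
Once forever is installed, basic usage looks like the sketch below (app.js is a placeholder for your own entry script):

# Start the app as a daemon under forever
forever start app.js

# List the processes forever is managing
forever list

# Stop the app again
forever stop app.js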


Friday, April 18, 2014

Cleaning Up Your Amazon EBS-Backed AMI

Deregistering Your AMI

You can deregister an AMI when you have finished using it. After you deregister an AMI, you can't use it to launch new instances.
When you deregister an AMI, it doesn't affect any instances that you've already launched from the AMI. You'll continue to incur usage costs for these instances. Therefore, if you are finished with these instances, you should terminate them.
The procedure that you'll use to clean up your AMI depends on whether it is backed by Amazon EBS or instance store.

Cleaning Up Your Amazon EBS-Backed AMI

When you deregister an Amazon EBS-backed AMI, it doesn't affect the snapshot that we created when you created the AMI. You'll continue to incur usage costs for this snapshot in Amazon EBS. Therefore, if you are finished with the snapshot, you should delete it.
[Diagram: Process to clean up your Amazon EBS-backed AMI]
To clean up your Amazon EBS-backed AMI
  1. Open the Amazon EC2 console.
  2. In the navigation pane, click AMIs. Select the AMI, click Actions, and then click Deregister. When prompted for confirmation, click Continue.
    The AMI status is now unavailable.
  3. In the navigation pane, click Snapshots. Select the snapshot and click Delete Snapshot. When prompted for confirmation, click Yes, Delete.
  4. (Optional) If you are finished with an instance that you launched from the AMI, terminate it. In the navigation pane, click Instances. Select the instance, click Actions, and then click Terminate. When prompted for confirmation, click Yes, Terminate.
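
The same cleanup can also be scripted with the AWS CLI, if you have it configured. A sketch; the IDs below are placeholders for your own AMI, snapshot, and instance:

aws ec2 deregister-image --image-id ami-xxxxxxxx
aws ec2 delete-snapshot --snapshot-id snap-xxxxxxxx
aws ec2 terminate-instances --instance-ids i-xxxxxxxx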

Cleaning Up Your Instance Store-Backed AMI

When you deregister an instance store-backed AMI, it doesn't affect the files that you uploaded to Amazon S3 when you created the AMI. You'll continue to incur usage costs for these files in Amazon S3. Therefore, if you are finished with these files, you should delete them.
[Diagram: Process to clean up your instance store-backed AMI]
To clean up your instance store-backed AMI
  1. Deregister the AMI using the ec2-deregister command as follows.
    ec2-deregister ami_id
    The AMI status is now unavailable.
  2. Delete the bundle using the ec2-delete-bundle command as follows.
    ec2-delete-bundle -b myawsbucket/myami -a your_access_key_id -s your_secret_access_key -p image
  3. (Optional) If you are finished with an instance that you launched from the AMI, you can terminate it using the ec2-terminate-instances command as follows.
    ec2-terminate-instances instance_id
  4. (Optional) If you are finished with the Amazon S3 bucket that you uploaded the bundle to, you can delete the bucket. To delete an Amazon S3 bucket, open the Amazon S3 console, select the bucket, click Actions, and then click Delete.

Thursday, February 6, 2014

Amazon Terminology


Terminology
The following terms defined by Amazon Web Services are used in this help file and defined here for your convenience:
·    Amazon Machine Image (AMI): Amazon Machine Images are machine images stored within Amazon’s infrastructure. An AMI contains the operating system and other software, such as ScaleOut StateServer. A pre-packaged AMI that is configured with ScaleOut StateServer is available in the AWS Marketplace.
·    Instance: An instance represents a single running copy of an Amazon Machine Image (AMI).
·    Region: Amazon EC2 allows you to run EC2 instances in multiple geographic locations called regions. When deploying your ScaleOut StateServer instances, it is highly recommended that you select the region geographically closest to the majority of your WAN traffic, if applicable.
·    Availability Zone: Every AWS region comprises two or more isolated units of failure within the Amazon Web Services environment called availability zones. A failure in one availability zone is unlikely to propagate to other availability zones within the same region. Resources within the same availability zone will experience lower average network latency than resources that cross availability zones.
·    Key Pair: A key pair is a public-key/private-key encryption mechanism used by Linux-based instances for authentication when logging in to the systems via SSH. A key pair consists of a public key and a private key, and the matching key must be provided to authenticate against a running EC2 instance. An instance may have only one key pair, defined at launch, and it may not be changed after launch. An instance without a key pair defined at launch will not be able to grant authentication for advanced administration via remote SSH login.
·    Private IP: A private IP address belongs to a single instance and is only routable from within the instance's associated EC2 region. Data transfer fees do not apply to data transferred using private IP addresses. When operating within the same EC2 region, use of the private IP address is preferred to avoid data transfer fees.
·    Public IP: A public IP address belongs to a single instance and is routable from within the EC2 environment, including from other EC2 regions, and from external Internet locations.
·    Elastic IP (EIP): An Elastic IP (EIP) is a fixed (static) public IP address allocated through EC2 and assigned to a running virtual machine instance. Elastic IPs exist independently of virtual machine instances and may be attached to only a single instance at a time, but they may be reassigned to a different instance with complete transparency to end users. If an Elastic IP is associated with an instance, it invalidates and overrides the original public IP.
·    Security Group: A security group is a named set of allowed inbound network connection rules for EC2 instances. Each security group consists of a list of protocols, ports, and source IP address ranges. A security group can apply to multiple instances, and an instance can be a member of multiple security groups. Security groups may only be assigned to an instance when the instance is being launched. Changes to a security group’s allowed inbound network connections apply to all instances assigned to that security group. By default, the SOSS management tools create a new security group for each deployed SOSS store. (A CLI sketch follows this list.)
·    Placement Group: A cluster placement group is a logical entity that enables creating a cluster of instances with special characteristics, such as high-speed networking. Using a placement group, a cluster of instances can have low-latency, 10 Gigabit Ethernet connectivity between instances in the cluster.
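
As a small illustration of the security group model, here is a sketch using the AWS CLI (the group name, description, and CIDR range below are placeholders):

# Create a named security group
aws ec2 create-security-group --group-name my-soss-sg --description "SOSS store"

# Allow inbound SSH from one address range only
aws ec2 authorize-security-group-ingress --group-name my-soss-sg \
  --protocol tcp --port 22 --cidr 203.0.113.0/24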

Why an EC2 Instance Isn’t a Server

From all outward appearances, and even functionally speaking, an EC2 instance behaves as a virtualized server: a person can SSH (or Remote Desktop) into it and install nearly any application. In fact, from my experience, this is how >95% of developers are using AWS. However, this is a very myopic view of the platform. Most of the complaints levied against AWS (unreliable, costly, difficult to use) come from those who are trying to use EC2 as if it were a traditional server located in a datacenter.
To truly understand AWS, we have to examine Amazon’s DNA. In 2002-2003, a few years prior to the introduction of EC2, Bezos boldly committed the entire organization to embracing service-oriented architecture. SOA is not unlike object-oriented programming (OOP): each function is discretely contained in an isolated building block, and these pieces connect together via an agreed-upon interface. In software design, for example, this would allow a programmer to use a third-party library without needing to understand how the innards work. SOA applies the same principles at the organizational level. Bezos mandated that all business units compartmentalize their operations and only communicate through well-defined (and documented) interfaces such as SOAP/RESTful APIs.
For an organization of Amazon’s size, this change was a big deal. The company had grown very rapidly over the previous decade, which led to the web of interconnectedness seen in all large companies: purchasing and fulfillment might share the same database server without either knowing who’s ultimately responsible for it, IT operations might have undocumented root access to payroll records, etc. Amazon.com’s organic growth had left it full of the equivalent of legacy “spaghetti code.” Bezos’ decision to refactor the core of operations required a tremendous amount of planning and effort but ultimately led to a much more scalable and manageable organization. As a result of the SOA initiative, in 2006, a small team in South Africa released a service that allowed computing resources to be provisioned on-demand using an API: Amazon Elastic Compute Cloud (EC2) was born. Along with S3, it was the first product of the newly introduced Amazon Web Services unit (since spun off into a separate business).
Amazon Web Services is the world’s most ambitious—and successful—result of service-oriented architecture.  This philosophy drives product innovation and flows down to its intended usage. When competitors like Rackspace argue “persistence” as a competitive advantage, they’re missing the entire point of AWS. EC2 is the antithesis of buying a server, lovingly configuring it into a unique work of art, and then making sure it doesn’t break until it’s depreciated off the books. Instead, EC2 instances are intended to be treated as disposable building blocks that provide dynamic compute resources to a larger application. This application will span multiple EC2 instances (autoscaling groups) and likely use other AWS products such as DynamoDB, S3, etc. The pieces are then glued together using Simple Queue Service (SQS), Simple Notification Service (SNS), and CloudWatch. When a single EC2 instance is misbehaving, it ought to be automatically killed and replaced, not fixed. When an application needs more resources, it should know how to provision them itself rather than needing an engineer to be paged in the middle of the night.
Interconnecting loosely-coupled components is how a systems architect properly designs for AWS. Trying to build something on EC2 without SOA in mind is about as productive (and fun) as playing with a single Lego brick.

Wednesday, January 29, 2014

How to enable logging for HAProxy in Linux

I am using Ubuntu 12.04 LTS for this installation. I assume that HAProxy is already installed on the system; if not, then do it from here.

I followed the steps below to set up the logging:

Step 1. In the global section of the HAProxy config file (haproxy.cfg), put the line below

log 127.0.0.1 local0
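
The log directive alone is not enough: frontends and backends must also reference it, typically via log global in the defaults section. A minimal sketch of the relevant parts of haproxy.cfg:

global
    log 127.0.0.1 local0

defaults
    log     global
    mode    http
    option  httplog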

Step 2. Create a config file in /etc/rsyslog.d

$ sudo vim /etc/rsyslog.d/haproxy.conf

$ModLoad imudp
$UDPServerRun 514 
$template Haproxy,"%msg%\n"
local0.=info -/var/log/haproxy/haproxy.log;Haproxy
local0.notice -/var/log/haproxy/haproxy-status.log;Haproxy
### keep logs in localhost ##
local0.* ~ 
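
rsyslog will not create the target log directory for you, so create it before restarting anything:

$ sudo mkdir -p /var/log/haproxy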

Step 3. Restart the HAProxy server

$ sudo /etc/init.d/haproxy restart

Step 4. Now create the logrotate file for HAProxy; this keeps rotating and compressing the log files automatically. If it is not present in the /etc/logrotate.d directory, create it as below:

$ sudo vim /etc/logrotate.d/haproxy

/var/log/haproxy/haproxy.log {
    missingok
    notifempty
    sharedscripts
    rotate 120
    daily
    compress
    postrotate
        reload rsyslog >/dev/null 2>&1 || true
    endscript
}
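
You can verify the logrotate config without waiting a day for the first rotation. A dry run shows what would happen, and -f forces a rotation immediately:

$ sudo logrotate -d /etc/logrotate.d/haproxy
$ sudo logrotate -f /etc/logrotate.d/haproxy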

Step 5. Restart rsyslog

$ sudo restart rsyslog

You are done. Check the HAProxy logs:

$ tail -f /var/log/haproxy/haproxy.log

Note:
rotate 120 means logs are kept for 120 days before being overwritten.
local0.=info -/var/log/haproxy/haproxy.log defines that the HTTP logs will be saved in haproxy.log.
local0.notice -/var/log/haproxy/haproxy-status.log defines that server status messages (start, stop, restart, down, up, etc.) will be saved in haproxy-status.log.

$UDPServerRun 514 means opening UDP port 514 to listen for HAProxy messages.