Monday, August 22, 2016

My first exercise with Backbone.js

The files below say what they do themselves; hope it's not that difficult :)
Also read the following for more information:
https://addyosmani.com/backbone-fundamentals/#what-is-backbone.js
http://www.tutorialspoint.com/backbonejs/backbonejs_environment_setup.htm

Wednesday, August 10, 2016

Install Postgres on Ubuntu

1. Install

sudo apt-get update
sudo apt-get install postgresql postgresql-contrib
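
You can confirm that the packages installed correctly by checking the client version (the exact version depends on your Ubuntu release):
psql --version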

2. Start the PostgreSQL server

sudo /etc/init.d/postgresql start
If it does not start correctly, try the solution at this link:
https://www.digitalocean.com/community/questions/language-problem-on-ubuntu-14-04
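
You can check whether the server came up with:
sudo service postgresql status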

3. Log in as the postgres user
sudo -i -u postgres
postgres is the default system user that the PostgreSQL installation adds to the Linux system.

4. Create a postgres role
createuser --interactive
This is an interactive way to create the Postgres role/user. It will ask you two questions: the name of the role and whether it should be a superuser. Let's answer "dbuser1" and "y" to the two questions.
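For reference, the interactive session looks roughly like this (prompt wording may vary slightly between PostgreSQL versions):
Enter name of role to add: dbuser1
Shall the new role be a superuser? (y/n) y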

5. Create a database
Here we have to create a database with the same name we gave the role (in step 4 above).
createdb dbuser1

6. Create a system user with the same name
Log out from the postgres user
exit
Then create a user
sudo adduser dbuser1
This is required because the way Postgres is set up by default (authenticating roles by matching system accounts, i.e. peer authentication) also comes with the assumption that a matching database will exist for the role to connect to.

7. You may sometimes need to add the new user to the sudoers list
sudo usermod -a -G sudo dbuser1

8. Now log in as the newly created user (here the Linux username, the Postgres role name and the database all have the same name)
sudo su - dbuser1

9. Run the psql command; this will take you to the psql command-line interface
psql

10. Run the \conninfo command to check the connection information
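It should print something roughly like this (the socket path and port may differ on your setup):
You are connected to database "dbuser1" as user "dbuser1" via socket in "/var/run/postgresql" at port "5432".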

Play around
\list or \l: list all databases
\dt: list all tables in the current database
To switch databases:
\connect database_name
To create a database from a DB script:
\i path/to/the/dbscript.sql
Check the tables (show tables):
\dt
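
As a quick sanity check you can also create a throwaway table at the psql prompt and query it (the table and column names here are just examples):
CREATE TABLE test_table (id integer, name text);
INSERT INTO test_table VALUES (1, 'hello');
SELECT * FROM test_table;
DROP TABLE test_table;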


Allow remote connections
By default the server only listens on localhost, so connecting from a remote machine fails like this:
psql -U postgres -h 192.168.102.1
psql: could not connect to server: Connection refused
        Is the server running on host "192.168.102.1" and accepting
        TCP/IP connections on port 5432?

To enable other computers to connect to your PostgreSQL server, edit the file /etc/postgresql/9.1/main/postgresql.conf
Locate the line #listen_addresses = 'localhost' and change it to:
listen_addresses = '*'


To allow access to the databases, edit the file /etc/postgresql/9.1/main/pg_hba.conf and add an entry that uses MD5 password authentication:
# TYPE  DATABASE        USER            ADDRESS                 METHOD
host    all             all             192.168.0.0/24          md5


The above entry allows access to all databases for all users connecting from the 192.168.0.0/24 network, and the authentication method is an encrypted (MD5) password.


Finally, restart PostgreSQL for the configuration changes to take effect:
sudo /etc/init.d/postgresql restart
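
After the restart you should be able to connect from a remote machine with the role created earlier (replace the IP with your server's address; since the pg_hba.conf entry above uses md5 authentication, the role needs a password, which you can set with \password dbuser1 inside psql):
psql -U dbuser1 -h 192.168.102.1 -d dbuser1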

Monday, August 01, 2016

There is no rocket science in Puppet :)


My first Puppet module to download Java


class wso2base::download_java {
  # Oracle JDK download location, plus the cookie header that accepts the Oracle license
  $oracle_repo   = 'http://download.oracle.com/otn-pub/java/jdk/7u75-b13'
  $oracle_header = '"Cookie: gpw_e24=http%3A%2F%2Fwww.oracle.com%2F; oraclelicense=accept-securebackup-cookie"'
  $java_package  = 'jdk-7u75-linux-x64.tar.gz'
  $java_dir      = '/opt/java/'

  # Directory that will hold the downloaded tarball and the extracted JDK
  file { $java_dir: ensure => directory, recurse => true }

  exec {
    # Download the JDK tarball, unless it is already present
    "${name}_download_java":
      path    => ['/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'],
      cwd     => $java_dir,
      unless  => "test -f ${java_dir}${java_package}",
      command => "wget --no-cookies --no-check-certificate --header ${oracle_header} ${oracle_repo}/${java_package}";

    # Extract the tarball once the download has completed
    "${name}_extract_java":
      path    => ['/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin'],
      cwd     => $java_dir,
      command => "tar xvfz ${java_package}",
      require => Exec["${name}_download_java"];
  }
}
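
To try the class out on a single node (assuming the module is available on your Puppet modulepath), something along these lines should work:
puppet apply -e 'include wso2base::download_java'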

Tuesday, July 12, 2016

Load Balancing 'WSO2 API Manager Store' with Nginx

Hi All

In this post I'll share the LB configs which worked for me for load balancing two WSO2 API Manager Store nodes, in case anyone needs them :)

For the load balancing I had 3 main tasks in mind:
1. Forward 443 traffic to 9443
This is needed as I want the users to access the store without a port number in the address, e.g. https://apistore.wso2.test.com/store
2. Redirect the /carbon (management console) traffic to /store
As this is the store (portal) part of the API Manager, we do not want to expose the /carbon (management console) to the outside; instead we redirect any traffic that comes to /carbon to /store
3. Redirect http traffic to https
This is to avoid plain http access, while still redirecting any http request to https

For the above tasks I have used two files: one is for tasks 1 and 2, and the other is for task 3. There is nothing much to explain here; just notice the listen, location and rewrite directives, which do the job.


apistore_443_to_9443.conf file

upstream apistore9443 {
  ip_hash;
  server 10.0.0.11:9443;
  server 10.0.0.12:9443;
}

server {
  listen   443;
  server_name apistore.wso2.test.com;

  location / {
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_read_timeout 5m;
    proxy_send_timeout 5m;
    proxy_pass https://apistore9443;
  }

  location /carbon {
    rewrite ^/carbon(.*) https://apistore.wso2.test.com/store permanent;
  }

  ssl on;

  ##SSL cert location
  ssl_certificate /etc/nginx/certs/apistore.crt;
  ssl_certificate_key  /etc/nginx/certs/apistore.pem;

  ssl_session_timeout 5m;
  client_max_body_size 100m;

  #Removed SSLv3 as a fix for the POODLE
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

  ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
  ssl_prefer_server_ciphers on;

  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;
}


apistore_80_to_443.conf file
server {
  listen 80;
  server_name apistore.wso2.test.com;
  rewrite ^/(.*) https://apistore.wso2.test.com/$1 permanent;

  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;
}
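
Once both files are in place and Nginx has been reloaded, you can quickly verify the two redirects with curl (assuming the hostname resolves to the load balancer; -k skips certificate verification and -I fetches only the headers):
sudo nginx -t && sudo service nginx reload
curl -kI http://apistore.wso2.test.com/store
curl -kI https://apistore.wso2.test.com/carbon
The first curl should show a 301 to the https URL and the second a 301 to /store.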


Thanks should go to Manula (at WSO2) for helping me.

Monday, June 06, 2016

Script to back up the wso2-server directory to an external server

Note: Make sure you have only the wso2-server in the backupDir.
Please pay attention to the variables at the top of the script below.

Fill out the backupUser, backupServer and backupDest variables for the remote backup server.

Improvements
1. You can avoid the password prompt for scp by configuring SSH keys between your server and the remote server; refer to [1] for the key setup steps.
2. You can use a cron or incron job [1] to automate the backup on a daily (timely) basis.
[1] http://susinda.blogspot.com/2016/05/wso2-depsync-with-rsync-and-incron.html


#!/bin/bash
# Zip the wso2-server directory and scp the archive to the remote backup server

# Local directory that holds the wso2-server, plus the remote backup target
backupDir=/apps/wso2
backupUser=backupRoot
backupServer=backupVM
backupDest=/apps/backups

cd $backupDir || exit 1
# Remove any archive left over from a previous run
rm -f *.zip

# Name the archive <date>-<hostname>.zip
a=$(hostname)
b=`date +%Y-%m-%d`
c='.zip'
zipName=$b-$a$c

# Zip everything in the backup directory and copy it across
pack=$(ls)
echo zipping $pack to $zipName
zip -r $zipName $pack
echo scp..ing $zipName to $backupUser@$backupServer:$backupDest
scp $zipName $backupUser@$backupServer:$backupDest
echo Backup completed ..!
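
If you go the cron route mentioned above, a crontab entry along these lines would run the backup nightly at 2 am (the script path here is just an example; point it at wherever you saved the script):
0 2 * * * /apps/wso2/backup_wso2.sh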

Tuesday, May 31, 2016

WSO2 Depsync with rsync and incron

Set up passwordless authentication for Ubuntu
1. Create a private/public key pair on the master server (use an empty passphrase)
ssh-keygen

2. Copy the new key to your worker server:
ssh-copy-id worker_username@worker_host
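
You can verify that key-based login now works; the following should print the worker's hostname without asking for a password:
ssh worker_username@worker_host hostname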

Setup rsync and incron
1. install rsync
sudo apt-get install rsync

2. install incron
sudo apt-get install incron

3. Allow incron
vim /etc/incron.allow
Add the rootuser username to this file, then save and close.

4. Create a push_artifacts.sh file with the following content in the APIM-HOME directory

#!/bin/bash
# push_artifacts.sh - Push artifact changes to the worker nodes.

master_artifact_path=/apps/wso2/wso2am-1.10.0/repository/deployment/server
worker_artifact_path=/apps/wso2/wso2am-1.10.0/repository/deployment/
worker_nodes=(worker1 worker2 worker3)

# Wait while another rsync run holds the lock directory
while [ -d /tmp/.rsync.lock ]
do
  echo "[WARNING] Another rsync is in progress, waiting..."
  sleep 2
done

# Take the lock
mkdir /tmp/.rsync.lock
if [ $? -ne 0 ]; then
  echo "[ERROR] : can not create rsync lock"
  exit 1
else
  echo "[INFO] : created rsync lock"
fi

# Sync the server directory to every worker node over ssh
for i in ${worker_nodes[@]}; do
  echo "===== Beginning artifact sync for $i ====="
  rsync -avzx --delete -e ssh $master_artifact_path rootuser@$i:$worker_artifact_path
  if [ $? -ne 0 ]; then
    echo "[ERROR] : rsync failed for $i"
    rm -rf /tmp/.rsync.lock
    exit 1
  fi
  echo "===== Completed rsync for $i ====="
done

# Release the lock
rm -rf /tmp/.rsync.lock
echo "[SUCCESS] : Artifact synchronization completed successfully"

5. Make the script executable
chmod +x push_artifacts.sh
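
Before wiring the script to incron, it is worth running it once by hand to confirm that the SSH keys and rsync work as expected:
./push_artifacts.sh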

6. Execute the below command to configure incron.
incrontab -e

7. Add the below line in the editor opened by the above step.
/apps/wso2/wso2am-1.10.0/repository/deployment/server IN_MODIFY,IN_CREATE,IN_DELETE /apps/wso2/wso2am-1.10.0/push_artifacts.sh

8. Test
Add a file on the master node under /apps/wso2/wso2am-1.10.0/repository/deployment/server and check on a worker node that the same file has been copied there.

9. Further, you can check the syslog to see the incron logs
tail /var/log/syslog
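
To filter only the incron-related lines you can grep the syslog (the daemon typically logs under the name incrond):
grep -i incron /var/log/syslog | tail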