OpenVPN requires IPv4 forwarding to allow routing between servers

The problem: there is no connectivity between two computers that are both connected to an OpenVPN server, even though the OpenVPN server itself can reach both computers.

OpenVPN Setup

Two different computers connect to the OpenVPN server on the same class C subnet:

  • computer 1: ifconfig-push 10.1.11.13 10.1.11.1
  • computer 2: ifconfig-push 10.1.11.211 10.1.11.1

Solution

The short-term solution is to run a command that enables IPv4 forwarding:

#sysctl -w net.ipv4.ip_forward=1

However, this will not survive a reboot, so open the sysctl configuration file and set it there:

>vi /etc/sysctl.conf  #uncomment the net.ipv4.ip_forward line

# Uncomment the next line to enable packet forwarding for IPv4
net.ipv4.ip_forward=1

That’s it; the two computers should now be able to communicate.
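
The uncomment step can also be scripted with sed. Below is a sketch run against a scratch copy of the file (on the real server, point the sed at /etc/sysctl.conf as root), followed by the sysctl -p call that applies the file without a reboot:

```shell
# Demo on a scratch file; on a real server run the sed against /etc/sysctl.conf as root
printf '# Uncomment the next line to enable packet forwarding for IPv4\n#net.ipv4.ip_forward=1\n' > /tmp/sysctl.conf.demo

# Uncomment the net.ipv4.ip_forward line in place
sed -i 's/^#\(net\.ipv4\.ip_forward=1\)/\1/' /tmp/sysctl.conf.demo

# The forwarding line is now active (no leading #)
grep '^net.ipv4.ip_forward' /tmp/sysctl.conf.demo

# On the real file, follow with:  sysctl -p   (reloads /etc/sysctl.conf without a reboot)
```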

 

COMMANDDUMP – installing wpscan penetration tool on a clean ubuntu 14.04 server

WPScan (http://wpscan.org/) has instructions for installing on Ubuntu 14.04; however, when attempting to install it on a clean 14.04 system there were several missing dependencies.

(In Ubuntu 14.04 the default is ruby1.8, so the commands I added address this.)

So I came up with the following command dump for the install; this works as of 1/19/2016.

 

sudo apt-get install libcurl4-openssl-dev libxml2 libxml2-dev libxslt1-dev build-essential libgmp-dev  #skip ruby-dev here, it links to the old ruby1.8 package
sudo apt-get install ruby1.9.1
sudo apt-get install ruby1.9.1-dev #thanks stackoverflow
gem install addressable -v '2.4.0'
#checkpoint: you should receive a 'Successfully installed addressable-2.4.0'
gem install ffi -v '1.9.10'
#checkpoint: you may need to install some ruby gems files
git clone https://github.com/wpscanteam/wpscan.git
cd wpscan
sudo gem install bundler && bundle install --without test

 

By the way, kudos to this guy (@_FireFart_) for getting his username displayed every time someone updates this awesome software

root@server:# ruby wpscan.rb --update
_______________________________________________________________
        __          _______   _____
        \ \        / /  __ \ / ____|
         \ \  /\  / /| |__) | (___   ___  __ _ _ __
          \ \/  \/ / |  ___/ \___ \ / __|/ _` | '_ \
           \  /\  /  | |     ____) | (__| (_| | | | |
            \/  \/   |_|    |_____/ \___|\__,_|_| |_|

 WordPress Security Scanner by the WPScan Team
 Version 2.9
 Sponsored by Sucuri - https://sucuri.net
 @_WPScan_, @ethicalhack3r, @erwan_lr, pvdl, @_FireFart_
_______________________________________________________________

[i] Updating the Database ...

Remove Atlassian Stash from an Ubuntu system – CommandDump

To remove Atlassian Stash from an Ubuntu system (in my case I needed a clean clone of a system similar to one we had Atlassian Stash installed on):

This assumes you are using the default install and home locations; you may have to change the paths for your system. (Be careful: you don't want to accidentally do this if you still need the data.)

sudo service atlstash stop
sudo rm /var/atlassian/stash -rf
sudo rm /opt/atlassian/stash -rf
sudo update-rc.d -f atlstash remove
sudo rm /etc/init.d/atlstash

Grep command to find all PHP shortcode entries

As PHP files are moved from one server to another, occasionally we find a situation where PHP was developed on a server that allowed short open tags (“<?”) rather than the longer “<?php”. On a server without short tags enabled, the PHP code does not execute, and the code that should have run shows up in the output of the page.

When this happens, I have found that a simple command identifies all of the places that use the short tag:

 grep -n '<?[^p=]' *.php
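
To see what the pattern catches (and what it deliberately skips), here is a quick demo against a fabricated sample file:

```shell
# Build a sample file containing long, short, and echo open tags
cat > /tmp/shorttag_demo.php <<'EOF'
<?php echo "long tag"; ?>
<? echo "short tag"; ?>
<?= $var ?>
EOF

# [^p=] excludes "<?php" and the "<?=" echo shortcut, flagging bare "<?" only
grep -n '<?[^p=]' /tmp/shorttag_demo.php
```

To sweep a whole directory tree rather than one directory, the same pattern works with grep -rn --include='*.php'.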

Postfix Configuration: The argument against reject_unverified_sender

When configuring a mail server, there are techniques available which can help you reduce spam.

A popular spam reduction technique is to ‘greylist’ emails. This method temporarily rejects an email if the server has never seen a prior attempt to send the same combination of From email, To email, and From IP address; legitimate senders retry after a few moments and the email then goes through with no problem.

Another option in the Postfix system which can be used to reduce spam is the ‘reject_unverified_sender’ option. To confirm that the ‘Sender’ is a valid user, Postfix makes a connection to the server associated with the sender's domain (found via its MX record). It goes through the email sending process just far enough to find out whether that server accepts or rejects the sender address (RCPT TO: sender@email.com).

While it seems like a good idea to confirm that the address sending email to us is valid, if the sender's server has greylisting on their domain it will temporarily reject the ‘verification’ connection, and our server would then reject the attempted message.

For this reason, we choose not to enable reject_unverified_sender globally on the Postfix servers we install.

————————–

There is some argument, though, that this does not cause a permanent problem when reject_unverified_sender_code is set to 450: the rejection of the email is only temporary, and by the time delivery is attempted again the sender verification should have passed the greylisting.

However, this is not good enough for me, because there are other reasons that the sender verification could fail. The remote server might not accept the MAIL FROM address used by the verifying server; for example, the doublebounce@ address used for verification might be rejected because the remote side is doing its own sender verification, which would fail when it attempts email to doublebounce. Additionally, the verifying server could end up with a bad ‘reputation’ by repeatedly checking whether addresses are deliverable without ever actually delivering a mail message.

For these reasons, I recommend skipping this setting and perhaps only using reject_unknown_sender_domain, which just checks whether there is a valid MX record for the sender's domain.
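
As a sketch, that recommendation lands in /etc/postfix/main.cf something like the following (the permit_mynetworks entry is an illustrative addition of mine, not part of the original configuration):

```
smtpd_sender_restrictions =
    permit_mynetworks,
    reject_unknown_sender_domain
```

Run postfix reload after editing main.cf.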

While Postfix is powerful, with a plethora of options, not all of them are immediately helpful.

Ubuntu – Base Secure Apache

In order to build a server that can pass the many SSL vulnerability scans out there, you cannot rely on the distribution's default packages.

apt-get install make gcc

Install the latest OpenSSL from the openssl.org site first:

– download it to a directory

– extract, configure, and install

then install apache2
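
The download/extract/configure/install steps can be sketched as follows; the version placeholder and the --prefix choice are my assumptions (check openssl.org for the current release), not commands from the original post:

```
cd /usr/local/src
wget https://www.openssl.org/source/openssl-<version>.tar.gz
tar xzf openssl-<version>.tar.gz
cd openssl-<version>/
./config --prefix=/usr/local/ssl shared
make
sudo make install
```

Apache (apache2) can then be installed, or compiled against the new OpenSSL if the packaged build still links the old library.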

 

Recovering / Resyncing a distributed DRBD dual primary Split Brain – [servera] has a different data from [serverb]

A client had a pair of servers running DRBD in order to keep a large file system synchronized and highly available. However, at some point the DRBD connection failed, the two servers got out of sync, and it went unnoticed long enough that new files were written on both ‘servera’ and on ‘serverb’.

At this point both servers believe that they are the primary, and the servers are running in what is called a ‘Split Brain’.

To determine that split brain has happened you can run several commands. In our scenario we have two servers, servera and serverb:

servera#drbd-overview
 0:r0/0 WFConnection Primary/Unknown UpToDate/DUnknown C r----- /data ocfs2 1.8T 1001G 799G 56%
serverb#drbd-overview
 0:r0/0  StandAlone Primary/Unknown UpToDate/DUnknown r----- /data ocfs2 1.8T 1.1T 757G 58%

From the output above we can see that serverb knows it is in StandAlone mode; the server realizes that it cannot connect. We can research the logs to find out why it thinks it is StandAlone. To do this we grep the syslog:

serverb#grep split /var/log/syslog
Nov 2 10:15:26 serverb kernel: [41853948.860147] block drbd0: helper command: /sbin/drbdadm initial-split-brain minor-0
Nov 2 10:15:26 serverb kernel: [41853948.862910] block drbd0: helper command: /sbin/drbdadm initial-split-brain minor-0 exit code 0 (0x0)
Nov 2 10:15:26 serverb kernel: [41853948.862934] block drbd0: Split-Brain detected but unresolved, dropping connection! 
Nov 2 10:15:26 serverb kernel: [41853948.862950] block drbd0: helper command: /sbin/drbdadm split-brain minor-0
Nov 2 10:15:26 serverb kernel: [41853948.865829] block drbd0: helper command: /sbin/drbdadm split-brain minor-0 exit code 0 (0x0)

This set of log entries lets us know that when serverb attempted to connect to servera, it detected a situation where both file systems had been written to, so it could no longer synchronize. It made these entries and put itself into StandAlone mode.

servera, on the other hand, says that it is waiting for a connection (WFConnection).

The next step is to determine which of the two servers has the ‘master’ set of data. This set of data will sync OVER THE TOP of the data on the other server.

In our client’s case we had to do some investigation in order to determine what differences there were on the two servers.

After some discovery we realized that in our case serverb had the most up-to-date information, except in one directory; we simply copied that data from servera to serverb, and then serverb was ready to become our primary. In DRBD terminology, servera is our ‘split-brain victim’ and serverb is our ‘split-brain survivor’. We will need to run a set of commands which:

  1. ensure the status of the victim is ‘StandAlone’ (currently it is ‘WFConnection’)
  2. umount the drive on the victim (servera) so that the filesystem is no longer accessible
  3. set the victim to be the ‘secondary’ server; this will allow us to sync from the survivor to the victim KNOWING the direction the data will go
  4. start the victim (servera) and let the ‘split brain detector’ know that it is okay to overwrite the data on the victim (servera) with the data on the survivor (serverb)
  5. start the survivor (serverb) (if the survivor were in WFConnection mode it would not need to be started; however, ours was in StandAlone mode, so it needs to be restarted)

At first we were concerned that we would have to resync 1.2 TB of data; however, we read here that:

The split brain victim is not subjected to a full device synchronization. Instead, it has its local modifications rolled back, and any modifications made on the split brain survivor propagate to the victim.

The client runs a dual primary; however, as we rebuild the synced pair, we need to ensure that the ‘victim’ is rebuilt from the survivor, so we demote the victim from primary to secondary. It seems that we are unable to mount the drive (using our ocfs2 filesystem) while it is a secondary, so we had to ‘umount’ the drive and could not remount it while it remained a secondary. In a future test (in which restoring data redundancy primary/primary is less critical), we will find out whether we are able to keep primary/primary status while rebuilding from a split brain.

While the drbd-overview tool shows all of the ‘resources’, the drbdadm commands require a parameter specifying the ‘resource’ to operate on. If you have more than one DRBD resource defined you will need to identify which resource you are working with. You can look in your /etc/drbd.conf file or in your /etc/drbd.d/disk.res (your file may be named differently). The file has the form of

resource r0 { 
....................
}

where r0 is your resource name; you can also see it buried in the output of drbd-overview.

servera# drbd-overview
0:r0/0 WFConnection Primary/Unknown UpToDate/DUnknown C r----- /data ocfs2 1.8T 1001G 799G 56%

So we ran the following commands on servera to prepare it as the victim

servera# drbd-overview #check the starting status of the victim
 0:r0/0 WFConnection Primary/Unknown UpToDate/DUnknown C r----- /data ocfs2 1.8T 1001G 799G 56%
serverb# drbd-overview #check the starting status of the survivor
 0:r0/0 StandAlone Primary/Unknown UpToDate/DUnknown r----- /data ocfs2 1.8T 1.1T 760G 58%

From the output above we can see that serverb has 58% usage and 760GB free, whereas servera has 56% usage and 799GB free.
Based on what I know about the difference between servera and serverb, this helps me confirm that serverb has more data and is the ‘survivor’.

servera# drbdadm disconnect r0 # 1. ensures the victim is standalone 
servera# drbd-overview #confirm it is now StandAlone 
 0:r0/0 StandAlone Primary/Unknown UpToDate/DUnknown r----- /data ocfs2 1.8T 1001G 799G 56%
servera# umount /data # 2. unmount; we cannot mount a secondary drive read-write
servera# drbdadm secondary r0 # 3. ensures the victim is the secondary 
servera# drbd-overview #confirm it is now secondary 
  0:r0/0 StandAlone Secondary/Unknown UpToDate/DUnknown r-----
servera# drbdadm connect --discard-my-data r0 # 4. start / connect the victim up again knowing that its data should be overwritten with a primary 
servera# drbd-overview #confirm the status and that it is now connected [WFConnection]
  0:r0/0 WFConnection Secondary/Unknown UpToDate/DUnknown C r-----


I also checked the logs to confirm the status change

servera#grep drbd /var/log/syslog|tail -4
Nov 4 05:14:03 servera kernel: [278068.555213] drbd r0: conn( StandAlone -> Unconnected )
Nov 4 05:14:03 servera kernel: [278068.555247] drbd r0: Starting receiver thread (from drbd_w_r0 [19105])
Nov 4 05:14:03 servera kernel: [278068.555331] drbd r0: receiver (re)started
Nov 4 05:14:03 servera kernel: [278068.555364] drbd r0: conn( Unconnected -> WFConnection )

Next we simply have to run this command on serverb to let it know that it can connect as the survivor (as mentioned above, if the survivor were in WFConnection mode it would automatically reconnect; however, ours was in StandAlone mode):

serverb# drbd-overview #check one more time that serverb is not yet connected
 0:r0/0 StandAlone Primary/Unknown UpToDate/DUnknown r----- /data ocfs2 1.8T 1.1T 760G 58%
serverb# drbdadm connect r0 # 5. start the surviving server to ensure that it reconnects
serverb# drbd-overview #confirm serverb and servera are communicating again 
 0:r0/0 SyncSource Primary/Secondary UpToDate/Inconsistent C r----- /data ocfs2 1.8T 1.1T 760G 58%
 [>....................] sync'ed: 0.1% (477832/478292)M
servera# drbd-overview #check that servera confirms what serverb says about communicating again 
 0:r0/0 SyncTarget Secondary/Primary Inconsistent/UpToDate C r-----
 [>....................] sync'ed: 0.3% (477236/478292)M

Another way to confirm that the resync started happening is to check the logs

servera# grep drbd /var/log/syslog|grep resync
Nov 4 05:18:09 servera kernel: [278314.571951] block drbd0: Began resync as SyncTarget (will sync 489771348 KB [122442837 bits set]).
serverb# grep drbd /var/log/syslog|grep resync
Nov 4 05:18:09 serverb kernel: [42008909.652451] block drbd0: Began resync as SyncSource (will sync 489771348 KB [122442837 bits set]).


Finally, we simply run a command to promote servera to be a primary again, and then both servers will be writable:

servera#drbdadm primary r0
servera# drbd-overview
 0:r0/0 Connected Primary/Primary UpToDate/UpToDate C r-----
servera# mount /data #remount the data drive we unmounted previously

Now that we have ‘started’ recovering from the split-brain issue, we just have to watch the two servers to confirm that they have fully recovered. Once that is complete we will put in place log watchers and filesystem tests to send a notification to the system administrator if it should happen again.

Find out which PHP packages are installed on ubuntu / debian

As we have moved or upgraded sites from one server to another, sometimes we have needed to know which PHP5 dependencies were installed on one server (servera), so that we could make sure those same dependencies were met on another server (serverb).

To do this we can run a simple command line tool on servera:

servera# echo `dpkg -l|awk '$1 ~ /ii/ && $2 ~ /php5/{print $2}'`
libapache2-mod-php5 php5 php5-cli php5-common php5-curl php5-gd php5-mcrypt php5-mysql php5-pgsql 
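
To show why the awk filter pulls only installed php5 packages, here it is run against a fabricated dpkg -l excerpt (the sample rows are made up for illustration):

```shell
# Fabricated dpkg -l style rows: ii = installed, rc = removed (config remains)
cat > /tmp/dpkg_demo.txt <<'EOF'
ii  php5-cli  5.5.9  amd64  command-line interpreter
ii  libxml2   2.9.1  amd64  XML library
rc  php5-gd   5.5.9  amd64  GD module
EOF

# Keep column 2 only where the status is ii and the name contains php5
awk '$1 ~ /ii/ && $2 ~ /php5/{print $2}' /tmp/dpkg_demo.txt
```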

and then we copy the contents of the output and paste it after the apt-get install command on serverb:

serverb# apt-get install libapache2-mod-php5 php5 php5-cli php5-common php5-curl php5-gd php5-mcrypt php5-mysql php5-pgsql 

Don't forget to reload Apache, as some packages do not reload it automatically:

serverb# /etc/init.d/apache2 reload

In XenCenter Console – mount DVD drive in Ubuntu 14.04

When running Ubuntu 14.04 LTS as a guest under XenServer 6.5, I was attempting to install xs-tools.iso by mounting it into the server using the drop-down box.

 

However, at the console I was unable to find /dev/cdrom, /dev/dvd*, /dev/sr*, or anything else that seemed to fit.

 

So I ran fdisk -l

#fdisk -l

and I found a disk I didn't recognize:

Disk /dev/xvdd: 119 MB, 119955456 bytes
255 heads, 63 sectors/track, 14 cylinders, total 234288 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/xvdd doesn't contain a valid partition table

So I mounted it and looked at the contents:

#mount /dev/xvdd /mnt
#ls -l /mnt
dr-xr-xr-x 4 root root 2048 Jan 27 04:08 Linux
-r--r--r-- 1 root root 1180 Jan 27 04:08 README.txt
-r--r--r-- 1 root root 65 Jan 27 04:07 AUTORUN.INF
-r--r--r-- 1 root root 802816 Jan 27 04:07 citrixguestagentx64.msi
-r--r--r-- 1 root root 802816 Jan 27 04:07 citrixguestagentx86.msi
-r--r--r-- 1 root root 278528 Jan 27 04:07 citrixvssx64.msi
-r--r--r-- 1 root root 253952 Jan 27 04:07 citrixvssx86.msi
-r--r--r-- 1 root root 1925120 Jan 27 04:07 citrixxendriversx64.msi
-r--r--r-- 1 root root 1486848 Jan 27 04:07 citrixxendriversx86.msi
-r--r--r-- 1 root root 26 Jan 27 04:07 copyright.txt
-r--r--r-- 1 root root 831488 Jan 27 04:07 installwizard.msi
-r-xr-xr-x 1 root root 50449456 Jan 27 04:03 dotNetFx40_Full_x86_x64.exe
-r-xr-xr-x 1 root root 1945 Jan 27 04:03 EULA_DRIVERS
-r-xr-xr-x 1 root root 1654835 Jan 27 04:03 xenlegacy.exe
-r-xr-xr-x 1 root root 139542 Jan 27 04:03 xluninstallerfix.exe


So I found it! Now just to install the tools and reboot:

#cd /mnt/Linux && ./install.sh
#reboot

Ubuntu Server Time Resetting

We have a server that was having trouble with its clock: the date kept resetting itself to a date and time in the year 2020. The problem appeared to happen randomly, and in some cases it would happen and then go away. Here are some of the steps I went through to debug this.

My server has a daily 1:01 AM cron job to set the date from another local server (to keep all of our servers in sync).

This command syncs the date with that server.

/usr/sbin/ntpdate -v my.localserver.com

Any time I noticed the date stuck at 2020, I would run this command and the clock would properly reset to the correct time, so the bad time seems to be coming from somewhere other than my.localserver.com.

 

So I decided to try to pinpoint when this happened. To do this I started a cron job which dumps the date every 30 seconds into a file, so I could look at that file to find out when the date changes:

 /bin/date >> /tmp/bin.date.log

Now, next time it happens I will have a history of the minute during which the issue occurred, and perhaps I can tie it to some process I have running.
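
Cron's finest granularity is one minute, so sampling every 30 seconds means running the logger twice per minute; the crontab line below is my reconstruction of that job, not copied from the server:

```
* * * * * /bin/date >> /tmp/bin.date.log; sleep 30; /bin/date >> /tmp/bin.date.log
```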

Compare the packages (deb / apache) on two debian/ubuntu servers

Debian / Ubuntu

I worked up this command and I don’t want to lose it

#diff <(dpkg -l|awk '/ii /{print $2}') <(ssh 111.222.33.44 "dpkg -l"|awk '/ii /{print $2}')|grep '>'|sed -e 's/>//'

This command shows a list of all of the packages installed on 111.222.33.44 that are not installed on the current machine

To make this work for you, just update the ssh 111.222.33.44 command to point at the server you want to compare with.

I used this command to actually create my apt-get install command

#apt-get install `diff <(dpkg -l|awk '/ii /{print $2}') <(ssh 111.222.33.44 "dpkg -l"|awk '/ii /{print $2}')|grep '>'|sed -e 's/>//'`

Just be careful that you have the same Linux kernels etc., or you may be installing more than you expect.
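
The moving parts of that one-liner can be seen on two fabricated package lists standing in for the local and remote dpkg output:

```shell
# local.txt = packages here, remote.txt = packages on the other server
printf 'apache2\nphp5-cli\n'          > /tmp/local.txt
printf 'apache2\nphp5-cli\nphp5-gd\n' > /tmp/remote.txt

# diff marks remote-only lines with '>'; grep keeps them; sed strips the marker
diff /tmp/local.txt /tmp/remote.txt | grep '>' | sed -e 's/> //'
```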

Apache

The same thing can be done to see if we have the same Apache modules enabled on both machines:

diff <(a2query -m|awk '{print $1}'|sort) <(ssh 111.222.33.44 a2query -m|awk '{print $1}'|sort)

This will show you which modules are / are not enabled on the different machines

 

Connecting to a HANA Database using PHP from Ubuntu 14.04 LTS

I needed to set up a PHP website to connect to a HANA database. The site was already installed with PHP and MySQL. Time permitting, I will also document the process of converting the existing MySQL database to HANA. The environment:

  • Hana OS: SUSE Linux Enterprise Server 11 SP3 x86_64 (64-bit)
  • Web Server: Ubuntu 14.04 LTS
  • PHP Version:5.5.9-1ubuntu4.5

First, install the PHP ODBC package:

  1. sudo apt-get install php5-odbc

PHP ODBC Information

#php -i|grep odbc
/etc/php5/cli/conf.d/20-odbc.ini,
/etc/php5/cli/conf.d/20-pdo_odbc.ini,
odbc
ODBC_LIBS => -lodbc
odbc.allow_persistent => On => On
odbc.check_persistent => On => On
odbc.default_cursortype => Static cursor => Static cursor
odbc.default_db => no value => no value
odbc.default_pw => no value => no value
odbc.default_user => no value => no value
odbc.defaultbinmode => return as is => return as is
odbc.defaultlrl => return up to 4096 bytes => return up to 4096 bytes
odbc.max_links => Unlimited => Unlimited
odbc.max_persistent => Unlimited => Unlimited
PDO drivers => mysql, odbc

Install HANA Database Client

Before starting: since the web server is on Amazon, I just took a quick snapshot so I could roll back to a ‘pre-HANA-client’ state on the server if I needed to.

  1. Download the HANA client using this link: SAP Hana Client. To download the client you must be a member; sign up as a public user with all of your information. You will be required to give them some information about your profession, industry, and company, and there are quite a few screens before you get to the download. BTW: if this link is broken, please email me and let me know. [If this link does not work, try it here.]
  2. Extract the file into a temporary location and cd into the working directory
    1. cd /tmp
    2. tar -xvzf /home/michael/sap_hana_client.tgz
    3. cd sap_hana_linux64_client_rev80/
  3. Next, ensure that the scripts that need to execute have execute privileges
    1. chmod +x hdbinst
    2. chmod +x hdbsetup
    3. chmod +x hdbuninst
    4. chmod +x instruntime/sdbrun
  4. Finally,  run the installation program
    1. sudo ./hdbinst -a client
    2. press <enter> to accept the default installation path: /usr/sap/hdbclient
    3. You will see information like the following; then you can take a look at the log file to see any other details of the installation.
      Checking installation...
      Preparing package 'Python Runtime'...
      Preparing package 'Product Manifest'...
      Preparing package 'SQLDBC'...
      Preparing package 'REPOTOOLS'...
      Preparing package 'Python DB API'...
      Preparing package 'ODBC'...
      Preparing package 'JDBC'...
      Preparing package 'HALM Client'...
      Preparing package 'Client Installer'...
      Installing SAP HANA Database Client to /usr/sap/hdbclient...
      Installing package 'Python Runtime'...
      Installing package 'Product Manifest'...
      Installing package 'SQLDBC'...
      Installing package 'REPOTOOLS'...
      Installing package 'Python DB API'...
      Installing package 'ODBC'...
      Installing package 'JDBC'...
      Installing package 'HALM Client'...
      Installing package 'Client Installer'...
      Installation done
      Log file written to '/var/tmp/hdb_client_2014-12-18_02.06.04/hdbinst_client.log'
    4. more /var/tmp/hdb_client_2014-12-18_02.06.04/hdbinst_client.log

I saw some reports online that you may have to install ‘libaio-dev’. I did not have to, but in case you do, here is the command:

  1.  sudo apt-get install libaio-dev

Now you can run the HANA client:

  1. /usr/sap/hdbclient/hdbsql

Ignoring unknown extended header keyword

This happened to me; it is most likely because the tgz file was created on BSD. To address this, install bsdtar and replace the ‘tar’ command above with the bsdtar command below:

  1. apt-get install bsdtar
  2. bsdtar -xvzf /home/michael/sap_hana_client.tgz

Testing that the client can connect to a HANA server

To confirm that the client works, run the client and connect to the HANA server

  1. # /usr/sap/hdbclient/hdbsql
    Welcome to the SAP HANA Database interactive terminal
    Type: h for help with commands
          q to quit
  2. hdbsql=> c -n xxx.xxx.xxx.xxxx:port -u System -p mypass

I had to type a port, most likely because the server was not installed on the default port.

Creating an ODBC Connection to HANA Server

First Install the unixodbc packages

  1. sudo apt-get install unixodbc unixodbc-dev odbcinst
  2. ls  /etc/odbc*
    /etc/odbc.ini /etc/odbcinst.ini

Open the /etc/odbc.ini file and enter the following:

[hanadb]
Driver = /usr/sap/hdbclient/libodbcHDB.so
ServerNode =xxx.xxx.xxx.xxx:30015

Note that the Driver path must match the location you installed the client to; if these do not match you will receive a cryptic [ISQL]ERROR: Could not SQLConnect message.

Connect using your username and password

  1. isql hanadb username password
    +---------------------------------------+
    | Connected! |
    | sql-statement |
    | help [tablename] |
    | quit |
    +---------------------------------------+

 

Implementing the HANA  ODBC in PHP

As long as you have been able to connect via the ODBC methods above,  PHP should be a breeze.

Just place the following in your code.

  1. $link = odbc_connect("hanadb", "myusername", "mypassword", SQL_CUR_USE_ODBC);
  2. $result = odbc_exec($link, "select * from sys.m_tables");
  3. while($arr = odbc_fetch_array($result)) print_r($arr);

 

A couple of gotchas:

  • HANA does not have the concept of ‘databases’; it uses schemas, so the user you connect with must be set up to use the schema you want by default.
    • Or you can prefix every table with your schema name
    • Or you can set the default schema to the one you want before each connection
    • I accomplished this by adding an extra line under the odbc_connect(): odbc_exec($link, "set schema MYSCHEMA");

That’s it! More posts to come on HANA.

Thanks to this post for help http://scn.sap.com/community/developer-center/hana/blog/2012/09/14/install-hana-client-on-ubuntu

Setting up DRBD with OCFS2 on a Ubuntu 12.04 server for Primary/Primary

We run in a virtual environment, so we thought we would go with the -virtual kernel for the latest Linux kernels.
We learned that we should NOT do that in the case where we want to use the OCFS2 distributed-locking file system: ocfs2 did not ship the correct modules for that kernel, so we would have had to do a custom build of the modules, and we decided against it. We just went with the latest standard kernel and installed the ocfs2 tools from the package manager.

DRBD, on the other hand, had to be downloaded, compiled, and installed regardless of kernel. Here are the procedures; these must be run on each of the pair of machines.
We assume that /dev/xvdb is a similarly sized device on both machines.

apt-get install make gcc flex
wget http://oss.linbit.com/drbd/8.4/drbd-8.4.4.tar.gz
tar xzvf drbd-8.4.4.tar.gz
cd drbd-8.4.4/
./configure --prefix=/usr --localstatedir=/var --sysconfdir=/etc --with-km
make all
make install

Configure both systems to be aware of each other without DNS, in /etc/hosts:

192.168.100.10 server1
192.168.100.11 server2

Create a configuration file at /etc/drbd.d/disk.res

resource r0 {
protocol C;
syncer { rate 1000M; }
startup {
wfc-timeout 15;
degr-wfc-timeout 60;
become-primary-on both;
}
net {
#requires a clustered filesystem (ocfs2) for two primaries mounted simultaneously
allow-two-primaries;
after-sb-0pri discard-zero-changes;
after-sb-1pri discard-secondary;
after-sb-2pri disconnect;
cram-hmac-alg sha1;
shared-secret "sharedsanconfigsecret";
}
on server1 {
device /dev/drbd0;
disk /dev/xvdb;
address 192.168.100.10:7788;
meta-disk internal;
}
on server2 {
device /dev/drbd0;
disk /dev/xvdb;
address 192.168.100.11:7788;
meta-disk internal;
}
}

 

Configure DRBD to start on reboot, verify that DRBD is running on both machines, then reboot and verify again:

update-rc.d drbd defaults
/etc/init.d/drbd start
drbdadm -- --force create-md r0
drbdadm up r0
cat /proc/drbd

At this point you should see that both devices are connected, Secondary/Secondary and Inconsistent/Inconsistent.
Now we start the sync fresh, on server1 only; both sides are blank, so DRBD should manage any changes from here on. cat /proc/drbd will show UpToDate/UpToDate.
Then we mark both primary and reboot to verify everything comes back up:

server1>drbdadm -- --clear-bitmap new-current-uuid r0
server1>drbdadm primary r0
server2>drbdadm primary r0
server2>reboot
server1>reboot

I took a snapshot at this point.
Now it is time to set up the OCFS2 clustered file system on top of the device. First set up /etc/ocfs2/cluster.conf:

cluster:
	node_count = 2
	name = mycluster

node:
	ip_port = 7777
	ip_address = 192.168.100.10
	number = 1
	name = server1
	cluster = mycluster

node:
	ip_port = 7777
	ip_address = 192.168.100.11
	number = 2
	name = server2
	cluster = mycluster

Get the needed packages, configure them, and set them up to start at boot. When running dpkg-reconfigure, remember to enter the name of the cluster you want to start at boot (mycluster). Run the commands below on both machines:

apt-get install ocfs2-tools
dpkg-reconfigure ocfs2-tools
mkfs.ocfs2 -L mycluster /dev/drbd0 #only run this on server1
mkdir -p /data
echo "/dev/drbd0  /data  ocfs2  noauto,noatime,nodiratime,_netdev  0 0" >> /etc/fstab
mount /data
touch /data/testfile.`hostname`
stat /data/testfile.*
rm /data/testfile* # you will only have to run this on one machine
reboot

So, everything should be running on both computers at this point; when things come back up, make sure everything is connected.
You can run these commands from either server:

/etc/init.d/o2cb status
cat /proc/drbd