Connecting to a database with PHP

Install these packages:

#apt-get install apache2
#apt-get install mysql-server
#apt-get install php5
#apt-get install php5-mysql

Create a test user, password and database

At the SQL server, log into mysql:

#mysql -u root -p

Issue the following commands to create a user "test" with a password "password":

CREATE USER 'test'@'localhost' IDENTIFIED BY 'password';
CREATE USER 'test'@'%' IDENTIFIED BY 'password';

GRANT ALL ON *.* TO 'test'@'localhost';
GRANT ALL ON *.* TO 'test'@'%';

CREATE DATABASE instruments;

Exit mysql:

\q

Log back in as the user you just created, attaching to the new database:

mysql -u test -p instruments

Execute a

\s

to see the status. Verify the user and database.
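
If you want an explicit check beyond \s, these two statements (run from the same mysql session) show the current user and the databases it can see:

SELECT USER();
SHOW DATABASES;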

Test PHP Functionality:

Create a file named "something.php" and insert the following text:

<?php
echo 'hello world' . time();
/* mysqli_connect(); print_r(mysqli_query('select now()')); */
?>

Place this file in the /var/www directory

Open a browser and point to that file:

http://<your server>/something.php

You should see hello world followed by the current Unix timestamp.

To test your connection to the database via PHP:

Create a file with the following text and name it "something.php".

Edit the line $db = mysql_connect("206.207.94.34","test","password"); to reflect your server & user.

<?php
$db = mysql_connect("206.207.94.34","test","password");
if (!$db) {
die("Database connection failed miserably: " . mysql_error());
}
echo "Database Success!!!";
$db_select = mysql_select_db("instruments",$db);
if (!$db_select) {
die("Database selection also failed miserably: " . mysql_error());
}
?>
<html>
<head>
<title>Step 3</title>
</head>
<body>
<?php
$result = mysql_query("SELECT * FROM mytable", $db);
if (!$result) {
die("Database query failed: " . mysql_error());
}
?>
</body>
</html>

Place this file in the /var/www directory

Open a browser and point to that file:

http://<your server>/something.php

Success!!!
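
Note: the mysql_* functions used above are deprecated and were removed in PHP 7. If your PHP lacks them, here is a minimal equivalent sketch using mysqli, assuming the same server, user, and database as above:

<?php
// Minimal mysqli sketch: same hypothetical server/user/database as above.
$db = mysqli_connect("206.207.94.34", "test", "password", "instruments");
if (!$db) {
die("Database connection failed miserably: " . mysqli_connect_error());
}
echo "Database Success!!!";
// Run a trivial query to prove the connection works end to end.
$result = mysqli_query($db, "SELECT NOW()");
if (!$result) {
die("Database query failed: " . mysqli_error($db));
}
print_r(mysqli_fetch_row($result));
?>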

HANDY MYSQL COMMANDS:

Note that all text commands must be first on line and end with ';'
? (\?) Synonym for `help'.
clear (\c) Clear the current input statement.
connect (\r) Reconnect to the server. Optional arguments are db and host.
delimiter (\d) Set statement delimiter.
edit (\e) Edit command with $EDITOR.
ego (\G) Send command to mysql server, display result vertically.
exit (\q) Exit mysql. Same as quit.
go (\g) Send command to mysql server.
help (\h) Display this help.
nopager (\n) Disable pager, print to stdout.
notee (\t) Don't write into outfile.
pager (\P) Set PAGER [to_pager]. Print the query results via PAGER.
print (\p) Print current command.
prompt (\R) Change your mysql prompt.
quit (\q) Quit mysql.
rehash (\#) Rebuild completion hash.
source (\.) Execute an SQL script file. Takes a file name as an argument.
status (\s) Get status information from the server.
system (\!) Execute a system shell command.
tee (\T) Set outfile [to_outfile]. Append everything into given outfile.
use (\u) Use another database. Takes database name as argument.
charset (\C) Switch to another charset. Might be needed for processing binlog with multi-byte charsets.
warnings (\W) Show warnings after every statement.
nowarning (\w) Don't show warnings after every statement.

For server side help, type 'help contents'

Matt Long
01/27/2015

HANA Database Backup using Cron

To create backups using HANA we

  1. create a user
  2. create a user key
  3. create a script
  4. schedule the script to run via cron job

1) Create a HANA user to use specifically for Cron Job – Daily HANA Database Backup

Open an SQL window and run the following to create a user, grant it the backup operator privilege, and disable the password lifetime so the stored password does not expire:

create user backup_operator password Xxxxxxxx1;
grant backup operator to backup_operator;
alter user backup_operator disable password lifetime;

2)  Create a HANA user key to be used for automated connection to the HANA database by the Backup User

Log in to the server console using PuTTY.
Under the hdbclient/ directory, use the hdbuserstore command to create a key; -i prompts you to enter the password for the user created above.

#/vol/vol_HDB/sysfiles/hana/shared/HDB/hdbclient/hdbuserstore -i SET cronbackupkey localhost:30015 backup_operator

Next, list all keys so that we know where the key we will use for the automated cron job is stored.

#/vol/vol_HDB/sysfiles/hana/shared/HDB/hdbclient/hdbuserstore list
DATA FILE : /home/backupuser/.hdb/sid-hdb/SSFS_HDB.DAT

KEY CRONBACKUPKEY
 ENV : localhost:30015
 USER: backup_operator

Run a quick check of the new key with a test command; this is where the password you entered above gets used:

#/vol/vol_HDB/sysfiles/hana/shared/HDB/hdbclient/hdbsql -U cronbackupkey "select now() from dummy;"|more
CURRENT_TIMESTAMP
"2014-12-26 21:46:24.799000000"
1 row selected (overall time 600 usec; server time 98 usec)

If you run into a situation where the password is wrong, you may end up with a message such as this:

* 416: user is locked; try again later: lock time is 1440 minutes; user is locked until 2014-12-27 21:35:34.3170000 (given in UTC) [1440,2014-12-27 21:35:34.3170000] SQLSTATE: HY000

If that happens, fix the stored password by running the hdbuserstore DELETE cronbackupkey and hdbuserstore -i SET commands above with the correct password, then run the following command to allow the user access again.

alter user backup_operator RESET CONNECT ATTEMPTS;

Using this method, the automated backup finally came together. Keep in mind that the password for connecting to the database is stored in the key, so if you update the user's password in the database, you will also need to update the password stored in the key.
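
For reference, the refresh sequence might look like this (a sketch using the same paths and key name as above; DELETE does not prompt for a password):

#/vol/vol_HDB/sysfiles/hana/shared/HDB/hdbclient/hdbuserstore DELETE cronbackupkey
#/vol/vol_HDB/sysfiles/hana/shared/HDB/hdbclient/hdbuserstore -i SET cronbackupkey localhost:30015 backup_operator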

 

3) Create a bash script to backup the HANA database to a time and date file

Create a bash script file you can run from a cron job that does the work of creating a backup file. I create a wrapper script instead of running a command directly from the cron job, so I can decide in the script whether I would like to receive an email with the output of the command.

#touch /customcommands/hanabackup
#chown hdbadm.sapsys /customcommands/hanabackup
#chmod 754 /customcommands/hanabackup
#vi /customcommands/hanabackup
#!/bin/bash
tmpfile=/tmp/`date +%s`
textsearch="successful"
sqlcommand="BACKUP DATA USING FILE ('$(date +%F_%k%M)')"
/vol/vol_HDB/sysfiles/hana/shared/HDB/hdbclient/hdbsql -U CRONBACKUPKEY "$sqlcommand">$tmpfile
#look up the backup that just completed
/vol/vol_HDB/sysfiles/hana/shared/HDB/hdbclient/hdbsql -U CRONBACKUPKEY "SELECT top 1 *
 FROM M_BACKUP_CATALOG
 WHERE ENTRY_TYPE_NAME = 'complete data backup'
 ORDER BY UTC_START_TIME desc">>$tmpfile
success=`grep "$textsearch" $tmpfile`
if [ "$success" == "" ]; then
 echo "HANA BACKUP FAILED: `date`"
 echo "SQL COMMAND: $sqlcommand"
 echo "TEXT NOT FOUND: $textsearch"
 echo
 echo "File Output Below"
 echo ---------------------------------
 echo
 cat $tmpfile
 exit 10
fi
exit 0

This script will output a message ONLY if the HANA backup fails; if it is successful, it just quietly completes.
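
To try the script by hand before scheduling it, run it directly and check the exit code (per the script above, 0 means success and 10 means failure):

#/customcommands/hanabackup; echo "exit code: $?"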

 

4) Finally, set up a cron job which runs the HANA database backup once a day

Create a crontab entry to run one time per day, at 1:01 am:

#crontab -e
MAILTO=myemail@myemail.com
1 1 * * * /customcommands/hanabackup

To test that this works, you can force a failure by setting a data_backup_buffer_size that is too high for your system; see a recent post where we corrected this issue.
Set a much higher limit than your system allows, and confirm that the cron process sends you an email each time the backup fails.

HANA error [447] backup could not be completed, [1000002] Allocation failed – Solution

When attempting to back up a HANA database, I received an error which stopped the backup from progressing:

[447] backup could not be completed, [1000002] Allocation failed ; $size$=536870912; $name$=ChannelUtils::copy; $type$=pool; $inuse_count$=1; $allocated_size$=536870912

When I looked at which files were created as part of the backup, I found that the entire backup had not failed; it was only the 'statisticsserver' file that failed.

While the exact reason for the failure is still a mystery, I found several posts online encouraging me to reduce the data_backup_buffer_size.
Since I had already had several successful backups using the default setting, I was skeptical. It turns out it did work: by reducing the buffer from 512MB to only 256MB, the backup started working again.
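
For reference, the buffer is set in the backup section of global.ini. One way to change it from an SQL console (a sketch; the value is in MB and 'SYSTEM' applies it system-wide) is:

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('backup', 'data_backup_buffer_size') = '256'
WITH RECONFIGURE;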

 

But the thing that had changed in my HANA installation was the license. So what was it about the reduction that made it work? I have 33GB of space I can use in the system.

So, while I have been able to resolve the issue and can now create backups, I still have an open question with HANA: what is it about the data_backup_buffer_size that prevented me from using the default 512MB after it had worked previously?

  • Do I now have full memory?
  • Am I only licensed to use a certain amount of memory that I can't exceed?
  • Do I have another setting in one of the other .ini files which limits how the buffer can be used?

 

Any comments on how I can control how this setting is used are appreciated!

 

Two step recipe – upgrading from Postgres 8.4 to 9.3 and then implementing 9.3 hot_standby replication

Upgrade an existing PostgreSQL database from 8.4 to 9.3, then implement a 9.3 hot_standby replication server so all backups and slow select queries can run from it.

The setup is two servers: the current primary database server (which will continue to be the primary database server under 9.3; we will call it the master in the replication) and a new standby server.
First, get and install postgres 9.3 using the postgres apt repositories:

master and standby> vi /etc/apt/sources.list
   - deb http://apt.postgresql.org/pub/repos/apt/ UNAME-pgdg main
#UNAME EXAMPLES: precise, squeeze, etch, etc
master and standby> apt-get update
master> apt-get install postgresql-9.3 postgresql-client-9.3 nfs-kernel-server nfs-client
standby> apt-get install postgresql-9.3 postgresql-client-9.3 postgresql-contrib-9.3 nfs-kernel-server nfs-client
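
If you are unsure which release codename to substitute for UNAME above, this prints it on Debian/Ubuntu:

master and standby> lsb_release -cs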

Next, create some shared directories via NFS for file-based archiving:

standby> mkdir -p /data/dbsync/primaryarchive
master> mkdir -p /data/dbsync-archiveto
master> vi /etc/exports
   - /var/lib/postgresql/9.3/main 192.168.*.*(ro,sync,no_subtree_check,no_root_squash)
standby> vi /etc/exports
   - /data/dbsync/primaryarchive 192.168.*.*(rw,sync,no_subtree_check)
master> vi /etc/fstab
   - SECONDARYSERVERIP:/data/dbsync/primaryarchive /data/dbsync-archiveto nfs rw 0 1
standby> mkdir -p /mnt/livedb
standby> mount PRIMARYSERVERIP:/var/lib/postgresql/8.4/main/ /mnt/livedb
master> mount /data/dbsync-archiveto

Now, configure postgres on the master to allow replication and restart; put it on port 5433 so there are no conflicts with 8.4:

master> vi /etc/postgresql/9.3/main/pg_hba.conf
  - host replication all SECONDARYSERVERIP/32 trust
master> vi /etc/postgresql/9.3/main/postgresql.conf
  - wal_level = hot_standby
  - archive_mode = on
  - port = 5433
  - archive_command = 'test -f /data/dbsync-archiveto/archiveable && cp %p /data/dbsync-archiveto/%f'
master> /etc/init.d/postgresql restart
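
Note that the archive_command above only copies WAL segments while a flag file named archiveable exists on the shared directory; assuming that is the intent, switch archiving on by creating it on the standby's export:

standby> touch /data/dbsync/primaryarchive/archiveable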

Configure postgres on the standby to allow it to run as a hot_standby

standby> vi /etc/postgresql/9.3/main/postgresql.conf
  - restore_command = '/usr/lib/postgresql/9.3/bin/pg_standby -d -t /tmp/pgsql.trigger.5432 /data/dbsync/primaryarchive %f %p %r 2>>/var/log/postgresql/standby.log'
  - recovery_end_command = 'rm -f /tmp/pgsql.trigger.5432'
  - wal_level = hot_standby
  - hot_standby = on
standby> /etc/init.d/postgresql stop

(Note: on 9.3 the restore_command and recovery_end_command settings actually belong in recovery.conf in the data directory; the pg_basebackup -R step below creates that file.)

 

Now let's get a base backup on the standby:

standby> cd /var/lib/postgresql/9.3; mv main main.old
standby> pg_basebackup -D main -R -h192.168.120.201 -p5433 -x -Upostgres
standby> chown postgres.postgres main/ -R
standby> /etc/init.d/postgresql start

That's it! You should now have a working replication server:

primary> create table tmp as select now();
secondary> select * from tmp;

#Check the progress several ways: the postgres log, which WAL files are being recovered, and by connecting to the secondary and seeing updates from the master.

standby> tail /var/log/postgresql/postgresql-9.3-main.log | grep 'database system is ready to accept read only connections'
standby> ps ax|grep post
 - postgres: wal receiver process streaming 3/43000000
master> psql -Upostgres -p5433 -c 'select pg_switch_xlog()'
and the WAL file being recovered will advance on the standby:
standby> ps ax|grep post
 - postgres: startup process recovering 000000010000000300000037
That was all to make sure that the replication is working on 9.3. Now that I am comfortable with it working, I am going to turn off the replication, copy the data from 8.4 to 9.3, and recreate the replication.
First, let's stop the postgresql daemon on the standby server so the VERY heavy load from copying the db from 8.4 to 9.3 is not duplicated:
 
standby> /etc/init.d/postgresql stop

Next, copy the database from 8.4 to 9.3. I had heard there may be some problems converting some objects between 8.4 and 9.3, but not for me; this went great.

master> pg_dump -C -Upostgres mydatabase| psql -Upostgres -p5433
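
A quick sanity check after the copy is to compare a table's row count on both clusters (mytable here is a placeholder for one of your tables):

master> psql -Upostgres mydatabase -c 'select count(*) from mytable'
master> psql -Upostgres -p5433 mydatabase -c 'select count(*) from mytable'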

Once that is successful, let's switch ports on the 9.3 and 8.4 servers so 9.3 can take over:

master>vi /etc/postgresql/9.3/main/postgresql.conf
  - port = 5432
master>vi /etc/postgresql/8.4/main/postgresql.conf
  - port = 5433
master> /etc/init.d/postgresql restart (a port change requires a full restart, not a reload)
Last step: get a base backup and start again.
standby> cd /var/lib/postgresql/9.3; mv main main.old
standby> pg_basebackup -D main -R -h192.168.120.201 -x -Upostgres
standby> chown postgres.postgres main/ -R
standby> /etc/init.d/postgresql start
standby> rm -rf /var/lib/postgresql/9.3/main.old

Now to figure out what to do with the archive directory method we are currently using. It seems that the archive just keeps building up; when do we actually use it?

PHP to reset all primary key sequences in your postgresql database

Use the following PHP code to reset all of the primary key sequences to the max(id) currently in the db.

We use wrapper functions db_query() (which returns an array from the db when a select statement is run) and db_exec(), which runs an update or insert command against the db; a minimal sketch of both appears after the code.

[code language="php"]$sql = "SELECT t.relname as related_table,
a.attname as related_column,
s.relname as sequence_name
FROM pg_class s
JOIN pg_depend d ON d.objid = s.oid
JOIN pg_class t ON d.objid = s.oid AND d.refobjid = t.oid
JOIN pg_attribute a ON (d.refobjid, d.refobjsubid) = (a.attrelid, a.attnum)
JOIN pg_namespace n ON n.oid = s.relnamespace
WHERE s.relkind = 'S'
AND n.nspname = 'public'";
$qry = db_query($sql);

foreach ($qry as $row)
{
$outsql = "select setval('$row[sequence_name]', (select max($row[related_column]) from $row[related_table]))";
db_exec($outsql);
}[/code]
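
The db_query()/db_exec() wrappers above are site-specific; a minimal stand-in using PHP's pg_* extension (the connection string is a placeholder) might look like:

[code language="php"]// Minimal stand-ins for the wrappers above, using the pg_* extension.
$conn = pg_connect("host=localhost dbname=mydb user=postgres");

// Run a SELECT and return all rows as an array.
function db_query($sql)
{
global $conn;
$res = pg_query($conn, $sql);
return $res ? pg_fetch_all($res) : array();
}

// Run an update/insert statement for its side effects.
function db_exec($sql)
{
global $conn;
return pg_query($conn, $sql);
}[/code]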

 
