Asides
AWS Auto Scaling Group – CodeDeploy Challenges
First, here is my setup:
- A single development / test server in the AWS cloud, backed by a separate Git repository.
- When code is completed in the development environment it is committed to the development branch (using whichever branching scheme best fits the project).
- At the same time the code is merged to the test branch, and the code is available for client testing on the ‘test_stage’ site if they would like.
- Then, on an as-needed basis, the code in the test branch (on the test_stage server) is deployed to AWS using their CodeDeploy API:
- git archive test -> deploy.zip
- upload the file an S3 bucket (s3cmd)
- register the zip file as a revision using the AWS Register Revision API call
- This creates a file that can be deployed to any deployment group
- I set up two deployment groups in my AWS account, test and live.
- When the client is ready, I run a script which deploys the zipped-up revision to the test server, where they are able to look at it and approve.
- Then I use the same method, but deploy it to the www (live) deployment group instead.
(The complexities of setting this up are deeper than I am going in this article, but for future prospects, all of this programming knowledge is stored in our deploy.php file. A rough sketch of the equivalent command line steps follows below.)
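For reference, here is a minimal sketch of those packaging and deployment steps using s3cmd and the AWS CLI. It is not our actual deploy.php, and the application, bucket and deployment group names are placeholders; substitute your own.

#!/bin/bash
# Hypothetical names - substitute your own application, bucket and groups
APP=myapp
BUCKET=my-deploy-bucket
KEY=deploy.zip

# 1. Archive the test branch into a zip bundle
git archive --format=zip -o deploy.zip test

# 2. Upload the bundle to an S3 bucket
s3cmd put deploy.zip s3://$BUCKET/$KEY

# 3. Register the bundle as a CodeDeploy revision
aws deploy register-application-revision --application-name $APP \
    --s3-location bucket=$BUCKET,key=$KEY,bundleType=zip

# 4. Deploy the revision to the test deployment group (swap in www for live)
aws deploy create-deployment --application-name $APP \
    --deployment-group-name test \
    --s3-location bucket=$BUCKET,key=$KEY,bundleType=zip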
A couple of tricks “they” don’t tell you:
- Errors can be difficult to debug – if you configure the CodeDeploy agent to do more verbose logging, it can help you determine what some of the errors were.
- update /etc/codedeploy-agent/conf/codedeployagent.yml and set :verbose: to true
- restart the service: /etc/init.d/codedeploy-agent restart (it can take several minutes to restart, this is normal)
- tail the log file to watch a deployment in real time, or investigate it after the fact (tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log)
- Deploying a revision to servers while they may be going through some termination instability will likely cause your deployment to fail when one of your servers terminates.
- To prevent this, update the Auto Scaling group to have a minimum and a maximum of the same number of servers for the duration of the deployment, and do not put it under load during that window – scale-in or scale-out events during the 10 – 15 minutes (up to 2 hours) a deployment can take will cause errors. (See the CLI sketch after this list.)
- Depending on the load on your servers, your deployment could take a lot of CPU, which could trigger an Auto Scaling alarm and spin up new instances or send you an email. There is no single correct way to deal with this, but it is a good idea to know about it before you deploy.
- Finally, the item that prompted me to write this: it appears that attempting to deploy a revision to an Auto Scaling group can cause some failures.
- The obvious one is that the deployment will fail if it is attempted while a server is shutting down.
- However, it seems that if you have decided to upgrade your AMI and your Launch Configuration, a deployment will fail. For me, it actually caused a key failure when logging in as well (this could have been because of multiple server terminations, after which another server took over the IPs within a few minutes). Anyway, use much caution around these things.
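As a rough example of pinning the group size before a deployment, something along these lines works with the AWS CLI (the group name and size here are placeholders for your own values):

# hold the group at exactly 2 instances while the deployment runs
aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-asg \
    --min-size 2 --max-size 2 --desired-capacity 2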
UPDATE:
Well, the problem was actually that my ‘afterinstall.sh’ script was cleaning up the /opt/codedeploy-agent/ directory (so we didn’t run out of space after a couple dozen deployments), but in doing so I was also removing the appspec.yml file.
So I updated the command that runs in the AfterInstall hook to be:
/usr/bin/find /opt/codedeploy-agent/deployment-root/ -mindepth 2 -mtime +1 -not -path '*deployment-instruction*' -delete
Debugging CodeDeployment on AWS
This article is being written well after I had already installed the CodeDeploy agent on an Ubuntu server, created an AMI out of it and set it up as an auto-launch server from a Scaling Group.
I am documenting the process I went through to dig into the error a bit more; this helps to identify and remember where the log files are and how to get additional information, even if the issue is never the same again.
Issues with running the CodeDeploy agent showed up as an error in the log:
more /var/log/aws/codedeploy-agent/codedeploy-agent.log
put_host_command_complete(command_status:"Failed",diagnostics:{format:"JSON",payload:"{"error_code":5,"script_name":"","message":"not opened for reading","log":""}"
I decided this was not enough information to troubleshoot the error, so I had to dig in and find a way to make the agent more verbose. I found this file; just update :verbose: from false to true:
vi /etc/codedeploy-agent/conf/codedeployagent.yml
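The relevant line in codedeployagent.yml looks roughly like this (the file's other keys are omitted here):

:verbose: true   # was false by default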
Then restart the codedeploy-agent
/etc/init.d/codedeploy-agent restart
This can take quite a while since it runs quite a bit of background installing and checking for duplicate processes. But once it is complete, you can check that the process is running again.
ps ax|grep codedeploy
Once this is running in verbose mode, monitor the log
tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log
Then re-run the deployment and view the results in the log. The most useful thing for me was to grep for the folder that the more specific error information was written to.
grep -i "Creating deployment" /var/log/aws/codedeploy-agent/codedeploy-agent.log
This showed me the folder that all of the code was going to be extracted to. Since there was an error, the system actually dumped the contents of the error into a file called bundle.tar in the folder it would have extracted to.
cat /opt/codedeploy-agent/deployment-root/7ddce865-0611-45f0-bf74-459fcf806f23/d-YK4NWBJD7/bundle.tar
This returned an error from S3 showing that CodeDeploy was having an error downloading the bundle from S3, so I had to add S3 read access to the instance's policy as well.
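For reference, a minimal policy along these lines, attached to the instance profile's role, grants that read access (the bucket name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [ "s3:Get*", "s3:List*" ],
      "Resource": [
        "arn:aws:s3:::my-deploy-bucket",
        "arn:aws:s3:::my-deploy-bucket/*"
      ]
    }
  ]
}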
InstanceAgent::CodeDeployPlugin::CommandPoller: Missing credentials – Debug / My Fix
I created a Policy and Launch Configuration according to this documentation
http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-create-service-role.html
However I still received this error
InstanceAgent::CodeDeployPlugin::CommandPoller: Missing credentials - please check if this instance was started with an IAM instance profile
The problem lay somewhere in how I had set up and launched the server, so I set about exploring the server to evaluate and confirm the ‘missing credentials’.
Using these checks I would be able to confirm the reason behind these errors (and therefore the problem with actually deploying code):
First, run this command to check what roles were ‘requested’:
echo `wget http://169.254.169.254/latest/meta-data/iam/security-credentials/ -O - -q `
This command will return the list of Roles that were given to your server. You can then extend the request
MyRoleName
Then append the returned role name to the end of the URL to see the results of attempting to assume the role; in my case this error was displayed:
{ "Code" : "AssumeRoleUnauthorizedAccess", "Message" : "EC2 cannot assume the role MyRoleName. Please see documentation at http://docs.amazonwebservices.com/IAM/latest/UserGuide/RolesTroubleshooting.html.", "LastUpdated" : "2015-02-17T04:38:25Z" }
Basically, the EC2 instance did not have permission to assume the role MyRoleName. This could be due to the permissions of the development account that was used to start the Scaling Group which launched the EC2 instance. To test this,
- I log in with an administrator account
- recreate the scaling group, identical
- login to the server
- run the echo `wget http://169.254.169.254/latest/meta-data/iam/security-credentials/ -O - -q ` command
This didn’t change anything, so I thought I would look into the specifics of why we have a role that should be able to be assumed, but which returns a message explaining that EC2 cannot assume the role.
So I looked back at the article http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-create-service-role.html and found somewhat of an anomaly: the article suggests that we build a trust relationship which gives the ‘codedeploy.us-west-2.amazonaws.com’ service the ability to use this policy. However, that does not jibe with the messages I see in the logs stating that EC2 is not able to assume the role. So I opened the trust relationship under the Role and added the ‘ec2.amazonaws.com’ line shown below:
{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "Service": [ "ec2.amazonaws.com", "codedeploy.us-east-1.amazonaws.com", "codedeploy.us-west-2.amazonaws.com"
] }, "Action": "sts:AssumeRole" } ] }
Lo and behold, it worked. Now when I run the following I get a success message.
echo `wget http://169.254.169.254/latest/meta-data/iam/security-credentials/MyRoleName -O - -q `
So, it turns out the problem is that the article is either incorrect or was written to give the CodeDeploy service the ability to work on EC2, but not to give the EC2 servers access to the CodeDeploy service. (The codedeploy.us-east-1 service is also required in order to give the deployment group the IAM role.)
While it was difficult to find the solution, the troubleshooting steps above are useful for identifying related or other issues. I hope you find some use from these tools.
Michael Blood
Compare the packages (deb / apache) on two debian/ubuntu servers
Debian / Ubuntu
I worked up this command and I don’t want to lose it
#diff <(dpkg -l|awk '/ii /{print $2}') <(ssh 111.222.33.44 "dpkg -l"|awk '/ii /{print $2}')|grep '>'|sed -e 's/>//'
This command shows a list of all of the packages installed on 111.222.33.44 that are not installed on the current machine
To make this work for you, just update the ssh 111.222.33.44 command to point to the server you want to compare it with.
I used this command to actually create my apt-get install command
#apt-get install `diff <(dpkg -l|awk '/ii /{print $2}') <(ssh 111.222.33.44 "dpkg -l"|awk '/ii /{print $2}')|grep '>'|sed -e 's/>//'`
Just be careful that you have the same Linux kernels etc, or you may be installing more than you expect
Apache
The same thing can be done to see if we have the same Apache modules enabled on both machines
diff <(a2query -m|awk '{print $1}'|sort) <(ssh 111.222.33.44 a2query -m|awk '{print $1}'|sort)
This will show you which modules are / are not enabled on the different machines
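In the same spirit as the apt-get example above, something like the following should enable the missing modules on the current machine (review the list first, since some modules need additional configuration):

sudo a2enmod `diff <(a2query -m|awk '{print $1}'|sort) <(ssh 111.222.33.44 "a2query -m"|awk '{print $1}'|sort)|grep '^>'|sed -e 's/> //'`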
Installing s3tools on SUSE using yast
We manage many servers with multiple flavors of Linux. All of them use either apt or yum for package management.
The concept of yast is the same as apt and yum, but was new to me, so I thought I would document it.
Run yast, which pulls up an ncurses Control Center, then use the arrows to go to Software -> Software Repositories
#yast
Use the arrows or press Alt+A to add a new repository
I selected Specify URL (the default) and pressed Alt+x to go to the next screen, where I typed this into the URL box
http://s3tools.org/repo/SLE_11/
and then pressed Alt+n to continue.
Now I have a new repository and I press Alt+q to quit.
At the command line I typed
#yast2 -i s3cmd
And s3cmd is installed, all in 15 minutes!
Bulk Domain NS, MX and A record lookup tool
Summary: We have two tools to help you lookup information on domains quickly
- quick-domain-research.php – See the NS, MX, A records and IPs for multiple domains in one table
- nameserver-compare.php – Compare NS, MX, A records for multiple domains, against multiple Name Servers
Occasionally, we come across some sort of project in which we have to work through a list of multiple domain names and make some sort of changes.
In some cases we simply have to update contact records, in other cases we have to determine ownership, hosting and mail setups so we can assist with an ownership transfer.
There are a plethora of domain tools out there which help one domain at a time, but we were hard-pressed to find a tool that could do a bulk lookup of multiple domains with table-based output.
So, we built the tool
https://www.matraex.com/quick-domain-research.php
This tool shows the following (a command-line sketch of the same lookups appears after this list):
- A records for the root domain (@) and the (www) domain.
- MX records for the root domain
- NS records for the root domain
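For anyone who wants the same data from the command line, the lookups the tool performs are roughly equivalent to this sketch (domains.txt is a hypothetical file with one domain per line):

#!/bin/bash
# domains.txt - one domain per line (hypothetical input file)
while read domain; do
  echo "== $domain =="
  echo "NS:  $(dig +short "$domain" NS | tr '\n' ' ')"
  echo "MX:  $(dig +short "$domain" MX | tr '\n' ' ')"
  echo "A:   $(dig +short "$domain" A | tr '\n' ' ')"
  echo "www: $(dig +short "www.$domain" A | tr '\n' ' ')"
done < domains.txt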
This tool was thrown together quickly to help us identify whether an OLD but active nameserver, which had dozens of domain names on it, was actually being used for the domains.
We were able to delete more than 20 domains cluttering up the DNS entries.
Additionally we were able to clean up associated webservers that had not been cleaned of hosting accounts after a client left the account.
Some future ideas which will make their way in next time:
- Display whois information for the domain
- Optionally group the domains based on which name servers, whois records or www C class they are hosted at
Update 11/28/2015 by Michael Blood
Since this original post, we have added several new features including the ability to upload a file with a large batch upload, and download a CSV file with the results. You can see all of the details in this Enhanced Bulk Domain NS, MX and A record lookup tool post.
HANA Database Backup using Cron
To create backups using HANA we
- create a user
- create a user key
- create a script
- schedule the script to run via cron job
1) Create a HANA user to use specifically for Cron Job – Daily HANA Database Backup
Open an SQL window and run the following to create a user, grant them the backup operator privilege, and disable the password lifetime so the password does not expire and break the automated connection.
create user backup_operator password Xxxxxxxx1;
grant backup operator to backup_operator;
alter user backup_operator disable password lifetime;
2) Create a HANA user key to be used for automated connection to the HANA database by the Backup User
Log in to the server console using PuTTY.
Under the hdbclient/ directory, use the hdbuserstore command to create a key; -i makes it prompt you to type in the password of the user created above.
#/vol/vol_HDB/sysfiles/hana/shared/HDB/hdbclient/hdbuserstore -i SET cronbackupkey localhost:30015 backup_operator
Next, list all keys so that we know where the keys are that we will use to run the cron job automatically.
#/vol/vol_HDB/sysfiles/hana/shared/HDB/hdbclient/hdbuserstore list
DATA FILE : /home/backupuser/.hdb/sid-hdb/SSFS_HDB.DAT
KEY CRONBACKUPKEY
  ENV : localhost:30015
  USER: backup_operator
Run a quick check of the new key by running a test command; this is where the password you entered above will be used.
#/vol/vol_HDB/sysfiles/hana/shared/HDB/hdbclient/hdbsql -U cronbackupkey "select now() from dummy;"|more
CURRENT_TIMESTAMP
"2014-12-26 21:46:24.799000000"
1 row selected (overall time 600 usec; server time 98 usec)
If you run into a situation where the password is wrong, you may end up with a message such as this:
* 416: user is locked; try again later: lock time is 1440 minutes; user is locked until 2014-12-27 21:35:34.3170000 (given in UTC) [1440,2014-12-27 21:35:34.3170000] SQLSTATE: HY000
If that happens, fix the password by running hdbuserstore DELETE cronbackupkey and then the hdbuserstore -i SET command above with the correct password, then run the following command to allow the user access again.
alter user backup_operator RESET CONNECT ATTEMPTS;
Using this method, the automated backup finally came together. Keep in mind that the password for connecting to the database is stored in the key, so if you update the user's password in the database, you will need to also update the password stored in the key.
3) Create a bash script to backup the HANA database to a time and date file
Create a bash script file you can run from a cron job that does the work of creating a backup file. I create a wrapper script instead of running a command directly from the cron job, so I can decide in the script whether I would like to receive an email with the output of the command.
#touch /customcommands/hanabackup
#chown hdbadm.sapsys /customcommands/hanabackup
#chmod 754 /customcommands/hanabackup
#vi /customcommands/hanabackup
#!/bin/bash
tmpfile=/tmp/`date +%s`
textsearch="successful"
sqlcommand="BACKUP DATA USING FILE ('$(date +%F_%k%M)')"
/vol/vol_HDB/sysfiles/hana/shared/HDB/hdbclient/hdbsql -U CRONBACKUPKEY "$sqlcommand" > $tmpfile
# look up the backup that just completed
/vol/vol_HDB/sysfiles/hana/shared/HDB/hdbclient/hdbsql -U CRONBACKUPKEY "SELECT top 1 * FROM M_BACKUP_CATALOG WHERE ENTRY_TYPE_NAME = 'complete data backup' ORDER BY UTC_START_TIME desc" >> $tmpfile
success=`grep $textsearch $tmpfile`
if [ "$success" == "" ]; then
    echo "HANA BACKUP FAILED: `date`"
    echo "SQL COMMAND: $sqlcommand"
    echo "TEXT NOT FOUND:$textsearch"
    echo
    echo "File Out Put Below"
    echo ---------------------------------
    echo
    cat $tmpfile
    exit 10
fi
exit 0
This script will output a message ONLY if the HANA backup fails; if it is successful, it just quietly completes.
4) Finally, set up a cron job which runs the HANA database backup once a day
Create a crontab entry to run one time per day at 1:01 am:
#crontab -e
MAILTO=myemail@myemail.com
1 1 * * * /customcommands/hanabackup
In order to test that this works, you can force a test failure by setting a data_backup_buffer_size that is too high for your system; see a recent post where we corrected this issue.
You can set a much higher limit than your system allows and confirm that the cron process sends you an email each time the backup fails.
HANA error [447] backup could not be completed, [1000002] Allocation failed – Solution
When attempting to back up a HANA database, I received an error which stopped the backup from progressing:
[447] backup could not be completed, [1000002] Allocation failed ; $size$=536870912; $name$=ChannelUtils::copy; $type$=pool; $inuse_count$=1; $allocated_size$=536870912
When I looked at which files were created as part of the backup, I found that the entire backup did not fail; it was only the ‘statisticsserver’ file that failed.
While the exact reasons for the failure are still a mystery, I found several posts online encouraging me to reduce the data_backup_buffer_size.
Since I had already had several successful backups using the default settings, I was skeptical. It turns out it did work: by reducing the size of the buffer to only 256MB instead of 512MB, the backup started to work again.
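For reference, the parameter can be changed with a statement along these lines (the value is in MB; 256 here reflects what worked for me, adjust for your own system):

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
  SET ('backup', 'data_backup_buffer_size') = '256' WITH RECONFIGURE;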
But the only thing that had changed in my HANA installation was the license, so what was it about the reduction that made it work? I have 33GB of space I can use in the system.
So, while I have been able to resolve the issue and can now create backups, I still have an open question with HANA: what is it about the data_backup_buffer_size that prevented me from using the default 512 after it had worked previously?
- Do I now have full memory?
- Am I only licensed to use a certain amount of memory and I can’t exceed it?
- Do I have another setting in one of the other .ini files which limits how the data can be used?
Any comments on how I can control how this setting is used are appreciated!
Connecting to a HANA Database using PHP from Ubuntu 14.04 LTS
I need to set up a PHP website to connect to a HANA database. The site is already installed with PHP and MySQL. Time permitting, I will also document the process of converting the existing MySQL database to HANA. The environment:
- Hana OS: SUSE Linux Enterprise Server 11 SP3 x86_64 (64-bit)
- Web Server: Ubuntu 14.04 LTS
- PHP Version:5.5.9-1ubuntu4.5
First install the php odbc package
- sudo apt-get install php5-odbc
PHP ODBC Information
#php -i|grep odbc
/etc/php5/cli/conf.d/20-odbc.ini,
/etc/php5/cli/conf.d/20-pdo_odbc.ini,
odbc
ODBC_LIBS => -lodbc
odbc.allow_persistent => On => On
odbc.check_persistent => On => On
odbc.default_cursortype => Static cursor => Static cursor
odbc.default_db => no value => no value
odbc.default_pw => no value => no value
odbc.default_user => no value => no value
odbc.defaultbinmode => return as is => return as is
odbc.defaultlrl => return up to 4096 bytes => return up to 4096 bytes
odbc.max_links => Unlimited => Unlimited
odbc.max_persistent => Unlimited => Unlimited
PDO drivers => mysql, odbc
Install HANA Database Client
Before starting, since the web server is on Amazon, I just took a quick snapshot so I could roll back to a ‘Pre HANA Client’ state on the server if I needed to.
- Download the client using the SAP HANA Client download link; you must be a member. Sign up as a public user with all of your information; you will be required to give them some information about your profession, industry and company. There are several screens and it takes quite a few before you get to the download. BTW: if this link is broken, please email me and let me know (if this link does not work, try it here).
- Extract the file into a temporary location and cd into the working directory
- cd /tmp
- tar -xvzf /home/michael/sap_hana_client.tgz
- cd sap_hana_linux64_client_rev80/
- Next, ensure that the scripts that need to execute have execute privileges
- chmod +x hdbinst
- chmod +x hdbsetup
- chmod +x hdbuninst
- chmod +x instruntime/sdbrun
- Finally, run the installation program
- sudo ./hdbinst -a client
- press <enter> to accept the default installation path: /usr/sap/hdbclient
- You will see information like the following; then you can take a look at the log file to see any other details of the installation.
Checking installation...
Preparing package 'Python Runtime'...
Preparing package 'Product Manifest'...
Preparing package 'SQLDBC'...
Preparing package 'REPOTOOLS'...
Preparing package 'Python DB API'...
Preparing package 'ODBC'...
Preparing package 'JDBC'...
Preparing package 'HALM Client'...
Preparing package 'Client Installer'...
Installing SAP HANA Database Client to /usr/sap/hdbclient...
Installing package 'Python Runtime'...
Installing package 'Product Manifest'...
Installing package 'SQLDBC'...
Installing package 'REPOTOOLS'...
Installing package 'Python DB API'...
Installing package 'ODBC'...
Installing package 'JDBC'...
Installing package 'HALM Client'...
Installing package 'Client Installer'...
Installation done Log file written to '/var/tmp/hdb_client_2014-12-18_02.06.04/hdbinst_client.log'
- more /var/tmp/hdb_client_2014-12-18_02.06.04/hdbinst_client.log
I saw some reports online that you would have to install ‘libaio-dev’. I did not have to, but in case you do, here is the command:
- sudo apt-get install libaio-dev
Now you can run the HANA client
- /usr/sap/hdbclient/hdbsql
Ignoring unknown extended header keyword
This happened to me; it is most likely because the tgz file was created on BSD. To address this, install bsdtar and replace the ‘tar’ command above with the bsdtar command below.
- apt-get install bsdtar
- bsdtar -xvzf /home/michael/sap_hana_client.tgz
Testing that the client can connect to a HANA server
To confirm that the client works, run the client and connect to the HANA server
- # /usr/sap/hdbclient/hdbsql
Welcome to the SAP HANA Database interactive terminal
Type: h for help with commands
q to quit
- hdbsql=> c -n xxx.xxx.xxx.xxxx:port -u System -p mypass
I had to type a port, most likely because the server was not installed on the default port.
Creating an ODBC Connection to HANA Server
First Install the unixodbc packages
- sudo apt-get install unixodbc unixodbc-dev odbcinst
- ls /etc/odbc*
/etc/odbc.ini /etc/odbcinst.ini
Open the /etc/odbc.ini file and type the following
[hanadb]
Driver = /usr/sap/hdbclient/libodbcHDB.so
ServerNode =xxx.xxx.xxx.xxx:30015
Note that the Driver path must match the location you installed the client to; if these do not match you will receive a cryptic [ISQL]ERROR: Could not SQLConnect message.
Connect using your username and password
- isql hanadb username password
+---------------------------------------+
| Connected! |
| sql-statement |
| help [tablename] |
| quit |
+---------------------------------------+
Implementing the HANA ODBC in PHP
As long as you have been able to connect via the ODBC methods above, PHP should be a breeze.
Just place the following in your code.
- $link=odbc_connect("hanadb","myusername","mypassword", SQL_CUR_USE_ODBC);
- $result = odbc_exec($link,"select * from sys.m_tables ");
- while($arr=odbc_fetch_array($result)) print_r($arr);
A couple of gotchas:
- HANA does not have the concept of ‘databases’; it uses schemas, so the user you connect with must be set up to use the schema you want by default.
- Or you can prefix every table with your Schema name
- Or you can set the default context to the one you want before each connection
- I accomplished this by adding an extra line right after the odbc_connect(): odbc_exec($link, "set schema MYSCHEMA"); (see the combined example below)
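Putting the pieces together, a minimal sketch of the whole connection might look like this (the "hanadb" DSN comes from /etc/odbc.ini; the credentials and the MYSCHEMA schema name are placeholders for your own values):

<?php
// DSN "hanadb" is defined in /etc/odbc.ini; credentials and MYSCHEMA are placeholders
$link = odbc_connect("hanadb", "myusername", "mypassword", SQL_CUR_USE_ODBC);
if (!$link) {
    die("Connection failed: " . odbc_errormsg());
}
odbc_exec($link, "set schema MYSCHEMA");             // pick the default schema
$result = odbc_exec($link, "select * from sys.m_tables");
while ($arr = odbc_fetch_array($result)) {
    print_r($arr);
}
odbc_close($link);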
That’s it! More posts to come on HANA.
Thanks to this post for help http://scn.sap.com/community/developer-center/hana/blog/2012/09/14/install-hana-client-on-ubuntu
Converting an Assembla SVN repository to GIT
Move a Subversion repository hosted at Assembla to Git. For this, we expect that you already have a Git server and repository set up, with permission for you to write to it.
- first create 4 directories
- /data/code/svndump – to download your subversion repositories into
- /data/code/tmpsvn – to load your temporary subversion repository into after downloading
- /data/code/tmpgit1 – to create a temp git repository from your svn repo
- /data/code/tmpgit2 – to create a bare reformatted git repository from your first git
- Next, open the project / repo in Assembla
- Click Import / Export > Download Dump
- right-click on the link and copy the link, then open a command line on a machine with svn and git
- cd /data/code
- wget -O svndump/myrepo.gz "https://linkcopiedfromassembla" # download the file, quoting the URL to escape ampersands
- svnadmin create tmpsvn/myrepo # create an empty repository
- gunzip -c svndump/myrepo.gz | svnadmin load tmpsvn/myrepo
- svn log -q file:///data/code/tmpsvn/myrepo | awk -F '|' '/^r/ {sub("^ ", "", $2); sub(" $", "", $2); print $2" = "$2" <"$2">"}' | sort -u > users.txt
- git svn clone file:///data/code/tmpsvn/myrepo/ --no-metadata -A users.txt --stdlayout tmpgit1/myrepo
- cd tmpgit1/myrepo
- touch .gitignore # if I don't have any ignore files
- git svn show-ignore > .gitignore # if I do have ignore files in svn; this will return an error if there are no ignore files
- git add .gitignore
- git commit -m 'Converting Properties from SVN to GIT' .gitignore
- cd /data/code
- mkdir tmpgit2/myrepo
- cd tmpgit2/myrepo
- git init --bare
- git symbolic-ref HEAD refs/heads/trunk
- Go back to the tmpgit1 repo and push it to the new bare repo and rename the ‘trunk’ to ‘master’
- cd ../../tmpgit1/myrepo
- git remote add bare /data/code/tmpgit2/myrepo
- git config remote.bare.push 'refs/remotes/*:refs/heads/*'
- git push bare
- cd /data/code/tmpgit2/myrepo
- git branch -m trunk master
- Clean up branches and tags (thanks to http://john.albin.net/git/convert-subversion-to-git for most of this)
- git for-each-ref --format='%(refname)' refs/heads/tags |
cut -d / -f 4 |
while read ref
do
git tag "$ref" "refs/heads/tags/$ref";
git branch -D "tags/$ref";
done
Now you have a correctly formatted repo at /data/code/tmpgit2/myrepo and you can push it to your final git repository:
- git remote add origin <your final git repository url >
- git push origin master
This may take a bit of time depending on how much code you have. But once it is complete you can browse your repository in stash.
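As an optional sanity check before or after that final push, a few standard git commands will confirm that the history, branches and tags all came across:
- git log --oneline | head # recent commits converted from SVN revisions
- git branch -a # branches converted from SVN branches
- git tag # tags converted from SVN tags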