Your software developer is quitting: now what?

Matraex builds custom software applications for clients – and we often take over (read: "rescue") projects started by other developers. I wrote this article to help business owners and software founders address an issue they are often not prepared for. When a founder comes up with a software idea, they usually go looking for a developer to help implement it. When they start the project, they rarely put any thought into how to set it up so that things are easier to handle when that developer quits one day. Most of the time the founder puts full trust and faith in the developer to set everything up. And now that the developer is leaving, the world seems to be falling apart. We have had dozens of clients looking to restart a project that stalled after their developer left. Often those projects are disorganized and difficult to reconstruct. In some cases they cannot be saved. This article is an attempt to help clients gather the project so that it can be restarted with confidence.

So the truth is out!  The developer that knows everything about your project is not going to be around to finish it.  

You may have cursed a bit under your breath and tried to quickly maneuver to keep them on board – however, you are reading this article because you now know it is up to you to confirm you have possession of your project. It is up to you to take control of the software so you can direct it and secure its future. Here is how you do it.

  1. First of all, don't freak out. A calm and professional response to the news is the best way to encourage a smooth transfer of all project assets and assistance identifying 'forgotten' assets.
  2. Second, create a shared 'offboarding' document and / or folder where you can coordinate the process of collecting information about your project.
  3. Request information from your developer – let them know that the most important part of their offboarding is to help you understand the project, so you can help the next person take it over.
  4. Confirm that you have the information you are looking for – it can be tempting to just assume you have it, but by taking the time to verify it, you will identify things that were missed and save headaches later.

Don’t Freak Out

Why is this a step? Because we have found that some of our past clients needed this reminder. Too often we hear of hasty hires or a poorly orchestrated offboarding while the manager slowly realizes that the project was not going to be done the way they had previously assumed. Often, the decision is made to try to 'get it all done' in the next two weeks, and the developer is driven to spend every moment of their last two weeks writing code. This should not be the first priority! The first priority should be to ensure that possession of the project is fully in the control of your company, so the company can continue development of the project in the future.

One of the reasons we see managers 'freak out' at this stage is that they had not previously recognized that the project should have been set up to be in the company's possession all along. Ultimately, any project that was going to be owned by your company had to be in company possession anyway, right? So this is a good thing – we are going through the process of putting your project in your control – so your project will survive this 'developer change' and any 'developer changes' in the future.

Your systematic and practical approach to offboarding your current developer will result in a strong understanding of your project and an improved ability to select the ideal replacement when you restart your project.  The possession you end up with will give you the confidence to restart and finish the project in the future.

Create a shared offboarding folder

Create a shared folder where you can work with your developer over the next days and weeks to collect information about your projects. You will want to create the area yourself and set up the structure to hold the information you want.

I use Google Documents because its collaboration features are better than any other tool's, but what is important is that it works for you and your developer and that you are able to see each other's work. Create several documents and give your developer access so they can make changes where you can see them. I recommend that you create the following documents at a minimum – and be sure that each of the folders has permissions which allow your developer to make changes:

  • Document: Offboarding Checklist / Instructions
  • Document: Offboard TeamMemberName – MM/DD/YYYY – Projects / Assets / Credentials
  • Folder: Project Asset Uploads
  • Folder: Recordings

I have created an example (with some sample entries)

Request the following from your developer:

  • Fill out the offboarding document with credentials and links to each project and project source
  • List all code repositories and transfer ownership to me
  • Provide me with access to your workstation
  • Record a video of you opening the project, making a simple development change and then deploying
  • Record a video showing the workstation and development environment and configuration
  • Record a video explaining for each project what your next steps and recommendations would be

Confirm possession and control of your code

  • Build and confirm your understanding of all project assets
  • Test credentials and confirm owner level access to all project assets
  • Confirm project source links and documentation, add notes and request updates
  • Review recordings and request additional recordings.
  • Time permitting: Request a review / audit of code / transfer confirmation from a third party

Note: The lists above are presented to identify areas that should not be missed – as time permits, this article will be expanded to provide detail on each of these. Until then, please feel free to email michael@matraex.com with questions – I will answer and expand this article at the same time.

What Would You Do Differently If You Were To Build Your App Again?

Most app developers start with one but have so many ideas they end up developing multiple apps. After you've built your first app, it's time to evaluate the process and figure out what you might do differently. Let's look at a few things to consider before you decide to develop your next app.

Fewer Features

Did you include too many features in your first app? After launching, maybe you found out users felt overwhelmed by the features or simply didn't use most of them.

While you want your new app to be the coolest with the best features, too many features can cause issues. It's better to start off with an app prototype with the most important features and add features as users make suggestions.

Investing Too Much Money

Maybe your first app cost you a large sum of money and you really don't want to go down that road again. With our app prototype service, you can keep the initial investment lower and fine-tune your app before investing a larger amount of cash.

Focus on the User More

Did you spend months upon months with tunnel vision the first time around? You treated your app like a new baby joining your family and wanted it to be absolutely perfect. It turned out to be your vision and launched just as you wanted, but the users barely factored into the equation.

Getting as much input from test groups, user groups, early adopters, early investors, and others helps to make your app stronger. Our team will help you with your next app by providing an app prototype service. In just two weeks, we deliver a working app to your phone, which you can use for testing with users and presenting to investors.

Outsource the Work

Did you have your app developed in-house? This can get rather expensive, since you pay for the people working on your app day in and day out.

When you outsource the work to a development company, you pay when you need them, not when they show up to work.

When you hire Matraex, we charge a flat fee for our app prototype service. It doesn't matter how many developers we have on the project or the amount of time it takes them each day, the fee is the same.

Outsourcing your app development can help you avoid unnecessary expenses. The right developers will already know the programming language you need and will likely work faster than an in-house team. Plus, ongoing expenses only occur when you need work done instead of when employees clock in for the day.

If you're ready to develop another app, our app prototype service offers the right solution for you. Why should you have to fork over tens of thousands of dollars and wait months to see your app in action? Let us create an app prototype for you in just two weeks!

Sending Email on Servers on AWS (SES)

Sending email from PHP is often automatic and simple when set up on a traditional hosting account – however, the mail() function in PHP does not work on AWS EC2 servers. AWS requires that you use the SES (Simple Email Service) service.
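
Once the account is set up (steps below), sending from PHP looks roughly like this. This is a minimal sketch, assuming the AWS SDK for PHP v3 is installed via Composer and the server has credentials (for example an EC2 IAM role) that permit ses:SendEmail; the region and addresses are placeholders:

require 'vendor/autoload.php';

use Aws\Ses\SesClient;

$client = new SesClient([
    'version' => 'latest',
    'region'  => 'us-east-1', // the region where your SES identities are verified
]);

$result = $client->sendEmail([
    'Source'      => 'no-reply@yourdomain.com', // must be a verified domain or address
    'Destination' => ['ToAddresses' => ['someone@example.com']], // must also be verified while in the sandbox
    'Message'     => [
        'Subject' => ['Data' => 'Hello from SES'],
        'Body'    => ['Text' => ['Data' => 'This message was sent through Amazon SES.']],
    ],
]);

echo 'MessageId: ' . $result['MessageId'] . "\n";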

There are several tutorials that walk you through getting your account set up.

To send email FROM your email domains, you will need to set up and authorize the domain (see TIP #1 below):
https://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-domains.html

The step that is often missed is that outgoing email in new accounts is allowed ONLY to the email addresses you have verified. To verify individual email addresses:
https://docs.aws.amazon.com/ses/latest/DeveloperGuide/verify-email-addresses.html

To enable your account to send email from your verified domains to ANY email address, you will need to get the account removed from sandbox mode (check the Email Sending -> Sending Statistics page for a message). Click the 'Request a Sending Limit Increase' button to ask to have the account removed from the sandbox (here is a tutorial:
https://docs.aws.amazon.com/ses/latest/DeveloperGuide/request-production-access.html )

TIP #1: I would suggest using Route 53 for DNS for your domains, as authorizing the domains and getting DKIM set up takes just a couple of button clicks.

TIP #2: Make sure that you have posted a privacy policy or an email policy on your site which explains how you send emails and handle email information. Include a link to your policy in your request for removal from the sandbox.

AWS – Change your Root Password

It is not a good idea to use the Root account to manage and work with your AWS account. Ideally you have set up IAM user accounts with only the required permissions.

However, occasionally you need to update your Root account password. This video quickly shows you how to do it.

(if this video is low quality,  try opening it full screen and playing it again,  the video is short and may be done playing before YouTube can catch up with downloading a higher quality version)

AWS – Disable IAM User access to Billing Console

In an AWS Account, Root users can create IAM Users with Account Administrator Permissions

However, those users may have been given access to the Billing reporting.

A Root user can disable this access – follow the steps in this video

(if this video is low quality,  try opening it full screen and playing it again,  the video is short and may be done playing before YouTube can catch up with downloading a higher quality version)

AWS – Enable IAM User access to Billing Console

In an AWS Account, Root users can create IAM Users with Account Administrator Permissions

However those users do not have access to the Billing Reporting.

A Root user can enable this though – follow the steps in this video

(if this video is low quality,  try opening it full screen and playing it again,  the video is short and may be done playing before YouTube can catch up with downloading a higher quality version)

Moving PHP sites to php 7.2 – undefined constants used as a string

PHP 7.2 and above will no longer silently allow undefined constants

According to the "Promote the error level of undefined constants" section of the PHP 7.2 Backwards Incompatible Changes document:

Unqualified references to undefined constants will now generate an E_WARNING (instead of an E_NOTICE). In the next major version of PHP, they will generate Error exceptions.

There have been many changes to PHP over its many versions. For Matraex's use of PHP, each version has been mostly compatible with the previous one, with only minor changes – until a major decision affected one of the ways we deliberately used what we once called a "feature" of PHP. (For a full list of incompatible changes, look at the Backwards Incompatible Changes sections in the PHP Appendices.)

On March 5th, 2017, Rowan Collins proposed to deprecate bareword strings. In PHP 7.2 they throw an E_WARNING message, and in PHP 8.0 they will throw an error.

PHP has always been a loosely typed language, which allows flexibility in many ways, and we had used this flexibility to write code in ways that we believe made it more legible, maintainable and supportable within our coding style. With this change, hundreds of thousands of lines of code will need to be rewritten before we can put them on a 7.2 or above server; keys may be difficult to search for, and we will have inconsistencies in the usage of keys depending on whether they are inside or outside of quotes.

Take this one example, where lines 1, 3 and 4 below work, but line 2 throws a warning in 7.2 and an error in 8.0.

  1. echo "Hello, $user[firstname],";
  2. echo "Hello, ".$user[firstname].",";
  3. echo "Hello, ".$user['firstname'].",";
  4. echo "Hello, ".$user["firstname"].",";

Matraex would previously have preferred methods 1 and then 2, as they require fewer quotes, and a search in our IDE for 'user[first' would have highlighted both uses.

Mr Collins did evaluate both sides of the decision and wrote a bit about it. He acknowledged that "The value of keeping the current behaviour would be for programs written to deliberately take advantage of it"; however, he largely dismisses that value and gives a stronger argument: undefined constants can "mask serious bugs".

I agree with each of the arguments, and our 7.2 scripts will all comply with this new syntax requirement. However, I disagree with the way the solution was indiscriminately executed. A more considerate solution would have been to create a configuration option in PHP to control the requirement and allow developers and system administrators to continue to 'deliberately' use 'undefined constants'. This option would also allow existing stable programs to continue to take advantage of the other features of PHP >= 7.2 without a significant refactor. Perhaps the Impact section of the proposal could have attempted to get more feedback from users who had deliberately made a heavy investment in this feature.

To be more direct, here is my request: PHP developers, please create / allow a configuration option in PHP which will allow undefined constants to be used as strings.

Changing existing code across 10+ years of PHP projects will take thousands of hours to modify and test, and that is just the projects that still exist. This is a barrier to upgrading to PHP 8.0.

Arguments for a configuration option

  • Millions of lines of code deliberately use undefined constants as strings (more likely billions or trillions – I probably have close to one million myself over time)
  • My random belief: PHP should enforce "standards" on those that want or need them, and allow experienced users to explicitly choose to ignore them.
  • The configuration option would be disabled by default to address all of the problems mentioned in the 'problem' section of the proposal

Dealing with undefined constant warnings

Now we get to the more technical area, where I document some of the methods we have used to find code that needs to be updated for PHP 7.2.

1) Use grep to find all uses in the code

This command finds ALL uses of lowercase strings without quotes – because our standards require constants to be in upper case:

grep -Rne '\$[A-Za-z_]*\[[A-Za-z_]\{1,\}\]' *.php
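
If the list is long, the rewrite itself can be scripted. Here is a rough sketch (my own illustration, not part of our toolchain) that quotes bareword keys starting with a lowercase letter, leaving upper-case barewords alone since our standard reserves those for real constants. It is deliberately naive – it will also match keys inside double-quoted strings, where (per line 1 of the example above) they must NOT be quoted – so run it against a copy and review every change:

$src   = file_get_contents('example.php');
// $user[firstname] -> $user['firstname']; keys starting with an upper-case letter are left alone
$fixed = preg_replace('/(\$\w+)\[([a-z_]\w*)\]/', "$1['$2']", $src);
file_put_contents('example.php', $fixed);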

2)  Suppress E_WARNING messages

This is a bad idea: while it will certainly make your code continue to work in 7.2, it will not fix it going into 8.0, and it WILL mask other issues that you do need to know about.

If you want to learn more about this, take a look at this discussion about it on Stack Overflow. Definitely read the comments about hiding warnings to get a better feel for it.
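
For completeness, the suppression itself is a one-liner, shown only to illustrate what we advise against (assuming you control the application's entry point):

// hides the undefined-constant E_WARNINGs in 7.2 – and every other E_WARNING with them
error_reporting(E_ALL & ~E_WARNING);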

3) Create PHP configuration options to make provisions for undefined constants

These options would require the good work of a C developer who works on the PHP source. Some of these ideas may just work as described; they really are just a good start (or continuation) of a discussion of features which could be implemented. I don't have a 'bounty' system, but if you are interested in creating any of these options, or would like to group together to coordinate it, please contact me.

  1. undefined_constants_string_level – a PHP directive which declares what E_ level undefined constant warnings should use – the default in 8.0 could be E_ERROR
  2. undefined_constants_string_lowercase – allow users to configure options which would permit only lowercase (or mixed case) constants as strings – which would allow / reserve upper case for use as constants
  3. undefined_constants_string_superglobal – allow undefined constants to be used when referencing any key of a superglobal array (such as $_POST[mykey] or $_SERVER[HTTP_HOST])

Matraex Releases FrameTurn SaaS

Matraex Announces the Launch of FrameTurn Application to Improve Optical Frame Sales – for BridgePoint Optics

Boise, Idaho (28 July 2018) — Matraex, Inc. (https://www.matraex.com) announces the launch of FrameTurn (https://frameturn.com), a custom application designed to help independent optometry practices use data-driven techniques to enhance their business. The app was designed for Bridgepoint Optics (http://www.bridgepointoptics.com), an optical industry sales and business consultancy for independent eye care practices throughout the U.S.

Adding an on-line application provides a Software as a Service (SaaS) offering that extends Bridgepoint's ability to help the optical industry, giving Bridgepoint an additional, marketable service it can provide to its own clients.

Matraex has developed a powerful on-line tool for Bridgepoint Optics that provides their clients with information that can help them make purchasing decisions in a timely and profitable manner.

"Offering a Software as a Service (SaaS) product to its clients provides BridgePoint with additional business opportunities," says Michael Blood, president of Matraex, Inc. "Developing a tool of this type for businesses is what drives us. We are excited to see FrameTurn go live over the next few weeks."

From Bridgepoint’s perspective, the FrameTurn application provides an additional tool in their existing tool box of services that they can offer. It also gives them a significant edge in marketing to the vision care industry. Most importantly, however, it provides a powerful analytical tool designed to increase their clients’ bottom line at a very affordable price.

“Independents have traditionally relied on their instincts, or even guess work, rather than data to make purchasing decisions about frames for their optical shops. We’re excited to end all that! With FrameTurn, these eye care practices will have the ability to record and automatically analyze past and existing sales in various ways to determine trends that can increase their profitability,” says Dr. Rook Torres, co-founder of BridgePoint Optics and FrameTurn.

About Matraex, Inc.

Matraex, Inc. (https://www.matraex.com) is a Boise-based software and application development company. The company has served many local and national organizations for more than 15 years, including the Better Business Bureau, Hewlett Packard, Madison Square Garden, Penn State University and the Idaho Hospital Association. Its services include custom designed mobile applications (iOS, Android, etc.), as well as website development and management.

About BridgePoint Optics

BridgePoint Optics (http://www.bridgepointoptics.com) is an optical industry sales and business consultancy. For more than 25 years, they have specialized in the growth and development of independent eye care practices throughout the U.S.

PDF Version


New Group Sales Application for Bogus Basin

Boise-based MATRAEX to Develop New Group Sales Application for Bogus Basin

12 June 2018 (Boise, Id.) — Matraex, Inc. (https://www.matraex.com), a Boise-based software and application development company, announced its most recent project with Bogus Basin Recreational Association, Inc. (https://www.bogusbasin.org) to develop a custom website application for Bogus Basin’s Life Sports and School Night programs. The website application will help Bogus Basin coordinate and manage school group functions. The project is expected to be completed in September 2018 in anticipation of the 2018-19 ski season.

“We are thrilled Bogus Basin selected us to help streamline their processes as they move forward in their quest to be one of Boise’s premier winter and summer group destinations,” said Michael Blood, President of Matraex, Inc.

According to Molly Myers, Group Sales and Event Coordinator at Bogus Basin, the website and application will replace a legacy application they have used for more than 15 years. The new site will update and streamline the booking process by providing a single repository for group information and eliminating multiple communication channels where mistakes can occur. In addition, it will ensure that information flows to the correct departments for accurate fulfillment of each group's needs.

“Having a streamlined system that’s uniquely tailored to our needs is very appealing,” said Myers. “Anything that supports a fun and efficient visit to Bogus Basin in the long run aids in our goal of making lifelong skiers and snowboarders.”

School groups that participate in Bogus Basin’s Life Sports or School Night must provide a lot of information about the students when booking, including:

  • Group size
  • Age, height, weight and shoe size of each participant to ensure sufficient, well-fitting rental equipment
  • Ski ability levels for each group member so that appropriate ski instructors are available
  • Special needs or requests

The website application will have additional benefits for Bogus Basin, including the ability to track, process and report financial information for internal use.

Matraex’s work with Bogus Basin will also incorporate other group offerings, such as bookings for the new “Glade Runner Mountain Coaster” attraction and other “Fun Zone” activities.

The Matraex website app is projected to go “live” in September 2018, just in time for schools and other groups to book their group ski trips for the 2018-19 ski season.

About Matraex, Inc.

Matraex, Inc. is a Boise-based software and application development company. The company has served a number of local and national organizations, including the Better Business Bureau, Hewlett Packard, Madison Square Garden, Penn State University and the Idaho Hospital Association. Their services include custom designed mobile applications (iOS, Android, etc.), as well as website development and management.

About Bogus Basin Recreational Association

Bogus Basin Recreational Association, Inc. is a 501(C)(3) non-profit organization dedicated to accessible, affordable and fun year-round mountain recreation and education for the Treasure Valley Community.

For more information, please visit https://www.matraex.com or call Michael Blood at (208) 344-1115.

Matraex – GDPR Data Processing Addendum

Earlier Today Matraex announced an updated Privacy Policy.

Now, we announce that our GDPR Data Processing Addendum template (Matraex – Data Processing Addendum.pdf) is available to our clients and customers. This means all of our US-based clients that work with Personal Information of European Economic Area (EEA) citizens can fill out the agreement, submit it to Matraex, and keep our Service Agreements compliant with the GDPR.

The DPA also includes the EU Model Clauses, which were approved by the European Union (EU) data protection authorities known as the Article 29 Working Party. This means that Matraex customers wishing to transfer personal data from the EEA to other countries can do so with the knowledge that when Matraex processes the subject personal data, it will be given the same high level of protection it receives in the EEA.

  • This announcement is important to our previous customers: we are able to provide follow-up work on their existing software which interacts with EEA personal data.
  • This announcement is VERY important to our existing customers: it provides assurances that Matraex can continue working with their applications and services which process EEA personal data.

Each Matraex customer processing EEA personal data will need to have a data processing agreement to comply with GDPR. Previous, existing and new customers are asked to download and read the agreement and fill out the requested information. While the entire agreement is important to read and understand, certain areas require input:

  • Enter your company information on Page 2
  • Have a company authority execute the agreement on Page 5
  • Enter the Member State your business is established in on Pages 10 and 11
  • Enter the "Categories of Data" which will be processed on Page 12

Please contact us directly for assistance in filling out the information; when ready, email it to legal@matraex.com so we may review it and countersign it.

Matraex executes a Data Processing Addendum with each of our clients that processes Personal Data in the European Economic Area; this allows us to be in compliance with GDPR and other privacy laws, as well as with our own Privacy Policy.

Privacy Policy Updated

If you have an email account,  there is a good chance that it has been filling up with Terms of Service and Privacy Policy updates over the last two weeks.

I am sure you have read them all and you understand that the EEA’s GDPR goes into effect today – May 25, 2018.

Matraex is similar to many of these companies and we have updated our Privacy Policy to comply with GDPR.

  • Our policy is easy to read with a Summary at the top and the Full policy beneath
  • Our policy is a single document rather than a maze of links
  • Our policy applies to all personal data

Matraex is also very different from most of the companies you have seen emails from.

  • Matraex builds the software for the companies subject to GDPR
  • Matraex maintains and enhances the software for companies subject to GDPR
  • Matraex is subject to GDPR as a Data Importer and Data Processor on behalf of our clients (others are Data Controllers and Data Processors)

As a third-party Data Importer and Data Processor, GDPR requires that our policies also govern the data responsibility between our parties through a Data Protection Addendum (request an addendum here).

In the process of putting GDPR together, the EEA defined quite a bit of vocabulary to help assign responsibility for handling Personal Information Data – check out this glossary of GDPR terms to help define some of the terms I used above.

To discuss any privacy, GDPR or custom services please contact us.

1.4 billion email accounts compromised

The list of websites that have been hacked is growing, and it is VERY likely that every one of us has been affected.

Hackers have always compiled, sold and distributed lists of breached accounts, but in December 2017 researchers saw an unprecedented list show up on the Dark Web.

A hacker assembled 1.4 billion compromised email addresses and credentials into a very organized list, and provided tools to quickly search, sort and add to it.

The compromised accounts were from many recent hacks including LinkedIn, Equifax and Uber. To see an updated list of hacks, refer to the Wikipedia article.   https://en.wikipedia.org/wiki/List_of_data_breaches

The 42 GB list is currently shared on many peer-to-peer networks and can be found by anyone who truly wants to find it.

Matraex researched this list and developed a Hack Check app at hackcheck.email to allow users to check whether they are on the list.

If you are a business with multiple domains and you need assistance researching how they were affected,  contact Michael Blood at 208.344.1115 x 250

Bridgepoint engages Matraex to create FrameTurn SaaS

Bridgepoint has engaged Matraex to build the FrameTurn SaaS product they plan to launch to the optometry industry.

FrameTurn is a dynamic product which helps small, medium and large optometry clinics optimize the sales mix of their retail offerings to improve sales by up to 30%.

Matraex, Inc is a Boise, Idaho application development company specializing in bringing custom business offerings to market.


COMMANDDUMP – Monitor File for error – Ding if found

An elusive error was occurring that we needed to be notified of immediately. The fastest way to catch it was to run the following script at a bash command prompt, so that when the error happened the script would beep until we stopped it.

while true; do ret=`tail -n 1 error-error-main.log|grep -i FATAL`; if [ "$ret" != "" ]; then echo $ret; echo -en "\0007"; fi; sleep 1; done

COMMANDDUMP: postfix – explore and fix spam clogged mailq at the command line

  • SystemA is a postfix mailserver
  • SystemA receives all email messages sent to @domain.com
  • All @domain.com messages are forwarded to a Gmail Account remote.user@gmail.com.  (a catchall alias)
  • when spammers saturate @domain.com, Gmail starts deferring emails and the server becomes plugged up waiting to forward them

450-4.2.1

The user you are trying to contact is receiving mail at a rate that prevents additional messages from being delivered.  Please resend your message at a later time. If the user is able to receive mail at that time, your message will be delivered.

List the email addresses that messages were originally sent to, with the number of times each appears:

ServerA>for fl in `mailq|grep remote.user@gmail.com -B4| awk '$1 ~ /^[A-Z0-9]+$/{print $1}' `; do grep original_recipient "/var/spool/postfix/defer/${fl::1}/$fl" ; done|awk -F= '{print $NF}'|sort|uniq -c | sort -n

Delete from the mail queue all email messages sent to a specific user

ServerA>for fl in `mailq|grep remote.user@gmail.com -B4| awk '$1 ~ /^[A-Z0-9]+$/{print $1}'`; do grep original_recipient=honeypot@domain.com -l "/var/spool/postfix/defer/${fl::1}/$fl" ; done|awk -F/ '{print "postsuper -d "$NF}'|bash
ServerA>#OR
ServerA>grep original_recipient=honeypot@domain.com /var/spool/postfix/defer/ -rl|awk -F/ '{print "postsuper -d "$NF}'

Delete all mail messages from ‘Maria*’

mailq |awk '$1 ~ /^[A-Z0-9]+$/'|awk '$NF ~/^Maria/{print $0}'|awk '{print "postsuper -d "$1}'|bash


Proftpd PassivePorts Requirements (or Not Working)

After an exhaustive research session attempting to enable Passive FTP on a Proftpd server, I found and am now documenting this issue.

PassivePorts is a directive in proftpd.conf that configures proftpd to use a specific set of ports for Passive FTP. You would then allow these ports through your firewall to your server.

The full configuration, the reasons you would use Passive vs Active FTP, and how to set it up on your server and firewall are beyond the scope of this document, but a couple of links that might get you there are here.

In my first attempts I tried to use the port range between 60000 and 65535; the firewall ports were forwarded, and things did not work:

  • PassivePorts 60000 65535

So I had to dig in and find the details of why not. I enabled debugging on FileZilla and ran proftpd at the command line to try to see what was happening:

  • proftpd -n -d30

I found a post somewhere that explained how to read the response to the PASV command:

  • Entering Passive Mode (172,31,10,46,148,107)

The last two octets in the response are the port number to be used; here is how you calculate it: (148 * 256 + 107) = 37995. Even though I had the server set up to use PassivePorts 60000 - 65535, it was still attempting to use 37995. Once I figured out how to confirm which port was being sent, I realized that the issue was not a firewall or other problem, but rather something in the system.
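
If you are checking this repeatedly, the arithmetic is easy to script. A tiny illustrative helper (the function name is my own) that decodes the port from a PASV response:

function pasv_port($response)
{
    // "Entering Passive Mode (172,31,10,46,148,107)" -> 148 * 256 + 107 = 37995
    preg_match('/\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)/', $response, $m);
    return (int)$m[5] * 256 + (int)$m[6];
}

echo pasv_port('Entering Passive Mode (172,31,10,46,148,107)'); // prints 37995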

I happened across a Slacksite article which helped me find this in the Proftpd documentation:

PassivePorts restricts the range of ports from which the server will select when sent the PASV command from a client. The server will randomly choose a number from within the specified range until an open port is found. Should no open ports be found within the given range, the server will default to a normal kernel-assigned port, and a message logged.

In my research I was unable to find a logged message, so I don't believe a message shows anywhere; however, this article helped me realize that there might be some issue on my system preventing ports 60000 to 65535 from being available, and I started playing with the system:

  • 60000-61000 and 59000-60000 had no effect; the system was still assigning ports within the 30000 to 40000 range
  • 50000-51000 had the same effect

So I tried some entries between 30000 and 40000, and I found I could consistently control the ports if I used any range between 30000 and 40000:

  • PassivePorts 30000 32000 – gave me 31456, 31245, 30511,  etc
  • PassivePorts 32000 34000 – gave me 33098, 32734, 33516,  etc
  • etc

From this I figured out that I could only control the ports on this system in a range lower than the one I was originally attempting.

I did more research and found that there is a sysctl variable that shows the local anonymous port range

  • sysctl -a|grep ip_local_port_range

On my system for some reason this was set to

  • net.ipv4.ip_local_port_range = 32768 48000

I attempted setting this to a higher number

  • sysctl -w net.ipv4.ip_local_port_range="32768 65535"

However, this did not change the way proftpd allocated the ports; only the lower range was available. Perhaps I could have set the variable in sysctl.conf and restarted, but I stopped my investigation here. Instead I changed the firewall rules to allow ports 32000 to 34000 through, and I stuck with the configuration:

  • PassivePorts 32000 34000

What I learned from this was:

PassivePorts only suggests that your system use the range of ports you specify. If that range is not available, the system quietly selects a port outside the range you specified. If you have problems with your FTP hanging at MLSD, check your logs to verify which port has been assigned, using the calculation (5th octet * 256 + 6th octet).

Hacking a corrupt VHD on Xen in order to access InnoDB MySQL information

A client ran into a corrupted .vhd file for the data drive of a Xen server in a pool. We helped them restore from a backup; however, there were some items they had not backed up properly, and our task was to see if we could somehow recover the data from their drive.

First, we had to find the raw file for the drive. To do this we looked at the Local Storage -> General tab in XenCenter to find the UUID of the storage containing the failing disk.

When we tried to attach the failing disk, we got this error:

Attaching virtual disk 'xxxxxx' to VM 'xxxx'
The attempt to load the VDI failed

So we know that the Xen servers / pool reject loading the corrupted vhd. I came up with a way to try to access the data.

After much research I came across a tool published by 'twindb.com' called 'undrop tool for innodb'. The idea is that even after you drop or delete InnoDB files on your system, there are still markers in the file system which allow code to parse what 'used' to be on the system. They claimed some level of this worked for corrupted file systems.

The documentation was poor and took a long time to figure out; however, they claimed to have 24-hour support, so I thought I would call them and just pay them to sort out the issue. They took a while and didn't call back before I had sorted it out myself. All of the documentation they did have showed a link to a github account; however, the link was dead. I searched and found a couple of other people who had forked it before twindb took it down. I am thinking perhaps they run more of a service business now, helping people resolve the issue, and they don't want to support the code. Since this code worked for our needs, I have forked it so that we can make it permanently available: https://github.com/matraexinc/undrop-for-innodb

The first step was to copy the .vhd to a working directory:

# cp -a 3f204a06-ba18-42ab-ad28-84ca3a73d397.vhd /tmp/restore_vhd/orig.vhd
# cd /tmp/restore_vhd/
# git clone https://github.com/matraexinc/undrop-for-innodb
# cd undrop-for-innodb
# apt-get install bison flex
# apt-get install libmysqld-dev  # this was not mentioned anywhere, however an important file was quietly not compiled without it
# mv * ../.  # move all of the compiled files into your working directory
# cd ../
# ./stream_parser -f orig.vhd  # here is the magic – their code goes through and finds all of the ibdata1 logs and markers and creates data you can start to work through
# mv pages-orig.vhd pages-ibdata1  # the program created an organized set of data for you, and the next programs need to find it at pages-ibdata1
# ./recover_dictionary.sh  # this needs to run mysql as root; it creates a database named 'test' which has a listing of all of the databases, tables and indexes it found

This was where I had to start coming up with a custom solution to process the large volume of customer databases. I used some PHP to script the following commands for all of the many databases that needed to be restored. For each database and table, you must run a command that corresponds to an 'index' file the previous commands created for you, so you must loop through each of them.

select c.name as tablename, a.id as indexid
from SYS_INDEXES a
join SYS_TABLES c on (a.TABLE_ID = c.ID)

This returns a list of the tables and any associated indexes. Using this, you must generate a command which:

  1. generates a create statement for the table you are backing up
  2. generates a load infile sql statement and associated data file

#sys_parser -h localhost -u username -p password -d test tablenamefromsql

This generates the create statement for the table; save it to a createtable.sql file and execute it on your database to restore the table structure.

#c_parser -5 -o data.load -f pages-ibdata1/FIL_PAGE_INDEX/00000017493.page -t createtable.sql

This outputs a "load data infile 'data.load'" statement; pipe it to mysql and it will restore your data.
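
To give an idea of the looping, here is a rough sketch of the kind of PHP driver I used (not the original script – the credentials, paths and the zero-padded page-file naming are assumptions inferred from the commands above):

$db  = new mysqli('localhost', 'username', 'password', 'test');
$res = $db->query("select c.name as tablename, a.id as indexid
    from SYS_INDEXES a join SYS_TABLES c on (a.TABLE_ID = c.ID)");
while ($row = $res->fetch_assoc()) {
    // page files appear to be named by zero-padded index id, e.g. 00000017493.page
    $page = sprintf('pages-ibdata1/FIL_PAGE_INDEX/%011d.page', $row['indexid']);
    // 1) generate the CREATE TABLE statement for this table
    shell_exec("./sys_parser -h localhost -u username -p password -d test " . escapeshellarg($row['tablename']) . " > createtable.sql");
    // 2) generate the LOAD DATA INFILE statement / data file and pipe it to mysql
    shell_exec("./c_parser -5 -o data.load -f " . escapeshellarg($page) . " -t createtable.sql | mysql");
}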


I found one example where the create statement was not properly generated for table_id 754. It appears that the sys_parser code relies on indexes, and in one case the client table did not have an index (not even a primary key); this meant that no create statement was created and the import did not continue. To work around this, I manually inserted a fake primary key on one of the columns into the database:

#insert into SYS_INDEXES set ID=10000009, TABLE_ID=754, NAME='PRIMARY', N_FIELDS=1, TYPE=3, SPACE=0, PAGE_NO=400000000
#insert into SYS_FIELDS set INDEX_ID=10000009, POS=0, COL_NAME='myprimaryfield'

Then I was able to run the sys_parser command, which created the create statement.


An Idea that Did not work ….

The idea was to create a new hdd device at /dev/xvdX, create a new filesystem, and mount it. Then, using a tool such as dd or qemu-img, overwrite the already mounted device with the contents of the vhd. While the contents are corrupted, the idea is that we would be able to explore them as best we can.

So the command I ran was:

#qemu-img convert -p -f vpc -O raw /var/run/sr-mount/f40f93af-ae36-147b-880a-729692279845/3f204a06-ba18-42ab-ad28-84ca3a73d397.vhd /dev/xvde


Where 3f204a06-ba18-42ab-ad28-84ca3a73d397.vhd is the name of the file / UUID that is corrupted on the Xen DOM0 server, and f40f93af-ae36-147b-880a-729692279845 is the UUID of the Storage / SR it was located on.


The command took a while to complete (it had to convert 50GB), but the contents of the vhd started to show up as I ran find commands on the mounted directory. During the transfer the results were sporadic, as the partition was only partially built; however, after it completed, I had access to about 50% of the data.

An Idea that Did not work (2) ….

This was not good enough to get the files the client needed. I had a suspicion that the qemu-img convert command may have dropped some of the data that was still available, so I figured I would try another, somewhat similar command that actually seems a bit simpler.

This time I created another disk on the same local storage and found it using the xe vdi-list command on the dom0:

#xe vdi-list name-label=disk_for_copyingover

This showed me the UUID of this file was 'fd959935-63c7-4415-bde0-e11a133a50c0.vhd'.

I found it on disk and executed a cat from the corrupted vhd file into the mounted vhd file while it was running:

cat 3f204a06-ba18-42ab-ad28-84ca3a73d397.vhd > ../8c5ecc86-9df9-fd72-b300-a40ace668c9b/fd959935-63c7-4415-bde0-e11a133a50c0.vhd

Where 3f204a06-ba18-42ab-ad28-84ca3a73d397.vhd is the name of the file / UUID that is corrupted on the Xen DOM0 server, and fd959935-63c7-4415-bde0-e11a133a50c0.vhd is the name of the vdi we created to copy over.


This method completely corrupted the mounted drive, so I scrapped this method.

Next up:  

Try some  file partition recovery tools:

I started with testdisk (apt-get install testdisk) and ran it directly against the vhd file:

testdisk 3f204a06-ba18-42ab-ad28-84ca3a73d397.vhd

Commanddump – remove all kernel header packages

Servers fill up with kernels that are not in use.

Use this single command to remove them on Ubuntu / Debian:


 dpkg -l 'linux-*' | sed '/^ii/!d;/'"$(uname -r | sed "s/\(.*\)-\([^0-9]\+\)/\1/")"'/d;s/^[^ ]* [^ ]* \([^ ]*\).*/\1/;/[0-9]/!d' | xargs sudo apt-get purge -y

Automate and Measure Everything

Humans introduce errors as we perform manual tasks.  The following list includes only some of the ways that human nature causes issues:

  • Communication: We describe something incorrectly, or we interpret something differently than someone intended. This applies to written and verbal communication.
  • Input: We type something wrong and don't correct it (the "typo")
  • Sequence: We forget steps, change their order or add steps
  • Timing: We take longer or less time than we should

Some people are excellent, others are average, and some are downright horrible in some or all of the areas above.
Basically, as humans, our inconsistency leaves much room for improvement when performing any kind of task. This increases as the complexity, duration and number of steps in a task or process grow.

Computer software provides solutions to these problems when the issues above are considered:

  • Communication: Software can formalize the exact messaging to and from humans, requiring communication to be explicitly acknowledged and sending consistent reminders if it is not.
  • Input: Both input and output can be formalized; exact data is transferred between systems
  • Sequence: Steps can be completed in the exact same order, every time, without fail; rigorous rules can be applied to ensure this happens
  • Timing: The exact same amount of time can be guaranteed between steps, consistently, without fail

Matraex builds software that automates redundant and error-prone tasks and improves on these issues.

Even with software that addresses these issues, it is important that at some level a human can confirm items are being completed. Our software typically contains reports and dashboards which give managers and companies the ability to quickly verify that issues are dealt with correctly. Our favorite dashboards give users the ability to see exactly what they need to do next.

If you would like to explore custom software that helps you solve any of these issues – call Matraex at 208.344.1115

Solution -> Fatal error: Allowed memory size exhausted wp-includes/class.wp-dependencies.php on line 339

Recently one of our Managed WordPress Services clients came to me to describe a problem with a WordPress site they were working on.

If you need in-depth help debugging an error on your WordPress or PHP site, contact us.

They were receiving an error: Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 71 bytes) in /***REDACTED_PATH***/wp-includes/class.wp-dependencies.php on line 339

The client explained that they had not done any upgrades or changes that would cause the issue, and asked whether we had done any upgrades as part of the weekly site reviews we perform under the Managed WordPress Service. I found, in the weekly email report sent by another person in our office, that we had done some upgrades; however, the file throwing the error (class.wp-dependencies.php) had not been updated. So I had to dig in deeper to find the root cause.

I ended up writing some debugging code that I placed directly in class.wp-dependencies.php, which helped me identify the cause. Because googling turned up lots of other sites with the same error, I decided to post the debugging code in case it helps other users debug their issue. The issue ended up being that the site had an enqueued script, jquery, with a dependency on jquery-migrate, which in turn had a dependency on jquery. This circular reference looped in the WP code until the server ran out of memory (in this case, about 128MB).

The theme and plugins had two locations which enqueued jquery scripts: one was enqueued BEFORE a dependency existed, then a dependency was created and the script was queued again. The code I wrote is intended for debugging purposes, because the issue could easily come from other causes, and I wanted to see the backtrace for exactly where the problematic scripts (jquery, jquery-migrate) were queued.

Here is the output from my code:


MATRAEX Debugging Code: This WordPress project has circular dependencies
Aborting script execution at /***PATH_REDACTED***/wp-includes/class.wp-dependencies.php:400
Debugging tip: look for theme code that calls 'enqueue' for items which already have dependencies
If this error did not show, your script would likely have errored on line 339 when it ran out of memory from the circular dependency.
Most likely this is a plugin error!!


jquery has a dependency on jquery-migrate which has a circular dependency on jquery
Backtrace of when jquery was enqueued:
Instance 0:

#0  WP_Dependencies->enqueue(jquery) called at [/***PATH_REDACTED***/wp-includes/functions.wp-scripts.php:276]
#1  wp_enqueue_script(jquery, http://dev.art4healing.org/wp-content/plugins/jquery-updater/js/jquery-3.1.1.min.js, , 3.1.1) called at [/***PATH_REDACTED***/wp-content/plugins/jquery-updater/jquery-updater.php:26]
#2  rw_jquery_updater()
#3  call_user_func_array(rw_jquery_updater, Array ([0] => )) called at [/***PATH_REDACTED***/wp-includes/plugin.php:524]
#4  do_action(wp_enqueue_scripts) called at [/***PATH_REDACTED***/wp-includes/script-loader.php:1197]
#5  wp_enqueue_scripts()
#6  call_user_func_array(wp_enqueue_scripts, Array ([0] => )) called at [/***PATH_REDACTED***/wp-includes/plugin.php:524]
#7  do_action(wp_head) called at [/***PATH_REDACTED***/wp-includes/general-template.php:2555]
#8  wp_head() called at [/***PATH_REDACTED***/wp-content/themes/Parallax-One/header.php:19]
#9  require_once(/***PATH_REDACTED***/wp-content/themes/Parallax-One/header.php) called at [/***PATH_REDACTED***/wp-includes/template.php:572]
#10 load_template(/***PATH_REDACTED***/wp-content/themes/Parallax-One/header.php, 1) called at [/***PATH_REDACTED***/wp-includes/template.php:531]
#11 locate_template(Array ([0] => header.php), 1) called at [/***PATH_REDACTED***/wp-includes/general-template.php:45]
#12 get_header() called at [/***PATH_REDACTED***/wp-content/themes/Parallax-One/front-page.php:5]
#13 include(/***PATH_REDACTED***/wp-content/themes/Parallax-One/front-page.php) called at [/***PATH_REDACTED***/wp-includes/template-loader.php:75]
#14 require_once(/***PATH_REDACTED***/wp-includes/template-loader.php) called at [/***PATH_REDACTED***/wp-blog-header.php:19]
#15 require(/***PATH_REDACTED***/wp-blog-header.php) called at [/***PATH_REDACTED***/index.php:17]

Instance 1:

#0  WP_Dependencies->enqueue(jquery) called at [/***PATH_REDACTED***/wp-includes/functions.wp-scripts.php:276]
#1  wp_enqueue_script(jquery) called at [/***PATH_REDACTED***/wp-content/plugins/nextgen-gallery/nggallery.php:513]
#2  C_NextGEN_Bootstrap->fix_jquery()
#3  call_user_func_array(Array ([0] => C_NextGEN_Bootstrap Object ([_registry] => C_Component_Registry Object ([_searched_paths] => Array ( .......
#4  do_action(wp_enqueue_scripts) called at [/***PATH_REDACTED***/wp-includes/script-loader.php:1197]
#5  wp_enqueue_scripts()
#6  call_user_func_array(wp_enqueue_scripts, Array ([0] => )) called at [/***PATH_REDACTED***/wp-includes/plugin.php:524]
#7  do_action(wp_head) called at [/***PATH_REDACTED***/wp-includes/general-template.php:2555]
#8  wp_head() called at [/***PATH_REDACTED***/wp-content/themes/Parallax-One/header.php:19]
#9  require_once(/***PATH_REDACTED***/wp-content/themes/Parallax-One/header.php) called at [/***PATH_REDACTED***/wp-includes/template.php:572]
#10 load_template(/***PATH_REDACTED***/wp-content/themes/Parallax-One/header.php, 1) called at [/***PATH_REDACTED***/wp-includes/template.php:531]
#11 locate_template(Array ([0] => header.php), 1) called at [/***PATH_REDACTED***/wp-includes/general-template.php:45]
#12 get_header() called at [/***PATH_REDACTED***/wp-content/themes/Parallax-One/front-page.php:5]
#13 include(/***PATH_REDACTED***/wp-content/themes/Parallax-One/front-page.php) called at [/***PATH_REDACTED***/wp-includes/template-loader.php:75]
#14 require_once(/***PATH_REDACTED***/wp-includes/template-loader.php) called at [/***PATH_REDACTED***/wp-blog-header.php:19]
#15 require(/***PATH_REDACTED***/wp-blog-header.php) called at [/***PATH_REDACTED***/index.php:17]

jquery-migrate has a dependency on jquery which has a circular dependency on jquery-migrate
Backtrace of when jquery-migrate was enqueued:
Instance 0:

#0  WP_Dependencies->enqueue(jquery-migrate) called at [/***PATH_REDACTED***/wp-includes/functions.wp-scripts.php:276]
#1  wp_enqueue_script(jquery-migrate, http://dev.art4healing.org/wp-content/plugins/jquery-updater/js/jquery-migrate-3.0.0.min.js, Array ([0] => jquery), 3.0.0) called at [/***PATH_REDACTED***/wp-content/plugins/jquery-updater/jquery-updater.php:32]
#2  rw_jquery_updater()
#3  call_user_func_array(rw_jquery_updater, Array ([0] => )) called at [/***PATH_REDACTED***/wp-includes/plugin.php:524]
#4  do_action(wp_enqueue_scripts) called at [/***PATH_REDACTED***/wp-includes/script-loader.php:1197]
#5  wp_enqueue_scripts()
#6  call_user_func_array(wp_enqueue_scripts, Array ([0] => )) called at [/***PATH_REDACTED***/wp-includes/plugin.php:524]
#7  do_action(wp_head) called at [/***PATH_REDACTED***/wp-includes/general-template.php:2555]
#8  wp_head() called at [/***PATH_REDACTED***/wp-content/themes/Parallax-One/header.php:19]
#9  require_once(/***PATH_REDACTED***/wp-content/themes/Parallax-One/header.php) called at [/***PATH_REDACTED***/wp-includes/template.php:572]
#10 load_template(/***PATH_REDACTED***/wp-content/themes/Parallax-One/header.php, 1) called at [/***PATH_REDACTED***/wp-includes/template.php:531]
#11 locate_template(Array ([0] => header.php), 1) called at [/***PATH_REDACTED***/wp-includes/general-template.php:45]
#12 get_header() called at [/***PATH_REDACTED***/wp-content/themes/Parallax-One/front-page.php:5]
#13 include(/***PATH_REDACTED***/wp-content/themes/Parallax-One/front-page.php) called at [/***PATH_REDACTED***/wp-includes/template-loader.php:75]
#14 require_once(/***PATH_REDACTED***/wp-includes/template-loader.php) called at [/***PATH_REDACTED***/wp-blog-header.php:19]
#15 require(/***PATH_REDACTED***/wp-blog-header.php) called at [/***PATH_REDACTED***/index.php:17]

Add the following code to the query() function, just before the return statement within the "case 'queue':" block in the file wp-includes/class.wp-dependencies.php (in the version current as of this writing, that is about line 387).

//BEGIN NON WP CODE - MATRAEX
$check = $recursivedependencies = array();
foreach ($this->queue as $queued)
    $check[$queued] = $this->registered[$queued]->deps;
foreach ($check as $queued => $deparr)
    foreach ($deparr as $depqueued)
        if (isset($check[$depqueued]) && in_array($queued, $check[$depqueued]))
            $recursivedependencies[$queued] = $depqueued;
if ($recursivedependencies) {
    // site-specific adjustment so the redaction below matches the backtrace paths
    $_SERVER['DOCUMENT_ROOT'] = str_replace("/home", "/data", $_SERVER['DOCUMENT_ROOT']);
    ob_start();
    echo "MATRAEX Debugging Code: This WordPress project has circular dependencies\n";
    echo "Aborting script execution at " . __FILE__ . ":" . __LINE__ . "\n";
    echo "Debugging tip: look for theme code that calls 'enqueue' for items which already have dependencies\n";
    echo "If this error did not show, your script would likely have errored on line 339 when it ran out of memory from the circular dependency.\n";
    echo "Most likely this is a plugin error!!\n\n";
    global $enqueue_backtrace;
    foreach ($recursivedependencies as $queued => $depqueued) {
        echo "$queued has a dependency on $depqueued which has a circular dependency on $queued\n";
        echo "Backtrace of when $queued was enqueued:\n";
        foreach ($enqueue_backtrace[$queued] as $k => $lines) {
            echo "Instance $k:\n";
            foreach (explode("\n", $lines) as $line)
                if (strstr($line, 'enqueue'))
                    echo "$line\n";
            echo "\n";
        }
    }
    $out = ob_get_clean();
    $out = str_replace($_SERVER['DOCUMENT_ROOT'], "/***PATH_REDACTED***/", $out);
    echo $out;
    exit;
}
//END NON WP CODE - MATRAEX

The full function code should look like this

In addition, add the following code to the enqueue() function, just below the line with public function enqueue(). At the time of this writing, that was about line 378 of wp-includes/class.wp-dependencies.php.

        //BEGIN NON WP CODE - MATRAEX
            // capture a backtrace every time a handle is enqueued, so the
            // query() code above can report where each script came from
            global $enqueue_backtrace;
            ob_start();
            debug_print_backtrace();
            $enqueue_backtrace[$handles][] = ob_get_clean();
        //END NON WP CODE - MATRAEX

The full function code should look like this


Wordfence – CPU issue with exhaustive scans – COMMANDDUMP

Wordfence has some default scans which run hourly. On many systems this works well. In at least one case, we found a situation where Wordfence was running hourly scans on some VERY large libraries at the same time, on multiple sites on the same server.

A fix was implemented for this, but in the time it took us to recognize the issue, we came up with the following command, which helped kill the CPU hog so we could continue to use the WordPress websites.


 kill `apachectl fullstatus|grep wordfence_doScan|awk '{print $2}'`

Some of the ways you can find out that the issue is occurring are by running these investigative commands:

  • apachectl fullstatus|grep wordfence – see how many concurrent scans are running
  • mysqladmin processlist|grep wf – see the number of insert / update / select commands against Wordfence tables
  • vmstat 1 – run a monitor on your system to see how active it is
  • uptime – see your 1, 5 and 15 minute load averages


Command Dump – One line method to find errors in a large list of bind zone files

I have found the need to go through a large list of bind zone files and find any that have errors.

This loop helps me identify them:


for a in `ls db.*.*|grep -v db.local.`; do named-checkzone localhost $a >/tmp/tmp 2>&1; if [ "$?" != "0" ]; then echo "ERROR ON:$a"; cat /tmp/tmp; fi; done|more

 

  • ls db.*.*|grep -v db.local. – list each file that you would like to check (I listed all files matching db.*.* and excluded any that started with db.local.)
  • named-checkzone localhost $a >/tmp/tmp 2>&1 – run the check and save all of its output to a temp file
  • if [ "$?" != "0" ]; then echo "ERROR ON:$a"; cat /tmp/tmp; fi; – if the command fails, print out the file name and the results
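
A variant that derives the zone name from the filename can catch more subtle problems, since named-checkzone then verifies the records against the real origin.  This is a sketch that assumes the files are named db.<zone> (e.g. db.example.com):

for f in db.*.*; do
    case "$f" in db.local.*) continue ;; esac   # skip the local zones
    zone="${f#db.}"                             # db.example.com -> example.com
    if ! named-checkzone "$zone" "$f" >/tmp/zonecheck 2>&1; then
        echo "ERROR ON:$f"
        cat /tmp/zonecheck
    fi
done | more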

Building your iPhone App for iOS10 on XCode8 – NSPhotoLibraryUsageDescription

If you successfully build, archive and upload your iOS app using XCode8 against the iOS10 SDK,  you may see your application in your iTunes Connect account for just a moment while it is processing,  but then it disappears inexplicably.

Check your email from itunesconnect@apple.com;  there are some additional requirements for your iOS10 app that were not built into the XCode warnings and requirements.

In one case we had a UIImagePickerController,  so when we uploaded our application to iTunes Connect through the Organizer window in XCode8,  it appeared that iTunes Connect accepted it with an "Upload Successful".  I was immediately able to see that the application was "Processing" under the "Activity -> All Builds" tab of iTunes Connect.

However,  it then disappeared and I received a message from itunesconnect@apple.com with a couple of messages in it.   One message was a warning prefaced by "Though you are not required to fix the following issues, we wanted to make you aware of them:".  That is a topic for another post;  the important message was:

  • This app attempts to access privacy-sensitive data without a usage description. The app’s Info.plist must contain an NSPhotoLibraryUsageDescription key with a string value explaining to the user how the app uses this data.

I opened my project's Info.plist and added the following:

<key>NSPhotoLibraryUsageDescription</key>
<string>This application requires access to the user's Photo Library if the user would like to set a profile image</string>

When opening the Info.plist file as a Property List in Xcode,  I was able to see that the full name of the key was "Privacy - Photo Library Usage Description".

Once I had corrected the issue, I built,  archived and uploaded again, and the build completed processing within iTunes Connect.
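
To confirm the key made it into the built app before uploading, you can inspect the compiled plist from the macOS command line (a sketch; the .app path is hypothetical):

# Print the compiled Info.plist and confirm the usage-description key exists
plutil -p MyApp.app/Info.plist | grep NSPhotoLibraryUsageDescription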

 

 

Find all PHP Short Tag instances – COMMANDLINE

Occasionally we run across web products which were developed using the PHP short open tag "<?"  instead of "<?php".

We could go into the php.ini file and set "short_open_tag" to "On";  however, this creates software which cannot run on as many servers and is less portable between servers.

The command below, when run from the directory that houses all of your PHP files,  will identify all of the files which use short open tags.   You will then be able to change the files from <? to <?php.

grep -rIn '<?' . | grep -v '<?\(php\|xml\|=\|"\)'

 

This command runs a first grep statement recursively in the current directory looking for any "<?".   The output is passed through a second grep statement which ignores any instances of "<?php", "<?xml", "<?="  and '<?"'.

Let's decompose:

  • -r – search the current (".") directory recursively
  • -I – ignore binary files
  • '<?' – search for all instances of '<?'
  • -n – add the line number of the found code to help you find it faster
  • -v – invert the second grep statement, excluding anything that matches
  • '<?\(php\|xml\|=\|"\)' – the regular expression matches each of the prefixes we want to ignore

 

Note:

I have included the double quote (") in the regular expression, which ignores <?", because we have some PHP functions which loop through XML code and test for "<?".
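
Once the files are identified, the conversion itself can be scripted.  A hedged sketch that only rewrites short tags followed by whitespace (so it leaves "<?php", "<?xml" and "<?=" alone) and keeps a .bak backup of each file;  review the grep output first, since a "<?" inside a string would also be rewritten:

grep -rIl '<?[[:space:]]' --include='*.php' . | while read -r f; do
    sed -i.bak 's/<?\([[:space:]]\)/<?php\1/g' "$f"   # .bak keeps a backup copy
done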

 

Command Dump – Extending a disk on XenServer with xe

To expand a disk on a XenServer using the command line:  I assume that you have backed up the data elsewhere before the expansion,  as this method deletes everything on the disk to be expanded.

  • dom0>xe vm-list name-label=<your vm name> # to get the UUID of the VM = VMUUID
  • dom0>xe vm-shutdown uuid=<VMUUID>
  • dom0>xe vbd-list  params=device,empty,vdi-name-label,vdi-uuid   vm-name-label=<your vm name>  # to get the vdi-uuid of the disk you would like to expand = VDIUUID
  • dom0>xe vdi-resize uuid=<VDIUUID> disk-size=120GB # use the size that you would like to expand to
  • dom0>xe vm-start uuid=<VMUUID>

That's it on the dom0.  Now, as your VM boots up,  log in via SSH and complete the changes by deleting the old partition,  repartitioning and making a new filesystem.   I am going to do this as though the filesystem is mounted at /data.  Note that the device name from df will be the partition (e.g. /dev/xvdb1);  run fdisk against the underlying disk (e.g. /dev/xvdb) and mkfs against the new partition.

  • domU>df /data # to get the device name = DEVICENAME
  • domU>umount /dev/DEVICENAME
  • domU>fdisk /dev/DEVICENAME
  •    [d] to delete the existing partition
  •    [n] to create a new partition spanning the whole disk
  •    [w] to write the partition table and exit ([q] would quit without saving changes)
  • domU>mkfs.ext3 /dev/DEVICENAME
  • domU>mount /data
  • domU>df /data # to see the expanded filesystem

 

Looking for help with XenServer?   Matraex can help.

Load problems after disk replacement on a ocfs2 and drbd system.

A notes blurb on investigating a complex issue.  It was resolved,  though without a concise root cause;  notes are kept here so we can continue the investigation if it happens again.

Recently,    we had a disk failure on one of two SAN servers utilizing MD, OCFS2 and drbd to keep two servers synchronized.

We will call the two Systems: A  and B

 

The disk was replaced on System A, which required a reboot in order for the system to recognize the new disk;  then we had to --re-add the disk to the MD array.  Once this happened,  the disk started to rebuild.   The OCFS2 and drbd layers did not seem to have any issue rebuilding quickly;  as soon as the server came back up,  the layers of redundancy made it fairly painless.   However,  the load on System B went up to 2.0+  and on System A up to 7.0+!
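
For reference, the re-add step looks something like this (a sketch;  the array and device names are assumptions):

mdadm /dev/md0 --re-add /dev/sdb1   # re-add the replaced disk to the array
cat /proc/mdstat                    # watch the rebuild progress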

This slowed down System B significantly and made System A completely unusable.

I took a look at the many different tools to try to debug this.

  • top
  • iostat -x 1
  • iotop
  • lsof
  • atop

The dynamics of how we use the redundant SANs should be taken into account here.

We mount System B to an application server via NFS,  and reads and writes are done against System B.   This makes it odd that System A is having such a hard time keeping up;  it only has to handle the DRBD and OCFS2 communication in order to stay synced (System B is handling the reads and writes,  while System A only has to handle writes on the DRBD layer when changes are made;  iotop shows this at between 5 and 40 K/s,  which seemed minimal).

Nothing pointed to any kind of direct indicator of what was causing the 7+ load on System A.   The top two processes seemed to be drbd_r_r0 and o2hb-XXXXXX,  which take up minimal amounts of read and write.

The command to run on a disk to see what is happening is

#iotop -oa

This command shows you only the processes that have used some amount of disk read or write (-o),  and it shows them cumulatively (-a) so you can easily see what is using the IO on the system.    From this I figured out that the majority of the writes on the system were going to the system drive.

What I found from this is that the iotop tool does not show the activity that is occurring at the drbd / ocfs2 level.   On System B,  where the NFS drive was connected,  I was able to see that the nfsd command was writing MULTIPLE MB of information when I would write to the NFS drive (cat /dev/zero > tmpfile),  but I would see only 100K or so written to drbd on System B,  and nothing on System A;  however,  I would be able to see the file on System A.

I looked at the CPU load on System A when running the huge write,  and it increased by about 1 (from 7+ to 8+),  so it was doing some work;  iotop just did not monitor it.

So I looked to iostat to find out if it would allow me to see the writes to the actual devices in the MD array.

I ran

#iostat -x 5

So I could see what was being written to the devices.  Here I could see that the disk utilization on System A and System B was similar (about 10% per drive in the MD array)  and the await time on System B was a bit higher than on System A.  When I ran this test,  I caused the load to go up to about 7 on all servers (application server,  System A and System B);  stopping the write made the load on the application server and on System B go back down.

While this did not give me the cause, it helped me to see that disk writes on System A are trackable through iostat, and since no writes were occurring when I ran iostat -x 5, I had to assume that some sort of other overhead was causing the huge load.    With nothing else I felt I could test,  I just rebooted Server A.

Lo and behold,   the load dropped;    writing and deleting huge files was no longer an issue.   The only thing I could think was that a large amount of traffic of some kind was being transferred back and forth to some 'zombie' server.   (I had attempted to restart ocfs2 and drbd and the system wouldn't allow that either,  which seems to indicate a problem with some process being held open by a zombie process.)

In the end,  this is the best scenario I can use to describe the problem.  While this is not a real resolution,  I publish this so that when an issue like this comes up in the future,  we will be able to investigate three different possibilities in order to get closer to figuring out the true issue.

  1. Investigate the network traffic (using ntop for traffic,   tcpdump for contents,  and ethtool -S for total interface stats and possible errors)
  2. Disconnect / reconnect the drbd and ocfs2 pair to stop the synchronization, and watch the load on both systems to see if that is related to the issue (see the sketch below)
  3. Attempt to start and stop the drbd and ocfs2 processes and debug any problems with that process (watch the traffic or other errors related to those processes)
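
For item 2, the disconnect / reconnect cycle might look like this (a sketch,  assuming the DRBD resource is named r0):

drbdadm disconnect r0   # stop replication between the pair
uptime                  # watch whether the load falls while replication is off
drbdadm connect r0      # re-establish replication
cat /proc/drbd          # confirm the connection state and resync progress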

COMMANDDUMP – Cloning a WordPress website for a Sandbox, Upgrade or Overhaul

Over the years,  we have had clients ask us to create an exact copy of their current website (files, database and all) in a sandbox environment that would not affect their existing website.    This typically involves setting up a temporary domain and hosting environment,  and a new MySQL database,  however they need them to be populated with an exact copy.

Their needs vary:

  • often it is to just be able to test a change within a disposable Sandbox,
  • sometimes,  they may want to do some sort of an upgrade,  but they do not have  a dedicated development or test environment,
  • and commonly it is to start some sort of a site overhaul using the existing site’s pages, blog entries and design.   In this case they will often migrate this site to their production site in the future

While a copy and paste seems like the simple way to do this,  there is much more that must occur.  The list below describes everything we have found so far:

  • Copy all of the files from the OLD WordPress root,  to the NEW WordPress root
  • Copy the entire database from Database A to Database B
  • Update the NEW WordPress install to connect to Database B
  • Update the Database B install wp_options to have the NEW url (if you skip this step,  attempting to login to the NEW WordPress install will redirect you to the OLD WordPress install)
  • Update all posts, pages and other entries which have absolute links to the OLD WordPress install to have absolute links to the NEW WordPress install (if you do not change this,  you may end up with embedded images and links which point back to the OLD WordPress install;  this can be difficult to notice because the file structure is identical).  A sketch of these URL updates follows this list.
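
For reference, the last two steps can be done directly in MySQL.  A sketch assuming the default wp_ table prefix and hypothetical old / new URLs (the script below automates the same thing with sed against a full dump):

mysql -u wpuser -p wpdb <<'SQL'
UPDATE wp_options SET option_value = REPLACE(option_value,'http://old.example.com','http://new.example.com')
 WHERE option_name IN ('siteurl','home');
UPDATE wp_posts SET post_content = REPLACE(post_content,'http://old.example.com','http://new.example.com');
UPDATE wp_posts SET guid = REPLACE(guid,'http://old.example.com','http://new.example.com');
SQL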

Once we realized this was going to be a common request and that we often need to do this from one directory on a server to another,  we wanted to automate this process.     We created a quick and dirty script which accomplishes all of the tasks of cloning the database and files,  and then updating the contents of the database to the new location.

If you would like help with this process please contact us, Matraex would be happy to help you clone your WordPress website.
If you need a company to Manage your WordPress security and updates on a monthly basis   please let us know here.

The script relies on some basic commands which should already be installed on your system,  but you may want to confirm first

  • sed
  • mysql
  • mysqldump

The script is one that you will run from the command line from within the existing WordPress website.   You will run the command with parameters about the new WordPress website (the new directory,  the new URL,  and the new MySQL connection information).

The script does a couple of basic checks to make sure that the directory you are cloning to,  does not already have a WordPress installation,  and that the MySQL database is available but does not already have a WordPress install in the ‘default’ location.

It also uses the wp-config.php of the current WordPress installation to get connection information to the existing WP database and get the current URL.

If everything checks out,  the script:

  • copies all files from the old directory to the new directory
  • dumps the existing database and manipulates the dump file to replace the old URL with the new URL
  • imports the file into the new MySQL database
  • updates the new directory's wp-config.php to use the new MySQL connection information

File: wordpress_clone.sh

#!/bin/bash
echo
echo Usage: $0 1-NEW_DIR 2-NEW_URL 3-NEW_DB_HOST 4-NEW_DB_NAME 5-NEW_DB_USER 6-NEW_DB_PASSWORD

if [ "$1" == "" ] || [ "$2" == "" ] || [ "$3" == "" ] || [ "$4" == "" ] || [ "$5" == "" ] || [ "$6" == "" ]; then
  echo
  echo "Invalid Parameters; please review usage";
  echo "Exiting"
  echo
  exit
fi


NEW_DIR=$1
NEW_URL=$2 #type the url address that the new WordPress website is located at
NEW_DB_HOST=$3 #TYPE the name of the database server for the NEW WordPress Install
NEW_DB_NAME=$4 # Type the name of the NEW WordPress Database you want to connect to
NEW_DB_USER=$5 #TYPE the username to connect to the NEW WordPress Database
NEW_DB_PASSWORD=$6 #TYPE the password to connect to the NEW WordPress Database

#this script assumes that you entered perfect information, it does not do any checking to confirm that any of the information you entered is valid before proceeding
ORIG_DIR=`pwd`
OLD_DIR=$ORIG_DIR


#load all of the DB_variables from the old database into memory so we can dump it
if [ ! -e wp-config.php ]; then
  echo
  echo "The current directory is not an existing WordPress installation"
  echo "Exiting"
  echo
  exit
fi


if [ ! -d $NEW_DIR ]; then
  echo
  echo "The new directory $NEW_DIR does not exist"
  echo "Exiting"
  echo
  exit
fi
cd $OLD_DIR
source <(grep "^define('DB" wp-config.php |awk -F"'" '{print $2"=\""$4"\""}')


EXISTING_NEW_DB=` mysql -u $NEW_DB_USER --password=$NEW_DB_PASSWORD -N --execute='select now()' -h $NEW_DB_HOST $NEW_DB_NAME 2>/dev/null`
if [ "" == "$EXISTING_NEW_DB" ]; then
  echo
  echo "New Database Connection Failed; A new blank database must be available in order to continue"
  echo "Exiting"
  echo
  exit
fi
EXISTING_NEW_URL=` mysql -u $NEW_DB_USER --password=$NEW_DB_PASSWORD -N --execute='select option_value from wp_options where option_id=1' -h $NEW_DB_HOST $NEW_DB_NAME 2>/dev/null`
if [ "" != "$EXISTING_NEW_URL" ]; then
  echo
  echo "There is already a WordPress database located at $NEW_DB_NAME: using '$EXISTING_NEW_URL'"
  echo "Exiting"
  echo
  exit
fi
OLD_URL=` mysql -u $DB_USER --password=$DB_PASSWORD -N --execute='select option_value from wp_options where option_id=1' -h $DB_HOST $DB_NAME`
if [ "" == "$OLD_URL" ]; then
  echo
  echo "The database configuration in wp-config.php for the current WP install does not have a valid connection to the database $DB_NAME $DB_USER:$DB_PASSWORD@$DB_HOST"
  echo "Exiting"
  echo
  exit
fi 
echo "from:$OLD_URL" 
echo "to :$NEW_URL"
cp -ar $OLD_DIR/. $NEW_DIR/.

TMPFILE=$(mktemp /tmp/`basename $0`.XXXXXXXXX)
echo "Dumping Database "
mysqldump -h $DB_HOST --extended-insert=FALSE -c -u $DB_USER --password=$DB_PASSWORD $DB_NAME >$TMPFILE
echo Temp DB File:$TMPFILE
sed -e"s|$OLD_URL|$NEW_URL|g" -i $TMPFILE
cat $TMPFILE | mysql -u $NEW_DB_USER --password=$NEW_DB_PASSWORD $NEW_DB_NAME
rm $TMPFILE
cd $ORIG_DIR
cd $NEW_DIR

sed -e"s/define('DB_USER', '[A-Za-Z0-9]*/define('DB_USER', '$NEW_DB_USER/" -i wp-config.php
sed -e"s/define('DB_PASSWORD', '[A-Za-Z0-9]*/define('DB_PASSWORD', '$NEW_DB_PASSWORD/" -i wp-config.php
sed -e"s/define('DB_HOST', '[A-Za-Z0-9\.]*/define('DB_HOST', '$NEW_DB_HOST/" -i wp-config.php
sed -e"s/define('DB_NAME', '[A-Za-Z0-9]*/define('DB_NAME', '$NEW_DB_NAME/" -i wp-config.php
echo "Wrote DB Changes to $NEW_DIR/wp-config.php"

Making Dahdi work on asterisk on ubuntu 14.04


Asterisk and Dahdi Setup

I installed asterisk and the dahdi packages:

apt-get install asterisk asterisk-config asterisk-core-sounds-en asterisk-core-sounds-en-gsm asterisk-dahdi asterisk-modules asterisk-moh-opsound-gsm asterisk-voicemail dahdi dahdi-dkms dahdi-linux

I rebooted at this point because it seems as though dahdi and asterisk act a bit different from a reload than from a reboot.

I confirmed that the dahdi modules were all loaded and then I ran:

dahdi_hardware #to confirm that my wcte12xp+ hardware was visible
dahdi_genconf # to automatically setup my /etc/dahdi/system.conf file
dahdi_scan # to make sure that my hardware was setup to run a PRI / T1 (it wasn't so I had to change the jumper from e1 to t1 and start over)

At this point I am ready to run the config and make sure that my channels are all configured correctly:

dahdi_cfg -vv # this displays all of the channels that were set up by dahdi_genconf;   if dahdi_genconf set up your span wrong, edit /etc/dahdi/system.conf

I also confirmed that in my /etc/modprobe.d/dahdi.conf file I had a couple of entries:

options dahdi auto_assign_spans=1
options wcte12xp default_linemode=t1

(the linemode may have been unnecessary,  and left over from when I was screwing around with figuring out the jumper)

After everything is set up again,  I reboot.  (Rebooting is always a good idea when verifying configuration changes, as it confirms that your changes will survive a reboot.)

The Issue – Errors with the PRI

When the system came back up,  I was able to confirm again that dahdi_scan and dahdi_cfg -vv came back as expected.   Since the dahdi_genconf command automatically set up my dahdi-channels.conf file (included by chan_dahdi.conf),  calls that come in through the PRI should now start showing up on the Asterisk console.

asterisk -rvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv #attach to the asterisk console with a good amount of debugging

This is where I start to see errors:

[Apr 24 10:07:07] ERROR[2863] chan_dahdi.c: PRI Span: 1 Write to 36 failed: Invalid argument
[Apr 24 10:07:07] ERROR[2863] chan_dahdi.c: PRI Span: 1 Short write: 0/5 (Invalid argument)

These errors occur 1000s of times an hour.

After digging around a bit more I found a couple more errors

#dmesg|grep dahdi
[ 12.024807] dahdi: module verification failed: signature and/or required key missing - tainting kernel
[ 12.025077] dahdi: unknown parameter 'auto_assign_spans' ignored

I did not know what these meant,  but it seemed worth some research.    It seems that when a kernel is tainted, "it is in a state that is not supported by the community"  [http://unix.stackexchange.com/questions/118116/linux-what-is-a-tainted-kernel].   That shows there is a problem, but does not help me figure out what it means.  However, the 'unknown parameter' auto_assign_spans message leads me to believe that I somehow have an old version of dahdi installed.

dahdi is up to version 2.11 as of 4/26/2016:


# cat /sys/module/dahdi/version
2.5.0.1

A quick check shows that the dahdi installed from the Ubuntu packages is seriously out of date.

 

Install Latest versions from source

So,  I had to remove asterisk and dahdi as a set of packages and install from source.

#apt-get remove asterisk dahdi
#apt-get autoremove

This removed about 70 packages and left me with an asterisk-less system.   Then this article helped me to get asterisk and dahdi installed from source:

#apt-get update && apt-get -y upgrade
#apt-get -y install linux-headers-$(uname -r)
#apt-get -y install build-essential subversion git libncurses5-dev uuid-dev libjansson-dev libxml2-dev  libsqlite3-dev
#cd /usr/local/src
#wget http://downloads.asterisk.org/pub/telephony/dahdi-linux-complete/dahdi-linux-complete-current.tar.gz
#tar zxvf dahdi-linux-complete-current.tar.gz
#cd dahdi-linux-complete*/ # I changed this from the article since the latest version was 2.11 when I installed
#make all
#make install
#make config
#cd /usr/local/src
#wget http://downloads.asterisk.org/pub/telephony/libpri/libpri-current.tar.gz
#tar zxvf libpri*
#cd libpri*
#make install && cd ..
# wget http://downloads.asterisk.org/pub/telephony/asterisk/asterisk-13-current.tar.gz
#tar xvzf asterisk*
#cd asterisk*
#contrib/scripts/get_mp3_source.sh
#contrib/scripts/install_prereq install
# make && make install && make config

Then I restart everything

/etc/init.d/dahdi start   
/etc/init.d/asterisk start

And my phone system is working,  there are no more errors and all phone traffic (in and out of the PRI) is working.

Of course,  I reboot,  just to confirm that dahdi and asterisk do start on their own.

#reboot

mysqldump fails with ‘help_topic’ is marked as crashed and should be repaired

The MySQL table is marked as crashed,  but its status is still 'OK'.

The Problem

We got an error when attempting to dump and back up the MySQL database.  mysqldump would fail with an error saying the table was crashed and needed to be repaired.

mysqldump: Error 1194: Table 'help_topic' is marked as crashed and should be repaired when dumping table `help_topic` at row: 507

This most likely could apply to any table.

The analysis

We ran a check against the table and it showed a warning:

#mysqlcheck -c mysql help_topic
mysql.help_topic
warning : Size of datafile is: 484552 Should be: 482440
status : OK 

So it seems odd that we have a problem when the status says OK;  the datafile size mismatch is the actual problem.

The solution

Even though it says it is okay,  it has to be repaired,   so we run:

#mysqlcheck -r mysql help_topic

Voila!  No more error.

 

(This fix can be used on any table)
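
If more than one table is affected, the same check and repair can be run across everything in one pass (a sketch;  -r applies to MyISAM tables, and taking a backup first is wise):

mysqlcheck -c --all-databases   # check every table
mysqlcheck -r --all-databases   # repair anything that needs it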

 
