To expand a disk on XenServer using the command line, first back up the data elsewhere, because this method deletes everything on the disk being expanded.
That's it on the dom0. Now, as your VM boots up, log in via SSH and complete the changes by deleting the old partition, repartitioning, and making a new filesystem. I am going to do this as though the disk is mounted at /data.
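The in-guest steps might look like the following sketch. The device name /dev/xvdb and the single-partition layout are assumptions of mine, not from the post; adjust them to your VM. The wrapper prints each command first, and nothing destructive runs until you unset DRY_RUN.

```shell
#!/bin/bash
# Sketch of the in-guest steps above. Assumes the grown disk is
# /dev/xvdb with a single partition /dev/xvdb1 mounted at /data --
# adjust device names to match your VM. With DRY_RUN=1 (the default)
# each command is only printed, so you can review before running.
DRY_RUN=${DRY_RUN-1}
run() { echo "+ $*"; [ "$DRY_RUN" = 1 ] || "$@"; }

run umount /data
# in fdisk: d (delete partition 1), n (new partition, accept defaults), w (write)
run fdisk /dev/xvdb
run mkfs.ext3 /dev/xvdb1   # destroys the old data -- hence the backup
run mount /dev/xvdb1 /data
```

Run it once to review the printed commands, then run it again with DRY_RUN=0 to execute them.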
Looking for help with XenServer? Matraex can help.
We are small users of the XenServer and XenCenter software, and when we were first evaluating the hypervisor, we didn't know much at all about virtualizing servers.
At the same time as we were looking at XenServer, we were also looking into Hyper-V and VMware. Of the three, I found the open source model that XenServer had, backed by Citrix's large company status, to be the most appealing.
XenServer was also what Amazon AWS was based on, and our experience with AWS helped us lean towards it.
To add to this, the XenCenter software was very simple to use. The way that we were able to quickly create and manage pools of servers and simply connect to the console seemed to address the features we would need, without overcomplicating things the way the VMware software did. And I liked the simple, fast interface.
And finally, since we don't like to have Windows or GUI interfaces in our server environment, we loved that the hypervisor is a Linux install we can log into and run 'xe' commands on. This makes XenServer very scriptable.
Looking back at why we have created so many blog posts about XenServer: simply, because it is so easy to do. As we have run into things that were difficult, it has been simple to document the process of figuring them out; we have the option to simply cut and paste our command line history. This seems so much easier than creating picture snippets of a GUI-based management system, and it makes it simple to turn our documentation of troubleshooting an issue into a blog post.
When we find a solution to a problem, it can be very easy to implement it and forget about it. What happens then is that we end up doing the same research a year later to solve the same problem. This is one of the reasons that many of our blog posts are not polished; they just read like a stream-of-consciousness troubleshooting session. We are not expert article writers; we are expert website developers, server administrators, and technical implementers. However, we recognized that when we solve a difficult problem, if we document it in a place that is easy to find (our own blog), we can easily come back to it. We simply search our own blog for it.
So really, the reasons above apply to many of our blog topics.
Here's a little script that you can run at the dom0 console to automate loading patches on a fresh installation of XenServer 6.5, up to patch XS65E005. If more patches are released, just add more lines referencing the new patch name (e.g. XS65E006, etc.), starting with the "wget" command and ending with the "rm -f *.xsupdate" command.
#!/bin/bash
wget http://downloadns.citrix.com.edgesuite.net/akdlm/10194/XS65E001.zip
unzip XS65E001.zip
xe patch-apply uuid=`xe patch-upload file-name=XS65E001.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *.zip
rm -f *.xsupdate
wget http://downloadns.citrix.com.edgesuite.net/akdlm/10195/XS65E002.zip
unzip XS65E002.zip
xe patch-apply uuid=`xe patch-upload file-name=XS65E002.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *.zip
rm -f *.xsupdate
wget http://downloadns.citrix.com.edgesuite.net/akdlm/10196/XS65E003.zip
unzip XS65E003.zip
xe patch-apply uuid=`xe patch-upload file-name=XS65E003.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *.zip
rm -f *.xsupdate
wget http://downloadns.citrix.com.edgesuite.net/akdlm/10201/XS65E005.zip
unzip XS65E005.zip
xe patch-apply uuid=`xe patch-upload file-name=XS65E005.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *.zip
rm -f *.xsupdate
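The repeated blocks above can also be generated from a list. This sketch (my rewrite, not part of the original script) prints the same command sequence for each patch so you can review it first; add future patches as new "id name" lines in the here-document, using the download id from each patch's URL.

```shell
#!/bin/bash
# Prints the wget/unzip/patch-apply/cleanup block for each patch in the
# list, using the same download-id and patch-name pairs as the script
# above. Review the output, then pipe it to bash to actually run it.
gen_patch_cmds() {
    while read -r id name; do
        echo "wget http://downloadns.citrix.com.edgesuite.net/akdlm/${id}/${name}.zip"
        echo "unzip ${name}.zip"
        echo "xe patch-apply uuid=\$(xe patch-upload file-name=${name}.xsupdate 2>&1 | tail -1 | awk '{print \$NF}') host-uuid=\$(grep -B1 -f /etc/hostname <(xe host-list) | head -n1 | awk '{print \$NF}')"
        echo "rm -f ${name}.zip ${name}.xsupdate"
    done
}

gen_patch_cmds <<'EOF'
10194 XS65E001
10195 XS65E002
10196 XS65E003
10201 XS65E005
EOF
```

To execute instead of just print, append `| bash` to the `gen_patch_cmds <<'EOF' ... EOF` call on the dom0.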
To change the ip addresses on a XenServer 6.5 pool, start with the slaves, and use the following xe commands:
Remember: Slaves first, then the Master
NOTE: There is no need to change the IP from the Management Console.
Find the UUID of the Host Management PIF:
xe pif-list params=uuid,host-name-label,device,management
You will see a big list. Find the UUID for the slave that you’re working on. Use the “more” pipe if the UUID for your particular slave scrolls off the screen:
xe pif-list params=uuid,host-name-label,device,management | more
Change the IP Address on the first slave:
xe pif-reconfigure-ip uuid=<UUID of host management PIF> IP=<New IP> gateway=<GatewayIP> netmask=<Subnet Mask> DNS=<DNS Lookup IPs> mode=<dhcp,none,static>
Then:
xe-toolstack-restart
Verify the new address with ifconfig, and/or ping it from a workstation.
Point the slave to the new Master IP Address:
xe pool-emergency-reset-master master-address=NEW_IP_OF_THE_MASTER
Repeat the commands above on all slaves.
On the Master:
xe pif-list params=uuid,host-name-label,device,management
xe pif-reconfigure-ip uuid=<UUID of host management PIF> IP=<New IP> gateway=<GatewayIP> netmask=<Subnet Mask> DNS=<DNS Lookup IPs> mode=<dhcp,none,static>
xe-toolstack-restart
DO NOT run the emergency-reset-master command on the Master.
Reboot the Master, then reboot the Slaves and verify that they can find the Master.
Matt Long
04/06/2015
To add local storage on XenServer 6.x
Get your device IDs with:
ls -l /dev/disk/by-id
The host uuid can be copied and pasted from the general tab of your host in XenCenter.
Create your storage:
xe sr-create content-type=user device-config:device=/dev/sdb host-uuid=<Place the host's UUID here> name-label="<Name your local storage here>" shared=false type=lvm
NOTE: Make sure that “shared=” is false. If you have shared storage on a hypervisor, you won’t be able to add it to a pool. When a hypervisor is added to a pool, its local storage is automatically shared in that pool.
NOTE: Replace sdb in the above command with the device that you’re adding.
To remove local storage on XenServer 6.x
Go to console in XenCenter or log in to your xenserver host via ssh
List your storage repositories.
xe sr-list
You will see something like this:
uuid ( RO) : <The uuid number you want is here>
name-label ( RW): Local storage
name-description ( RW):
host ( RO): host.example.com
type ( RO): lvm
content-type ( RO): user
The uuid string is the Storage Repository uuid (SR-uuid) that you need for the next step.
Get the Physical Block Device UUID.
xe pbd-list sr-uuid=Your-UUID
uuid ( RO) This is the PBD-uuid
Unplug the local storage.
xe pbd-unplug uuid=Your-PBD-uuid
Delete the PBD:
xe pbd-destroy uuid=your-PBD-uuid
Forget (remove) the local storage so it no longer shows up as detached.
xe sr-forget uuid=your-SR-uuid
Now check in XenCenter that it's removed.
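For scripting, the lookup steps above can be shortened with xe's `--minimal` flag, which prints bare uuids. This sketch only prints the removal sequence for a given SR uuid so you can review it before pasting it into the host:

```shell
#!/bin/bash
# Prints the pbd-unplug/pbd-destroy/sr-forget sequence for one SR uuid.
# --minimal makes xe print just the uuid, so no copy/paste is needed.
# This only prints the commands; run them on the host after reviewing.
gen_sr_remove() {
    cat <<CMDS
pbd=\$(xe pbd-list sr-uuid=$1 params=uuid --minimal)
xe pbd-unplug uuid=\$pbd
xe pbd-destroy uuid=\$pbd
xe sr-forget uuid=$1
CMDS
}

gen_sr_remove your-SR-uuid
```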
I have four instances of freshly installed XenServer 6.2, and there are about a dozen patches for each that need to be applied. Herein, I will attempt to somewhat automate the application of these patches.
What you will need to know:
The list of patches required
The URLs of the download pages for each patch to be applied
The UUID of the patch
The UUID of the target host. This can be found by running, on the console:
xe host-list
The procedure is as follows:
Use the wget command to download the patch. I’ll use Service Pack 1 as an example. We want that patch first, as it is cumulative, and will cut down the number of other patches to be installed.
To find the URL where SP1 resides, go to XenCenter Console, Tools, Check for Updates. This will give you a list of patches available for your server, with links to the download location. Click on the link for XS62ESP1. On the web page that opens, click on "Download". This will open another page; this is the URL that you want. Copy it to the clipboard.
You’ll need the urls and filenames for all the applicable patches when you customize your script.
http://downloadns.citrix.com.edgesuite.net/8707/XS62ESP1.zip
Now initiate the wget command at the console in XenCenter using this URL:
wget http://downloadns.citrix.com.edgesuite.net/8707/XS62ESP1.zip NOTE:Case Sensitive!
Then unzip it:
unzip XS62ESP1.zip
This zip file contains a file ending with the extension "xsupdate", as in "XS62ESP1.xsupdate". That's our patch.
Register the patch on the Target Server:
xe patch-upload file-name=XS62ESP1.xsupdate
This will display the uuid of the patch. We’ll need that for the next command. You can call it up again with:
xe patch-list
Install the patch:
xe patch-apply uuid=<The patch uuid from the xe patch-list command> host-uuid=<The host uuid from the xe host-list command>
Due to limited disk space, we’re going to clean out our working directory:
rm *
Now we'll write our script. Notice that we're doing the upload/registration and apply in one command.
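Before the script itself, here's what the nested backtick commands are doing: xe prints the new patch's uuid at the end of its output, and `tail -1 | awk '{print $NF}'` pulls out the last field of the last line. A quick simulation with printf (no xe needed, and the sample output is made up for illustration):

```shell
#!/bin/bash
# The uuid-extraction pattern used inside the script's backticks:
# keep the last line of output, then print that line's last field.
last_field() { tail -1 | awk '{print $NF}'; }

# Simulated xe patch-upload output -- the real command prints the
# uploaded patch's uuid as the last thing on the last line:
printf 'uploading patch...\nuuid: a1b2c3d4-0000-1111-2222-333344445555\n' | last_field
# -> a1b2c3d4-0000-1111-2222-333344445555
```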
Start of Script:
wget http://downloadns.citrix.com.edgesuite.net/8707/XS62ESP1.zip
unzip XS62ESP1.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/8737/XS62ESP1002.zip
unzip XS62ESP1002.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1002.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/9058/XS62ESP1005.zip
unzip XS62ESP1005.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1005.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/9491/XS62ESP1008.zip
unzip XS62ESP1008.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1008.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/9617/XS62ESP1009.zip
unzip XS62ESP1009.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1009.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/9698/XS62ESP1011.zip
unzip XS62ESP1011.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1011.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/9703/XS62ESP1013.zip
unzip XS62ESP1013.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1013.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/9708/XS62ESP1014.zip
unzip XS62ESP1014.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1014.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/10128/XS62ESP1015.zip
unzip XS62ESP1015.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1015.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/10134/XS62ESP1012.zip
unzip XS62ESP1012.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1012.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
wget http://downloadns.citrix.com.edgesuite.net/10174/XS62ESP1016.zip
unzip XS62ESP1016.zip
xe patch-apply uuid=`xe patch-upload file-name=XS62ESP1016.xsupdate 2>&1|tail -1|awk -F" " '{print $NF}'` host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'`
rm -f *
Now just copy the text you created, with your patches listed in place of mine, and paste it into the console. You'll be off and running! (Go get coffee.)
Matt Long
02/17/2015
In our virtual environment, one of the VM host servers has a hardware RAID controller on it, so naturally we used the hardware RAID.
The server is a Dell 6100, which uses a low-featured LSI SAS RAID controller.
One of the 'low' features is that it only allows two RAID volumes at a time. It also does not do RAID 10.
So I decided to create a RAID 1 with two SSD drives for the host, where we would also put the root operating systems for each of the guest VMs. It would be fast and redundant. Then we have up to four 1TB disks for the larger data sets. We have multiple identically configured VM hosts in our pool.
For the data drives, with only one more RAID volume available and no RAID 10, I was limited to a RAID 5, a mirror with two spares, or a JBOD. To get the most space out of the four 1TB drives, I created the RAID 5. After configuring two identical VM hosts like this, I put a DRBD primary/primary connection between the two of them, with an OCFS2 filesystem on top. I found I got write speeds as low as 3MB/s. I wasn't originally thinking about what speeds I would get; I just expected them to be somewhere around disk write speed, so I suppose I was expecting acceptable speeds between 30 and 80 MB/s. When I didn't get them, I realized I would have to do some simple benchmarking on my four 1TB drives to see which configuration would give me the best balance of speed and size.
A couple of environment items
So, as I test the different environments, I need to be able to create and destroy the local storage on the dom0 VM host. Here are some commands that help me do it.
I already went through XenCenter and removed all connections and virtual disks on the storage I want to remove; I had to click on the device "Local Storage 2" under the host, click the Storage tab, and make sure each was deleted. {VM Host SR Delete Process}
xe sr-list host=server1 #find and keep the uuid of the sr in my case "c2457be3-be34-f2c1-deac-7d63dcc8a55a"
xe pbd-list sr-uuid=c2457be3-be34-f2c1-deac-7d63dcc8a55a # find and keep the uuid of the pbd connecting the sr to dom0 "b8af1711-12d6-5c92-5ab2-c201d25612a9"
xe pbd-unplug uuid=b8af1711-12d6-5c92-5ab2-c201d25612a9 #unplug the device from the sr
xe pbd-destroy uuid=b8af1711-12d6-5c92-5ab2-c201d25612a9 #destroy the device
xe sr-forget uuid=c2457be3-be34-f2c1-deac-7d63dcc8a55a #destroy the sr
Now that the SR is destroyed, I can work on the raw disks on the dom0 and do some benchmarking on the speeds of different soft configurations from there.
Once I have made a change to the structure of the disks, I can recreate the SR, with a new name, on top of whatever solution I come up with:
xe sr-create content-type=user device-config:device=/dev/XXX host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'` name-label="Local storage XXX on `cat /etc/hostname`" shared=false type=lvm
Replace XXX with the device that works for you.
Most of the tests consisted of me running dd commands and writing down the slowest time, and then what seemed to be about the average time in MB/s. It seemed like the first write after the disk had been idle was a bit slower, and each subsequent write was faster; I am not sure if that means an idle disk takes a bit longer to get up to write speed. If that is the case, there are two scenarios: if the disk is often idle, it will see the slower number, and if the disk is busy, it will see the higher average number, so I tracked them both. The idle-disk observation was not scientific, and many of my tests did not wait long enough for the disk to go idle in between tests.
The commands I ran for testing were dd commands:
dd if=/dev/zero of=data/speedtest.`date +%s` bs=1k count=1000 conv=fdatasync      # for 1 MB
dd if=/dev/zero of=data/speedtest.`date +%s` bs=1k count=10000 conv=fdatasync     # for 10 MB
dd if=/dev/zero of=data/speedtest.`date +%s` bs=1k count=100000 conv=fdatasync    # for 100 MB
dd if=/dev/zero of=data/speedtest.`date +%s` bs=1k count=1000000 conv=fdatasync   # for 1000 MB
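To avoid reading dd's output by eye on every run, a small wrapper (my addition, not from the original tests) can time the write and print MB/s directly. The target directory and size are placeholders; point it at the mount under test, e.g. /data.

```shell
#!/bin/bash
# Times an fdatasync'd dd write of <size-mb> megabytes into <target-dir>
# and prints the throughput in MB/s. Run it a few times in a row to see
# the slower-first-write effect described above.
bench_write() {   # usage: bench_write <target-dir> <size-mb>
    local dir=$1 mb=$2 f start end
    f="$dir/speedtest.$(date +%s).$$"
    start=$(date +%s.%N)
    dd if=/dev/zero of="$f" bs=1M count="$mb" conv=fdatasync 2>/dev/null
    end=$(date +%s.%N)
    rm -f "$f"
    awk -v mb="$mb" -v s="$start" -v e="$end" \
        'BEGIN { printf "%.1f MB/s\n", mb / (e - s) }'
}

bench_write /tmp 10   # replace /tmp with the storage under test, e.g. /data
```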
I won't get into the details of every single command I ran as I was creating the different disk configurations and environments, but I will document a couple of them.
Soft RAID 10 on dom0
dom0> mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb2 --assume-clean
dom0> mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd2 --assume-clean
dom0> mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/md0 /dev/md1 --assume-clean
dom0> mkfs.ext3 /dev/md10
dom0> xe sr-create content-type=user device-config:device=/dev/md10 host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'` name-label="Local storage md10 on `cat /etc/hostname`" shared=false type=lvm
Dual Dom0 Mirror – Striped on DomU for an “Extended RAID 10”
dom0> {VM Host SR Delete Process}   # to clean out 'Local storage md10'
dom0> mdadm --manage /dev/md2 --stop
dom0> mkfs.ext3 /dev/md0 && mkfs.ext3 /dev/md1
dom0> xe sr-create content-type=user device-config:device=/dev/md0 host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'` name-label="Local storage md0 on `cat /etc/hostname`" shared=false type=lvm
dom0> xe sr-create content-type=user device-config:device=/dev/md1 host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'` name-label="Local storage md1 on `cat /etc/hostname`" shared=false type=lvm
domU> # at this point use XenCenter to add and attach disks from each of the local md0 and md1 SRs to the domU (they were attached on my systems as xvdb and xvdc)
domU> mdadm --create /dev/md10 --level=0 --raid-devices=2 /dev/xvdb /dev/xvdc
domU> mkfs.ext3 /dev/md10 && mount /dev/md10 /data
Four disks SR from dom0, soft raid 10 on domU
domU> umount /data
domU> mdadm --manage /dev/md10 --stop
domU> {delete md2 and md1 disks from the storage tab under your VM Host in Xen Center}
dom0> {VM Host SR Delete Process}   # to clean out 'Local storage md10'
dom0> mdadm --manage /dev/md2 --stop
dom0> mdadm --manage /dev/md1 --stop
dom0> mdadm --manage /dev/md0 --stop
dom0> fdisk /dev/sda   # delete partition and write (d w)
dom0> fdisk /dev/sdb   # delete partition and write (d w)
dom0> fdisk /dev/sdc   # delete partition and write (d w)
dom0> fdisk /dev/sdd   # delete partition and write (d w)
dom0> xe sr-create content-type=user device-config:device=/dev/sda host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'` name-label="Local storage sda on `cat /etc/hostname`" shared=false type=lvm
dom0> xe sr-create content-type=user device-config:device=/dev/sdb host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'` name-label="Local storage sdb on `cat /etc/hostname`" shared=false type=lvm
dom0> xe sr-create content-type=user device-config:device=/dev/sdc host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'` name-label="Local storage sdc on `cat /etc/hostname`" shared=false type=lvm
dom0> xe sr-create content-type=user device-config:device=/dev/sdd host-uuid=`grep -B1 -f /etc/hostname <(xe host-list)|head -n1|awk '{print $NF}'` name-label="Local storage sdd on `cat /etc/hostname`" shared=false type=lvm
domU> mdadm --create /dev/md10 -l10 --raid-devices=4 /dev/xvdb /dev/xvdc /dev/xvde /dev/xvdf
domU> mdadm --detail --scan >> /etc/mdadm/mdadm.conf
domU> echo 100000 > /proc/sys/dev/raid/speed_limit_min   # I made the resync go fast, which reduced it from 26 hours to about 3 hours
domU> mdadm --grow /dev/md0 --size=max
We have a couple of sites with XenServer VM machines, so part of our redundancy / failure plan is to be able to quickly install or reinstall a XenServer hypervisor.
There are plenty of more involved methods involving PXE servers, etc., but the quickest, lowest-tech method is to have a USB thumb drive on hand.
We can use one of the plethora of tools to create a USB thumb drive (unetbootin, USB to ISO, etc.), but they all seem to have problems with the ISO (OS not found, errors during install, etc.).
So I found one that works well
http://rufus.akeo.ie/
It appears he keeps his software up to date. Download it and run it, select your USB drive, then check the box to 'Create a bootable disk using ISO Image' and select the image to use from your hard drive. I downloaded the ISO image from
http://xenserver.org/overview-xenserver-open-source-virtualization/download.html
– XenServer Installation ISO
Then just boot from the USB drive and the install should start.
While building my virtual environment with templates ready to deploy, I created quite a few templates and snapshots.
I did a pretty good job of deleting the extras when I didn’t need them any more, but in some cases when deleting a VM I no longer needed, I forgot to check the box to delete the snapshots that went WITH that VM.
I could see under the dom0 host -> Storage tab that space was still allocated to the snapshots (usage was higher than the combined visible usage of servers and templates, and virtual allocation was way higher than it should be).
But there was no place that listed the snapshots taking up the space. When looking into ways to delete these orphaned snapshots (and the disk snapshots that went with them), I found some cumbersome command line methods.
Like this old method that someone used - http://blog.appsense.com/2009/11/deleting-orphaned-disks-in-citrix-xenserver-5-5/
After a bit more digging, I found that by just clicking on the Local Storage under the host, then clicking on the 'Storage' tab under there, I could see a list of all of the storage elements that are allocated. Some were for snapshots without a name; it turns out those were the orphaned ones. If they were allocated to a live server, the Delete button would not be highlighted, so I just deleted those old ones.
Occasionally I have a need to change the size of a disk, perhaps to allocate more space to the OS.
To do this, in the VM I unmount the disk:
umount /data
Click on the domU server in XenCenter, click on the Storage tab, select the storage item I want to resize, and click 'Detach'.
At the command line on one of the dom0 hosts:
xe sr-list host=dom0hostname
Write down the UUID of the SR that the virtual disk is in (we will use XXXXX-XXXXX-XXXX).
xe vdi-list sr-uuid=XXXXX-XXXXX-XXXX
Write down the UUID of the disk that you want to resize (we will use YYYY-YYYY-YYYYY).
Also note the virtual-size parameter that shows. VDIs cannot be shrunk, so you will need a disk size LARGER than the size displayed here.
xe vdi-resize uuid=YYYY-YYYY-YYYYY disk-size=9887654
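disk-size takes a value in bytes, so a long literal number is easy to typo. A tiny helper (my addition, not from the post) converts whole GiB to bytes; the vdi-resize line is shown as a comment since the uuid is a placeholder.

```shell
#!/bin/bash
# disk-size is given in bytes; convert a whole-GiB figure so the
# number comes out right. (Helper is my addition -- verify the
# byte value against what your xe version reports.)
gib_to_bytes() { echo $(( $1 * 1024 * 1024 * 1024 )); }

gib_to_bytes 20   # prints 21474836480

# then, on the dom0 (uuid placeholder as above):
#   xe vdi-resize uuid=YYYY-YYYY-YYYYY disk-size=$(gib_to_bytes 20)
```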