Profoss / Events / January 2008: Virtualisation / Speakers / Kristof Despiegeleer

Kristof Despiegeleer

Speaker Details
Name: Kristof Despiegeleer
Title: CEO

Kristof is founder and CEO of Q-layer. He founded Datacenter Technologies (DCT) in 2001, which was acquired by Veritas (now Symantec) in April 2005. DCT developed a revolutionary and unique backup solution based on Content Addressed Storage technology. In 2000, Kristof founded Dedigate, a managed hosting provider. Dedigate was acquired by Miami-based Terremark in August 2005. He started his career at PSINet, where he was responsible for the European expansion of PSINet's datacenter division.


I/O is sometimes mentioned as a bottleneck for virtualization. Can you confirm that?

Yes, absolutely. And it is amazing to see how little attention is given to the storage layer in many virtualization projects. Hypervisors typically have very little overhead when it comes to CPU and memory usage. But network I/O and storage I/O can suffer a substantial performance degradation if not well designed. Fortunately, the solutions are there, and they don't even have to be costly. We work a lot with InfiniBand, which provides high bandwidth, but more importantly has very low latency (the round-trip time for one packet of data). Latency is a very important factor in every storage subsystem!

Could you describe a good storage solution for a virtualized server environment?

First of all, let's ask ourselves why we virtualize our servers. Virtualization brings many benefits, including consolidation. Increased flexibility, however, is the main driver for many virtualization projects. Virtual servers can be resized on the fly, they can be moved without downtime, and disaster recovery finally becomes a reality. A good storage solution should support this kind of flexibility.


It goes without saying that the use of local disks is not an optimal choice. That leaves us with two possible types of storage: NAS and SAN. The use of NAS devices (file servers) is very popular because of the ease of management. However, there is typically a performance hit because of all the "layers" between the virtual server and the actual disks (e.g. multiple filesystems stacked on top of each other). SAN solutions, on the other hand, are block-based, which means that the overhead is lower, but management tends to be more complex.
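The layering difference can be made concrete with a back-of-the-envelope model. The per-layer latencies below are illustrative assumptions chosen only to show the shape of the comparison, not measurements of any real device:

```python
# Toy model: every software layer on the I/O path adds latency.
# All figures are illustrative assumptions, not benchmarks.

nas_path = {                         # VM -> NFS -> server filesystem -> disk
    "guest filesystem": 0.05,        # milliseconds per request
    "NFS client + network": 0.30,
    "server filesystem": 0.05,
    "disk access": 5.0,
}

san_path = {                         # VM -> block transport -> disk
    "guest filesystem": 0.05,
    "block transport (FC/iSCSI)": 0.10,
    "disk access": 5.0,
}

for name, path in (("NAS", nas_path), ("SAN", san_path)):
    print(f"{name}: {sum(path.values()):.2f} ms per request")
```

Even with generous numbers, the extra layers on the NAS path add up on every single request, which is exactly where latency-sensitive workloads feel the difference.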

How do I decide which storage solution is sufficient for my needs?

In a virtualized environment, storage will almost certainly be shared among virtual servers. Even if performance is good for one virtual server, it is paramount to execute performance tests with all virtual servers - sharing the same spindles - under heavy load. Make sure that you know how your environment will grow. Within the VM, filesystems will optimize I/O and perform caching, as they always do. But on the physical level all these I/O streams come together in a non-optimized fashion! This can kill the performance of the storage subsystem.
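The idea of testing several consumers hammering the same storage at once can be sketched in a few lines of Python. This is a toy stand-in for a real benchmarking tool such as fio: it issues concurrent random 4 KiB reads against a scratch file (a hypothetical stand-in for a shared volume) from several worker threads:

```python
import os
import random
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4096
BLOCKS = 1024  # a 4 MiB scratch file stands in for a shared volume

def random_reads(path, n_reads):
    """Issue n random 4 KiB reads and return the total time taken."""
    with open(path, "rb") as f:
        start = time.perf_counter()
        for _ in range(n_reads):
            f.seek(random.randrange(BLOCKS) * BLOCK)
            f.read(BLOCK)
        return time.perf_counter() - start

with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(os.urandom(BLOCK * BLOCKS))
    path = tmp.name

# Simulate several "virtual servers" sharing the same spindles at once.
with ThreadPoolExecutor(max_workers=4) as pool:
    times = list(pool.map(lambda _: random_reads(path, 200), range(4)))

os.unlink(path)
print(f"slowest worker: {max(times) * 1000:.1f} ms for 200 reads")
```

In practice you would run this kind of load from inside each VM against the real shared storage, and watch how the latency of the slowest consumer degrades as more VMs join in.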

What is a typical cost per GB?

In large corporate environments, the amount of storage that can be managed by one full time employee can be as low as 5 to 10TB.


This means that the management cost is much higher than the initial investment! I believe that in the coming months and years we will see the rise of many "storage on demand" or "storage as a service" offerings, where capex is turned into opex (a monthly cost per GB, everything included) and with the flexibility to start small and pay as you grow. These prices will probably range from about 1 EUR/GB per month or less to 5 EUR/GB per month and above. It all depends on performance, levels of redundancy, included services (e.g. backup, SLAs) and the physical location of the storage.
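As a worked example of what that price range means at the scale one administrator might manage (using the 10 TB figure and the 1-5 EUR/GB per month range quoted above):

```python
# Monthly "storage as a service" cost at the quoted price range.
capacity_gb = 10 * 1024                    # 10 TB managed by one admin
low_eur_per_gb, high_eur_per_gb = 1.0, 5.0

low = capacity_gb * low_eur_per_gb
high = capacity_gb * high_eur_per_gb
print(f"10 TB costs {low:,.0f} to {high:,.0f} EUR per month, "
      f"i.e. {12 * low:,.0f} to {12 * high:,.0f} EUR per year")
```

Put next to the salary of a full-time storage administrator, numbers like these show why the opex model can be attractive even at the top of the range.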

Apart from performance and reliability, which advanced features should we expect from a storage solution?

Storage solutions should support a good backup policy. Snapshots are an important mechanism for taking consistent backups of VM volumes. A snapshot provides a "frozen" view of a volume that is in use and on which data is being written and overwritten all the time. It gives us time to copy the whole volume to an image file and store that file for disaster recovery purposes. But snapshots can be very costly: they slow down a live system. There are new solutions that support unlimited snapshots without any performance hit. These snapshots can also be used to roll back to a previous state, e.g. after a failed software update.
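The copy-on-write idea behind snapshots can be sketched in a few lines. This is a toy in-memory model of one snapshot, not any particular product's implementation:

```python
class Volume:
    """A toy block volume supporting one copy-on-write snapshot."""

    def __init__(self, blocks):
        self.blocks = dict(blocks)   # block number -> data
        self.snapshot = None         # pre-write copies of changed blocks

    def take_snapshot(self):
        self.snapshot = {}           # empty until a block is overwritten

    def write(self, n, data):
        # Preserve the old contents the first time a block changes.
        if self.snapshot is not None and n in self.blocks and n not in self.snapshot:
            self.snapshot[n] = self.blocks[n]
        self.blocks[n] = data

    def snapshot_view(self):
        """The volume exactly as it was when the snapshot was taken."""
        frozen = dict(self.blocks)
        frozen.update(self.snapshot or {})
        return frozen

vol = Volume({0: "boot", 1: "data-v1"})
vol.take_snapshot()
vol.write(1, "data-v2")              # the live volume keeps changing...
backup = vol.snapshot_view()         # ...while the frozen view stays consistent
```

The cost mentioned above comes from that extra copy on the write path: every first write to a block after a snapshot pays for preserving the old data.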


Another advanced feature is "thin provisioning". This means that a VM can be given a volume of, say, 100 GB, but it will not use 100 GB on the storage subsystem. As long as only 10 GB of files are present on the volume, it will use only 10 GB. Thin provisioning is a great way to optimize storage usage.
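The 100 GB / 10 GB example above can be modelled directly. In this toy sketch, backing storage is allocated only when a block is first written, which is the essence of thin provisioning:

```python
class ThinVolume:
    """Toy thin-provisioned volume: blocks are backed only on first write."""

    BLOCK_GB = 1  # coarse 1 GB blocks keep the arithmetic obvious

    def __init__(self, provisioned_gb):
        self.provisioned_gb = provisioned_gb
        self.allocated = set()       # block numbers actually backed by storage

    def write(self, block_no):
        self.allocated.add(block_no)

    @property
    def used_gb(self):
        return len(self.allocated) * self.BLOCK_GB

vm_disk = ThinVolume(provisioned_gb=100)   # the guest sees a 100 GB disk
for block in range(10):                    # but only 10 GB of data is written
    vm_disk.write(block)

print(vm_disk.provisioned_gb, "GB provisioned,", vm_disk.used_gb, "GB used")
```

The flip side, which any real deployment must plan for, is overcommitment: the sum of provisioned sizes can exceed physical capacity, so usage has to be monitored as volumes fill up.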


A final important aspect to look at is manageability. A good storage solution is there to support your business needs, not to take up all your IT resources!