
Mike Kreiten

Speaker Details

Name: Mike Kreiten
Title: Product Marketing Manager, Server & Workstation EMEA
Company: AMD

Mike is responsible for AMD's server and workstation platform marketing in EMEA. He has ten years of IT product marketing experience at companies such as ELSA, PNY, ATI and AMD. His background in workstation graphics technologies, his former responsibility for multimedia products and three years of chipset marketing give him a perspective that reaches well beyond his own product area.

Interview

How does a hardware maker see virtualization? Doesn't it lower the production volume?

Virtualization is not applicable to all kinds of applications and servers. Significant parts of the server volume are stand-alone pedestal servers, for example, or HPC clusters. It is still a valid question for the majority of applications, though. And yes, it will have an influence on the number of servers in operation. On the other hand, it is a very challenging technology that saves energy, CO2 emissions and hardware resources, and it is simply good business to serve this market with fitting solutions.

 

What does hardware support bring? New capabilities or performance?

Hardware support takes workload off the hypervisor. Remember that in most cases the hypervisor is a software layer managing the physical resources for several operating system instances, or VMs. Software can only be as good as the hardware it runs on; if the hardware takes over tasks, the result can only be faster or more efficient. Hardware support can bring both, but it is normally focused on performance and efficiency. Security features are a good example of capabilities we added to silicon.
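
As an aside (an editorial illustration, not part of the interview): on a Linux host, the presence of this hardware support shows up in the CPU flags in /proc/cpuinfo, where "svm" indicates AMD-V and "vmx" indicates Intel VT-x. A minimal sketch, assuming a standard Linux /proc layout:

# Minimal sketch (illustration only): detect hardware virtualization
# support on a Linux host from the CPU flags in /proc/cpuinfo.
# "svm" = AMD-V, "vmx" = Intel VT-x.

def hardware_virtualization_flags(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {"AMD-V": "svm" in flags, "VT-x": "vmx" in flags}
    return {"AMD-V": False, "VT-x": False}

if __name__ == "__main__":
    print(hardware_virtualization_flags())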

Who's in the driver's seat? Do hardware makers fill the needs of software developers, or do software developers use capabilities designed by hardware makers?

You have to know that hardware such as CPUs is not invented overnight. We start thinking about new technologies inside an x86 processor a few years before delivery. For example, back in 1999 we set the course for multi-core CPUs and our system architecture. We introduced the Opteron CPU and system architecture in 2003 and multi-core CPUs in 2005, though popular operating systems didn't support all features on day one of delivery. While AMD's goal is to deliver technology support right at the time it becomes interesting, either economically or technically, sometimes we are early.

 

What is a chipmaker's relationship with virtualization software makers?

I can only speak for AMD, but we have very close relationships to make sure our customers have the best possible experience. We have adopted and invented a lot of technologies that are beneficial for virtualized environments when supported by both hardware and software. Software vendors may see trends early, because they are very close to the application side at customers. We know where to tune and improve the technology, so we frequently sit down with all major software vendors and discuss our common future.

 

Do you contribute to Open Source projects?

Considering all the products AMD has to offer since the acquisition of ATI, the answer is yes. With regard to virtualization within Linux it is also a yes; KVM is of course supported as well.

With this virtualization trend, when should one choose big, partitioned servers over a cluster of smaller servers running virtual machines?

That is a question which needs a deep investigation of the infrastructure and cannot be answered easily. We often see average utilization of 10-20% on physical servers, while a target utilization on virtualized servers would be around 70%, leaving headroom for peaks. It depends on how these peaks are covered, how strongly the workload varies over time, what consolidation rate can be achieved and, of course, on our customer's expectations for amortization. Virtualization is an investment at first, but it pays off very quickly.
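
To make those figures concrete (a back-of-the-envelope editorial sketch, not AMD sizing guidance): assuming servers of comparable capacity and workloads that can be freely combined, the number of virtualization hosts needed is roughly the total load divided by the target utilization.

# Rough consolidation estimate based on the utilization figures quoted
# above (10-20% average on physical servers, ~70% target on virtualized
# hosts). Assumes comparable per-server capacity and freely combinable
# workloads; illustration only.
import math

def hosts_needed(physical_servers, avg_utilization, target_utilization=0.70):
    total_load = physical_servers * avg_utilization   # in "server equivalents"
    return math.ceil(total_load / target_utilization)

# Example: 20 servers averaging 15% load consolidate onto about 5 hosts
# running near the 70% target, leaving headroom for peaks.
print(hosts_needed(20, 0.15))   # -> 5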

 

Virtualization and multi-core are two buzzwords in the chip business. How well do they play together?

Guest operating systems normally run multiple applications each. In a virtualized environment running at a high consolidation rate, several guest operating systems share one CPU core. Last but not least, two or four cores share one socket in multi-core CPUs. Platform efficiency is key, as you can imagine. Luckily, our DirectConnect architecture gives us more ways to attach system components and multiple paths to access memory, so moving away from front-side bus (FSB) technology four years ago was the right decision. With the quad-core Opteron we added hardware acceleration for even more efficiency.
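
A purely illustrative calculation of this layered sharing, with made-up example figures (not AMD guidance):

# Layered sharing: applications share a guest OS, guests share a core,
# cores share a socket. Example values are hypothetical.
sockets, cores_per_socket = 2, 4          # e.g. a two-socket quad-core box
guests, apps_per_guest = 24, 5            # hypothetical consolidation setup

total_cores = sockets * cores_per_socket
guests_per_core = guests / total_cores
apps_per_core = guests_per_core * apps_per_guest

print(f"{total_cores} cores, {guests_per_core:.0f} guests/core, "
      f"{apps_per_core:.0f} applications/core")
# -> 8 cores, 3 guests/core, 15 applications/core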

 

What is virtualization's impact on power consumption?

The goal of consolidating workloads is to be able to switch off unused servers; that is the most obvious impact. Server-based computing and desktop virtualization using thin clients, for example, would also be very beneficial, and not only from the energy perspective.

What's the future of virtualization on the hardware side?

There is always one limiting factor in every system. The challenge is to create balanced systems, so that when the first limitation is removed by advanced technology, the next one is not too close behind. In the past, computing horsepower or memory bandwidth were typical candidates, but networking and storage have become faster over time. So the next step will be virtualization of I/O, with an IOMMU, to take even more workload off the hypervisor and attach guest operating systems more closely to physical hardware resources.
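
As a present-day illustration (this interface postdates the interview): on a modern Linux host with an IOMMU such as AMD-Vi enabled, the kernel exposes the resulting device groupings under /sys/kernel/iommu_groups, which can be listed with a short sketch like this one.

# Minimal sketch: list IOMMU groups and their devices on a modern Linux
# host. The kernel exposes them under /sys/kernel/iommu_groups when an
# IOMMU is enabled; illustration only.
import os

def iommu_groups(root="/sys/kernel/iommu_groups"):
    groups = {}
    if not os.path.isdir(root):
        return groups                      # no IOMMU enabled or exposed
    for group in sorted(os.listdir(root), key=int):
        devices_dir = os.path.join(root, group, "devices")
        groups[group] = sorted(os.listdir(devices_dir))
    return groups

if __name__ == "__main__":
    for group, devices in iommu_groups().items():
        print(f"group {group}: {', '.join(devices)}")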