This is the Live Blog of the VMworld 2012 INF-NET2166 — “How I Built My SDN-Based Cloud.” You’ll find my recap of the session below.
The presenters for this session are:
- Justin Giardina, iland
- Jesse Morgan, Peak 10
- Adina Simu, VMware
- Omar Torres, VeriStor Systems
Jesse Morgan of Peak 10 is up first, discussing the Peak 10 cloud firewall offering. The challenges with the existing firewall offering were:
- No VPN, No SSL-VPN
- No overlapping of internal customer IP spaces
- No load balancing
- Physical device sprawl
- Many connections
Peak 10 installed vShield Edge in their lab. The latest beta version 694615 of vShield Edge offers:
- Load Balance HTTP
- 10 interfaces
- Load Balance HTTPS
- Load Balance TCP
- Large and Compact Appliance
- Configurable Load Balancing Health Checks
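The configurable health checks in that list can be pictured as a simple poll-and-filter loop: the load balancer probes each pool member and only sends new connections to members that pass. This is a minimal sketch of that idea in Python; the member names, status codes, and latency threshold are hypothetical illustrations, not vShield Edge configuration syntax.

```python
# Toy model of an HTTP health check as a load balancer might perform it.
# Thresholds and pool members below are made-up examples.

def is_healthy(status_code, response_ms, max_latency_ms=2000):
    """A member passes the check if it returns 2xx within the latency budget."""
    return 200 <= status_code < 300 and response_ms <= max_latency_ms

def select_members(members):
    """Return only members that passed their last health check."""
    return [name for name, (code, ms) in members.items() if is_healthy(code, ms)]

# Example poll results for two hypothetical web servers behind the edge.
pool = {"web01": (200, 35), "web02": (503, 12)}
print(select_members(pool))  # only web01 is eligible for new connections
```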
The Peak 10 testing infrastructure includes:
- Cisco UCS B230 M2 — Good to see
- EMC VNX 5300
- 2 Web Servers
- Load generator
- Edge appliance
VMware has continually improved the performance of vShield Edge through the different versions. The …
This is the Live Blog of the VMworld 2012 General Session. You’ll find my recap of the session below.
CEO Paul Maritz is on the stage, introduced by Chief Marketing Officer Rick Jackson. Statistics say that in 2012, 60% of all x86 workloads worldwide are virtualized. In 2008 we were asking, “What is the cloud?” In 2012, we are asking, “How do we transform our operations into a cloud infrastructure?”
Maritz continues to say that we need to look at new ways to provide access to information to users. It’s all about giving the information to users whenever, wherever they are and in the context that they need. IT transformation happens in three broad categories:
- Infrastructure — Moving from physical to virtual to cloud
- Application transformation — Moving from desktop to web. Providing big data in manageable contexts.
- End User Computing — Moving from desktop / laptop access to any device, anywhere. Mobile users.
Paul Maritz has officially announced that he is handing over the stewardship of VMware to Pat Gelsinger as new CEO. Pat Gelsinger is on the stage now. He is talking about the Software Defined Datacenter.
As much as we have progressed, we still have silos: Windows, Linux, Databases, Mission Critical, Big Data, HPC. The Software Defined Datacenter is about bringing all of these different silos under one unified infrastructure / datacenter operating system. VMware is announcing the vCloud Suite to deliver and manage the Software Defined Datacenter.
VMware is announcing vSphere …
I am very proud to announce that my new book co-authored with Sean Crookston has been released! It was a lot of hard work, and I cannot thank Sean enough for being crazy enough to go through it with me. It is available now on Amazon in both Print and Kindle formats. Check out my publications page or the Amazon page for more details. Thank you to all who have already ordered a copy. The print edition seems to be sold out at the moment, with more copies on the way soon!
I am honored to be named a vExpert for 2012. I could not have done this without the help of my peer community. I learn something new from you guys every day. Thank you to VMware and the virtualization community.
I have been hearing a lot of interest from my clients lately about stretched vSphere clusters. I can certainly see the appeal from a simplicity standpoint. At least on the surface. Let’s take a look at the perceived benefits, risks, and the reality of stretched vSphere clusters today.
First, let’s define what I mean by a stretched vSphere cluster. I am talking about a vSphere (HA / DRS) cluster where some hosts exist in one physical datacenter and some hosts exist in another physical datacenter. These datacenters can be geographically separated or even on the same campus. Some of the challenges will be the same regardless of the geographic location.
To keep things simple, let’s look at a scenario where the cluster is stretched across two different datacenters on the same campus. This is a scenario that I see attempted quite often.
This cluster is stretched across two datacenters. For this example let’s assume that each datacenter has an IP-based storage array that is accessible to all the hosts in the cluster and the link between the two datacenters is Layer 2. This means that all of the hosts in the cluster are Layer 2 adjacent. At first glance, this configuration may be desirable because of its perceived elegance and simplicity. Let’s take a look at the perceived functionality.
- If either datacenter has a failure, the VMs should be restarted on the other datacenter’s hosts via High Availability (HA).
- No need for manual intervention or a product like Site Recovery Manager.
Unfortunately, perceived functionality and actual functionality differ in this scenario. Let’s take a look at an HA failover scenario from a storage perspective first.
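One storage-side consequence can be pictured with a toy model. This is my own illustration, not VMware’s HA logic, and it assumes the array in the affected site is still reachable (for example, only the hosts failed): each VM’s files stay on the array in its original site, so VMs restarted in the surviving datacenter now do all of their disk I/O across the inter-site link.

```python
# Toy model: HA moves a VM's compute to the surviving datacenter,
# but its datastore stays put, so disk I/O crosses the inter-site link.
# Datacenter and VM names are hypothetical.

def restart_site(vms, failed_dc, surviving_dc):
    """Restart compute for VMs from the failed DC; datastores do not move."""
    return [
        {"name": vm["name"],
         "runs_in": surviving_dc,
         "datastore_in": vm["datastore_in"],
         "io_crosses_link": vm["datastore_in"] != surviving_dc}
        for vm in vms if vm["runs_in"] == failed_dc
    ]

vms = [{"name": "vm1", "runs_in": "DC-A", "datastore_in": "DC-A"},
       {"name": "vm2", "runs_in": "DC-A", "datastore_in": "DC-A"}]
for vm in restart_site(vms, failed_dc="DC-A", surviving_dc="DC-B"):
    print(vm["name"], "runs in", vm["runs_in"], "- I/O crosses link:", vm["io_crosses_link"])
```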
To be sure, there are plenty of new features to get excited about in vSphere 5.0. VMware has come a long way since 2002, when I first started using the technology. Often in the technology world, practitioners get excited about learning and implementing new technology without planning properly. They want to implement as fast as possible to bring about the benefits and innovation that the new technology has to offer. I believe that we have all been guilty of this at one point. So, this post is to remind all technology practitioners to take a step back and think about proper planning when implementing new technology projects. One of the basic tasks that should be done at the beginning of any virtualization design is capacity planning.
My role at TBL allows me to examine many virtual infrastructures. One of the common challenges I see in many of these infrastructures is resource allocation after they have been running for a while. Workloads were virtualized quickly without proper capacity planning, and by the time I am called in to assess the infrastructure, resources are strained in the environment. This point can come quickly if proper capacity planning is not performed up front. …
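The capacity-planning exercise described above is, at its simplest, arithmetic: measure peak demand per workload, divide by usable host capacity, and size by the binding constraint with headroom for host failure. A minimal sketch follows; every number in it is a hypothetical example, not a figure from any real assessment.

```python
# Minimal capacity-planning sketch: size a cluster by whichever resource
# (CPU or RAM) is the binding constraint, plus N+1 headroom for HA.
# All demands and capacities below are made-up example figures.
import math

def hosts_needed(vm_peak_ghz, vm_peak_gb, host_ghz, host_gb, ha_spares=1):
    """Return the host count needed to satisfy summed peak demand."""
    cpu_hosts = math.ceil(sum(vm_peak_ghz) / host_ghz)
    ram_hosts = math.ceil(sum(vm_peak_gb) / host_gb)
    return max(cpu_hosts, ram_hosts) + ha_spares

# 40 VMs peaking at 1.2 GHz / 6 GB each; hosts with 32 GHz / 192 GB usable.
print(hosts_needed([1.2] * 40, [6] * 40, host_ghz=32.0, host_gb=192.0))  # 3
```

Real planning tools also account for utilization trends, peak coincidence, and growth, but the basic sizing step looks like this.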
Centralized management brings organizations more control over resources with fewer equipment assets in the field. There are many cases where equipment may be needed in a branch office to speed access time to a resource or eliminate the dependency on a network link to the central datacenter. It is very common to see at least one, if not multiple, servers at the branch office to provide file/print services or user authentication. Perhaps the servers are providing some service that is specialized to a particular business (banking applications come to mind here). Whatever service is being provided, sometimes it is better to maintain local access at the branch. So there are servers to maintain at the branch office, as well as networking gear and other such devices.
What if you could consolidate your branch office services with your router? That is exactly what the Cisco UCS Express is meant to do. The UCS Express is a Services-Ready Engine (SRE) module that works in Integrated Services Router Generation 2 (ISR G2) routers. This module is …
This is a quick post discussing where vSphere stands with memory management today. vSphere has many mechanisms to reclaim memory before resorting to paging to disk. Let’s briefly look at these methods.
- Transparent Page Sharing (TPS)
- Think of this as deduplication for memory. Identical pages of memory are shared among many VMs instead of provisioning a copy of that same page to each VM. This can have a tremendous impact on the amount of RAM used on a given host if there are many identical pages.
- Ballooning
- This method increases the memory pressure inside the guest so that memory that is not being used can be reclaimed. If the hypervisor were to just start taking memory pages from guests, the guest Operating Systems would not react positively to that. So, ballooning is a way to place artificial pressure on the guest VM so that the VM pages unused memory to disk. Then, the hypervisor can …
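The TPS idea above can be pictured as content-hash deduplication: identical pages collapse to a single backed copy. This is a toy model of the concept only, not ESXi’s implementation (which, among other things, verifies full page contents on a hash match and uses copy-on-write when a shared page is modified).

```python
# Toy model of page sharing: map every (vm, page) to one backing slot
# per unique page content, so identical pages are stored once.
import hashlib

def shared_backing(vm_pages):
    """Return (mapping of (vm, page_index) -> slot id, number of backed slots)."""
    backing = {}   # content hash -> backing slot id
    mapping = {}   # (vm, page_index) -> backing slot id
    for vm, pages in vm_pages.items():
        for i, content in enumerate(pages):
            h = hashlib.sha256(content).hexdigest()
            backing.setdefault(h, len(backing))
            mapping[(vm, i)] = backing[h]
    return mapping, len(backing)

# Three VMs sharing an identical zero page, each with one unique page.
vms = {"vm1": [b"\x00" * 4096, b"a" * 4096],
       "vm2": [b"\x00" * 4096, b"b" * 4096],
       "vm3": [b"\x00" * 4096, b"c" * 4096]}
mapping, slots = shared_backing(vms)
print(slots)  # 4 backed pages instead of 6
```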
The desktop PC is dead! Finally!
Well, not yet, but VMware is sure working hard to make this a reality. I have been discussing with clients and colleagues why the traditional desktop model does not make sense for “today’s” end user for quite a few “todays.” VMware calls a user-centric approach to computing End User Computing. End users need access to their applications and information on any device from anywhere. They should not know or care about the nuances of the Operating System. This sounds like a lofty goal, but it is becoming a reality more and more every year.
If we look at the last decade (or even back into the 1990s), we have seen the Operating System itself hold the spotlight. New “Operating System” features were actually marketed to end users.
- The latest OS supports more RAM!
- The latest OS supports 64-bit computing!
- The latest OS supports Solid State Flash Drives!
- The latest OS can take advantage …
I recently spoke at a lunch-and-learn event about “Security in a Virtualized World.” If one thing was made abundantly clear during the discussion, it was that securing a virtual infrastructure is more complicated than securing a physical one. There are many moving parts to consider along with the hypervisor itself. For many years, I have been discussing the need for automation with my clients. It makes the infrastructure much easier to manage, and from a security standpoint, it helps ensure that build policies are consistent across all of the virtual hosts in the infrastructure.
There have always been tools to automate a vSphere infrastructure, ranging from Perl scripts to PowerCLI. With the release of vSphere 5, automation is becoming more and more of a reality. When you think about automating a VMware infrastructure, you may think about writing scripts to perform certain tasks or spending hours on the “perfect” ESX build that can be deployed through automation. Scripts are still available and in some cases necessary for automation. However, with vSphere 5 we are beginning to see an “automation-friendly” environment built into the management tools provided by VMware.
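One concrete payoff of automation for the security argument above is drift detection: compare every host against a known-good baseline build. Here is a generic sketch in plain Python; the host names and settings are hypothetical, and a real environment would pull host configuration via PowerCLI or the vSphere API rather than hard-code it.

```python
# Generic configuration-drift check against a baseline build policy.
# Hosts and settings below are hypothetical examples.

def find_drift(baseline, hosts):
    """Return {host: {setting: (expected, actual)}} for any deviation."""
    drift = {}
    for name, config in hosts.items():
        diffs = {k: (v, config.get(k)) for k, v in baseline.items()
                 if config.get(k) != v}
        if diffs:
            drift[name] = diffs
    return drift

baseline = {"ntp": "pool.ntp.org", "ssh_enabled": False, "lockdown": True}
hosts = {"esx01": {"ntp": "pool.ntp.org", "ssh_enabled": False, "lockdown": True},
         "esx02": {"ntp": "pool.ntp.org", "ssh_enabled": True, "lockdown": True}}
print(find_drift(baseline, hosts))  # esx02 has SSH enabled against policy
```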
ESXi: Built for Automation