Tuesday, 7 April 2015

No more excuses - VDI is ready

Over the years, it’s been easy to make excuses about why virtual desktop infrastructure (VDI) projects failed or why VDI wasn’t ready for your environment. The list of excuses is endless, but each one generally ends up following one of a few themes:
  • You could blame storage for being unable (or too expensive) to support the unique workload that VDI presents, even for persistent desktops;
  • You could say the graphical experience always left something to be desired;
  • You could argue that non-persistent desktops are impractical to deploy.
It’s time for us to put those old excuses aside and look at VDI in a new light. Don’t get me wrong: the things we call “excuses” today were once valid “reasons”. But every one of them has been addressed in one way or another in recent years.

Storage is no longer an issue

In the early days of VDI, storage was the thorn in our side. Most people didn’t recognise the unique challenge that VDI workloads presented to storage systems. In fact, many companies realised the difference only after they started rolling out VDI. 
Your run-of-the-mill SAN (storage area network) was geared towards capacity, so you’d have the storage person carve out whatever storage you thought you would need to support your shiny new VDI environment and forget about it. Then you’d do a proof of concept, followed by a pilot. You’d test on 20 or 30 users, sign off on VDI as “the platform of the future,” and start to roll it out.
Unfortunately for many companies, that’s when they learned that VDI storage isn’t as much about capacity as it is about I/O. Digging in further exposed the limitations of SANs: no matter what you did, you could only tune your environment for reads or for writes, even though VDI clearly demands both.
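To make that concrete, here is a minimal back-of-the-envelope sizing sketch in Python. Every input (users, IOPS per user, write ratio, RAID write penalty) is an illustrative assumption rather than a measurement, but it shows why an array sized for capacity falls over under a VDI workload.
```python
# Rough VDI storage sizing: the constraint is I/O, not capacity.
# All numbers below are hypothetical examples, not benchmarks.

def backend_iops(users, iops_per_user, write_ratio, raid_write_penalty):
    """Back-end IOPS the array must service at steady state."""
    frontend = users * iops_per_user
    reads = frontend * (1 - write_ratio)
    writes = frontend * write_ratio
    # On parity RAID each logical write costs several physical I/Os
    # (commonly cited: ~4 for RAID 5, ~2 for RAID 10).
    return reads + writes * raid_write_penalty

# 500 desktops at a modest 10 steady-state IOPS each, with the
# write-heavy mix often quoted for VDI (~80% writes), on RAID 5:
total = backend_iops(users=500, iops_per_user=10,
                     write_ratio=0.8, raid_write_penalty=4)
print(total)        # 17000.0 back-end IOPS
print(total / 180)  # ~94 spindles of 15K disk at ~180 IOPS each
```
Sized for capacity alone, those 500 desktops would need only a handful of disks; sized for I/O, they need racks of them. That gap is exactly what caught the early adopters out.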

VISIT BRIFORUM 2015

If you’re interested in digging deeper into VDI, Brian Madden is hosting a conference called “BriForum” in London from 19-20 May, 2015. BriForum is a vendor-neutral, highly technical conference dedicated to end-user computing technologies like VDI. This is the event’s fourth year in London, and it will be attended by hundreds of experts who will share their stories and lessons learned from VDI projects.
Until a few years ago, this was a huge struggle. We were all asking, “When do the benefits of VDI outweigh the cost of the storage?” Fortunately, the industry solved this from the other angle - it brought the cost of delivering VDI storage down.
Today there is no shortage of vendors offering VDI storage optimisation. You have hyper-converged products like Nutanix and SimpliVity (along with offerings directly from VMware and Citrix), but there are also more than 20 other products (at various price points) that you can plug into your VDI environment without swapping out your existing server hardware. Each one works slightly differently, but they all deliver the capacity and the performance that both persistent and non-persistent desktops require.
The biggest challenge with VDI storage today is choosing which vendor to use, but make no mistake—the technology exists at a price point that makes VDI storage a non-issue in 2015.

Non-persistent is great now

It wasn’t long ago that non-persistent desktops presented two huge limitations that made them impractical to deploy for VDI. The primary challenge was with applications. To pull off a fully non-persistent desktop with a single image for all users, each application had to be compartmentalised using something like App-V or ThinApp. That seemed fine, but the reality is that nothing available at the time could package and deploy 100% of applications that way. You’d end up putting some applications into your base image; then, as soon as you didn’t want one department having access to those applications, you’d fork the base image, ultimately winding up with a complicated mess of base images that was more complex than just giving everyone their own image to begin with.
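To see how quickly that forking compounds, here is a toy Python illustration (the app sets are hypothetical): every optional app set you bake into the image, rather than virtualise, doubles the number of base images you might have to maintain.
```python
# Toy illustration of base-image sprawl: with N optional app sets
# installed in the image instead of virtualised, the number of
# possible department-specific images approaches 2^N.

app_sets = ["finance", "cad", "dev-tools", "call-center"]  # hypothetical

print(2 ** len(app_sets))  # 16 possible base images from 4 app sets
print(2 ** 8)              # 256 from 8; hardly simpler than per-user images
```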
The other challenge with non-persistent was with user-installed applications. To realise the benefits of a non-persistent image (patching, upgrading, refreshing at boot), you had to lock it down so users couldn’t install their own apps. For many companies this was a show stopper. Even if they didn’t let their users install applications, taking away that autonomy was enough to halt the project.
Today there are technologies available that deliver applications with 100% compatibility, making non-persistent VDI desktops viable. App-V and ThinApp may look like application management platforms, but at their core they’re solving the problem of dissimilar applications running side by side. Newer technologies like Unidesk, FSLogix, VMware App Volumes, and others have been designed specifically to address the challenges of application management in VDI environments.
Of course, managing applications is just part of the problem. We also have to give some flexibility back to the users in the form of user-installed applications. There are several companies that offer some sort of product or feature to support user-installed applications in non-persistent environments, like Unidesk, AppSense, or Liquidware Labs. The real reason this is “solved” today, though, is that there just isn’t that much of a need anymore. What apps do our users need on their Windows desktop that can’t be accessed from a web browser or a smartphone? You don’t need iTunes installed on your desktop if you have an iPhone.

Graphics virtualisation

The last hurdle for VDI to clear was the user experience. Until recently, even the most top-notch VDI user experience was best described as “Not quite right. But it’s fine, really. I’ll get used to it,” by normal PC users, and “Ha! You really expect me to use this thing?” by designers and other employees with graphically intense workloads.
No matter what companies did to add “3D support” to their products (protocols, client software, thin clients, etc), nothing lived up to the real thing. In the days of Windows XP, the gap between a desktop that supported 3D and one that didn’t was pretty narrow, but today it’s huge. Even a £300 PC from Maplin has a GPU that can handle rendering graphics and text for Internet Explorer and Office. If you deploy VDI without a virtualised GPU today, there is a substantial difference in user experience compared with a traditional desktop. Even if your average user can’t articulate the difference, they can still tell that something just isn’t right about their work desktop. Imagine how the high-end and more tech-savvy users feel.
Today it’s possible to deliver GPU-enabled virtual desktops to users with varying levels of performance. Task and knowledge workers can be given just enough of a slice of the GPU to make their desktops run and look better, while high-end users can access a virtual desktop that is as good as, or better than, their workstation-class PC. Nvidia has been talking about the technology for a while, and for the past few years has been releasing support for one platform at a time. Now, both Citrix and VMware fully support GPU and vGPU-enabled virtual desktops.
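For a rough feel of how that GPU slicing plays out, here is a small Python sketch. The frame-buffer and profile sizes are made-up examples, not actual GRID specifications; check your vendor’s documentation for real figures.
```python
# Hypothetical vGPU density arithmetic: one physical GPU's frame
# buffer carved into fixed per-desktop profiles.

def desktops_per_gpu(framebuffer_gb, profile_gb):
    """How many virtual desktops fit on one GPU at a given profile."""
    return framebuffer_gb // profile_gb

FRAMEBUFFER_GB = 8  # example board, not a real product spec

for profile_gb, user_type in [(1, "task worker"),
                              (2, "knowledge worker"),
                              (4, "designer")]:
    n = desktops_per_gpu(FRAMEBUFFER_GB, profile_gb)
    print(f"{profile_gb} GB profile ({user_type}): {n} desktops per GPU")
```
The trade-off is the one described above: smaller slices mean more users per card, while bigger slices mean a workstation-class experience for fewer users.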

A look to the future

These changes all indicate that VDI is finally ready for widespread adoption. You simply can’t use the old excuses anymore, and new ones are getting harder and harder to come by. Microsoft licensing usually ranks high on that list, but even Microsoft has started to bend a little by introducing a per-user licence for Windows client operating systems, as opposed to the per-device licence it has had throughout Windows’ existence. More changes are expected on the Microsoft front at or around the Windows 10 launch, so the future looks even brighter.
If VDI looks like a great option on paper, but the entire platform is too difficult for you or your company to support, there are a growing number of desktop as a service (DaaS) options available. Worldwide adoption has been slow, but as providers address the concerns that companies have, the pace should pick up.
And then there are disruptive products like HP Moonshot. Moonshot is a chassis that contains desktop cartridges, each of which holds four complete desktop computers with their own CPU, memory, networking, graphics, and storage. By deploying your desktop images to each desktop using Citrix Provisioning Server, you get a platform that provides all the benefits of VDI without the need to worry about a hypervisor or storage.
Look around and you’ll see a lot of answers to the questions and problems of the past. VDI is more awesome than ever.
[source]

Monday, 30 March 2015

Veeam Marks March 30 as World Availability Day… Because Backup is No Longer Enough

For enterprises the question is no longer, “Are we backed up?” The question is now, “Are we always available?” It’s not enough to just back up the data anymore. That’s why Veeam Software has declared March 30 as World Availability Day.

WHAT: World Availability Day will draw attention to the need for enterprise data center availability. Can IT guarantee that customers, partners and employees will have access to their applications and data 24/7? Does IT operate a data center that is Always-On? If an application goes down, can IT bring it back up in 15 minutes? World Availability Day will reinforce the need for Availability for the Modern Data Center™ that can deliver the Always-On Business™, an environment that modern businesses require.

WHY: For consumers, just backing up data is sufficient, and reminding them to do so is a good thing. Family photos, tax documents, email – a hard drive crash or a spilled cup of coffee can cause a lot of heartache if nothing has been backed up. If it takes a few hours or even a couple of days to get the data back, that’s not a huge problem.

But it’s a completely different story for today’s global enterprises. Employees, customers and partners expect to have access to the data center anytime. Unfortunately, that’s not happening: 82 percent of CIOs admit that they are unable to meet that expectation, according to a recent industry survey commissioned by Veeam. This may also explain why 81 percent of enterprises are currently involved in data center modernization to meet the increasing demands for 24/7 access to IT services and applications, and to enable an Always-On Business.

Why? For starters, it takes IT an average of 2.9 hours to recover a mission critical application, and 8.5 hours to recover other applications. No employee or customer will accept being without access to a critical application for even a few hours. It’s good that a backup exists, but if someone has to retrieve the tape from the depths of a warehouse across town before a restore can even begin, the problem is far from solved.

Especially if the backup itself won’t restore. CIOs say that when disaster strikes and they need to restore data, they find that 1 in 6 backups fail, on average, which isn’t surprising, since organizations only test about five percent of their backups each quarter to ensure they’re not corrupted. But beyond the simple frustration that employees, partners and customers may experience, the current situation is expensive. Downtime and lost data cost the average business as much as $10 million annually! Businesses don’t need backup. They need availability. But until very recently, 24/7 availability was out of reach for all but the biggest enterprises.

Today, however, the combination of advancements like storage snapshots, virtualization and other technologies has made it both feasible and affordable for all enterprise organizations to back up as often as every 15 minutes and recover anything in the same amount of time.
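The arithmetic behind “always-on” is worth seeing in numbers. Here is a short Python sketch converting availability tiers into allowed downtime per year, and relating a 15-minute backup interval to worst-case data loss; the tiers are the standard “nines” figures, not numbers from Veeam.
```python
# "Always-on" in numbers: allowed downtime per year at common
# availability tiers, plus the RPO implied by 15-minute backups.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

for availability in (99.9, 99.99, 99.999):
    downtime = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{availability}% available -> {downtime:7.1f} min down per year")

# Backing up every 15 minutes bounds worst-case data loss (RPO) at
# 15 minutes; a 15-minute restore bounds the outage itself (RTO).
print("Worst-case RPO with 15-minute backups: 15 minutes")
```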

World Availability Day will let businesses know that true availability is now finally within reach.

Thursday, 12 March 2015

VMware Unveils Next Wave of Innovation for Applications and Desktop Virtualization in Horizon 6

VMware, Inc., the global leader in virtualization and cloud infrastructure, today announced a new release of VMware Horizon 6 featuring advanced capabilities for application publishing, 3D application support through high-performance virtual desktops, enhanced Chromebook support, software-defined data center optimizations including network virtualization benefits from VMware NSX, and new features for U.S. federal government customers. VMware Horizon 6 is the industry's most comprehensive desktop and application virtualization solution to deliver and manage any type of enterprise application and desktop, including physical desktops and laptops, virtual desktops and applications, and employee-owned PCs.

"VMware disrupted the industry when it offered a compelling option to application publishing with VMware Horizon 6, and we are pushing the innovation envelope again with new application and desktop virtualization capabilities in this latest release," said Sumit Dhawan, senior vice president and general manager, desktop products, End-User Computing, VMware. "Customers can benefit from capabilities that are unique to VMware's unified platform that will help transform their desktops into more secure virtual workspaces, as we leverage the power of virtualization to flexibly deliver any application to any device, anywhere with security and management that is simple and unified. In addition to the advanced capabilities in Horizon 6, the technology previews offer a sneak peek at possible new capabilities to come and represent our relentless focus on delivering value and innovation to customers."

Transforming Desktops and Applications On the Journey to Business Mobility

This release of VMware Horizon 6 introduces new capabilities integrated into a single solution that can empower IT with a streamlined approach for managing Windows applications and desktops. Innovations in application publishing, device support, application and desktop management, and storage and network support can help make managing mobile end-users easier and more cost-effective. Customers can use VMware Horizon 6 to leverage the power of virtualization and transform their desktops into secure virtual workspaces on their way to embracing business mobility.

Key enhancements within VMware Horizon 6 include:
  • Application Publishing and 3D Graphics Support - VMware Horizon 6 offers comprehensive support for applications, from simple to complex. Port-level redirection of local USB storage devices enables access to files through published applications and virtual desktops. NVIDIA GRID vGPU technology, VMware vSphere 6 and Horizon 6 combined can deliver a great end-user experience for rich 3D graphics on high-performance virtual desktops for the most demanding end-users, with the scalability that IT teams require.

    "Our experience with NVIDIA GRID vGPU and VMware Horizon on vSphere has been incredibly positive," said Michael S. Goay, executive director, Information Technology, USC Viterbi School of Engineering, University of Southern California. "We look forward to sharing a single GPU across multiple users without any sacrifice in compatibility or user experience. Plus, we'll be able to better allocate computer resources to drive higher user density on each vSphere host."
  • Software-Defined Data Center Optimizations with VMware vSphere 6, VMware NSX, and VMware Virtual SAN 6 for Enhanced Security and Lower Costs - Innovations within VMware's unified platform for virtualized compute, networking and storage for the hybrid cloud can help drive down costs, reduce complexities and increase the scalability of virtual applications and desktops using VMware Horizon 6.

    VMware NSX, when deployed with VMware Horizon 6, will bring push-button simplicity for networking and security to VDI deployments. Within seconds, IT administrators can create networking and security policies that dynamically follow Horizon 6 virtual desktops and applications. VMware NSX will eliminate the need for manual network and security configuration to support VDI, which is time-consuming and prone to errors. Through micro-segmentation and integration with leading third party security solutions, Horizon 6 combined with VMware NSX can extend security policy from within the data center all the way to the device.

    The combination of VMware Virtual SAN 6 hyper-converged storage with Horizon 6 offers out of the box integration and enables increased scale and lower total cost of ownership (TCO) for virtual desktops. Horizon 6 has been validated to support up to 4,000 desktops per cluster on a 20-node cluster.

    Enhancements to the VMware Horizon 6 cloud pod architecture enable IT administrators to federate desktop deployments across multiple data centers and geographies using a single interface for management and monitoring. Lastly, support for Windows Server 2012 R2 provides an affordable choice for virtual desktops to help further drive down TCO.
  • New Features for U.S. Federal Government Customers - Support for the IPv6 network address protocol enables government agencies to integrate VMware Horizon 6 into an updated network or seamlessly transition existing Horizon deployments from an IPv4 to IPv6 network. Common Access Card (CAC) support offers simple and secure access to virtual desktops and applications for uniformed service personnel, and efforts to achieve Common Criteria certification are underway. These new features enable VMware Horizon 6 to assist customers with compliance to strict security and privacy standards of the U.S. Federal Government.
VMware Horizon for Linux Early Access Program

VMware also announced today an early access program for its VMware Horizon for Linux virtual desktop solution. Committed to customer choice, VMware is expanding its virtual desktop capabilities to include support for Red Hat and Ubuntu based Linux desktops. This will enable customers to simplify desktop management using the VMware Horizon platform to access Windows and Linux applications. In addition to centralized management capabilities and 3D graphics acceleration, this solution will allow end-users to remotely access Linux desktops and applications from mobile devices without compromising security while fostering collaboration. To participate in the early access program, register here.

Technology Preview: Enhancing Speed and Flexibility for Application Delivery Including Enhanced Chromebook Support
Demonstrating continued commitment to application and desktop innovations, VMware revealed today two new technology previews -- Project Local and Project Ark. Project Local showcases the ability to deliver greater speed and efficiency for application delivery to end-users by providing virtual applications access to files on local hard drives and folders. Project Ark demonstrates the flexible delivery of published applications to Google Chromebooks through two separate methodologies -- a clientless approach via an HTML5 browser and a client approach using a new lightweight Chrome client. 
"Advancements in application publishing and virtual desktops are offering compelling reasons for organizations to switch from physical to virtual systems," said Robert Young, Research Manager at IDC. "VMware offers more than just new features in their new release of VMware Horizon 6 with additional benefits coming from integration with its software-defined data center solutions. This can be an important decision-making factor for customers." 

Availability and Pricing

The new release of VMware Horizon 6 is expected to be generally available this week. Contact VMware sales or a local VMware partner for pricing. A free trial is available for download by visiting the VMware website.

[source]

Wednesday, 25 February 2015

RDSH versus VDI: 2015 edition


I've given a "TS versus VDI" presentation several times in the past. It began at BriForum 2008 Chicago, where I co-presented the topic with Benny Tritsch. In that presentation, Benny dressed up as the mature, "proper" solution (as TS was in those days) and argued the case for TS, while I dressed up as the young, scrappy VDI.
Our general conclusion from 2008 was that VDI was too risky and complex.
I gave the session again at VMworld 2008 (video from vmworld.com) and VMworld 2009 Europe (video from BrianMadden.com), with the main argument being that VDI was too expensive.

I presented the topic a third time at VMworld 2011 where I argued that VDI might actually have a chance, especially when it came to persistent images, but still in most cases RDSH made more sense.
Believe it or not, I'll be giving this session again (at Citrix Synergy in May and perhaps at BriForum), so I thought I'd share my thoughts about where the TS versus VDI debate stands in 2015. (Well, first, I guess I should change it to "RDSH versus VDI.")

What is the "RDSH versus VDI" debate about?
To take a step back, the fundamental question I'm trying to answer is how you know whether you should use an RDSH- or VDI-based solution for a particular group of users. When I say "RDSH," I'm not talking about pure Microsoft-only RDSH; rather, I'm talking about RDSH-based application and desktop delivery solutions, including Citrix XenApp, VMware Horizon 6, and Dell vWorkspace, where you have a "session per user." And when I say "VDI," I'm talking about Citrix XenDesktop, VMware Horizon (6/View), and Dell vWorkspace where you have a "VM per user."

I'm also assuming here that "VDI" is based on a client-based OS (like Windows 7) and "RDSH" is (obviously) a server OS. For RDSH, I'm not necessarily talking about single-user VMs based on Windows Server OSes, though that's an option too.
Also, to be clear, I don't think this is an "all or nothing" proposition. Certainly within companies there can be a use for both, and I don't want to be dogmatic about one or the other.

RDSH and VDI are actually very similar

The most important thing to know about the "RDSH versus VDI" debate is that the two solutions are actually very similar. They're both about Windows instances running in a datacenter (whether on-premises or hosted), and they both can involve delivering full desktops, seamless published applications, or a combination of both.

In today's world, both RDSH and VDI use the same protocols and can deliver the same end user experience.
So this debate today is really just about the underlying platform.

Historical debate points

The main reason we're having this conversation in 2015 is that a lot of the advantages of RDSH or VDI in the past don't apply anymore. (This is why I need to make a new presentation, since if nothing changed then I would just tell people to watch the videos of the old presentations.)

But consider the following:
  • In 2015, both RDSH and VDI can deliver full desktops or single published apps.
  • In 2015, both RDSH and VDI can deliver either persistent or non-persistent desktops.
  • In 2015, VDI technology is more "proven" and not a crazy fringe idea.
  • In 2015, the cost (measured in terms of user density for a certain dollar spend on hardware) is much more similar. (Previously RDSH was far cheaper than VDI because you could fit so many more users per server, but with advancements in hypervisors and server hardware, RDSH's cost / user density advantage is no longer enough to be the main driver of the decision; a quick worked sketch of this cost-per-user math follows this list.)
  • In 2015, non-persistent VDI images can automatically be spun up as users log in and destroyed when they log out, so the "automatic" management advantages of RDSH don't apply.
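As a quick illustration of that cost point, here is a Python sketch with purely hypothetical density and hardware figures; the shape of the comparison matters more than the exact numbers.
```python
# Hypothetical cost-per-user comparison, RDSH vs VDI, then and now.
# Densities and server price are illustrative assumptions only.

def cost_per_user(server_cost, users_per_server):
    return server_cost / users_per_server

SERVER_COST = 10_000  # same hardware assumed for both platforms

scenarios = {
    "RDSH circa 2008": 150,  # sessions per server
    "VDI circa 2008": 40,    # VMs per server
    "RDSH in 2015": 200,
    "VDI in 2015": 140,      # hypervisor advances narrowed the gap
}
for name, density in scenarios.items():
    print(f"{name}: ${cost_per_user(SERVER_COST, density):,.2f} per user")
```
Once the per-user numbers land that close together, density stops being the deciding factor, which is exactly the argument that follows.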

RDSH versus VDI in 2015

So where does this leave us in 2015? Honestly, there's only one major difference that I can think of: RDSH is a server OS. VDI is a client OS.

Of course you're thinking, "umm . . . duh! That's in the name!" But it's a key point.

A major advantage of VDI is the mere fact that VDI sessions run the same OS as all the other non-VDI desktops in your environment. This is important for several reasons.

First, since no enterprise is going to go 100% VDI, running your VDI sessions on the same OS as your traditional desktops and laptops means that everything is "the same."
  • You can use the same base image for all your users, regardless of whether they're accessing a remote or a traditional desktop.
  • You install your apps in the same way for both remote and traditional users.
  • You only need one set of instructions to support everything.
  • Your patch cycles are also in sync because you don't have to worry about Windows Server patches coming out at different times than Windows client patches.
  • You can also run the same OS for all your users, which means you have the same support lifecycles and the same versions of everything.
Contrast that to using RDSH for some apps and desktops while using Windows 7 for everyone else. Service Packs are different, and you have to have extra steps in all your support to figure out whether you need to follow the Server or the Client procedures.

Another thing that's still a reality in 2015 is that occasionally you do come across apps that don't work out of the box when installed on a multi-user RDSH server. Sure, there's a lot you can do in order to tweak the server to trick the app into running on RDSH, but meh, if you just use VDI with a client OS then you don't have to worry about anything.

Really the only reason you'd be forced to use a server OS is if you're a customer of a hosting provider (like DaaS) where Microsoft's license agreements don't allow them to sell Windows Client as a service (though in most enterprise DaaS environments you can provide your own Windows Client licenses for instances of Windows desktops that they host).

The bottom line

At the end of the day, in 2015, the main difference between VDI and RDSH is the simple fact that you're talking about a server OS versus a client OS. Any slight density advantages you might have with RDSH are swallowed up by the additional support costs generated by the fact that you're supporting multiple OS platforms for your users.

Again, to be clear, I'm not suggesting that you go out and migrate all your XenApp Servers to XenDesktop VDI environments, and I'm not suggesting that RDSH or XenApp are going away anytime soon. Certainly if you just need to publish a few apps here and there, it might be easier to just stand up a few RDSH servers and publish those apps from there.

But if you're talking about the platform on which to base full user desktops, in 2015 I'm really leaning towards VDI, if for nothing else than the simplicity you get from having all your users on the same OS across the board.

Tuesday, 3 February 2015

vSphere 6.0 finally announced!

Today Pat Gelsinger and Ben Fathi announced vSphere 6.0. (If you missed it, you can still sign up for other events.) I know many of you have been waiting on this and are ready to start your download engines, but please note that this is just the announcement of GA… the bits will follow shortly. I figured I would do a quick post which details what is in vSphere 6.0 and what is new. There were a lot of announcements today, but I am just going to cover vSphere 6.0 and VSAN. I have some more detailed posts to come, so I am not gonna go into a lot of depth in this post; I just figured I would post a list of all the stuff that is in the release… or at least that I am aware of, as some stuff wasn’t broadly announced.
  • vSphere 6
    • Virtual Volumes
      • Want “Virtual SAN”-style policy-based management for your traditional storage systems? That is what Virtual Volumes brings in vSphere 6.0. If you ask me, this is the flagship feature of this release.
    • Long Distance vMotion
    • Cross vSwitch and vCenter vMotion
    • vMotion of MSCS VMs using pRDMs
    • vMotion L2 adjacency restrictions are lifted!
    • vSMP Fault Tolerance
    • Content Library
    • NFS 4.1 support
    • Instant Clone aka VMFork
    • vSphere HA Component Protection
    • Storage DRS and SRM support
    • Storage DRS deep integration with VASA to understand thin provisioned, deduplicated, replicated or compressed datastores!
    • Network IO Control per VM reservations
    • Storage IOPS reservations
    • Introduction of Platform Services Controller architecture for vCenter
      • SSO, licensing, certificate authority services are grouped and can be centralized for multiple vCenter Server instances
    • Linked Mode support for vCenter Server Appliance
    • Web Client performance and usability improvements
    • Max Config:
      • 64 hosts per cluster
      • 8000 VMs per cluster
      • 480 CPUs per host
      • 12TB of memory
      • 1000 VMs per host
      • 128 vCPUs per VM
      • 4TB RAM per VM
    • vSphere Replication
      • Compression of replication traffic configurable per VM
      • Isolation of vSphere Replication host traffic
    • vSphere Data Protection now includes all vSphere Data Protection Advanced functionality
      • Up to 8TB of deduped data per VDP Appliance
      • Up to 800 VMs per VDP Appliance
      • Application level backup and restore of SQL Server, Exchange, SharePoint
      • Replication to other VDP Appliances and EMC Avamar
      • Data Domain support
  • Virtual SAN 6
    • All flash configurations
    • Blade enablement through certified JBOD configurations
    • Fault Domain aka “Rack Awareness”
    • Capacity planning / “What if scenarios”
    • Support for hardware-based checksumming / encryption
    • Disk serviceability (Light LED on Failure, Turn LED on/off manually etc)
    • Disk / Diskgroup maintenance mode aka evacuation
    • Virtual SAN Health Services plugin
    • Greater scale
      • 64 hosts per cluster
      • 200 VMs per host
      • 62TB max VMDK size
      • New on-disk format enables fast cloning and snapshotting
      • 32 VM snapshots
      • From 20K IOPS to 40K IOPS in hybrid configuration per host (2x)
      • 90K IOPS with All-Flash per host


As you can see, it’s a long list of features and products that have been added or improved. I can’t wait until the GA release is available. In the upcoming days I will post some more details on some of the features listed above, as there is no point in flooding the blogosphere even more with similar info.

Tuesday, 11 November 2014

Utility Computing – from Dream to Reality

I absorbed, analysed and shared ideas with lots of people during VMworld Europe 2014 in Barcelona, from executive round-tables, roadmap sessions and panel discussions to one-on-one brainstorms, or sharing a dream of how the IT landscape might look in 2020 and beyond. Utility Computing is, in my opinion, moving from dream to reality.

Flashback

If you take a look at the transformation IT has made since the millennium and the speed it is moving at now, it is just surreal! In the past 10 years, virtualization has spread like wildfire across the IT industry. In the beginning it was mostly about explaining to and convincing a lot of different people from different departments that virtualization was not hype but the road ahead: innovating, breaking barriers and tearing down the so-called silos. Virtualization discussions are no longer about whether it is proven technology; now it is more about what’s next. At the first customer where I applied virtualization, in 2005, a dream team took us from “nothing” to VMware in a production environment on ESX 2.5. Primary applications that were successfully virtualized included Oracle, SQL Server and Exchange.
The first time the phrase Utility Computing struck me was at VMworld Europe 2009 in Cannes. What would happen to IT if you could buy and sell IT infrastructure power with a standard unit of consumption measurement? What would be needed to handle IT infrastructure power like utilities such as gas, water and electricity? How do you break the bond between the hardware layer and the functionality on top of it?

How the world is transforming and reshaping the IT landscape

Without change there is no growth, and thanks to technology the world around us moves and changes faster than ever. Organisations which do not organise their business processes to absorb and adopt change will be left behind. The revolution of the Internet, social media, smartphones and tablets determines how people live, learn, play and work together. The way people communicate with each other and with organisations is changing. Access to communication, information and applications has become indispensable and fuels the speed of economic and technical developments. Remaining or becoming competitive and attractive in the market is the primary challenge for organisations.
If you want to manage change you will need to handle three major components: People, Processes and Technology. If you neglect any one of those three, things will go wrong fast these days.
However, the fact that change has become more frequent does not make such changes any easier.
The whole IT industry is currently being reshaped because of one tiny thing that triggered it: virtualization! Virtualization makes it possible to break the bond between the physical hardware layer and the functionality on top of it.
In the past 10 years, the fear of virtualization has turned into a conscious architectural choice, and many companies now operate a ‘virtual unless’ policy.

So What’s next?

So this year’s VMworld theme, No Limits, fits perfectly with the idea of breaking barriers. If we look at the whole data center, you can define three major physical components that make up the IT infrastructure: Compute, Network and Storage. Done right, they are managed by one platform, but most of the management I see in the field is a collection of element managers plus some umbrella manager that tries to tie things together in one platform. To make Utility Computing happen we will have to break the bond between the physical layer and the functionality on top of it completely. In other words, virtualise the whole data center.

The Software-Defined Data Center (SDDC)

Virtualization paved the way for more automation by breaking the bonds with the physical layer underneath. VMs are standardized software containers: software files that can easily be moved, copied and managed. A software-defined data center is one where all infrastructure is virtualized and delivered as a service, and the control of the data center is entirely automated by software.
Enterprise IT will have to become more and more business-focused, automatically placing application workloads where they can be best processed. The Compute component is already mainstream and mature; this year at VMworld it was great to see how fast the network virtualization component (NSX) and the storage component (Virtual SAN) are being developed, maturing and getting ready to become mainstream. It is a journey which is on a roll and moving faster and faster.
Each step of the journey will lead to efficiency gains and make the IT organization more and more service oriented.

The physical hardware below an SDDC becomes disposable pieces of technology, which in turn can be priced as a utility more easily. If you can move VMs (functionality) in and out quickly, this lowers the barriers to change. With an SDDC you can move a whole data center to another location very easily. This is also a high security risk, because an administrator who controls the SDDC can move, copy or delete it with a single mouse click. But alongside these new challenges there are people who see opportunities and solutions to handle such risks, like HyTrust with its Secondary Approval, which makes sure that critical processes are handled accordingly.

Hybrid Cloud

The underlying infrastructure is more and more delivered as Infrastructure-as-a-Service (IaaS), with an SDDC, or components of one, on top. With this combination you see more and more private clouds being built, and the next step will be combining them with public clouds into a hybrid cloud solution. Most companies choose a hybrid combination of on-premises and IaaS platforms. Most of these options are built on the same VMware SDDC technology, so you can easily migrate workloads among clouds and control and govern your hybrid environment from a single management interface. There are also changes in the physical landscape, where the data center Compute, Networking and Storage resources are combined into a hyper-converged infrastructure. With hyper-convergence you integrate compute, network and storage, eliminating the complexity and performance drag of (storage) networks and allowing infrastructure to be scaled one node at a time. Alex wrote a good article about it here. VMware released EVO: RAIL and EVO: RACK at VMworld, combining all three hardware components into a single box! Another trend and vision that surprised me comes from Diablo Technologies, who put storage into a memory slot, bringing the storage as close as possible to the processing unit (CPU). A simple but brilliant move, in my opinion.
If you already have a private cloud built on VMware technology, you can easily expand your data center by connecting it to VMware vCloud Air, an SDDC delivered as IaaS and operated by VMware. It lets you quickly, seamlessly and securely extend your data center into the cloud using the tools and processes you already have. You can run your hybrid environment with a common, unified model for management, orchestration, networking and security. You can also connect your private cloud to clouds operated by VMware partners. With all these connected clouds coming up, the underlying infrastructure for Utility Computing is being built as we speak. Maybe you can even get paid for resources you expose to the utility cloud that someone else uses, making the use of resources more and more efficient worldwide.

2020+ Fast forward

What will IT look like in 2020 and beyond? Who knows. I think that between now and then a lot of organisations will struggle with their identity and the value they offer to their customers. Workloads will move between hypervisors, in and out of different sorts of clouds, where the world of applications, data and communication is integrated and running on a Utility Cloud with a standardized payment metric for compute power. Will we perhaps see IT power brokers? IT service organisations using the worldwide Utility Computing network? It is good to see there is lots of competition, because competition drives innovation! Emerging technologies are likely to revolutionize not just the IT industry but every organization as a whole, as well as our business processes and, last but certainly not least, your job and mine! I see a world of possibilities for every person and organisation willing to absorb and adopt change without limits.


You feel something is trembling below the surface.

[source]