You’ve likely heard that Office 365 and Windows Intune are great applications to get you started with Cloud Computing. Many of you have emailed me asking for more information on what Cloud Computing is, including the distinction between “Public Cloud” and “Private Cloud”. I want to address these questions and help you get started. Let’s begin with a brief set of definitions and some places to find more information; one place where you can always learn more about Cloud Computing is the Microsoft Virtual Academy.
Public Cloud computing means that the infrastructure to run and manage the applications users are taking advantage of is run by someone else and not you. In other words, you do not buy the hardware or software to run your email or other services being used in your organization – that is done by someone else. Users simply connect to these services from their computers and you pay a monthly subscription fee for each user that is taking advantage of the service. Examples of Public Cloud services include Office 365, Windows Intune, Microsoft Dynamics CRM Online, Hotmail, and others.
Private Cloud computing generally means that the hardware and software to run services used by your organization are run on your premises, with the ability for business groups to self-provision the services they need based on rules established by the IT department. Today, Private Cloud implementations are generally found in larger organizations, but they are also viable for small and medium-sized businesses, since when properly implemented they allow automation of services and a reduction in IT workloads. Having the right management tools, like System Center 2012, to implement and operate a Private Cloud is important in order to be successful.
So – how do you get started? The first step is to determine what makes the most sense for your organization. The nice thing is that you do not need to pick Public or Private Cloud – you can use elements of both where it makes sense for your business – the choice is yours. When you are ready to try and purchase Public Cloud technologies, the Microsoft Volume Licensing web site is a good place to find links to each of the online services. In particular, if you are interested in a trial of each service, you can visit the following pages: Office 365, CRM Online, Windows Intune, and Windows Azure.
For Private Cloud technologies, start with some of the courses on Microsoft Virtual Academy, then download and install the Microsoft Private Cloud technologies, including Windows Server 2008 R2 Hyper-V and System Center 2012, in your own environment and take them for a spin. Also, keep up to date with the Canadian IT Pro blog to learn about events Microsoft is delivering, such as the IT Virtualization Boot Camps, to get started with these technologies hands-on.
Finally, I want to ask for your help to allow the team at Microsoft to continue to provide you what you need. Twice a year, through something we call “The Global Relationship Study”, they reach out and contact you to see how they’re doing and what Microsoft could do better. If you get an email from “Microsoft Feedback” with the subject line “Help Microsoft Focus on Customers and Partners” between March 5th and April 13th, please take a little time to tell them what you think.
Cloud Computing Resources:
- Microsoft Server and Cloud Computing site – information on Microsoft’s overall cloud strategy and products.
- Microsoft Virtual Academy – for free online training to help improve your IT skillset.
- Office 365 Trial/Info page – get more information or try it out for yourself.
- Office 365 Videos – see how businesses like yours have used Office 365 to transition to the cloud.
- Windows Intune Trial/Info – get more information or try it out for yourself.
- Microsoft Dynamics CRM Online page – information on trying and licensing Microsoft Dynamics CRM Online.
It has been a while since I’ve posted anything here but I have a very good reason. My blog has been picked up by TechRepublic and most of my posts are going to be hosted there for the foreseeable future. Believe it or not, they actually pay me for this stuff.
I’m going to look into adding the links to my blogroll here but please follow me on TechRepublic if you can.
Here are links to my first four posts on TechRepublic:
- How to install Windows Server 2008 R2 with Hyper-V and Windows 7 on the same partition
- Costs and risks to consider when planning a move to the public cloud
- Don’t overlook these seven problem areas of virtualization
- How to optimize VM memory and processor performance
Lately I’m getting asked to explain the difference between virtualization and cloud computing. I like answering this question because it shows that the person asking has at least enough knowledge to recognize that the two are similar but probably not the same. Explaining the difference to this type of questioner is not usually a problem.
What bothers me a little more is when so-called IT professionals use the terms VM and cloud interchangeably and then claim that they are pretty much the same thing, or that if something works in one it should work in the other. It is easy to get into a debate and find specific examples to bolster most claims on either side. Reality is not quite so simple. The right answer will usually start with “it depends”.
In the rest of this post I’ll try to explain some of what it depends on and why there aren’t any simple answers. I’ll also give some examples of the beginnings of some more complicated answers without getting too technical.
The question worded a little differently: Aren’t Virtualization and Cloud Computing the same thing?
Before we begin, let’s get a couple of things straight:
- Remember that Cloud Computing is a delivery model and that Virtualization is a technology. Virtualization as a technology may be used at the back end of a service that is delivered with a Cloud Computing service offering but not necessarily.
- Virtualization is only one of the building blocks for cloud computing, and there are many types of virtualization (server, desktop, application, storage, network, etc.), so categorical statements about virtualization and cloud computing are risky. It really depends on what is being virtualized and how it is made available by the cloud provider.
- There are different Cloud styles (fabric based, instance based, etc.), service models (SaaS, PaaS, IaaS) and deployment models (private, public, hybrid, community). Thus, an answer with any significant depth that is correct when describing a fabric based community PaaS will most likely be incorrect when applied to an instance based private IaaS.
At the risk of oversimplifying let’s just consider a simple VM running on a bare metal hypervisor based virtualization platform. Although the hypervisor abstracts the hardware and makes it available to the VM, the VM is still bounded by the physical server itself. What I mean by this is that although you may be able to move a live VM from one physical server to another, the entire VM (memory and processor resources) must reside on one physical server and a single virtual LUN is required for storage.
Something very similar is the instance based cloud (in fact, Amazon’s EC2 uses Xen based VMs at its core). This one-to-many relationship between physical resources and user containers (call them VMs if you like, but technically they should be referred to as instances) obviously puts limits on the linear scalability and redundancy of this cloud approach. For many, this scalability limitation is offset by the ease of porting an application to an instance based cloud.
Fabric based clouds achieve higher scalability through the use of a fabric controller that keeps track of all of the computing resources (memory, processor, storage, network, etc.) and allocates them as services to applications. The physical resources can be distributed among many physical systems. Again, at the risk of oversimplifying, the fabric controller is like an operating system kernel, and the fabric itself acts similarly to a traditional OS as far as its relationship to a specific application. Fabric based clouds have a many-to-many relationship that allows many applications to use resources on many physical systems. This model results in superior scalability and, theoretically, less downtime. However, this comes at the cost of application compatibility, as applications must be designed to run in a fabric.
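To make the many-to-many idea concrete, here is a minimal, hypothetical sketch of a fabric controller’s allocation logic in Python. The class name, the fixed-size “slice” unit, and the first-fit policy are my own illustrative assumptions, not how any real fabric controller is implemented.

```python
# A toy fabric controller: it tracks free capacity across many hosts
# (in fixed-size "slices") and satisfies an application's request by
# drawing from as many physical hosts as needed. All names and the
# first-fit policy are illustrative assumptions, not a real product.

class FabricController:
    def __init__(self, hosts):
        self.hosts = dict(hosts)      # host name -> free slices
        self.placements = {}          # app name  -> list of host names

    def allocate(self, app, slices):
        placed = []
        for name in self.hosts:       # first-fit across the whole fabric
            while slices > 0 and self.hosts[name] > 0:
                self.hosts[name] -= 1
                placed.append(name)
                slices -= 1
        if slices > 0:                # roll back a request the fabric can't meet
            for name in placed:
                self.hosts[name] += 1
            raise RuntimeError("insufficient capacity in fabric")
        self.placements[app] = placed
        return placed

fabric = FabricController({"host-a": 2, "host-b": 3})
print(fabric.allocate("web-app", 4))  # ['host-a', 'host-a', 'host-b', 'host-b']
```

Contrast this with the instance based model above, where a VM’s resources must fit entirely on one physical server: here “web-app” transparently spans host-a and host-b, which is exactly the many-to-many relationship the fabric provides.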
So yes, in some instances (pun intended) cloud computing is just large scale server virtualization but cloud computing is not necessarily the same as virtualization and there are many examples of cloud computing that are significantly different from traditional virtualization.
- Is Virtualization the Same as Cloud Computing? (informationweek.com)
- The Inevitable Eventual Consistency of Cloud Computing (devcentral.f5.com)
We keep hearing that it doesn’t matter if your machine is physical or virtual – your software will still work just fine. That’s true most of the time, but there are some exceptions. Monitoring is one of those exceptions. In truth, the monitoring tool will work and will give you accurate numbers, but those numbers can be meaningless.
Remember that in a hypervisor based virtual environment, the guest OS is typically unaware that the hardware has been abstracted and that resource scheduling is taking place to provide shared computing resources based on some preset business rules (some guests may be configured to get more resources than others).
In a scenario like this, a legacy monitoring tool targeted at the guest VM may report false positives concerning resource availability. Typically you might see nearly 100% CPU consumed. This will be based on the telemetry coming back from the guest VM indicating that it is nearly out of resources, when in reality there may be more resources available that simply are not committed or allocated to that VM at that point in time.
To get a more realistic and complete view of what is actually happening, the monitoring tool would need to monitor the hypervisor and all of the guests, correlating the telemetry from all of them and providing a more holistic view of the availability of resources.
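As a toy illustration of the correlation idea, here is a short Python sketch. All function names, numbers, and the 95% threshold are invented for this example, not taken from any real monitoring product.

```python
# Illustrative sketch only: names and thresholds are invented for this
# example, not taken from a real monitoring product.

def guest_cpu_pct(used_vcpus, allocated_vcpus):
    """What an in-guest legacy monitor sees: usage relative to the
    vCPUs currently allocated to this VM."""
    return 100.0 * used_vcpus / allocated_vcpus

def needs_more_hardware(guest_pct, host_free_cores):
    """Correlated view: alert only if the guest is saturated AND the
    hypervisor has no uncommitted capacity left to give it."""
    return guest_pct >= 95.0 and host_free_cores <= 0

# A VM using both of its 2 allocated vCPUs looks starved in isolation...
pct = guest_cpu_pct(2, 2)                            # 100.0
# ...but if the host still has 10 of its 16 cores uncommitted, the
# guest-only alert would be a false positive.
print(needs_more_hardware(pct, host_free_cores=10))  # False
print(needs_more_hardware(pct, host_free_cores=0))   # True
```

The guest-only number (100%) is accurate but meaningless on its own; only when it is correlated with the hypervisor’s headroom does it tell you whether anything actually needs to be done.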
Products like this are starting to emerge. Make sure that when you plan to migrate a server or application to a virtual environment you also plan for the monitoring requirements.
Yesterday I met with David Ker, one of the founders of RealWat Inc. They currently offer the Ti-Took Nuage browser, which is based on Google’s Chromium (the Chrome open source project). The Nuage browser seeks to improve the browsing experience by adding improved privacy, security, speed, and other Web 2.0 cloud-based services such as social bookmarking (more feature details here). While the Ti-Took Nuage browser is interesting, I’m unsure of the long-term mass appeal it will have as other players (including the big browser shops) add similar functionality to their offerings, but for now Ti-Took is blazing a new trail.
Download a copy of the Nuage Browser here.
What got me more excited is a new project that they are working on called the Ti-Took Cloud Router. It’s an innovative offering that essentially front-ends an IaaS offering such as Amazon’s EC2. The Ti-Took Cloud Router is targeted at small organizations that want to take advantage of cloud service offerings but still require security and scope of control. Using the Cloud Router essentially creates a virtual private cloud (vPC?) inside a public cloud that encapsulates the services that are important for an individual organization’s business and users. It also allows secure access to a virtual datacenter from public locations. The key to all of this is their web-based identity management service, which provides unified single sign-on that securely validates users into the vPC and then controls access to other cloud services like email or CRM. We discussed the importance of extended authentication protocols, and they assure me that they are investigating two-factor authentication.
I foresee this type of offering accelerating the adoption of cloud services in the SMB space. I’m looking forward to more announcements from RealWat.
I’ve been looking at public cloud offerings informally for a while and while I appreciate the case for a full cloud infrastructure, it will be quite some time before large enterprise datacenters can realistically retool everything for the cloud. Sure, small startups can very successfully have large parts if not all of their IT services in the cloud but there are still too many barriers for larger organizations with large investments in “in-house” IT resources.
My initial thoughts were that there could be an opportunity for hybrid clouds that can provide organizations with excess capacity on demand. This would be a great way to augment datacenters with cloud bursting opportunities and introduce organizations to cloud offerings with low risk. Of course, to accomplish this, enterprises will need to build infrastructures that are compatible with public clouds.
The problem is that none of the (admittedly small) sample of private cloud infrastructures that I’ve looked at uses the same APIs as its public cloud counterparts. Until now, that is …
Here’s an interesting announcement from Eucalyptus and Terracotta that essentially provides the management tools and a private cloud infrastructure that uses the same APIs as the Amazon AWS. This is a good start and I hope and expect to see more offerings that make it easy to build hybrid cloud solutions.
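As a sketch of why a shared API matters: because Eucalyptus speaks the EC2 wire protocol, the standard EC2 command-line tooling (euca2ools, in this case) can be retargeted at a private cloud just by changing an endpoint URL. The hostname below is a hypothetical placeholder, and the credentials come from your private cloud, not from AWS.

```shell
# Hypothetical private-cloud endpoint; only the URL and keys change,
# the commands stay the same as against Amazon EC2.
export EC2_URL=https://cloud.internal.example.com:8773/services/Eucalyptus
export EC2_ACCESS_KEY=...   # issued by the private cloud, not AWS
export EC2_SECRET_KEY=...

# The same tooling now manages private instances:
euca-describe-instances
```

With a setup like this, cloud bursting becomes a scheduling decision about which endpoint to point a given workload at, rather than a porting exercise.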
I expect that MS will soon announce on-premises availability of the Azure platform.
Some interesting quotes from the article:
“People that have never done anything with VMware are all going with ESXi,” said Singler. “Unless they need special hardware monitoring or have existing scripts, there’s no reason not to go with it.”
“Enterprise ESXi shops say the lightweight hypervisor offers many advantages over the full-blown version. In terms of security, ESXi’s smaller footprint and lack of Linux service console makes it harder for hackers to tamper with or for administrators to make mistakes on.”
“VMware’s made no secret of the fact that the ESX console is going to be end-of-lifed,” Wolf said. So while it’s still there in vSphere 4, “I’d be surprised if it was still there in version 5. Customers are going to have to plan on that in the next 24 to 36 months.” Part of that planning process is to select management tools that already support ESXi.”
Most interesting quote:
“We have made no secret of the fact that ESXi is the preferred and better architecture. ”
Vice-President of Product Marketing, VMware Inc.
A few key quotes from the article:
“So while VMware’s initial acquisition cost is much higher than Hyper-V’s, VMware allows for much denser VM configurations and permits RAM overcommit for higher memory utilization rates”
“…users interested in Hyper-V today tend to be small and medium-sized businesses and remote offices that already use Windows Server; Hyper-V is built into that familiar system and allows them to run hundreds of VMs at a lower cost than VMware…”
“large data centers that are serious about VM availability and density continue to rely on VMware, not the first version of Hyper-V”
Wow – way to go out on a limb. Other than Sam Johnston (prediction 9 on slide 10), everyone is essentially predicting the growth of cloud offerings and adoption rates.
My prediction for 2010 – there will be a significant cloud security breach (or breaches) that slows down adoption rates for a time but ultimately highlights the missing pieces of most cloud service offerings. Two of these missing pieces, service level agreements and security services, will become more important in closing new cloud business.
I know I haven’t gone much further out on the limb than the others but at least I have a chance of being wrong.
Let me know how accurate I was in 2011.