- “Data stored overseas should be accessible to US government, judge rules” – Source Reuters
- “Obama administration contends that company with operations in US must comply with warrants for data, even if stored abroad” – Source The Guardian
With the rulings this summer that Microsoft must provide the US government with customer data even if it is stored outside of the United States, many organizations and individuals alike are concerned about data sovereignty and privacy, and they should be. However, legal issues like data sovereignty and Safe Harbor are distractions from the real issue.
Let’s start with a definition of Data Sovereignty:
Definition: Data sovereignty is the concept that information which has been converted and stored in digital form is subject to the laws of the country in which it is located.
– Source TechTarget
If you are at all concerned about data security and privacy, it’s not just legal jurisdictions that you need to be worried about. Consider some of the more high profile security breaches over the past few weeks (let alone the past year) in both cloud services and private data centers:
- “Hundreds of Intimate Celebrity Pictures Leaked Online Following Alleged iCloud Breach” – Source Newsweek
- “Prosecutors: Accused Russian hacker jailed here had 2.1 million stolen credit card numbers when arrested” – Source Fox
- “Data Breach Bulletin: Home Depot Credit Card Breach Could Prove To Be Larger Than Target Breach” – Source Forbes
- “Russian Hackers Amass Over a Billion Internet Passwords“ – Source New York Times
The message to me is that it doesn’t matter where the data is; it isn’t safe. In fact, one could argue that while the US DOJ, SEC, or IRS having access to your data is a privacy concern, it is less of a threat than a major security breach like Home Depot’s.
So what’s the answer?
Obviously this is a complex problem, and large organizations with lots of smart people have been struggling with it for years. I don’t have a simple answer, nor should you expect one. I do know that many of the technology problems we faced in the past have been solved, and even seem quaint now. Remember having to rewind VHS movies before DVDs? Or returning DVDs before Netflix? Since I can’t travel to the future to tell you what the solution will eventually be, let’s look to somebody who has seen the future. Namely, Captain Jack Harkness.
He definitely doesn’t want to get caught with his pants down while saving the earth. Notice that he is wearing both suspenders (braces for our British readers) and a belt? So what can we learn from this?
While taking all of the precautions that you can with data center processes is an important part of a security strategy, some additional steps can also be taken. Consider data encryption. Yes, the data may still be accessed by unauthorized parties, but it will be of little use to them if they can’t decrypt it. Even in a private data center that has been compromised, the data may still be safe.
In public cloud environments, data can be encrypted before it enters the vendor’s cloud. The keys can reside in the client’s data center or in a third-party escrow facility. In order for the data to be useful to an attacker, a double breach would be necessary.
The same holds true for data sovereignty. Who cares if the DOJ has your data if they can’t read it?
Of course all of this assumes that the level of encryption being used is sufficiently strong that it is non-trivial to decrypt it through brute force or other means.
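As an illustrative sketch of this belt-and-suspenders approach (assuming Python and the third-party `cryptography` package; the upload function is a hypothetical placeholder, not any provider’s real API), the data can be encrypted with a locally held key before it ever leaves the client:

```python
# Illustrative sketch: client-side encryption before cloud upload.
# Assumes the third-party `cryptography` package; `upload_to_cloud`
# is a hypothetical placeholder for a provider's storage API.
from cryptography.fernet import Fernet

# The key is generated and kept in the client's data center (or a
# third-party escrow facility) -- it is never sent to the provider.
key = Fernet.generate_key()
cipher = Fernet(key)

document = b"Quarterly financials - internal only"
ciphertext = cipher.encrypt(document)  # all the provider ever stores

def upload_to_cloud(blob: bytes) -> None:
    """Placeholder for the cloud provider's storage API call."""
    pass

upload_to_cloud(ciphertext)

# A breach at the provider alone yields only ciphertext; reading the
# data requires a second breach of the on-premises key store.
assert cipher.decrypt(ciphertext) == document
```

A breach of the provider and a separate breach of the key store would both be required before the plaintext is exposed, which is exactly the double-breach property described above.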
What do you think the future holds for data sovereignty and security?
Microsoft’s newest billion-dollar business units include Office 365 and Azure. There’s lots of marketing, sales, and ROI information about Office 365 and cloud services in general, so I’m not going to bore you with another post about how to save your organization money or accelerate value by adopting Office 365. Instead, I’m going to describe two real-world use cases where I have personally found Office 365 to help. I might even throw in some anecdotal cost-benefit analysis, but my main purpose is to explore some less common uses for Office 365 that you may not have thought of.
The two scenarios are:
- External consultants
- Test and Development
External Consultants
I manage a team of consultants who regularly have to work at client sites, often at very security-conscious organizations. We can’t always use our own laptops in their environments, or if we can, it is typically through guest wireless networks. We’ve encountered situations where the guest wireless prevents us from connecting back to our office through VPN. This makes it difficult to access some of our collaboration services like SharePoint. We moved my team to Office 365 specifically to do things like coauthoring documents in SharePoint from customer sites. This enables some interesting scenarios: we’ve had cases where an offsite consultant was able to review and update documentation while it was being simultaneously authored by another consultant working in our lab.
Test and Development
We do a lot of System Center work. System Center is a complex suite of products that interact with each other as well as with core Windows infrastructure like Active Directory and Exchange. When we are building out a proof of concept for a customer, they typically don’t want us to touch their production AD and Exchange environments. I don’t blame them. Ultimately, in order to complete the project, we would need to somehow build out an Active Directory and Exchange infrastructure dedicated to the proof of concept or pilot. Consider the additional costs in hardware, software, and time required to accomplish this. Lately we’ve started using Office 365 to provide Exchange services. It takes minutes to provision and connect to. Examples we’ve used recently include the Exchange connector for Configuration Manager and Service Manager. Using this approach, in under an hour I was able to get more than a half dozen mobile devices loaded into Configuration Manager for an MDM/UDM proof of concept, without touching any production AD or Exchange infrastructure, simply by adding an additional email account to the devices.
We’ve extended this to Azure as well. We have been using Azure to host System Center instances for proof of concept and sandbox deployments. I’m looking forward to combining Azure with Office365 to further accelerate our pilots and proofs of concept deployments.
Lately I’m getting asked to explain the difference between virtualization and cloud computing. I like answering this question because it shows that the person asking the question has at least enough knowledge to identify that there are similarities but that they are probably not the same. Explaining the difference to this type of questioner is not usually a problem.
What bothers me a little more is when so-called IT professionals use the terms VM and cloud interchangeably and then claim that they are pretty much the same thing, or that if it works in one it should work in the other. It is easy to get into a debate and find specific examples to bolster most claims on either side. Reality is not quite so simple. The right answer will usually start with “it depends.”
With the rest of this post I’ll try to explain some of what it depends on and why there aren’t any simple answers. I’ll also give some examples of the beginnings of some more complicated answers without getting too technical.
The question worded a little differently: Aren’t Virtualization and Cloud Computing the same thing?
Before we begin, let’s get a couple of things straight:
- Remember that Cloud Computing is a delivery model and that Virtualization is a technology. Virtualization as a technology may be used at the back end of a service that is delivered with a Cloud Computing service offering but not necessarily.
- Virtualization is only one of the building blocks for cloud computing, and there are many types of virtualization (server, desktop, application, storage, network, etc.), so categorical statements about virtualization and cloud computing are risky. It really depends on what is being virtualized and how it is made available by the cloud provider.
- There are different cloud styles (fabric based, instance based, etc.), service models (SaaS, PaaS, IaaS), and deployment models (private, public, hybrid, community). Thus, an answer with any significant depth that is correct when describing a fabric based community PaaS will most likely be incorrect when applied to an instance based private IaaS.
At the risk of oversimplifying let’s just consider a simple VM running on a bare metal hypervisor based virtualization platform. Although the hypervisor abstracts the hardware and makes it available to the VM, the VM is still bounded by the physical server itself. What I mean by this is that although you may be able to move a live VM from one physical server to another, the entire VM (memory and processor resources) must reside on one physical server and a single virtual LUN is required for storage.
Something very similar is the instance based cloud (in fact, Amazon’s EC2 uses Xen based VMs at its core). This one-to-many relationship between physical resources and user containers (call them VMs if you like, but technically they should be referred to as instances) obviously puts limits on the linear scalability and redundancy of this cloud approach. For many, this scalability limitation is offset by the ease of porting an application to an instance based cloud.
Fabric based clouds achieve higher scalability through the use of a fabric controller that keeps track of all of the computing resources (memory, processor, storage, network, etc.) and allocates them as services to applications. The physical resources can be distributed among many physical systems. Again, at the risk of oversimplifying, the fabric controller is like an operating system kernel, and the fabric itself acts similarly to a traditional OS as far as its relationship to a specific application. Fabric based clouds have a many-to-many relationship that allows many applications to use resources on many physical systems. This model results in superior scalability and, theoretically, less downtime. However, this comes at the cost of application compatibility, as applications must be designed to run in a fabric.
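To make the many-to-many idea concrete, here is a toy sketch (illustrative only, not any real cloud’s implementation) of the kind of bookkeeping a fabric controller does: it tracks spare capacity across many physical nodes and satisfies an application’s request from whichever nodes have room, so a single application’s resources can span several machines.

```python
# Toy fabric controller: tracks free compute units per physical node
# and grants an application's request across multiple nodes.
class FabricController:
    def __init__(self, nodes):
        # nodes: {node_name: free_units_of_compute}
        self.free = dict(nodes)

    def allocate(self, app, units):
        """Grant `units` of compute from any nodes with spare capacity."""
        grants = {}
        for node, avail in self.free.items():
            if units == 0:
                break
            take = min(avail, units)
            if take:
                grants[node] = take
                self.free[node] -= take
                units -= take
        if units:  # not enough capacity anywhere in the fabric
            for node, take in grants.items():  # roll back partial grants
                self.free[node] += take
            raise RuntimeError(f"fabric cannot satisfy request for {app}")
        return grants  # the app's resources may span several nodes

fabric = FabricController({"node-a": 4, "node-b": 6, "node-c": 2})
grants = fabric.allocate("web-app", 8)  # spans node-a and node-b
```

A VM on a single hypervisor could never be granted 8 units here, since no one node has that much free; the fabric satisfies the request by pooling capacity, which is exactly the scalability advantage described above.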
So yes, in some instances (pun intended) cloud computing is just large scale server virtualization but cloud computing is not necessarily the same as virtualization and there are many examples of cloud computing that are significantly different from traditional virtualization.
- Is Virtualization the Same as Cloud Computing? (informationweek.com)
- The Inevitable Eventual Consistency of Cloud Computing (devcentral.f5.com)
Yesterday I met with David Ker, one of the founders of RealWat Inc. They currently offer the Ti-Took Nuage browser which is based on Google’s Chromium (the Chrome open source project). The Nuage browser seeks to improve the browsing experience by adding improved privacy, security, speed, and other Web 2.0 cloud based services such as social bookmarking (more feature details here). While the Ti-Took Nuage browser is interesting I’m unsure of the long term mass appeal it will have as other players (including the big browser shops) add similar functionality to their offerings, but for now Ti-Took is blazing a new trail.
Download a copy of the Nuage Browser here.
What got me more excited is a new project they are working on called the Ti-Took Cloud Router. It’s an innovative offering that essentially frontends an IaaS offering such as Amazon’s EC2. The Ti-Took Cloud Router is targeted at small organizations that want to take advantage of cloud service offerings but still require security and scope of control. Using the Cloud Router essentially creates a virtual private cloud (vPC?) inside a public cloud that encapsulates the services that are important for an individual organization’s business and users. It also allows secure access to a virtual datacenter from public locations. The key to all of this is their web-based identity management service, which provides unified single sign-on that securely validates users into the vPC and then controls access to other cloud services like email or CRM. We discussed the importance of extended authentication protocols, and they assure me that they are investigating two-factor authentication.
I foresee this type of offering accelerating the adoption of cloud services in the SMB space. I’m looking forward to more announcements from RealWat.
Wow – way to go out on a limb. Other than Sam Johnston (prediction 9 on slide 10), everyone is essentially predicting the growth of cloud offerings and adoption rates.
My prediction for 2010: there will be a significant cloud security breach (or breaches) that slows down adoption rates for a time but ultimately highlights the missing pieces of most cloud service offerings. Two of these missing pieces, service level agreements and security services, will become more important in closing new cloud business.
I know I haven’t gone much further out on the limb than the others but at least I have a chance of being wrong.
Let me know how accurate I was in 2011.
Any organization with sensitive data needs to take precautions before moving applications to the cloud. In many cases, regulatory compliance may prevent certain types of applications from moving to the cloud any time soon.
Understanding that, what about the jurisdictional risks associated with a third-party cloud provider essentially managing a portion of your data center? What about provisions of the Patriot Act that might compel your cloud provider to disclose or provide your sensitive data to a law enforcement agency such as the Department of Homeland Security?
I’m surprised that a Canadian cloud provider hasn’t emerged that operates outside of the jurisdiction of the Patriot Act. While such a provider would certainly be appealing to Canadian organizations that don’t want (or cannot allow) their data to be in the hands of a foreign government, it would also be of interest to American organizations trying to maximize data security while still enjoying some of the benefits of the cloud.
What am I missing?
I’ve been involved with Terminal Services architectures since 1995, with the classic Citrix WinFrame. I even spent some time at Microsoft as the Technical Product Lead for Terminal Services in the 1990s. I’ve always felt that TS was an overlooked technology in most North American enterprises; European enterprises seemed to accept this desktop computing paradigm more readily. It just makes so much sense: centralized management of computing assets and fewer touch points.
Then along came Virtual Desktop Infrastructure (VDI), and everybody was excited about the opportunity to centralize desktop computing resources. My initial thoughts were that this was interesting and made sense for some specific use cases, but that TS made sense for more use cases in the average environment. TS provides higher user density on the same server hardware, there are fewer touch points (with VDI you still have to manage each virtual desktop), there is a rich set of tools to help scale and manage TS infrastructures, and of course the technology was mature and proven in thousands of installations around the world. In reality, TS and VDI are parts of the same spectrum of thin client computing technologies: TS is a single multiuser operating system, while VDI is multiple single-user operating systems. I was waiting to be vindicated, waiting for the masses that jumped on the VDI train to realize that VDI was just another flavour of TS and that TS made more sense for most users.
I was wrong. VDI is going to leave TS in the dust. Don’t misunderstand; I still believe that TS makes sense for many use cases and that there will continue to be a market for it. But the sheer momentum behind virtualization initiatives has propelled VDI into the spotlight for most organizations looking at re-architecting their desktop delivery strategies. That momentum might be enough to eclipse TS. After all, the best technology doesn’t always win out in this industry, for instance, <insert your favourite failed technology example here>.
I believe that VDI has one distinct advantage that TS can’t easily provide: it is more “cloud friendly.” Many enterprises currently have applications whose workloads can be dynamically moved within a virtualized pool of computing resources. This works well as long as computing resources are available to meet the peaks, which usually means that for a large portion of the time, the supply of compute power exceeds demand. Essentially, compute power is over-provisioned at least some of the time.
However, using cloud services (public or private) can provide just-in-time computing resources that allow applications to burst into the cloud to meet peak workload requirements.
Think about having the ability to do this with desktop workloads. Sure, you could do this with Terminal Services, but the inherent TS advantage of higher user density makes the workloads less granular than the equivalent VDI workloads, and more apt to be over-provisioned. Essentially, you could establish a policy that would let specified VDI users burst to the cloud as required. This would be perfect for occasional users or low-priority users.
What a great desktop delivery model – provision on demand. Couple this with a client side hypervisor, a streamed desktop, and local caching and you have a solution for the road warrior too.