The world has changed substantially since the 1980s, especially in terms of technology. Back then, hardware was of just as much importance as the software that ran on it. Advances in both hardware and software have pushed the boundaries of computing at an exponential rate, but changes in hardware no longer have quite as much effect as they formerly did. Increasingly, everything is outsourced and virtualized, leaving users with less hardware in hand and centering the modern experience on software. Still, it is important to take a step back occasionally - computers have been in existence since the time of World War II. They started out as mechanical counters and calculators, with primitive timing mechanisms dictating how fast they could operate and limited programmability. Machines like the Colossus, the world's first programmable electronic digital computer, were large enough to fill an entire room and were never used by the public. Colossus was a set of ten computational machines used by the Allies during World War II to crack the Lorenz cipher. Later milestones such as ENIAC and EDVAC took computing a step further; EDVAC in particular introduced the concept of storing both data and programs in the computer itself rather than on removable media. Adding to these mounting milestones, ENIAC and EDVAC were also Turing complete, meaning that they were able to emulate the functionality of a machine with the following traits:
- having a limited set of states
- having a storage medium (a tape) divided into cells, with a read/write head that can move along the medium
- having a transition function that allows the machine to switch states
- having its own working alphabet (a set of characters)
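These traits are concrete enough to sketch in code. Below is a minimal, hypothetical Python model of a Turing machine; the machine, its states, and its transition table are invented for illustration (they invert a binary string), but they exhibit all four traits listed above:

```python
# A minimal Turing machine sketch (hypothetical example).
def run_turing_machine(tape, transitions, state="start", halt="halt"):
    tape = dict(enumerate(tape))          # storage medium with cells
    head = 0                              # read/write head on the medium
    while state != halt:                  # limited set of states
        symbol = tape.get(head, "_")      # "_" is the blank of our working alphabet
        state, write, move = transitions[(state, symbol)]  # transition function
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Transition table: invert each bit, then halt at the first blank cell.
NOT = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine("1011", NOT))   # -> 0100
```

The table-driven design mirrors the formal definition: adding new states or symbols only means adding rows to the table, not changing the machine itself.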
But the machines mentioned above all shared one glaring problem - they were multi-ton monsters, and could not be moved. While this wasn't an issue during the time of their use, modern computers would not exist today if machines had stayed the way they were from the 1940s to the 1960s. Even the later BRLESC and ORDVAC machines, whose computational power dwarfed that of their predecessors from roughly twenty years earlier, topped out at about five million operations per second (the BRLESC), and still had the problem of size. Companies like IBM and HP would change the scene, providing some of the first personal computers in the industry during the 1980s - the so-called Digital Age. Apple and Microsoft, founded in the mid-1970s, rose to prominence alongside them; Google arrived roughly two decades later, in 1998. These companies drove the development of the modern PC, which is still Turing complete but can accomplish more than its older counterparts by leaps and bounds. Even after all of these years, though, all of these machines have certain characteristics in common, including:
- an input/output mechanism for accepting and returning data (an accepted set of characters)
- a central processing mechanism for transforming input and performing calculations (a limited set of states, used for processing)
- a form of storage for data that is either in use or saved for later (a storage medium)
With this in mind, while no two computers are alike, all computers follow this model, which is effectively an indirect derivative of the Turing machine requirements above.
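As an illustration, the three shared traits can be sketched as a toy model (this is not any real architecture; the class and method names are invented):

```python
# A toy illustration of the three traits every computer shares.
class ToyComputer:
    def __init__(self):
        self.storage = {}                     # storage medium

    def process(self, program, data):
        result = program(data)                # central processing mechanism
        self.storage["last_result"] = result  # save for later use
        return result                         # output mechanism

pc = ToyComputer()
print(pc.process(lambda xs: sum(xs), [1, 2, 3]))  # input -> processing -> output: 6
print(pc.storage["last_result"])                  # retrieved from storage: 6
```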
With the advent of modern computers, hardware has advanced substantially. Where early machines were rated in raw operations per second, a modern processor's clock is measured in gigahertz - billions of timing oscillations per second, during which billions of computations can be accomplished. The wires and circuits are no longer vacuum tubes, but silicon and fiber optics; researchers are even exploring diamond-based semiconductors, though it's unlikely we will see those first-hand anytime soon. With the focus of technology switching from personal hardware to hosted software, owning hardware might not be a common sight in the future. What will be the effect on personal ownership, and when will it become more of a hindrance than a benefit? Is allowing someone we don't know to manage our personal lives and files on the network a good idea?
Virtualization is a branch of autonomic computing (a self-managing computing model based upon the physiological autonomic nervous system) that emulates a specific environment, usually in an effort to increase efficiency and/or ease of use. By pooling various resources in a calculated manner, virtualization allows systems to exist that would otherwise be immensely uneconomical to implement physically, and nearly impossible for the average person to understand or utilize. Cloud and web hosts use network and storage virtualization to allocate bandwidth and digital storage, from across multiple servers, to various users. The benefit is that users never have to worry about where their data is stored or how their website is served. All of the heavy calculations pertaining to where data should go, which IP addresses should host which websites, and who should have what amount of bandwidth at any given moment are handled by the servers' subsystems. Emulators, another form of virtualization, use software to imitate hardware so that legacy programs written for older systems can still run. A more common form of virtualization is the virtual machine, which allows users to run multiple operating system images simultaneously on almost any modern computer in existence. With this advancement, companies like Apple, Google, and Microsoft never need to purchase spare computers just to test their latest operating systems, curbing potential costs tremendously.
In this manner, computing power and resources can be efficiently rationed out to tasks as needed from a network, without having to set up more network resources to handle the same load - especially if the existing resources can already handle it with capacity to spare. It's like putting the unused cookie dough that didn't go through the cookie cutter to work: instead of tossing it, all of the dough gets used, reducing the amount of resources wasted. The newest forms of virtualization even allow you to run applications requiring a certain runtime environment on other operating systems, from a cloud-based instance on another server, removing the software compatibility issues that computers once had. One key feature must also be noted - the system acts as one machine, making it difficult for the user to tell that multiple machines in multiple locations are handling the task. While you sit at your desk uploading a file to an online backup server, that server could be creating multiple copies of your files to protect against data loss, or storing parts of your folder on various servers to optimize space allocation and usage. And all of this is done without you having to lift a finger - quite different from the days of old.
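The cookie-dough idea can be sketched as a single shared pool whose leftover capacity is always available to the next tenant. Everything below is hypothetical - the class, tenant names, and units are invented for illustration:

```python
# A hypothetical shared resource pool: leftover capacity is never wasted,
# it simply goes to whoever asks next.
class ResourcePool:
    def __init__(self, capacity_gb):
        self.capacity = capacity_gb
        self.allocations = {}

    def allocate(self, tenant, amount):
        free = self.capacity - sum(self.allocations.values())
        granted = min(amount, free)           # grant only what is actually left
        self.allocations[tenant] = self.allocations.get(tenant, 0) + granted
        return granted

    def release(self, tenant):
        return self.allocations.pop(tenant, 0)

pool = ResourcePool(100)
print(pool.allocate("backup-service", 60))   # -> 60
print(pool.allocate("web-host", 60))         # only 40 GB left -> 40
pool.release("backup-service")               # the "dough" goes back in the pool
print(pool.allocate("analytics", 50))        # -> 50
```

Real schedulers add priorities, quotas, and reclamation, but the principle is the same: one pool, carved up on demand.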
As mentioned before, the virtual cloud has an immense list of benefits for the taking - the best being that your personal computer doesn't have to do most of the work. With that said, here's a swift recap of the best parts of the virtual/cloud experience:
- Three types of virtualization/cloud services exist, each of which can partially or fully replace local services that would normally have to run on your computer.
- Software as a Service (SaaS) provides common computer applications online for use on almost any platform available, removing the runtime compatibility issue
- Platform as a Service (PaaS) provides one or more frameworks, runtime environments, and APIs for customers to use in application development, and/or provides virtual systems online (such as a VPS) that can replace desktop systems in some cases
- Infrastructure as a Service (IaaS) is the most common model you will come across, encompassing basic cloud storage and internet-protocol-based services such as VoIP; Amazon AWS is a well-known provider
- With SaaS, organized collaboration becomes one of the easiest tasks on the internet - getting group projects done in an efficient manner is no longer a chore
- Due to hardware efficiency, virtual/cloud computing is also cheaper and greener in environmental impact than traditional IT
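One rough way to picture the difference between the three service models is by who manages which layer of the stack. The layer names and sets below are an informal sketch, not an official taxonomy:

```python
# An informal sketch of who manages what under each cloud service model.
LAYERS = ["application", "runtime", "operating system", "servers", "networking"]

PROVIDER_MANAGES = {
    "IaaS": {"servers", "networking"},
    "PaaS": {"runtime", "operating system", "servers", "networking"},
    "SaaS": set(LAYERS),                  # the provider runs the whole stack
}

def customer_manages(model):
    """Layers left for the customer to manage under a given model."""
    return [layer for layer in LAYERS if layer not in PROVIDER_MANAGES[model]]

print(customer_manages("IaaS"))   # -> ['application', 'runtime', 'operating system']
print(customer_manages("SaaS"))   # -> []
```

Reading the output top to bottom shows why SaaS removes the runtime compatibility issue entirely, while IaaS still leaves the runtime in your hands.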
The virtual world of the cloud has never looked so attractive. But there is a bad side to it.
Some of the largest drawbacks to the virtual cloud are related to some of its shining features. Data ownership and rights in the cloud vary from provider to provider. Facebook, Twitter, and Google Plus are examples of providers whose terms and conditions change constantly - especially those relating to data ownership and content privacy on their networks. Data privacy in the cloud also varies. It's hard enough dealing with the fact that you're tracked every time you open a web browser (cookies); now any data you upload can be looked through as well, depending on the provider. If your provider says they'll remove any copyrighted content you upload, or even revoke your service if you choose to upload questionable content, that's a good indicator that they can be (and possibly are) looking at your data - Google especially. Reliability of service is also key. If the provider you are using suddenly goes out of business or changes its service in a detrimental way, you have to migrate your data to another network. This field is ever-changing, and service providers sadly come and go.
You also need reliable internet service to be able to use and access the cloud. Without an ISP (Internet Service Provider), you can end up dead in the water, and places with bad internet reception become your worst enemies. The biggest worry of dealing with the cloud is the safety of your data - and if you don't have a local backup, the situation is even worse. The stakes couldn't be higher, especially with new cracking methods and threats arising daily. Past flaws like Heartbleed and Shellshock quickly become the least of your concerns as newer forms of technological invasion are developed every day. Worse still, many breaches in the corporate world occur by accident: typing an incorrect URL while on your work network could reveal something confidential that you never wanted to see in the first place. For these reasons and more, many businesses and individuals have chosen to resist moving to the cloud, and simply work with whatever hardware they have available. By restricting their use of the virtual cloud, they avoid the pitfalls of outsourcing the management of their bandwidth and data - but they also lose the opportunity to utilize the cloud's power, which is a problem as well. With this in mind, it seems there has to be a functional middle ground between these options.
The composite hybrid cloud uses a combination of local, private resources and public cloud servers, with synchronization between the two platforms to provide more flexibility in operations. Where it costs more to use the cloud, you can host locally instead - and vice versa. For instance, a business might use local resources to host a private cloud for sensitive and critical services, while using a third-party service such as Amazon AWS or Google Compute Engine to host less critical development workloads. Synchronization software facilitates the connection between the two halves, and the system as a whole runs as one machine. Systems of this design tend to be:
- More adaptable to changing conditions and workloads
- More reliable with processing of 'Big Data'
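The division of labor in a hybrid setup can be sketched as a toy placement policy. The rules, workload fields, and capacity units below are hypothetical, not any provider's API - they just encode the pattern described above (sensitive work stays private, overflow bursts to public):

```python
# A hypothetical hybrid-cloud placement policy.
def place_workload(workload, private_capacity):
    """Return ('private' or 'public', remaining private capacity)."""
    if workload["sensitive"] or workload["critical"]:
        # Sensitive/critical services always stay on the private side.
        return "private", private_capacity - workload["size"]
    if workload["size"] <= private_capacity:
        # Cheaper to use spare local capacity while it lasts.
        return "private", private_capacity - workload["size"]
    # Burst to a public provider (e.g. AWS or Compute Engine) when full.
    return "public", private_capacity

capacity = 10
for w in [{"name": "payroll",   "sensitive": True,  "critical": True,  "size": 4},
          {"name": "dev-build", "sensitive": False, "critical": False, "size": 5},
          {"name": "batch-job", "sensitive": False, "critical": False, "size": 8}]:
    target, capacity = place_workload(w, capacity)
    print(w["name"], "->", target)
# payroll -> private, dev-build -> private, batch-job -> public
```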
Ironically, however, the hybrid cloud retains many of the pros and cons of both of its constituents - the local network and the virtual cloud. In fact, it may also introduce a few new issues of its own:
- "...Hybrid cloud requires API compatibility and solid network connectivity" - and because no network is ever 100 percent perfect, there will be connectivity problems, unless you're willing to act as your own ISP and manage every data packet that traverses your network (virtually no one goes that far). APIs can have bugs and flaws as well.
- The public portion is especially prone to issues: "for the public cloud piece of hybrid cloud, there are potential connectivity issues, SLA breaches and other possible public cloud service disruptions."
While many of these issues can be accounted for and handled by a capable IT team, most people might not be able to pull off everything listed in this paper.
Upon further analysis of the points above, it becomes quite apparent that the decision is a matter of what you're willing to put up with, weighing the benefits against the pitfalls. The cloud has the advantage of being the newer and more efficient option, but its environment is more dangerous. Local networking, on the other hand, is somewhat less efficient at times and isn't as ubiquitously available. Hybrid networks attempt to fix the issue, but face the compounded issues of both the local and the virtual realm. As it stands, the best choice is the one you are willing to use, based upon your circumstances and limitations. With that said, I leave the decision to you, the reader.
Sources and other content behind this paper can be found here:
Have a nice weekend!