I wrote this in 2003… Six years later, looking at all the excitement about cloud computing and the maturity that has been achieved (see “A Berkeley View of Cloud Computing”), I am glad our ideas at ThinkDynamics proved correct. Applying control theory to the automatic management of data centers is the way forward.
The computing industry has evolved dramatically in the past two decades, with major advances in computing hardware, operating systems, and application software, as well as networking and connectivity. Today’s computing environments are complex and heterogeneous, combining hardware and software components from a multitude of vendors and open-source teams, which makes them increasingly difficult to integrate, install, configure, and maintain. If complexity keeps growing at the current rate, within several years computing environments may become impossible even for skilled professionals to administer.
The high cost of ownership of computing resources has prompted a number of industry initiatives to reduce the cost of managing them, and thereby the overall cost of operating a data center. Examples include IBM’s Autonomic Computing, HP’s Adaptive Infrastructure, and Microsoft’s Dynamic Systems Initiative. All of these initiatives aim to cut operational cost through increased automation, going as far as envisioning self-managed systems that operate without human intervention. The rationale is that operator error has been identified as a major source of system failure, so operations stand to benefit the most from automation.
Automation itself is not new; it has long been used to adapt to changing workloads, system failures, and security attacks. Most solutions, however, have ignored control theory, which offers a solid theoretical and practical foundation for computing-resource automation and self-managing systems.
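To make the idea concrete, here is a minimal sketch (not from the original article; all names, gains, and thresholds are illustrative assumptions) of a classic control-theoretic building block applied to capacity management: a proportional-integral (PI) controller that adjusts the number of servers to hold measured CPU utilization near a target.

```python
class PIController:
    """A toy PI controller for server-pool sizing (illustrative only)."""

    def __init__(self, target, kp, ki, min_servers=1, max_servers=100):
        self.target = target            # desired utilization, e.g. 0.6
        self.kp = kp                    # proportional gain
        self.ki = ki                    # integral gain
        self.integral = 0.0             # accumulated error over time
        self.min_servers = min_servers
        self.max_servers = max_servers

    def step(self, measured_utilization, current_servers):
        # Positive error means utilization is above target: add capacity.
        error = measured_utilization - self.target
        self.integral += error
        adjustment = self.kp * error + self.ki * self.integral
        desired = current_servers + adjustment
        # Clamp to the allowed pool size and return an integer server count.
        return max(self.min_servers, min(self.max_servers, round(desired)))


controller = PIController(target=0.6, kp=10.0, ki=2.0)
servers = 4
for utilization in [0.9, 0.85, 0.7, 0.62]:  # simulated load measurements
    servers = controller.step(utilization, servers)
```

The point of the sketch is the feedback loop itself: rather than hand-tuned rules, the system continuously measures its own behavior and corrects toward a set point, which is exactly the framing control theory brings to self-managing systems.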