Commentary: Now more than ever, businesses and government agencies depend on clear, actionable infrastructure information to make intelligent business decisions on a daily basis. As individuals at almost every organizational level rely on the technical infrastructure to support mission-critical objectives and applications, data center management solutions must be capable of providing high levels of visibility into complex, continually changing environments.
Despite these well-known challenges, however, most organizations have no way to gain a comprehensive view of all technical components and their impact on the enterprise. Traditionally, organizations relied on an array of independent point solutions to manage data center operations and critical infrastructure information. Doing so has proved not only cost-prohibitive and labor-intensive, but has also reduced access to accurate, actionable information, whether executing planned changes or responding to an emergency or crisis. Today there's a new approach. Consolidating critical infrastructure information and streamlining IT operations through a single, centralized visual data repository is allowing businesses to control costs, improve efficiency, reduce downtime, improve service levels, reduce disruptions, and more.
Traditional technology management
Managing IT infrastructure in a typical organization involves a variety of stakeholders, from data management to facilities management. Historically, each department had its own discrete rack space and computing resources. Enterprises eventually realized this model did not maximize use of equipment, and began sharing services across servers and virtualizing machines to optimize technology investments. This provided a tremendous financial benefit, but also made managing the vast array of technologies in use more complicated. CIOs, IT managers, facilities staff, and others face myriad questions. How do I know when I'll need more blade servers? Do I have enough capacity to support another virtual instance of an operating system and its supporting applications? Is there enough power in the rack to allow for additional equipment?
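That last question is ultimately simple arithmetic over data that is rarely gathered in one place. As a minimal sketch, assuming hypothetical device ratings, a rack budget, and a derating factor that are illustrative only (none come from this article), a rack power-headroom check might look like this:

# A minimal sketch of a rack power-headroom check (Python).
# The wattages and the 80% derating factor are illustrative assumptions.

RACK_POWER_BUDGET_W = 8000      # assumed rack power budget, in watts
DERATING_FACTOR = 0.8           # assumed safety margin applied to the budget

installed_loads_w = [450, 450, 600, 1200, 300]   # hypothetical nameplate draws
new_device_w = 750                               # hypothetical new device

usable_budget = RACK_POWER_BUDGET_W * DERATING_FACTOR
headroom = usable_budget - sum(installed_loads_w)

if new_device_w <= headroom:
    print(f"OK: {headroom - new_device_w:.0f} W of headroom would remain")
else:
    print(f"Insufficient power: short by {new_device_w - headroom:.0f} W")

The calculation itself is trivial; the hard part, as the article argues, is having trustworthy figures for every load in every rack.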
Answering these questions is often no easy matter. The mere task of identifying the thousands of heterogeneous components, fiber optic networks, LANs, WANs, and so on can be overwhelming. While there are many discovery tools on the market, each handles only certain aspects of the technical environment, so none gives a comprehensive view for strategic decision making. Typically, separate applications are used to track and manage the floor layout of the data center, the equipment powering the racks, network connectivity and cabling, and more. An organization might use Visio or even a spreadsheet to keep track of capacity needs. And automated discovery cannot see passive elements of the enterprise, such as the physical cable plant, because they have no electronics to respond to a network probe.
Without insight into all of the technical components affecting the organization, assessing the impact of changes was virtually impossible. Moreover, there was no way to know for sure whether each department was manually updating the tracking software within its domain to reflect the latest equipment, facilities, or networking changes. Thus, there was little confidence in the available data.
The 10,000-foot view
Today, progressive-minded organizations are breaking free of individual fiefdoms to gain a 10,000-foot view of the overall technical infrastructure. This gives managers the ability to determine not only the number and types of servers within the data center, but where the miles of fiber connections are going, what services are being delivered where, and what capacity problems may arise. For the first time it is possible to understand the interdependencies of the technologies supporting various business functions and services, helping to drive more informed decision making. The consolidation of data into one central repository enables powerful analytics. Enterprises use a single application that correlates the output of enterprise discovery tools and captures all elements of the technical infrastructure, from servers to power equipment to network connections. This next-generation software unobtrusively collects these disparate elements and correlates them within one model of the data center. Organizations can use this data at a high level to make decisions about future technology investments, or drill down to granular issues such as determining which ports are being used relative to the services being delivered.
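To make the repository idea concrete, here is a minimal sketch of how disparate elements might be normalized into one model and queried together. Every class, field, and identifier below is a hypothetical illustration, not the data model of any particular product:

# A minimal sketch of a consolidated infrastructure repository (Python).
# Assets of any kind (servers, switches, power gear) share one schema,
# so a single query can span what separate point tools used to track.

from dataclasses import dataclass, field

@dataclass
class Port:
    port_id: str
    connected_to: str | None = None   # far-end port_id, if cabled

@dataclass
class Asset:
    asset_id: str
    kind: str                         # e.g. "server", "switch", "pdu"
    rack: str
    ports: list[Port] = field(default_factory=list)
    services: list[str] = field(default_factory=list)

repository = [
    Asset("srv-01", "server", "rack-A1",
          ports=[Port("srv-01/eth0", "sw-01/24")], services=["billing"]),
    Asset("sw-01", "switch", "rack-A1",
          ports=[Port("sw-01/24", "srv-01/eth0"), Port("sw-01/25")]),
]

def ports_in_use(service: str) -> list[str]:
    """Drill down from a delivered service to the cabled ports carrying it."""
    return [p.port_id
            for a in repository if service in a.services
            for p in a.ports if p.connected_to]

print(ports_in_use("billing"))   # -> ['srv-01/eth0']

The design choice that matters here is normalization: once every element, active or passive, lives in one schema, the port-level drill-down and the high-level investment view are just different queries over the same data.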
For the first time, organizations can gain a real understanding of the service impact of a failure in one area, or the implications of a change in the data center. One accurate source for all of this information is a necessary leap forward. Instead of relying on disparate applications and manual processes, businesses can use a centralized visual data repository to visualize racks, how equipment is laid out, how much aisle space is free, and so on. This is especially important for planning appropriately for power and cooling needs. Having a multi-dimensional view of equipment placement in relation to the overall space allows for more accurate decision making, ultimately leading to greater reliability.
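Conceptually, that kind of impact analysis is a walk over a dependency graph built from the consolidated repository: start at the failed component and follow everything downstream. The mapping below is a hypothetical example, not drawn from any real environment:

# A minimal sketch of service-impact analysis over a dependency graph (Python).
# In practice the graph would be populated from the repository sketched above.

from collections import deque

# component -> components that depend on it
dependents = {
    "pdu-3":   ["rack-A1"],
    "rack-A1": ["srv-01", "sw-01"],
    "srv-01":  ["billing-service"],
    "sw-01":   ["billing-service", "email-service"],
}

def impacted(component: str) -> set[str]:
    """Breadth-first walk to everything downstream of a failed component."""
    seen, queue = set(), deque([component])
    while queue:
        for dep in dependents.get(queue.popleft(), []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(impacted("pdu-3"))   # every rack, device, and service affected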
But visualization of the technical infrastructure doesn't end there. Organizations deploying centralized visualization tools should also look to solutions that map all the way to the user's desktop environment. Many products concern themselves only with what is in the data center. However, new tools are emerging that give enterprises insight into the impact of services on users. This objective information can enhance the quality of service provided to users across the enterprise.
Moreover, visualization tools should provide a means to manage the wide area network connections between multiple data centers. Many large, geographically dispersed enterprises operate multiple data centers, and these centers are already connected via wide area networks. There's no reason to view them as discrete entities when in reality there are important connections between them. A sophisticated geo-referenced visualization tool will allow organizations to gain a holistic view of the relationships between these centers.
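As a small illustration of what "geo-referenced" means in practice, such a tool might store each site's coordinates alongside its WAN links; the sites, coordinates, and links below are hypothetical:

# A minimal sketch of a geo-referenced model of inter-site WAN links (Python).
# Site names, coordinates, and links are illustrative assumptions.

from math import radians, sin, cos, asin, sqrt

sites = {                      # site -> (latitude, longitude)
    "NJ-DC": (40.22, -74.01),
    "TX-DC": (32.78, -96.80),
}
wan_links = [("NJ-DC", "TX-DC")]

def distance_km(a: str, b: str) -> float:
    """Great-circle distance between two sites (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (*sites[a], *sites[b]))
    h = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

for a, b in wan_links:
    print(f"{a} <-> {b}: {distance_km(a, b):,.0f} km")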
Conclusion
The complexity of today's technical environments, and the lack of a centralized means to manage these resources, has often left CIOs and other executives with little insight into the true impact of infrastructure changes. Oftentimes this meant decisions were made at a tactical level without regard to any strategic impact. Today, enterprises can finally gain a clear view of all infrastructure elements and how they relate to one another. The ability to visualize, analyze, and manage disparate components allows for better cost controls and a higher quality of service. Mapping the entire enterprise offers a foundation for driving change management, expansion plans, and optimum service delivery.
Biography
William Spencer is the founder and CEO of Planet Associates. Located in Neptune, N.J., Planet Associates develops, licenses and supports the Planet IRM family of physical infrastructure relationship management software products. Planet IRM works toward total enterprise network asset consolidation, with project scopes ranging from individual data centers to entire global organizations. For more information, visit http://www.planetassoc.com.