What You Need to Know About Data Centers
So, that re-purposed PC under the desk can no longer meet your expanding server needs? Your mind might drift to thoughts of building your own data center, but the complexities can seem overwhelming. While there are a multitude of considerations involved in the design of any data center, they can be broken down into a manageable set of characteristics that all data centers must possess.
In many ways, choice of location is the single most important element of any data center. It can be tempting to install a few servers into some vacant office space and call it a data center, but there's more to the picture than meets the eye.
An acceptable location will have to meet requirements for reliable power, climate control, floor weight loads, network diversity, and physical security. Nearly every other issue in data center design is impacted by choice of location. An informed data center designer will need to consider each aspect before settling on a location that will meet their needs.
As a company becomes more dependent on its data systems, maintaining uptime becomes a top priority, and it's not as simple as pressing the reset button on that outdated PC server! As a first line of defense, power and Internet connectivity need to come from reliable sources. If multi-hour outages are common at a company's location, it may need to find a remote location with a more stable power grid for its data center, or even procure alternative power that can operate independently of the grid, such as wind, solar, or fuel cell systems. But even reliable sources can and will fail, so every data center needs a clearly defined uptime expectation so that appropriate backup power and connectivity decisions can be made.
All data centers will require some type of backup power system. For smaller non-critical centers, a battery-powered UPS may be sufficient. It will need to be rated for enough volt-amps to carry the connected equipment, with enough amp-hours of battery capacity to ride out the projected duration of an outage.
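The sizing described above can be sketched as simple arithmetic. The figures and derating factors below (power factor, headroom, battery voltage, depth of discharge) are illustrative assumptions, not vendor specifications; always size against the actual equipment's nameplate data.

```python
# Illustrative UPS sizing sketch. All constants here are assumptions
# for the example, not recommendations from any particular vendor.

def ups_va_required(load_watts, power_factor=0.9, headroom=1.25):
    """Minimum UPS rating in volt-amps for a given real-power load,
    with 25% headroom for growth and inrush."""
    return load_watts / power_factor * headroom

def battery_amp_hours(load_watts, runtime_hours, battery_voltage=48,
                      inverter_efficiency=0.9, max_depth_of_discharge=0.8):
    """Approximate battery capacity in amp-hours needed to carry the
    load for the projected outage duration."""
    energy_wh = load_watts * runtime_hours / inverter_efficiency
    return energy_wh / (battery_voltage * max_depth_of_discharge)

# Example: a 2,000 W rack that must survive a 15-minute outage.
print(round(ups_va_required(2000)))             # VA rating with headroom
print(round(battery_amp_hours(2000, 0.25), 1))  # amp-hours at 48 V
```

In practice the outage-duration target usually only needs to cover the gap until a generator starts, which is why even large UPS installations are often sized for minutes rather than hours.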
To ensure uptime during outages of long duration, a backup generator becomes necessary. Generators are usually fueled by the public natural gas utility, by diesel, or, in the case of the largest turbine-powered generators, by jet fuel. They need to be tested and exercised weekly, and maintenance and refueling services are also required. Refueling contracts that guarantee delivery can be critical during extended outages.
Loss of Internet connectivity is also a leading cause of downtime. A leased line from a single provider can be acceptable if a SONET ring is available in the area, thanks to the ring's self-healing ability. More mission-critical applications require connectivity from at least two providers to ensure an acceptable level of diversity, and a plan to re-route traffic needs to be in place before disaster strikes. This is a situation where a software-defined network (SDN) can be a real time saver, as an SDN can be configured to intelligently redirect traffic through alternate providers to circumvent outages.
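The failover logic an SDN automates can be reduced to a simple idea: prefer uplinks in priority order, skipping any that fail a health check. The sketch below is plain Python for illustration only; the provider names are hypothetical, and a real SDN controller would push flow rules (for example via OpenFlow) rather than return a string.

```python
# Minimal uplink-failover sketch (illustrative only; a real SDN controller
# would reprogram switches rather than select a name in application code).

def choose_uplink(uplinks, is_healthy):
    """Return the first healthy uplink, in priority order."""
    for link in uplinks:
        if is_healthy(link):
            return link
    raise RuntimeError("all uplinks down")

# Hypothetical providers in priority order; simulate an outage at A.
uplinks = ["provider_a", "provider_b"]
status = {"provider_a": False, "provider_b": True}
print(choose_uplink(uplinks, lambda link: status[link]))  # falls back to provider_b
```

The value of doing this in software is that the health check and the re-route happen in seconds, without waiting for a technician to re-patch circuits.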
Also, note that the infrastructure in some cities can provide better Internet connectivity than others, so this is another decision that comes back to the choice of location. The best data centers provision leased lines from all major carriers, ensuring the fastest possible route for external traffic regardless of destination.
Choice of server hardware is not quite as critical these days, as more data centers implement VMware virtualization or Docker containers to abstract the hardware. The usual requirements such as RAID still apply, although the availability of low-cost server hardware is beginning to change this: in some situations it is now more cost-effective to deploy redundant servers with SSDs than to maintain hard-disk RAID arrays. Equipment might also need redundant hot-swap power supplies, hot-swap drives, and multiple network interface cards for secure network isolation.
As previously mentioned, network equipment can also be abstracted using SDN, so that choice of network hardware is no longer as critical as it once was. Where necessary, high-end switches, routers, and other appliances are often modular in design, and spare modules are kept on-hand to minimize downtime.
Just as important as the choice of electronics, every data center needs an organization plan. Servers and networking equipment are organized into racks interconnected by cable runways. The best data centers have a pristine look that reflects a high degree of organization. While it may be tempting to think this is done for photogenic purposes when the leading IT magazines pay a visit, organization really matters when problems arise or modifications are necessary: a well-organized data center can be maintained or upgraded with the greatest efficiency.
Most data centers use multiple isolated internal networks to protect from unauthorized access. Connections between web servers and their corresponding database servers are often isolated, while a load balancing appliance manages the distribution of external connections among the server cluster.
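The load balancer's job described above can be illustrated with the simplest possible policy, round-robin rotation across the cluster. This toy sketch is not how a production appliance works (real deployments add health checks, session persistence, and weighting), and the server names are made up for the example.

```python
# Toy round-robin load balancer sketch. Illustrative only: production
# systems use dedicated appliances or software with health checking.
import itertools

def round_robin(servers):
    """Yield backend servers in rotation, one per incoming connection."""
    pool = itertools.cycle(servers)
    while True:
        yield next(pool)

lb = round_robin(["web1", "web2", "web3"])
print([next(lb) for _ in range(4)])  # web1, web2, web3, then back to web1
```

Because only the load balancer faces the outside world, the web and database tiers behind it can live on isolated internal networks exactly as described above.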
In the past, data centers were usually homogeneous for ease of configuration. Today, however, SDN is rapidly removing the requirement that network hardware come from a single vendor. The cost of equipment is also being reduced because it's often cheaper to use commodity hardware within an SDN. Modern data centers rely less on high-end networking equipment such as load balancers and modular switching/routing devices, instead investing in commodity switches that can be rapidly swapped out if a hardware problem arises.
Backups and Offsite Storage
Although magnetic tape storage is on the decline, it can still be useful in many backup scenarios. Most data centers will be large enough to require a robotic tape library, which can be configured to quickly and seamlessly capture many terabytes of data. It is also important that a full backup of all systems periodically be ejected from the library and transferred offsite. There are couriers available for this purpose who will pick up tapes weekly and store them in their secure warehouse.
Cloud-based solutions are the up-and-coming way to keep data securely stored offsite. Services such as Amazon Glacier can store data center backups at very low cost, but restoration can be slow and expensive.
A comprehensive backup solution can follow the "3-2-1 method": keep at least three copies of the data, on at least two different types of media, with at least one copy offsite. Combining redundant servers (or RAID), local tape backups, and offsite cloud storage satisfies this rule and provides a high level of data safety with rapid restoration capability.
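The 3-2-1 rule is mechanical enough to check in code. The sketch below uses a made-up plan format (a list of copies, each tagged with its medium and whether it is offsite) purely to make the counting explicit.

```python
# Sketch: verify a backup plan satisfies the 3-2-1 rule. The plan
# format (medium name, offsite flag) is hypothetical, for illustration.

def satisfies_3_2_1(copies):
    """copies: list of (medium, is_offsite) tuples, one per data copy.
    Returns True when there are >= 3 copies, >= 2 media, >= 1 offsite."""
    media = {medium for medium, _ in copies}
    has_offsite = any(offsite for _, offsite in copies)
    return len(copies) >= 3 and len(media) >= 2 and has_offsite

plan = [("server_raid", False), ("tape", False), ("cloud", True)]
print(satisfies_3_2_1(plan))  # True: three copies, three media, one offsite
```

Note that three tapes in the same library would fail the check twice over: one medium, and nothing offsite.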
Any location that houses sensitive data will need to be very secure. This means restricted access at all times, as well as physical barriers within the building to prevent entry. The best data centers are highly concealed, constructed with concrete walls without windows, and located away from possible natural disasters. Their locations are usually a closely guarded secret, which explains why you may have never seen one.
Less critical data centers can be constructed in a common office building with many tenants. Have you ever been in an office elevator and noticed the mystery floor that requires a special key to access? In shared office settings, it is best if the data center occupies an entire floor of the building. Security literally begins at the elevator, and the possibility of intrusion through shared walls or above drop ceilings is eliminated.
Data centers require a pre-action sprinkler system, which is unlike the wet-pipe systems found in most office buildings. Conventional sprinkler systems have pressurized water at the head, which could leak or be inadvertently activated. A pre-action system contains pressurized air at the head instead of water. A smoke detector must first be triggered in order to release water to the sprinkler heads.
Although most building codes require a pre-action system for data centers, water and electricity should ideally never meet at all. For this reason, mission-critical data centers also contain a supplemental dry gas extinguishing system, designed to smother a fire before any sprinklers are activated.
Temperature and humidity must remain stable and within the acceptable parameters of data center equipment. A data center's climate control system is usually independent of any other building system. This is often an important consideration when data centers are placed in common office environments.
As you might imagine, a heating system will not be necessary, but utility costs can still be high. Many data centers use advanced climate control systems such as water chillers, geothermal cooling, or direct air systems (in colder climates) in order to minimize utility costs.
Most data centers are complex enough to require more than a single person to maintain systems. A monitoring and alert system is used to notify an on-call technician in the event of a system failure. A typical small- to mid-sized organization might employ both a server admin and a network admin who interact closely and can trade 24/7 on-call duties. Larger organizations might also need a dedicated data center manager who can coordinate all activities, plus one or more technicians who share on-call duties while monitoring, testing, and upgrading systems as needed.
With all of these considerations in mind, it may seem that the best data center option for most organizations is to procure a colocated or cloud-based solution. Depending on an organization's particular requirements, though, it can be a huge benefit to design and construct its own custom data center according to its specific needs. For the IT engineer tasked with design and construction, the experience can be very rewarding, as rarely do we get to see such a complex puzzle come together in a physical way.