Data Centres

Importance of Network for a Data Centre (DC)

– Dharmendra Singh, Director – Data Centres, ANAROCK Capital
dharmendra.singh@anarock.com

Data centres (DCs) have existed in India for more than two decades. On-premise and colocation hosting models were widely prevalent and were primarily led by enterprises. Tata Communications, Sify, Netmagic, CtrlS, Nxtra, Tulip, Reliance Communications and GPX were among the leading operators who set up multiple data centres, with the highest concentration in Mumbai/Navi Mumbai, NCR, Bangalore, Chennai, Hyderabad, Pune and Kolkata, in that order.

The emergence of cloud, virtualization, hybrid architectures, 4G, online B2C services, affordable smartphones and falling data tariffs caused exponential growth in data. Add to this the government initiatives on data localization and MeghRaj (the GoI cloud), the grant of infrastructure status to DCs (enabling easier access to funds), the huge demand for remote working driven by COVID-19, and the high uptake of video content, and you have the key drivers of increased capacity demand for DCs. Buoyed by this, global majors such as NTT, Equinix, Colt, PDG, Digital Realty, Yondr, CapitaLand, Iron Mountain and STT are making significant investments in this space. Incumbents Nxtra (Bharti Airtel), Sify and CtrlS, and new local players such as the Hiranandani group-led Yotta and Adani, have upped the ante through fresh investments.

Business Case for Network Dense DC
One of the most critical parameters in making a DC compelling is its interconnect design. ANAROCK's consulting services help enterprise customers choose the best network design parameters to ensure the highest redundancy, QoS and scalability at the lowest latency and capex. Let's look at why.

In today’s hyperconnected world, applications and data may be hosted in different clouds (private, public or multi-cloud), on-premise, or with a third-party colocation partner, leading to the emergence of a “hybrid set-up”. This necessitates interconnection among DCs deployed by different operators as well as cloud providers. An ecosystem of cloud providers, CDNs, network carriers, ISPs, SaaS and other managed service providers within a facility enables customers to exchange traffic directly and privately over one-to-one or one-to-many “direct connects”. Because the connecting equipment sits in close proximity, such connections exchange data with the best performance, highest security and lowest cost.

DC Inter-Connect Need
There are different kinds of interconnectivity:

  1. Nature of Inter-DC Connectivity
    With hyperscalers becoming anchor customers for DC operators in India, connectivity among DCs has become a key requirement and an important qualification parameter. Point-to-point leased bandwidth often inhibits scalability and is not cost-viable, while ordinary Internet bandwidth may not give the best latency and can suffer frequent link switching (flaps) due to challenges in the underlying transport network. Hyperscalers typically lease multiple DCs in a given availability zone and connect them using high-speed, redundant and scalable optical fibre links from an IP-1 licensee. A DC with direct connectivity to the other DCs in a city therefore becomes a compelling proposition.
  2. Low Latency driving DC Clusters
    In many cases, the total infrastructure available for lease gets consumed quickly. Hyperscalers normally cluster servers for BCP and similar needs, with latency expectations of under 50 microseconds. This is feasible when the distance between two facilities does not exceed roughly 5 km of physical fibre length. If this distance constraint among closely located facilities is met, an existing hyperscale cloud customer is likely to prefer leasing more infrastructure from the same DC operator. This is one reason why new DCs are being constructed within clusters in Navi Mumbai, Siruseri and Ambattur in Chennai, Whitefield in Bangalore, and Greater Noida, to name a few.
  3. Intra-DC Connectivity
    While much emphasis is placed on inter-DC connectivity, intra-DC connectivity is equally critical. Inside a DC, a typical rack consists of multiple shelves, each a unit of compute and storage, connected in a specific spine-and-leaf architecture. Each shelf connects to the higher-level switching layers through multiple redundant fibre cables. If these complex designs are not executed properly, they can create multiple challenges in operating the DC. The design should ensure that the maze of cables does not restrict airflow within the DC; bundling, clamping and routing of cables must not inhibit smooth airflow. Maintaining and updating labels and records for every strand of fibre is key to managing connectivity within the DC efficiently.
  4. DC to Gateway Locations
    Typically, the most prominent gateway locations are those where international submarine cable systems terminate at Cable Landing Stations (CLS). Another category of gateway location is one hosting cross-connects such as Internet exchanges (IXs) and DC exchanges. These are neutral locations where traffic from different entities is exchanged.

    For example, NIXI hosts a neutral Internet peering exchange, and Internet service providers and telcos normally connect very high-capacity links at this site from their respective DC locations, through the network racks in the meet-me room.

  5. Inter-City DC Connectivity
    To guard against potential data loss, copies of information are maintained at multiple locations. Cloud companies also offer DC and DR facilities in the cloud environment to many of their clients. This necessitates continuous replication of data among multiple DCs in different zones.

    For example, AWS has created two major regions in India, with clusters of DCs located in two cities – Mumbai and Hyderabad. DCs located in different cities need to be connected over scalable, resilient, redundant and lowest-latency paths. Normally, such links are leased by the tenants located inside the DC. For example, Google, present in various DCs in Mumbai and Delhi, may lease bandwidth from Class A UASL upstream providers such as Bharti Airtel and Tata Communications on multiple redundant paths with specific quality-of-service SLAs.
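The latency arithmetic implied by the clustering and inter-city points above can be sketched in a few lines. The figures used here (about 5 microseconds of one-way fibre latency per km, a 50-microsecond clustering budget, and an illustrative ~700 km inter-city fibre route) are assumptions for illustration, not measured values:

```python
# Sketch: one-way fibre latency vs. a cluster latency budget.
# Assumes ~5 microseconds of one-way latency per km of optical path
# (light travels through fibre at roughly two-thirds of c); real links
# add switching and transponder delays on top of propagation.

US_PER_KM = 5.0  # assumed one-way fibre latency, microseconds per km

def one_way_latency_us(fibre_km: float) -> float:
    """Propagation-only one-way latency for a fibre path."""
    return fibre_km * US_PER_KM

def fits_cluster_budget(fibre_km: float, budget_us: float = 50.0) -> bool:
    """Can two facilities this far apart share a server cluster?"""
    return one_way_latency_us(fibre_km) <= budget_us

# Two DCs 5 km of fibre apart: 25 us one way, inside a 50 us budget.
print(one_way_latency_us(5))        # 25.0
print(fits_cluster_budget(5))       # True

# An illustrative inter-city route (~700 km of fibre) is milliseconds
# away -- fine for replication traffic, far too slow for clustering.
print(one_way_latency_us(700) / 1000)  # 3.5 (milliseconds)
print(fits_cluster_budget(700))        # False
```

This is why the 5 km cluster constraint and dedicated low-latency inter-city links serve very different purposes: clustering lives within a metro, while inter-city paths carry replication.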

Pillars of Connectivity Quality
Several factors determine the quality of connectivity and differentiate one DC from another:

  1. Redundancy
    Large enterprise and cloud companies expect an availability guarantee of at least 99.982% to support Tier III compliance. This level of availability cannot be guaranteed without multiple end-to-end redundant paths connecting the DC to the telco nodes. Though the link SLA is the ISP's responsibility, the DC operator has to enable this degree of redundancy through its active involvement.
  2. Scalability
    Increasing data consumption leads to exponential growth in tenants' footprints. A DC that lacks scalability (in both space and power) may fall out of favour with potential tenants, which is why the modern trend is to build DCs in a campus.
  3. Latency Desired
    Until a few years ago, carrier-grade latency (< 50 ms) was good enough for virtually all services, barring a few customers such as stockbrokers and media companies. Now, cloud-hosted consumer applications such as gaming and automation have started demanding latency near 1 millisecond. Customers are willing to pay extra for a committed latency, but they also impose penalties for SLA breaches. Theoretically, it is possible to achieve a latency of 5 microseconds per km of optical distance, but reaching such extreme levels requires many design considerations at the very start of setting up the connectivity.
  4. Quality of Service (QoS)
    Other QoS parameters, such as link flapping/switching, packet drops, jitter and throughput variation, also impact application performance. Hence DC providers need to engage proactively with ISPs to ensure very good connectivity quality at every DC location. Operators often leave this to the ISPs and thereby lose points during technical evaluation by prospective customers.
  5. Redundant Meet-me-rooms
    MMRs are the rooms that house the telco network racks. The modern trend is to have redundant MMRs on each floor, rather than a single MMR (or one redundant pair) serving all floors. This needs to be part of the DC operator's initial design.
  6. Optimized Connectivity Capex
    The initial capex for setting up best-in-class connectivity has to be borne by the DC operator without any immediate revenue expectation. Hence it is also very important to ensure that all the objectives listed above are fulfilled at the best possible TCO.
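As a back-of-the-envelope illustration of the redundancy and availability targets above, the sketch below converts an availability percentage into allowed downtime per year and combines redundant paths. It assumes, idealistically, that path failures are independent; real redundant paths share some failure modes, so treat the combined figure as an upper bound:

```python
# Sketch: the availability arithmetic behind redundancy targets.
# Assumes path failures are independent (an idealization).

MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes_per_year(availability: float) -> float:
    """Allowed downtime per year for a given availability fraction."""
    return (1.0 - availability) * MINUTES_PER_YEAR

def parallel_availability(*paths: float) -> float:
    """Combined availability of redundant paths: the service is down
    only when every path is down at the same time."""
    unavailability = 1.0
    for a in paths:
        unavailability *= (1.0 - a)
    return 1.0 - unavailability

# The Tier III figure (99.982%) allows roughly 95 minutes a year.
print(round(downtime_minutes_per_year(0.99982), 1))  # 94.6

# A single 99.9% link misses that target; two such links in
# parallel (if truly independent) comfortably exceed it.
print(round(parallel_availability(0.999, 0.999), 6))  # 0.999999
```

This is why a single well-run link is rarely enough: the jump from one path to two independent ones improves the availability figure by orders of magnitude, which is what the DC operator's "active involvement" in redundancy buys.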