Who is using in-house racks?
The majority of hedge fund firms established before 2008 have internal data storage racks (77%) and/or use external data centres (71%), the results of our survey of hedge fund professionals in IT roles suggest (Exhibit 1.1). Only 3% use neither, with the longer-tenured managers interviewed bemoaning the difficulty of moving legacy systems and data to the cloud. Quant managers, with their greater need for computing power, have a ravenous appetite for physical storage, with almost all (95%) using physical racks in some form. Established firms have developed complex infrastructures reliant on physical servers, and many principals remain wedded to the significant investments made and the sense of control internal systems offer.
Many of the survey respondents at newer managers – established after 2008 – still use internal racks, but the proportion using neither internal nor external racks is significantly higher (26%) than among other groups. Of course, newer managers have fewer disincentives to invest in the cloud, and, as will be noted, the cloud and data centres are the options many new managers are choosing today.
However, our survey results do not provide strong evidence that managers prefer internal physical data centres to external centres: similar proportions of emerging and established firms, trading both discretionary and quantitative strategies, use each type. In each case, the proportion using internal racks is greater than the proportion using external centres, but not significantly so. This is most likely because internal racks are the traditional, and obvious, method of data storage.
Internal racks were until relatively recently the only feasible method of storing large amounts of data, and many see their use continuing, albeit to a diminishing degree. An HFM Technology survey last year found that the proportion of managers using in-house racks as their primary storage location had fallen to 20% from 36% in 2015 (Exhibit 1.2).
Pros of in-house racks
i) Greater control and response time: The CTO of one US-based ‘billion-dollar-club’ (BDC) manager said that having an in-house technician means getting straight answers when something goes wrong, whereas a marketing representative or account manager at a large company may be distracted by other clients or incentivised to massage the truth.
ii) Bespoke solutions for complex firms: Another manager interviewed said that the wide spread of markets their firm traded would make it difficult to replicate their investment strategy in an external system.
iii) Comfort in compliance: The onus is on the manager to ensure compliance, and while it is labour-intensive to ensure adherence to all relevant data storage regulations, such as the EU’s forthcoming GDPR, doing so in-house can provide an added layer of comfort.
Cons of in-house racks
i) Expensive real estate and utility bills: Firms located in the top-end hedge fund hubs pay premium rents for the space their physical hardware occupies. Cooling systems are already expensive, and costs rise further if additional power is needed to counteract unwanted residual heat from surrounding rooms.
ii) Difficult to transfer: The opportunity to move an existing infrastructure to an external data centre is limited when so much time is being spent keeping a legacy infrastructure afloat, one manager said.
iii) Adding racks is not cheap: When storage needs increase, whether over the long or short term, adding a block carries a significant cost. And even if the process of adding more space is straightforward, the full amount of storage may only be needed for certain tasks.
Data centre storage
H1 2017 saw more investment in US data centres than in any of the last ten full calendar years. According to a CBRE report, this year’s investment is on track to surpass the previous three years combined. However, the 2016 HFM Technology survey found that the use of external data centres as the main location of data storage among hedge fund managers had fallen to 48% from 55% in 2015.
Pros of data centre storage
i) Specialist support: Perhaps the biggest advantage of an external centre is access to specialist skills that would otherwise be unattainable. Managers interviewed by the team were very complimentary about the quality of staff at their external data centres. “Goldman Sachs couldn’t do what the likes of AWS and Azure are doing,” one CTO said.
ii) Inexpensive quick fixes: There are cost savings to be made as short-term storage needs can be met in a matter of seconds for little added cost. This is a significantly easier process than adding a new brick to your rack.
iii) State-of-the-art security: Even though external data centres are a focal point for attempted cyber-attacks, their security is superior to that of most in-house installations.
iv) Quick and flexible upgrading: It is expensive and often impractical to advance technology in-house, several CTOs said, while external firms are expected to be at the cutting edge of new services. Colocation is also easier: among 226 US data centre managers surveyed by Vertiv, 57% believe colocation will increase in the next two years.
Cons of data centre storage
i) Costly overall packages: The CTO at one multi-billion-dollar manager noted that the cost of his overall data centre service had increased recently, even though the actual cost of storage had fallen. Managers should be mindful that this is a buyer’s market (as will be discussed further in Section 2) – they may well be able to push back on price hikes.
ii) Third-party risk: Whatever the benefits of external centres, a firm is effectively ceding control to a third party. As one CTO warned, there is always the risk that a rogue data centre employee can, deliberately or not, “switch off your server”. This may be alarmist, but it points to real concerns about the concept of external data centres.
Managing third-party risk
While many managers have no concerns with the concept of external data centres per se, our survey suggests that a similar proportion remain nervous about third-party risk generally (Exhibit 1.3). For many CTOs and technology professionals, the biggest task is often convincing management that moving data storage out of house is the right call. The big question is: who is ‘touching’ the data? Many data centres are outsourcing tasks in order to continue growing their business. One London-based BDC manager had it written into the firm’s contract that no ‘fourth-party’ workers could handle its data.
There is also the concern that a third party lacks sufficient knowledge of your contract, or that a key contact is vulnerable to replacement: if your one point of contact leaves the firm, all knowledge of the relationship could be lost. One way to protect against this risk is to make sure the contract states exactly what you require in terms of contact points and fourth parties. The other is to strengthen your vendor due diligence set-up. The community of vendor due diligence providers is growing, and one simple check is to choose a provider also used by a respected agency.
Physical security measures
While most CTOs acknowledge that external centres now offer satisfactory, if not superior, security measures, some feel such centres are targets for cyber-attacks. One fear is the growing number of ‘zombie servers’. A Stanford University paper released earlier this year found that 20-30% of servers are unused and unmanaged over a six-month period. Not only do they increase costs through wasted space and energy, they are also ‘un-patched’, i.e. easier to infiltrate. One smaller manager told the Insights team about a laptop that was restarted after four years; a piece of malware that had lain dormant promptly attempted to penetrate the firm’s network.
Physical security is as much of a concern as cyber security. Most physical security relies on day-to-day common sense, but determined individuals remain a threat. During our research, we heard several stories of the lengths people will go to in order to access buildings and/or unsecured networks and steal information. As a result, some data centres have taken extreme measures, building facilities in caves and ex-military bunkers, for example. If there is a catastrophe in one of the major markets – many managers cited the radioactive ‘dirty bomb’ scenario – other markets will still open, and a ‘bunkered’ centre offers a strong chance of remaining able to trade.
US and UK data centre power failures
Some risks are particularly difficult to mitigate. According to Eaton, there were 3,879 electricity outages across the US in 2016, affecting almost 18 million people, as well as a host of data centres. The reasons for recent outages have been varied and colourful: cranes falling onto power lines in the UK, a lawnmower destroying an electricity cable near Chicago, and animals destroying hardware. Exhibit 1.4 lists a selection of the most notable outages in the US and the UK over the past three years. A ‘power failure’ is often the reason publicised by data centres to explain an outage – a catch-all for when the real cause is unknown or detrimental to the brand.
Even the smallest changes can affect running conditions. One New York-based CTO noted that his underground data centre had taken air pressure for granted: when the floor tiles above were removed, the performance of the systems below was seriously impacted. A power outage in the UK in September 2016 lasting a fifth of a second put the GS2 data centre in London’s Docklands out of action for two days. This and other failures have damaged the company’s reputation, CTOs the team interviewed said.
Preparing for an outage
According to a Ponemon Institute study (Exhibit 1.5), uninterruptible power supply (UPS) failures were the main cause of outages in 2016 (25%). Cyber-crime was second at 22%, a significant increase from 2% in 2010 (the only cause to rise over the period). The proportion of failures due to human error remained roughly consistent, at 22% in both 2016 and 2013, and 24% in 2010 – about twice the rate of water or heating failures, which decreased from 15% in 2010 to 12% in 2013 and 11% in 2016.
So how can managers protect themselves against power outages? Interviewees recommend having multiple different sources and types of power supply. One New York-based hedge fund manager whose IT team was concerned about this issue had installed a generator as an insurance policy against outages. The firm also made use of two physical telecoms lines and one microwave connection to ensure permanent connectivity.
Testing back-up systems on a weekday
As Exhibit 1.6 shows, there was a fairly even split between respondents who said they did not run their key systems from secondary centres (43%) and those who said they did so during the working week (41%). The remaining respondents (16%) tested their systems on secondary networks outside office hours. This is not ideal: testing at the weekend, while providing valuable information, does not truly test the system under the stresses of a business day.
Also of note was how impassioned some managers were about the topic. One interviewee at a Switzerland-based hedge fund manager was surprised even to be asked, calling firms that run their systems via a back-up centre during the working week “very brave”; he was concerned by the thought of breaking the connection with one centre and resyncing. Another CTO, at a London-based manager, planned to start such tests at the end of 2017, having recently invested in a disaster recovery (DR) set-up.
A lack of sophistication is holding many managers back. The most advanced firms can move between data centres at will, as one London-based manager does on a biweekly basis. And as processing and connectivity speeds increase, location itself becomes less of an issue. An IT professional at one London-based BDC manager looking to upgrade from its current internal storage system is considering data centres in Berlin. One lingering problem with regard to location is regulation: French managers, for example, must ensure their data is located in France.
The team dictates the solution
Managers weighing an investment in internal infrastructure would do well to consider the strength of the team they have or could build. Ultimately it is the team and/or the talent brought in – permanent or contracted – that will create and run the bespoke server estate and provide the necessary updates. And not just today: much like the infrastructure itself, the team will need to be maintained and to evolve over time, and a process needs to be in place to ensure this happens. External teams will provide a high-quality service, security and swift access to upgrades, but are less likely to offer the truly bespoke offering sophisticated managers require.
For managers requiring a physical presence, HFM prescribes a minimalist storage system in-house, with most data stored offsite. The colocation or hybrid model used by quant funds will likely become more prevalent as more firms build out their technology and add systematic products. This model offers the best of external and internal systems – control, support and cutting-edge technology, as well as the ability to run key systems from dual locations in the event of a power outage, an event every fund should expect at some point in its lifecycle.