Data is the new uranium – incredibly powerful and amazingly dangerous
Column I recently got to play ‘fly on the wall’ at a roundtable of chief information security officers. Beyond the expected griping and moaning about funding shortfalls and always-too-gullible users, I began to hear a new note: data has become a problem.
A generation ago we had hardly any data at all. In 2003 I took a tour of a new all-digital ‘library’ – the Australian Centre for the Moving Image (ACMI) – and marveled at its single petabyte of online storage. I’d never seen so much, and it pointed toward a future where we would all have all the storage capacity we ever needed.
That day arrived not many years later when Amazon’s S3 quickly made scale a non-issue. Today, plenty of enterprises manage multiple petabytes of storage and we think nothing about moving a terabyte across the network or generating a few gigabytes of new media during a working day. Data is so common it has become nearly invisible.
Unless you’re a CISO. For them, more data means more problems, because it’s scattered across so many systems. Most security execs know they have pools of data all over the place, and that marketing departments have built massive data-gathering and analytics engines into every customer-facing system, acquiring still more data every day.
But they’re mostly unable to identify all the data they hold, and are unsure if those who collect it understand the reputational and financial risks of a data breach – blame for which lands on a CISO’s desk no matter who messed up.
CISOs therefore increasingly feel that the cost of managing data can exceed its value. Those I observed found themselves wishing for a world with less data to secure.
While few CISOs would make that suggestion publicly – and fewer have any idea how to manage that feat – they do see the business proposition of “big data” shifting from a net positive to net negative.
Welcome to the latest movement in IT’s endless swings and roundabouts. Just as we’ve seen the center/edge debate in computing shift back and forth repeatedly over the last 50 years, we’re now seeing the emergence of another debate: data value versus data cost.
The mantra at the start of this debate – “data is the new oil” – looks to be replaced by another, more accurate assessment: “data is the new yellowcake.” For the unfamiliar, yellowcake is a radioactive, toxic, uranium oxide that can be further refined into a range of both very helpful and apocalyptically terrifying products.
Yellowcake and its derivatives also create a critical storage problem which, if mismanaged, draws intense attention from governmental and anti-governmental interests.
The best place for uranium is in the ground – undisturbed, slowly decaying into lead. If we don’t concentrate it, we don’t have to manage the consequences.
Will we make the same decision about data? We concentrate data to increase its value – simultaneously amplifying the danger to our organizations. Beyond a certain point, organizations could well outrun their ability to manage their concentrated data securely – which could then lead to the whole situation going supercritical.
We don’t know what a “data Chernobyl” might look like. With luck, we’ll never see it. But playing with fire while relying on luck to keep us safe seems a guarantee of disaster. To keep data at arm’s length, we’ve got to find our equivalent of the ‘glove box’ – managed carefully, and with a full awareness of the risks and costs of an accidental spill. ®