For many years, data monetisation has been seen as a way for data organisations, traditionally regarded as cost centres, to reduce, or at least justify, their expense. By assembling data and then selling it, either externally to third-party vendors or internally within their own company, a data organisation could recoup some of its costs. This is, however, not an endeavour to be entered into lightly.
To be clear, data monetisation is not simply another name for selling personal data. It is a way of creating economic value for the firm holding the data, with the aim of either increasing revenue or decreasing costs. However, due to the large amount of publicity currently surrounding personal data and its uses, particularly in the light of the recent Facebook-Cambridge Analytica scandal, the public perception is that if a company is monetising its customers’ data, it is selling it on to be used for other purposes.
Monetising customer data is, however, not a new phenomenon. Companies have been doing it for years, even before the advent of social media. Take, for example, the first time an advert popped up on a news website for a niche product you had just searched for on Google, or looked at on the online retailer Amazon’s website. “How did that get there?”, you may have asked yourself. The answer is that Google or Amazon used your search and browsing history to enable advertisers to target you specifically. This use of collected consumer information, which has taken place for many years unbeknown to many consumers, is increasingly powered by artificial intelligence (AI), helping advertisers improve their targeted marketing.
Since the dawn of advertising, companies have tried to hone their message to ensure it gets to the right people. While the classic quote, “I know that half the money I spend on advertising is wasted; I just don’t know which half”, still holds true to some extent, the vast amounts of data now available, combined with high levels of analytical capability, provide the opportunity to reduce that waste significantly.
However, there has recently been a shift in public perception surrounding the practice of collecting and using personal data. Consumers have become more selective. They want Amazon to tell them things like ‘customers who viewed this item also viewed…’ and Netflix to make ‘knowledgeable’ programming recommendations based on what has been watched recently, but they are not so keen on that information being passed on to third parties. In Europe, with the upcoming deadline for compliance with the new General Data Protection Regulation (GDPR) – 25 May 2018 – and in the US, with the increased demand for personal data protection, businesses need to be more careful about what information they provide to whom, and must ensure their customers are aware when their data moves outside the organisation.
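The ‘customers who viewed this item also viewed…’ feature mentioned above is, at its simplest, driven by co-occurrence counting over customers’ viewing histories. The following is a minimal sketch of that idea, using entirely hypothetical session data and item names; real recommendation systems are far more sophisticated, but the principle is the same: the recommendations only exist because the company has collected behavioural data.

```python
from collections import Counter
from itertools import combinations

# Hypothetical browsing sessions: each is the set of products one customer viewed.
sessions = [
    {"shirt", "tie", "belt"},
    {"shirt", "tie"},
    {"shirt", "belt", "shoes"},
    {"tie", "shoes"},
]

# Count how often each ordered pair of items was viewed in the same session.
co_views = Counter()
for session in sessions:
    for a, b in combinations(sorted(session), 2):
        co_views[(a, b)] += 1
        co_views[(b, a)] += 1

def also_viewed(item, n=2):
    """Return the n items most often viewed alongside `item`."""
    ranked = [(other, count) for (first, other), count in co_views.items()
              if first == item]
    ranked.sort(key=lambda pair: (-pair[1], pair[0]))  # by count, then name
    return [other for other, _ in ranked[:n]]

print(also_viewed("shirt"))  # ['belt', 'tie']
```

Note that nothing here requires passing the data to a third party: the counts are computed entirely inside the organisation that collected the sessions, which is precisely the distinction consumers appear to care about.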
The aim of GDPR is to protect consumers and their data. It lays out a legal framework governing the way companies handle data generated by EU citizens, and aims to empower individuals to make informed decisions about the data they generate. Under the terms of GDPR, it is the responsibility of the company holding the data to collect, store, process and dispose of it correctly and legally. Both Europe and the US have previously had data privacy and protection rules, but not laws enforceable with financial penalties. GDPR is much more stringent, with fines for non-compliance of up to €20 million or 4 per cent of global annual revenue, whichever is higher.
So, what does this mean for AI, machine learning and predictive analytics? There is no doubt that the new rules for collecting, storing and using customer data will make these practices more challenging. The concepts of AI and machine learning rest on the premise that there is plenty of data on which to run complex algorithms. If an organisation is not able to access a customer’s buying history to predict their future needs, what will the machines use to learn? This is not to suggest that data privacy, and more specifically GDPR, means the end of machine learning and AI. It will simply become more difficult to provide services that the public appears to want (judging by the amount of money that companies like Amazon, Apple, Google and Microsoft have already spent, and are still spending, on them).
The key will be to understand correctly what level of privacy the public is prepared to give up in order to get what they want from their technology, and to ensure that this fine line is not unknowingly crossed. Will consumers be willing to give Amazon access to their data so that its Alexa voice-recognition virtual assistant and speaker can recommend songs and jokes they may like, or will they prefer tighter controls over their data, to the possible detriment of technological advancement?
Collecting data on customers for internal analysis is a vital part of business today. Understanding what customers want, whether it be the colour of a shirt or a mutual fund that meets their risk profile, is important for customer satisfaction. However, once the data leaves the confines of the organisation that collected it, or is used for an alternative purpose without consent, perils ensue. Most customers won’t object to the collection of data; in fact, there is an expectation that companies collect data on them to enhance the customer experience. It is the actual transmission and monetisation of that data that causes the problems.