In previous articles I have discussed various ways AI is being used to track customer behavior and serve up the next best content or channel for each individual, as well as to identify patients with rare diseases. We do this in many projects in the pharma and health spaces.
Critical in all of these activities is using the data in a compliant, personal way while avoiding an invasive, creepy feeling for the recipient. When a company or brand integrates AI into the foundation of its core data and processes, the information it is able to access becomes much more valuable to both the company and the customer. But doing this poses questions about what is done with that data, whether it is ethical, and how marketers can retain trust.
Those who get it right do so in a way that makes the customer feel a connection with the content and engaged by it. But many get it wrong, and the consequences for the brand and company, in the eyes of that customer, can be disastrous.
Getting it wrong
I have mentioned a few personal examples to people, but one I always go back to is Google selling my search data, which included my mobile phone number and home address, and linking these for profit in a very invasive, creepy way. I was on a flight to Hong Kong and, while stretching on the plane, felt a lump on my back that I had not felt before. As soon as I got to my hotel in Hong Kong, I looked at my back in a mirror, saw a mole I had never seen before, started up my laptop, went to Google, and typed in 'melanoma' to check whether it looked suspicious. I did not type in any details about myself and was surfing anonymously, or so I thought. Within 10 minutes I received a text on my phone inviting me to a melanoma talk very close to where I live, on the other side of the world from where I searched. The only connection was my Google search on my computer. This felt like a huge invasion of privacy, and I was extremely annoyed that Google had clearly sold my personal data (phone number and address), plus what I had searched on, to the company giving the talk. I had only just considered the possibility of melanoma and did not even have a diagnosis, yet they were assuming I had it and would want to attend a talk.
I am not alone in feeling this way about Google's activities; other complaints against Google in this area involve ads for things people have typed in their emails when using Gmail. Facebook does this too when it offers up people you may know: the connections are difficult to identify, yet you do know those people. Several friends have told me they find that a little creepy. It feels creepy when companies use personal data in a novel way that produces a response that seems too personal. Of course, like most things, we get used to much of it. But when using personalization technology, it is still something to consider strategically and carefully. Companies this irresponsible in how they handle personal data, and in how they interact with those customers, may have great technology, but they are clearly not thinking through their strategy for the voice data, requests, unique user IDs, personal information, and location they collect.
Getting it right
For companies getting this right, the information is used to deliver what the customer wants next, when they want it. It should feel like a pleasant coincidence rather than an intrusion, and to achieve this balance it is critical to put yourself in your customers' shoes when planning the project. When we did the ultra-rare disease face-recognition-enhanced patient identification project <Click here to read>, we immediately saw that we had to be careful to avoid the creepy factor. No parent wants to be told in a digital medium that their child has a fatal condition. One needs to be sensitive to the customers' needs and emotions. Of course the parents want a diagnosis, but it has to be delivered sensitively.
One of my team members worked on a banking project before joining Eularis. The bank used AI to collect data on each customer and combined this with the customer's cell phone GPS, so that when the customer was walking near one of its branches, the system would check that customer's data. If they had searched on buying a car and car loans, for example, it would send them a text saying something like: 'Are you looking for a car loan? If so, next time you are near a branch, pop in for a special discount on car loans.' It looked like a coincidence that they happened to be near the bank just as they were looking for a car loan. I have learned in the past few years that anything digital (emails, texts, websites) that appears to be a happy coincidence with what you need is not; it is the result of a well-planned and orchestrated AI-powered analysis of your needs. But this type of analysis should make the customer feel that it is welcome, relevant, and useful for what they are looking for or interested in. When you achieve that, you are doing it well.
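The banking example above boils down to a simple rule: only message the customer when two signals line up, physical proximity to a branch and expressed intent in their recent searches. Here is a minimal sketch of that logic in Python. The branch coordinates, geofence radius, search topics, and function names are all invented for illustration; the real project's data sources and decision logic are not public.

```python
import math

# Example branch locations (lat, lon) and geofence radius -- invented values.
BRANCH_LOCATIONS = [(51.5136, -0.0983), (51.5155, -0.1419)]
GEOFENCE_METERS = 200

def distance_m(a, b):
    """Approximate distance in meters between two (lat, lon) points,
    using the equirectangular approximation (adequate at city scale)."""
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return math.hypot(x, y) * 6_371_000  # Earth radius in meters

def offer_for(customer_gps, recent_searches):
    """Return an offer message only if the customer is near a branch AND
    has shown intent (e.g. searched for car loans); otherwise None."""
    near_branch = any(distance_m(customer_gps, b) <= GEOFENCE_METERS
                      for b in BRANCH_LOCATIONS)
    wants_loan = any("car loan" in s.lower() for s in recent_searches)
    if near_branch and wants_loan:
        return ("Are you looking for a car loan? If so, next time you are "
                "near a branch, pop in for a special discount on car loans.")
    return None

# A customer roughly 45 m from the first branch who searched for car loans
# gets the message; either signal alone triggers nothing.
print(offer_for((51.5140, -0.0983), ["best car loan rates", "used cars"]))
```

Requiring both signals is what keeps the message feeling like a coincidence rather than surveillance: proximity alone would spam every passer-by, and search intent alone would feel like being watched.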
If you can strike the right balance between observing your brand values and regulatory compliance while allowing AI to access the right amount of relevant data, it can be highly valuable, delivering real-time personalization that would not be possible for a human given the number of data points that must be accessed in real time.
For more information on using AI to enhance customer experience, contact the author at Eularis: http://www.eularis.com