Why trust is key to the success of Artificial Intelligence

If you search the internet for ‘content and AI’, you’ll mostly find people talking and writing about how algorithms are trespassing on the territory of creatives. Algorithms for short and structured copywriting, algorithms as tastemakers, algorithms producing movies and so on.

But what’s more interesting than all the future-gazing is the question of how marketers are setting the context for AI.

I’m not talking about the work of designers in helping data scientists – though it’s evident that UX professionals are needed when it comes to selecting training data and validating prototypes.

What I’m getting at is how the end-product is presented to the user. To me, this will be the most fascinating output of the next stage of digital service design, and it’s a debate about transparency. Users need to be well informed if they are to trust AI with their personal data.

Think about the early days of the web and your journey to trusting websites with your credit card details. There were so many instances where the slightest oversight by a content designer led to mistrust from the user. Perhaps you were signing up for a free trial but the copywriting didn’t make it absolutely clear that you could cancel before your card was charged – well, you probably thought twice.

People don’t trust machines

I suppose I’m just describing a large chunk of human-computer interaction here – think about the famous examples of travel booking engines that delay ‘loading’ of search results because an instantaneous response would seem suspicious.
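To make that concrete, here’s a minimal TypeScript sketch of the pattern – the function name and the 1.5-second floor are my own illustrative assumptions, not details from any real booking engine:

```typescript
// A minimal sketch of the 'artificial delay' pattern described above.
// The function name, the generic 'work' callback and the 1.5-second floor
// are illustrative assumptions, not taken from any real booking engine.
const MIN_PERCEIVED_WORK_MS = 1500;

async function searchWithPerceivedEffort<T>(work: () => Promise<T>): Promise<T> {
  const started = Date.now();
  const result = await work();

  // If the real search finished suspiciously fast, keep the loading state
  // visible a little longer before revealing the results.
  const elapsed = Date.now() - started;
  if (elapsed < MIN_PERCEIVED_WORK_MS) {
    await new Promise((resolve) => setTimeout(resolve, MIN_PERCEIVED_WORK_MS - elapsed));
  }
  return result;
}

// Usage (hypothetical): searchWithPerceivedEffort(() => fetchFares(query))
```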

In 2019, it’s obvious that automation and personalisation are stepping up a notch, and websites and apps are increasingly after our personal data. Personalisation at scale seems to be the goal for every ambitious CMO. So just when we thought UX conventions were fairly well settled, there are new experiences to figure out and transparency is being reassessed and to some extent redefined (with legislation such as the GDPR setting the tone).

There’s a wonderful example of algorithm UX offered by Zhaochang, a senior product designer at VMware, on the UX Collective blog.

He shows two real estate websites that each offer a tool estimating the value of a home. The two estimates differ considerably, leading the user to ask the obvious question: which is more trustworthy?

One of the companies, Zillow, does a good job of including the user. The copy states: ‘This estimate looks at for-sale or recently sold homes with similar features to this home, like its location, square footage and beds/baths’. Furthermore, Zillow’s estimate is given as a range, includes historical data (how the value has changed over time) and shows similar homes sold recently (and the prices they fetched).

The competitor, Redfin, on the other hand, supplies a wadge of more opaque copy which begins: ‘The Redfin Estimate is based on what we currently know about this home and nearby market.’

It’s not rocket science: the user wants to know more about how each estimate was cooked up, because people don’t trust AI – or at least not without implicit trust in the brand.

Redfin, however, excels in some other areas, allowing users to ‘edit home facts’ to improve the estimate, for example. Informing the user is important, but so is giving them some level of control.

Content design isn’t going to open up a black box, of course. And the fact is that in the new world of AI we won’t get a full explanation of every decision (though it’s worth remembering that the GDPR has rules to protect individuals where automated decision-making has legal or similarly significant effects on them – such as giving users a simple way to request human intervention or challenge a decision).

However, hopefully the tightening of data protection laws marks the beginning of a new era in UX, one in which transparency not only serves the consumer but also delivers better results for the business. Though you might accuse me of living in ‘the best of all possible worlds’, given the way obfuscation and deception have been used by some to increase conversion in the past, I would argue there have been encouraging signs over the past couple of years. Facebook, of course, lost a million European users in the wake of the Cambridge Analytica scandal.

What can marketers do?

So, what’s the message here? I think there are a few things marketers can do. When working with vendors who use AI, look out for those that take ethics seriously – copy-optimisation company Phrasee, for example, has an ethics policy with some simple rules about what they will and won’t do. Be aware of impending legislation and guidelines (for example, the European Commission has just published its ethics guidelines for trustworthy AI).

Most importantly, where AI is involved it will be even more important to stay close to your product, using it regularly and putting yourself in the users’ shoes – extra usability work (or training of the marketing team) may be required to help your team truly understand what a customer-centric experience looks like. In an excellent article last year, UX studio CEO Dávid Pásztor set out some principles for AI UX, such as differentiating AI content visually, explaining how machines ‘think’ (‘customers who bought this item also bought’), setting expectations, dealing with edge cases and allowing feedback.
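To make the ‘explaining how machines think’ principle concrete, here’s a tiny TypeScript sketch – the interface and wording are hypothetical, not taken from Pásztor’s article – showing how an AI suggestion might carry a plain-language explanation alongside it:

```typescript
// Illustrative only – the interface and wording are hypothetical. The point
// is to pair every AI suggestion with a plain-language account of why it
// was made, rather than presenting it as an unexplained verdict.
interface Recommendation {
  productId: string;
  explanation: string; // surfaced to the user next to the suggestion
}

function explainRecommendation(productId: string, boughtTogetherWith: string): Recommendation {
  return {
    productId,
    explanation: `Customers who bought ${boughtTogetherWith} also bought this item.`,
  };
}
```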

Marketers and content creators who can responsibly set the context for AI functionality will need to work closely with data science teams. It’s an exciting prospect for our industry as marketers continue to seek influence in the boardroom.
