Hooked and booked
At Booking.com, they do a lot of A/B testing.
At Booking.com, they’ve also got a lot of dark patterns.
I think there might be a connection.
A/B testing is a great way of finding out what happens when you introduce a change. But it can’t tell you why.
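To make that concrete, here’s a minimal sketch of the arithmetic an A/B test typically boils down to: a two-proportion z-test on conversion counts. The visitor numbers and the countdown-timer scenario are invented for illustration; they aren’t Booking.com’s figures or methodology.

    import math

    # Hypothetical conversion counts for two variants; illustrative numbers only.
    visitors_a, conversions_a = 10_000, 420   # A: control page
    visitors_b, conversions_b = 10_000, 480   # B: page with, say, a countdown timer

    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b

    # Pooled two-proportion z-test: did B really convert better than A?
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err

    # Two-sided p-value via the normal CDF (Phi(x) = (1 + erf(x / sqrt(2))) / 2).
    p_value = 2 * (1 - (1 + math.erf(abs(z) / math.sqrt(2))) / 2)

    print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  z = {z:.2f}  p = {p_value:.4f}")
    # The output says B "works" (p < 0.05). It says nothing about whether B works
    # because it's genuinely useful or because it manufactures false urgency.

Everything the test produces answers “what happened?”; nothing in it answers “why?”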
The problem is that, in a data-driven environment, decisions ultimately come down to whether something works or not. But just because something works doesn’t mean it’s a good thing.
If I were trying to convince you to buy a product, or use a service, one way I could accomplish that would be to literally put a gun to your head. It would work. Except it’s not exactly a good solution, is it? But if we were to judge by the numbers (100% of people threatened with a gun did what we wanted), it would appear to be the right solution.
When speaking about A/B testing at Booking.com, Stuart Frisby emphasised why it’s so central to their way of working:
“One of the core principles of our organisation is that we want to be very customer-focused. And A/B testing is really a way for us to institutionalise that customer focus.”
I’m not so sure. I think A/B testing is a way to institutionalise a focus on business goals: increasing sales, growth, conversion, and all of that. Now, ideally, those goals would align completely with the customer’s goals; happy customers should mean more sales …but more sales doesn’t necessarily mean happy customers. Using business metrics (sales, growth, conversion) as a proxy for customer satisfaction might work sometimes, but it clearly isn’t working on sites riddled with dark patterns.

Whatever the company values might say, a company’s true focus is whatever it measures as its success criteria. If that’s customer satisfaction, then the company is indeed customer-focused. But if the measurements are entirely about sales and conversions, then that’s the real focus of the company.
I’m not saying A/B testing is bad; far from it! (Although it can sometimes be taken to extremes.) I feel it’s best wielded in combination with usability testing with real users: seeing their faces, feeling their frustration, sharing their joy.
In short, I think that A/B testing needs to be counterbalanced. There should be some kind of mechanism for getting the answer to “why?” whenever A/B testing provides the answer to “what?” In-person testing could be one way of providing that balance. Or it could be somebody’s job to always ask “why?” and to determine whether a solution is qualitatively good, not just quantitatively good. (And if you look around at your company and don’t see anyone doing that, maybe that’s a role for you.)
If there really is a connection between having a data-driven culture of A/B testing and a product that’s filled with dark patterns, then the disturbing conclusion is that dark patterns work …at least in the short term.