Chatbots: 3 Things for PMs to Watch Out For

Editor’s note: the following was written by a guest blogger. If you would like to contribute to the blog, please review the Product Blog contribution guidelines and contact [email protected]

As product managers, solving problems for customers is our most important job. The tools we use exist to help us find solutions to those problems.

The increasing use of AI in our products has opened up a brand new world for product teams to explore. One of those AI-driven tools, the chatbot, gives teams the opportunity to talk to customers 24/7 and, hopefully, solve their problems even while the team sleeps.

We’ve been using chatbots for several years now, and if there is one conclusion we can draw from using the tool, it is this: chatbots are hard.

Yes, when it comes to business outcomes, these chatbots are usually designed as a cost-saving measure to lower the CSM-to-revenue ratio. For most companies, though, they never fulfill that promise.

For example, according to The Information, Facebook had a 30% success rate with its chatbots. Even with essentially a blank check and some of the brightest minds in the field working on them, Facebook’s chatbots still failed 70% of the time.

The issue with chatbots is that, unless they’re clearly outlined as a closed system, they do all three things we talked about above: the customer assumes the chatbot can solve their problem, no matter how complex; the AI flattens whatever it is fed to fit its model of the world; and the customer quits out of frustration, often with lowered trust.

I had this happen personally with my cable company. I told the support bot about an internet outage I was having, and I engaged with it as it tried to help solve my problem. But the AI wasn’t reporting back to the other lines of support, and it flattened my issue into a binary. When I got fed up and called the company, they had no record of the interaction, and I found out that the chatbot system isn’t even CONNECTED to their customer support team. Guess where my trust in that cable company is now.


Again, think of Facebook, whose war chest is in the billions, failing here. Now think of your company, whose war chest is in the thousands, not being crystal clear about the problems you are solving with AI. 

This is as good a place as any to talk about three things you, as a product manager, should watch for when constructing any project:

  • Data being collected but not connected
  • Avoiding the magic box
  • AI evolving but you aren’t

Data being collected but not connected

One of the things that frustrated me most during the cable story wasn’t just finding out that my information wasn’t getting to the right people; it was that the people helping me sounded just as dejected.

Our chatbots are not on an island; they represent the company both internally and externally. Our customers see them as representatives, no different from your customer success team exchanging an email or holding a phone conversation, and they expect that same level of follow-through when using the system.

If the chatbot isn’t playing well with the customer and isn’t working with your customer support staff, the result is misalignment. When a customer then talks to the support staff, you’ll find instant frustration: the customer wonders what happened to the conversation they had, and the representative thinks of the many times this has happened to them with no recourse.
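To make “connected” concrete, here is a minimal sketch of what a handoff could look like: every unresolved bot conversation is filed into the same system human agents use, so no one starts from zero. `ChatTranscript`, `SupportDesk`, and the ticket format are all hypothetical stand-ins for whatever helpdesk API your team actually uses.

```python
from dataclasses import dataclass, field

@dataclass
class ChatTranscript:
    """A single chatbot conversation, stored where humans can find it."""
    customer_id: str
    messages: list = field(default_factory=list)  # (speaker, text) pairs
    resolved: bool = False

class SupportDesk:
    """Stand-in for a shared ticketing system (your helpdesk's API)."""
    def __init__(self):
        self.tickets = {}

    def attach_transcript(self, transcript: ChatTranscript) -> str:
        """File the bot conversation as a ticket so a human agent sees
        the full history instead of asking the customer to repeat it."""
        ticket_id = f"T-{len(self.tickets) + 1}"
        self.tickets[ticket_id] = transcript
        return ticket_id

# Every unresolved bot conversation gets handed off, not dropped.
desk = SupportDesk()
chat = ChatTranscript(customer_id="cust-42")
chat.messages.append(("customer", "My internet has been out since Tuesday."))
chat.messages.append(("bot", "Have you tried restarting your modem?"))
if not chat.resolved:
    ticket = desk.attach_transcript(chat)
```

The specific plumbing matters less than the invariant it enforces: if the bot can’t resolve an issue, the full transcript lands in front of a human, automatically.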


Avoiding the magic box

If you’ve ever looked at some AI process and wondered, “What’s happening in there?” and no one can answer your question, you might find yourself in the middle of what I call the “magic box” fallacy. This happens when someone feeds input into a tool/object/person and simply trusts the output without understanding the process.


The easiest way to see whether your company has “magic box” thinking is to check its decision fitness: how often does it verify that its decisions align with its goals? If asking the question, “How do we know this AI process is working to serve our customers?” brings nothing but strange looks, I can assure you that the AI is a magic box.

If people aren’t looking confused, go further and ask whether there is some sort of process that checks whether the major decisions the AI is making track with the expected behavior.
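That check doesn’t have to be elaborate. Here is a sketch, under assumed names, of the simplest possible decision audit: sample the bot’s logged decisions, compare each against what a human reviewer said the bot should have done, and report the agreement rate. The `decision_log` shape and action labels are illustrative, not any particular vendor’s schema.

```python
import random

# Hypothetical log of bot decisions: each entry records what the bot
# decided and what a human reviewer later said it should have decided.
decision_log = [
    {"intent": "outage", "bot_action": "send_restart_steps", "expected": "send_restart_steps"},
    {"intent": "billing", "bot_action": "close_conversation", "expected": "escalate_to_human"},
    {"intent": "outage", "bot_action": "escalate_to_human", "expected": "escalate_to_human"},
]

def audit(log, sample_size=100, seed=0):
    """Spot-check a random sample of decisions and report the share
    that match the expected behavior."""
    rng = random.Random(seed)
    sample = rng.sample(log, min(sample_size, len(log)))
    agreed = sum(1 for d in sample if d["bot_action"] == d["expected"])
    return agreed / len(sample)

rate = audit(decision_log)  # 2 of the 3 logged decisions match here
```

If no number like `rate` exists anywhere in your organization, the AI is, by definition, a magic box.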

AI evolving but you aren’t

Your chatbot is getting better with every conversation. Can you say the same about the processes inside your team? Customers tend to say things to chatbots that you may never hear in a research conversation or see in a screengrab.

Our job is to solve problems, and learning from our customers is a big part of how we increase our confidence that we can solve them. The benefit of those 24/7 conversations is that customers are talking directly about the problems they see, in every conversation. That means our chatbots can function as another entry point into knowing our problem better.

It’s important to have a disciplined research practice, and even more so if you are working with chatbots. Can you find the conversations that happen repeatedly? Do you have a home for edge cases? Are the customer success and support teams plugged in with product management to ensure conversations are monitored and problems identified? The always-on nature of chatbots means you’ll have a lot of information coming at you, and if it isn’t structured, you’ll miss the opportunity to get better.
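The triage those questions describe can be sketched in a few lines. This is an illustrative toy, assuming conversations have already been labelled with a topic (by the bot or by hand): count the topics, treat anything past a threshold as a recurring theme for the roadmap, and park the rest in a dedicated home for edge cases.

```python
from collections import Counter

# Hypothetical labelled chatbot conversations: (topic, text) pairs.
conversations = [
    ("outage", "internet down again"),
    ("outage", "no connection since this morning"),
    ("billing", "why was I charged twice?"),
    ("outage", "wifi keeps dropping"),
    ("other", "can your bot order pizza?"),
]

def triage(convos, repeat_threshold=3):
    """Split conversations into recurring themes worth a roadmap
    discussion and rarer edge cases that need a home of their own."""
    counts = Counter(topic for topic, _ in convos)
    recurring = {t for t, n in counts.items() if n >= repeat_threshold}
    edge_cases = [c for c in convos if c[0] not in recurring]
    return recurring, edge_cases

themes, edges = triage(conversations)  # "outage" recurs; the rest are edge cases
```

At real volume you’d want proper clustering and a shared backlog rather than a threshold on a counter, but the discipline is the same: recurring conversations get a theme, and edge cases get a home instead of vanishing.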

In conclusion

You don’t ever want customers to experience that level of frustration with your chatbot solutions. Every conversation is an opportunity for one of two things: better customer outcomes or more frustration for your customers. As product people, the former means better long-term product success.

If not, your customers will be looking for someone else.

My cable company is a monopoly; I am betting your company is not. Don’t fall into these traps, because trust is much harder to recapture than it is to earn.

Meet the Author

Adam Thomas

Adam Thomas is the principal and founder of Approaching One, a coaching firm designed to help level up product managers into product leaders. He’s worked in tech for over ten years as a product manager, strategist, and executive in various fields, like AI, eCommerce, and Finance. Feel free to tweet him at @thehonorableAT.
