In this episode of DigitalNOW, guest host Tejan Gabisi is joined by Paul Lee, a Senior Manager in Logic20/20’s Digital Transformation practice and an AI/ML product manager. They discuss the differences between artificial intelligence and machine learning, how technological advancements are impacting business, and how Paul’s team has leveraged AI/ML for their telecom client.
Matt Trouville: You’re listening to DigitalNOW, an original business and technology podcast by Logic20/20. I’m your host, Matt Trouville. Each episode, I’ll be interviewing a new expert to learn more about industry trends, fascinating new tech, shifting customer expectations and the steps every business can take to stay ahead.
Tejan Gabisi: I’m Tejan Gabisi, your guest host for the podcast this month, and we want to talk about digital product innovation. We have Paul Lee here as the subject matter expert to discuss digital product innovation at Logic20/20 and in general.
Paul Lee: Yeah. Hi, everyone. Paul Lee here, Senior Manager at Logic20/20 in the Digital Transformation practice. I’ve been spending the last couple of years with one of our telecommunications clients as an AI/ML product manager. I spent about eight years in consulting over the past decade in digital product management, and I’ve seen a lot of different types of innovations come out of that. AI/ML has been around for a long time, but its adoption has been increasing and improving a lot lately, so it’s fascinating to see how it has impacted digital product innovation over the past five to ten years, and I’m excited to talk about that today with Tejan.
Tejan: Sounds good. I want to get started talking about digital product innovation. I know Paul and I discussed offline our thoughts on digital product innovation, product development, and project management in general. I’ve been a product owner and project manager in the past as well, so I definitely have my thoughts, but we want to get into what is cutting edge today, especially when you look at AI, and look back at what innovation was five to ten years ago. Things have changed significantly, especially when you start talking about AI/ML. So Paul, can you give me your thoughts on what digital product innovation means in today’s world?
Paul: Yeah, absolutely. About five to ten years ago, when we talked about digital product innovation with a client, they wanted to build a mobile application or web application, and we’d design a three-to-six-month MVP with them to see what that would look like and how to build the software. Fast forward to today: now everyone’s got a mobile app, everyone’s got ecommerce or a website. So now they want to know, “OK, how can we move to the next level and adopt data science, machine learning, and AI?” All of these buzzwords out there to improve their processes. So it’s fascinating to see this next stage of digital development and innovation, with a lot more data science teams out there being adopted as part of the digital world.
Tejan: I want to stay on the same topic, but talk about the difference between AI and ML. Since I’m actually a mechanical engineer, I know what I think of when I think about AI, and also what I think about when someone says ML. I’ve seen some people put them together and say that essentially they’re the same thing. I don’t think they are, but I want to give you the chance, Paul, to talk about: what is the difference between AI and ML?
Paul: Yeah, that’s a very good question. A big part of my job has to do with educating stakeholders, explaining to them the differences between AI, ML, deep learning, and all these buzzwords in the community. Artificial intelligence is basically any computer technique that can be used to automate human decisions. It can be a number of different things; it’s the big umbrella. Above artificial intelligence sits data science, which can be used to create models, predictions, and all sorts of things. But when you go down into artificial intelligence, there are multiple ways to automate human decision making through a machine, and one technique is called machine learning. Machine learning requires a lot of data in order to create a model that predicts human decision making. But machine learning isn’t the only AI solution that exists in the market, so AI is the umbrella term; within AI is machine learning, and within machine learning there are several different frameworks and techniques that data scientists and machine learning engineers can use.
One popular example that we use with one of my clients is deep learning. It requires a lot of modeling with neural networks, training them with a lot of different data sets, teaching the model how to think like a human, and over time it only gets better. The way I explained it to my stakeholders is: imagine raising a baby. When you have a baby, you teach the baby different objects as they get older. For example, when you want to teach your baby what a spoon is, you show it an object that is a spoon. You tell the baby, “Hey, this is a spoon; this is what you eat with.” But the baby doesn’t yet understand any of the other utensils. So you show the baby multiple types of spoons to teach it the shape of a spoon: a wooden spoon, a plastic spoon, an aluminum or metal spoon. And you say, hey, these are all spoons. The baby starts using this training data to understand, “OK, when an object looks like that, it’s a spoon.” When you show the baby a knife, a chopstick, or a fork, the baby realizes that’s not a spoon; it doesn’t look like any of the other objects it was trained to recognize. Machine learning is very similar. At a very basic stage, you teach it certain data so it understands, “Hey, this is the right answer,” and when it recognizes a similar image or audio clip or speech pattern again, it classifies it the same way. That’s basically how our machine learning models work. Once you teach it more and more data, the baby becomes smarter and smarter, so you can feed it larger data sets: now that you know these are spoons, here’s what a wooden spoon looks like, and you show it 20 different images of a wooden spoon; here’s what a metal spoon looks like, and you show it 20 different types and sizes of metal spoons; and here’s what a plastic spoon looks like.
And so over time, the baby gets smarter; the machine learning model gets smarter. That’s essentially machine learning: you teach it several different data sets to make it smarter and smarter over time. It takes a lot of time and investment for it to become highly accurate.
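Paul’s spoon analogy maps directly onto supervised classification. The toy sketch below (all feature values are invented for illustration) uses a simple nearest-centroid rule to show the basic idea: label some examples, average them per class, and classify a new object by whichever class average it falls closest to.

```python
# Toy illustration of the "teaching a baby spoons" analogy:
# supervised learning as nearest-centroid classification over
# hand-made feature vectors (all data here is invented).

def centroid(points):
    """Average a list of equal-length feature tuples."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def classify(item, labeled_examples):
    """Predict the label whose example centroid is closest to `item`."""
    best_label, best_dist = None, float("inf")
    for label, examples in labeled_examples.items():
        c = centroid(examples)
        dist = sum((a - b) ** 2 for a, b in zip(item, c))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Features: (handle_length, bowl_roundness) -- entirely made up.
training = {
    "spoon": [(1.0, 0.9), (1.1, 0.8), (0.9, 0.95)],  # wooden, plastic, metal
    "fork":  [(1.0, 0.1), (1.1, 0.15)],
}

print(classify((1.05, 0.85), training))  # a new spoon-like object
print(classify((1.0, 0.12), training))   # a fork-like object
```

Adding more varied spoon examples shifts the “spoon” centroid to cover more shapes, which is the code-level version of showing the baby twenty wooden, metal, and plastic spoons.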
Tejan: I know a lot of people have heard of algorithms; some of their favorite products are built on algorithms. Would you tie that into ML, machine learning?
Paul: It depends. You can make hybrid ML models with algorithms, but not necessarily. Machine learning is more about data sets and training with techniques like neural networks. Algorithms are another technique you can use to apply artificial intelligence: you can use very complex if/then statements and different types of algorithms out there to create solutions for different AI use cases. But generally, the most popular technique used in artificial intelligence is machine learning. Whether it’s speech, audio, or image recognition, a lot of companies, Amazon with Alexa, Google with the Google Assistant, even Apple with Siri, all use speech and audio recognition, which is basically machine learning: you teach it a bunch of words and then it understands how people communicate.
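The if/then style of AI Paul mentions can be as simple as hand-written rules with no training data involved. A minimal sketch (the intents and keywords here are invented, not from any real system):

```python
# Rule-based AI: hand-written if/then logic instead of a trained model.
# Intents and keywords below are hypothetical examples.

def rule_based_intent(utterance: str) -> str:
    text = utterance.lower()
    if "bill" in text or "charge" in text:
        return "billing"
    if "slow" in text or "outage" in text:
        return "tech_support"
    return "unknown"

print(rule_based_intent("Why is there a new charge on my bill?"))
print(rule_based_intent("My internet has been slow all week"))
```

Rules are cheap and predictable, but every new phrasing needs a new rule, which is why speech and intent recognition at scale lean on machine learning instead.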
Tejan: Got it. I guess you’ve touched on this a little bit already, but can you go into more detail on how AI/ML is transforming product innovation today?
Paul: Yeah, absolutely. Companies always want to stay ahead of, or at least competitive with, their competitors, and they’re noticing a rapid adoption of machine learning and AI in the market right now. There are several use cases at home and in the workplace; you’re starting to see AI/ML everywhere. You’re starting to see it on your iPhone. You’re starting to see it on your Alexa. Even in the workplace, you’re starting to see call centers use it. So it’s interesting to see a lot of executives wanting to learn more about it and adopt it. However, AI/ML is not “one size fits all.” It doesn’t fit every use case out there.
And so there’s a lot of discovery and education that needs to be done. That takes time between the data science teams, the business stakeholders, and the product teams to really understand the use case, the workflows within each business department, and whether it even makes sense to invest in and adopt artificial intelligence. It depends on what you’re trying to do, because essentially AI is a tool used to simplify or automate human decision making and make it faster. It’s not something that will automate every mundane, manually intensive task. It can, but not always. Sometimes you’d prefer to build a bot or use an algorithm that’s commonly used in the market.
Tejan: Yeah, got it. So can you talk about how your product team is utilizing AI/ML for your client?
Paul: Yeah, absolutely. I’ll keep it a little confidential, but we mainly consult for a large telecom provider. They obviously have call centers, and these call centers have a lot of metrics around reducing call time, because more call time spent means more dollars for the client. So we investigated several different artificial intelligence solutions out there. There are chatbot solutions: you can use chatbots to understand, when a customer messages in, how to respond to that customer intelligently. There’s also what happens when you talk to a live agent. It’s similar to your iPhone: when somebody calls and leaves a voicemail, you can see the voicemail transcribed into words and actually read it on your phone. We’ve created a technology in-house for our client called ASR, Automatic Speech Recognition. When a customer and agent speak live on the telephone, our model transcribes that audio into words and stores the transcript. That transcript can then feed downstream into several different AI use cases.
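At a high level, the flow Paul describes, transcribe, store, then fan out to downstream use cases, might be sketched like this. The recognizer is stubbed out and every name is hypothetical, not the client’s actual system:

```python
# Sketch of an ASR transcript fanning out to downstream AI consumers.
# The real speech-to-text model is replaced by a canned stub.

def transcribe(audio_chunk: bytes) -> str:
    # Stand-in for a real speech recognition model.
    return "i noticed a new charge on my bill this month"

transcript_store = []       # stand-in for durable transcript storage
downstream_consumers = []   # e.g. intent prediction, article search

def on_call_audio(audio_chunk: bytes) -> str:
    text = transcribe(audio_chunk)
    transcript_store.append(text)          # store the transcript...
    for consumer in downstream_consumers:  # ...then feed each use case
        consumer(text)
    return text

# Register a toy "conversation prediction" consumer.
intents_seen = []
downstream_consumers.append(
    lambda text: intents_seen.append("billing" if "bill" in text else "other")
)

on_call_audio(b"\x00\x01")
print(transcript_store)
print(intents_seen)
```

The key design point is that one transcription pass feeds many consumers, so each new AI use case plugs into the same transcript stream instead of re-processing the audio.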
For example, we can do conversation prediction. Say the customer calls in about, “Hey, I noticed there’s a new charge on my bill this month. I didn’t add that. Why is it there?” A lot of confusion. We use these transcriptions to build models that recognize past conversations and predict when a customer is calling in about an issue that will consume more of our agents’ time and dollars. Why not look at previous conversation history, or even help the agents in real time quickly understand what a customer is calling about and what their intent is? Then we can send the customer notifications so they can self-serve within the phone or the mobile app and don’t have to call in to customer care, or we can solve it through AI without them having to talk to a live agent. Those are very common use cases in the call center. And then obviously the chatbot is another solution related to the conversation.
Tejan: What kind of scale are we talking about in terms of what this product is processing?
Paul: Yeah, so initially we were trying to scale it to 17 call centers in the US, and even more around the globe; there are about 30,000 agents worldwide, geographically distributed. When you test an AI solution at a couple of call centers as a proof-of-concept MVP, see it work, and then expand it rapidly across the US and globally, the amount of cost savings you see is tremendous. At first it was a few hundred thousand dollars, and then once we scaled it to 30,000 agents, it quickly became $20 million in cost savings, just from shaving a few seconds off each call for agents who handle millions of calls a month. Multiply that across roughly 30,000 agents, and that’s a lot of money that can be redistributed around the enterprise.
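The scaling math works roughly like this. Every figure below is an invented placeholder, not the client’s actual numbers:

```python
# Back-of-the-envelope version of "seconds saved per call" at scale.
# All inputs are hypothetical placeholders.

seconds_saved_per_call = 5
calls_per_month = 3_000_000      # total volume across all call centers
cost_per_agent_second = 0.011    # ~$40/hour fully loaded / 3600 seconds

monthly_savings = seconds_saved_per_call * calls_per_month * cost_per_agent_second
annual_savings = monthly_savings * 12
print(f"~${annual_savings:,.0f} per year")
```

Even with modest per-call savings, multiplying by millions of calls is what turns seconds into millions of dollars.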
Tejan: You’re saying globally, so this product could transcribe in English and other languages? Does it work that way? Can you talk a little bit about that?
Paul: Yeah. That is an industry-wide challenge that we’re trying to solve. Apple, Amazon, and Google, with Siri and Alexa and all that, are also having trouble transcribing different accents, people for whom English is a second language, or people who speak English in combination with another language, like Spanglish. We’re constantly trying to find ways to prioritize which ones we want to, as we call it, fine-tune in our model. That’s definitely a well-known issue in the marketplace. If you talk to your laptop in Spanglish, it might not accurately understand what you’re trying to say. The same goes for unique accents within the United States: there’s a Southern accent, a Northeastern accent, a California accent, from what I’ve heard living in California. The models are only trained on the data they’re given, just like a baby. If a model is trained on a bunch of data and recognizes California accents, but then all of a sudden you feed it a Southern accent, its accuracy and ability to understand what you’re saying go down a lot. In order to get its accuracy back up, you need to train it with a lot of different data sets, specifically data for the accents you’re trying to improve on. So right now, with the client, we are prioritizing certain offshore call centers, such as centers in the Philippines, India, and parts of Europe, that have distinct English accents. By doing that, we’re going to improve the accuracy, and downstream that will become very impactful for them.
Tejan: Yeah, got it. Earlier you were talking about how, as this product is transcribing in the call center, it’s recommending things to the agent. Does the customer get recommendations as well, specific recommendations? How does that work?
Paul: It depends. There are different use cases for why a customer might call in. A lot of the call volume tends to be about problems with their bill; if you think of reasons why you would call your telecom provider, it’s probably when money or your account is involved. So we’ve found ways to use our model to proactively send customers notifications letting them know we’re aware of issues on their bill, and “here are the steps you can take to solve it”: click through a workflow where you can submit a request, we review it internally, remove the charge from your bill, and make a decision on it in real time. We also have models for knowledge search. Not sure if you’re aware, but when you call a call center and ask a difficult question, they put you on hold, go into their internal company wiki, and search your question across tens to thousands of articles. That can take anywhere between two and ten minutes of call time; when you’re already on a 30-minute call, you can see why it’s so frustrating, because they have to sift through thousands of articles. Our models can take the transcription in real time, quickly understand what the customer and agent are talking about, and pull the most relevant articles and recommend them to the agent. The agent doesn’t have to dig through thousands of articles; they can quickly look at the article that pops up on their screen, scan it, find the answer, and get back to the customer right away. That reduces what we call “call resolution time,” or CRT, tremendously, taking minutes off the call. So it has a huge business impact.
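The article-recommendation step can be illustrated with a crude keyword-overlap ranker. The real relevance model is surely more sophisticated, and the article titles here are invented:

```python
# Toy relevance ranking: score each help article by how many words it
# shares with the live transcript, then surface the best match.
# Article titles are hypothetical examples.

def score(transcript: str, article: str) -> int:
    words = set(transcript.lower().split())
    return len(words & set(article.lower().split()))

articles = [
    "disputing a charge on your bill",
    "resetting your voicemail password",
    "upgrading your data plan",
]

transcript = "there is a new charge on my bill this month"
ranked = sorted(articles, key=lambda a: score(transcript, a), reverse=True)
print(ranked[0])  # the article recommended to the agent
```

A production system would use something like TF-IDF weighting or learned embeddings rather than raw word overlap, but the shape of the problem, scoring articles against a live transcript and surfacing the top hit, is the same.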
Tejan: I guess one last thing I want to touch on is the chatbot. I’ve talked to previous clients, and they noted how they’ve had issues getting people to engage with a chatbot, or people get frustrated. What would you say is the biggest difference between the typical online chatbot and how your tool works? I know they both use AI to a certain degree, but what are some of the biggest differences or benefits of the product that you have?
Paul: Yeah, chatbots definitely have several stages of evolution, meaning they’re not meant to solve every issue at the moment, but they’ll get there eventually. A chatbot is meant to solve a lot of the general pain points that customers have. When customers message in about very unique pain points specific to themselves, that’s when the chatbot doesn’t always work and you have to get rerouted to an agent. So it’ll capture probably 50% of the issues that customers message in about, but for the other 50%, the chatbot is going to have to keep learning, like any AI model.
One interesting roadmap item I’ve been talking to vendors about with my client is this concept of Natural Language Generation, or NLG. The problem is that chatbots sound very robotic; it’s a very big pain point, and customers don’t like talking to a chatbot. If the chatbot could send you messages that look and feel like how a person would talk, that would dramatically increase customer satisfaction.
There are a lot of roadmaps at large tech companies working to develop NLG, and also, for people in the accessibility space, maybe they’re blind or can’t read the actual messaging, there’s this thing called text to speech. We have speech to text, which transcribes your audio into words you can read, like your voicemail; but what about going back from words to speech? That serves those customers, or even customers who can’t read the message at the moment, say they’re busy multitasking and want the computer to talk back to them, but they don’t want it to sound robotic. Text-to-speech solutions can also let you customize how you want to hear the audio. Today it’s usually something like a British woman’s voice or an Australian man’s voice, but in the future they’re going to let you create speech in whatever accent you want; you could even have it sound like yourself. These are the various stages in the chatbot world that I’ve started to see across several vendors. They recognize the weaknesses they have: they can’t solve all the difficult cases yet, and right now they also can’t solve the look and feel of being so robotic.
Tejan: Got it. So this tool has definitely saved your client a lot of money and a lot of time by helping agents become more efficient in their interactions with customers, and I’d expect it raised overall customer satisfaction as well.
Thanks, Paul. I really appreciate your time today and your going deep into this topic.
Paul: Yeah. Thank you for having me. Anytime. I have a lot of fun talking about this topic, and I’d love to talk more in the future.
Matt: You’ve been listening to Logic20/20’s podcast DigitalNOW. To learn more, visit our website at logic2020.com or follow us on social media. See you next time.
DigitalNOW is an original business and technology podcast by Logic20/20 that is released on a monthly basis. In each episode, host Matt Trouville interviews a new expert to learn about industry trends, fascinating new tech, shifting customer expectations, and the steps every business can take to stay ahead. Check back here for future episodes, or find us on all major podcast sites, including Spotify, Apple Music, Pandora, and more.
Paul Lee is a Senior Manager in Logic20/20’s Digital Transformation Consulting practice, with experience in digital, customer experience strategy, and technology implementation engagements.