AI Wisdom Ep. 6: The Crucial Role of Data in Your AI Strategy with Colin Toal

Digital Transformation - January 15, 2020

On this episode of the “AI Wisdom – Talking Innovation in Insurance” podcast, host Ron Glozman speaks with Colin Toal, Chief Technology Officer at Chisel AI, about the critical role of data access and management in launching a successful, sustainable and scalable AI strategy. Ron and Colin cover a host of technical considerations for deploying AI including data security, machine learning techniques, buy versus build, and the volume and types of data needed for modeling. Click the link to listen or read the full transcript below.

 

Full Transcript

Ron Glozman: Hello, and welcome to “AI Wisdom – Talking Innovation in Insurance.” On this podcast, we talk to business and insurtech leaders about how artificial intelligence is transforming the way we buy and sell insurance. I'm your host Ron Glozman, Founder and CEO of Chisel AI, and a strong believer in the power of AI to help people work smart and enrich their lives. So, let's get into it.

In this week’s episode, I have with me our very own Colin Toal, Chief Technology Officer at Chisel AI. We have a very exciting episode for you today. We’re going to talk about data, security, buying versus building versus outsourcing versus partnering, change management and a host of other considerations. Colin, please introduce yourself.

Colin Toal: I'm happy to be here – as I am every day – getting to work with you, Ron, and the team. I’m CTO here at Chisel AI. I’ve got a history of building very valuable business applications for some of the world’s greatest customers and I’m excited to work with Chisel’s customers and bring AI that helps insurance companies reach new levels of efficiency.

Ron: Colin, just for some context, the interesting thing is that you used to be in the insurance industry. Was that 10 years ago?

Colin: It must be 12 years ago now. Yeah, I worked for a little company in Toronto for about a year and I found insurance to be fascinating and difficult and very complex, more complex than I think we were able to handle with the software at the time. So, I left the insurance technology industry and went back to CRM. That’s where I spent most of my career, and I vowed, actually, ironically, that I would never do this again. But now, I'm really happy to see the moment that’s arrived where cloud computing and inexpensive commodity computing technology have met with natural language understanding algorithms in a way that lets me revisit the challenges that frustrated me more than a decade ago and use AI to solve the problem the right way this time.

Ron: I love that. So, as somebody who has seen the change 12 years ago to now, what are some of these big technological changes since people were first implementing these technologies, and what do people need to think about now? Because, you know, it used to be all in-house, and now you have the option to go to the cloud. So, what should people think about when they’re deploying these technologies?

Colin: Well, one of the amazing things that’s happened in the past decade with cloud computing is the switch from the capital expenditure of building your own data center and running your own server farms, plus the operating expense of staffing those environments, to a model that is much more highly leveraged: you just pay the operating expense to partner with an infrastructure provider like Azure, AWS, Google, or even IBM Cloud and get much more elastic access to capacity. That elastic access to capacity allows you to solve problems with more flexibility rather than having to get into the budget cycle and earmark a lot of money to cover a need you may not even be certain you have yet. It gives you a great deal more agility and flexibility, and it makes it easier to tackle some of the more difficult problems that before would have required building out an entire infrastructure.

Ron: And, I mean, one of the things we often hear about on the sales side is security and a lot of people are concerned. They go, “How do I know that my data is secure?” What are your thoughts on that?

Colin: This is a great topic to dive into a bit. When I worked at Amazon, one of the things I learned is the extent to which they go to keep not just their AWS customers’ information safe but their customers’ clients’ information safe. And they really have a mature attitude: it’s not “if” you will be breached but “when” you’re breached, how do you keep the blast radius small and how do you make sure that the attacker or the adversary doesn’t get everything at once? They have a very robust and sophisticated security practice, as do, for that matter, all the cloud providers. They’re all basically at parity right now with respect to how they architect for resiliency and redundancy and how they make sure that the blast radius and exposure of any one breach is small. Now, having said that, that does not make it easier for people adopting the cloud. It makes it very easy for the people inside of the cloud who are running the control plane and operating the services. The people consuming services off of that control plane still have to think about how best to use that infrastructure.

There have been plenty of breaches in the past three to four years – Uber is the one that pops to mind most readily – where insecurely configured S3 buckets have been a major vector of exposure. So, the challenge for enterprises right now, as they think about moving to the cloud or taking a hybrid approach and using some of the cloud services, is to be very current and very correct about how they take up the cloud-based technology. Because, at the end of the day, it’s still your information, and you have to secure it both inside your own operation and when you use partner infrastructure; that challenge is not fundamentally different. The cloud doesn’t give you a great security posture by default, it just gives you access to greater storage and greater compute capacity. So, you have to think about your posture, and you may have to think about it in a slightly different way and use different controls to enforce it.
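[Editor’s note: a minimal sketch, in Python with boto3, of the kind of control Colin describes – turning on S3 Block Public Access so that a misconfigured ACL or bucket policy can’t expose a bucket’s contents. The bucket name is hypothetical.]

```python
# A sketch of enforcing S3 Block Public Access on one bucket.
# Assumes AWS credentials are already configured in the environment.
import boto3

s3 = boto3.client("s3")

def lock_down_bucket(bucket_name: str) -> None:
    """Turn on all four S3 Block Public Access settings for a bucket."""
    s3.put_public_access_block(
        Bucket=bucket_name,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,        # reject new public ACLs
            "IgnorePublicAcls": True,       # ignore any existing public ACLs
            "BlockPublicPolicy": True,      # reject public bucket policies
            "RestrictPublicBuckets": True,  # restrict cross-account public access
        },
    )

lock_down_bucket("example-policy-documents")  # hypothetical bucket name
```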

Ron: Yeah. I mean, one of the interesting things I’ve seen is that they’ve done really well at setting up an architecture that lets you implement best practices really easily. They obviously can’t implement it in your code for you. So, if you don’t salt and hash passwords, you leave them unencrypted, and you don’t do anything, that’s your fault. But they have very easy-to-use functions to do all of those things.

Colin: Yeah, that’s absolutely correct. It’s not in any of the cloud infrastructure partners’ best interests to make it difficult for you to be safe on their platforms. But they also can’t make you safe on their own. So, best practices like you mentioned – how you use encryption, how you use access control and how you baseline and then re-baseline that access control on a particular frequency, how well you audit – these are all aspects of a robust security practice in any enterprise, and that doesn’t change with the cloud. You still have to follow those practices.
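[Editor’s note: a minimal sketch of the “salt and hash” practice Ron mentions, using only the Python standard library. The iteration count and salt size are illustrative; a production system should follow current guidance.]

```python
# Salted password hashing with PBKDF2 from the standard library.
import hashlib
import hmac
import secrets

ITERATIONS = 200_000  # illustrative work factor

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest) for storage; the plaintext is never stored."""
    salt = secrets.token_bytes(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, digest)
assert not verify_password("wrong guess", salt, digest)
```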

Ron: So, on the same topic of data and more on the AI stream, you need a lot of data when you’re talking about AI?

Colin: Yes.

Ron: Let's talk a little bit about the different types of data, maybe how you store them, if you want to talk about that, and how much data you need, which I think is always a funny question.

Colin: So, we’ll start with the different types. Broadly speaking, you want to think about the patterns of data that you want to learn from and take action on. For most enterprises, the largest volume of data they have is the data produced as a byproduct of their operations. These are the ledger statements for a bank, the financial transactions for retail, the shopping cart actions for an online retailer – the things that allow you to finish the transaction. They’re not necessarily the behavioral signals you might be looking for if you’re building a marketing AI that predicts whether somebody is going to make a purchase. They may not be the kinds of data you’re looking for when you tackle problems like we do, which is taking the important information out of semi-structured and unstructured documents. The best perspective is to take a wide view of your entire operation. Think about the channels of interaction you have with your partners and customers. Think about the product: in the case of insurance, the actual contract documentation that you produce and the submission documentation that you take in. And then think about all the things that are a byproduct of your operation and how those can be tied together. Often the easiest things to get access to are the things that are structured in data warehouses and databases that are part of your operational processes.

“But for us at Chisel AI, the stuff that’s most interesting is the unstructured data that lives inside your policy documents and inside your binders. And for those things, having the right strategy to make sure that they are stored and archived well, decorated with the right metadata, and queryable is an important source of efficiency.”

One of the first steps as you embark on putting AI in place is to make sure that your data taxonomy is clear: you understand how things are cataloged, the metadata that’s put on top of them, the provenance of that data, and the governance rules for it, so that you can safely extract it, do whatever cropping or trimming you need to get just the essence of the signals you’re going to build your patterns from, and make that available to the teams working on it.
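[Editor’s note: a minimal sketch of the cataloging step Colin describes – a metadata record “decorating” each unstructured document so it stays queryable and governed. All field names are hypothetical, not Chisel AI’s actual schema.]

```python
# One catalog record per stored document: taxonomy, provenance, governance.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DocumentRecord:
    doc_id: str
    doc_type: str            # taxonomy entry, e.g. "policy", "binder", "submission"
    carrier: str             # which carrier produced the document
    received_on: date        # provenance: when it entered the system
    source_system: str       # provenance: which upstream system supplied it
    pii_present: bool        # governance flag: does extraction need redaction?
    tags: list[str] = field(default_factory=list)

record = DocumentRecord(
    doc_id="doc-0001",
    doc_type="policy",
    carrier="Example Carrier Co.",        # hypothetical
    received_on=date(2020, 1, 15),
    source_system="broker-email-intake",  # hypothetical
    pii_present=True,
    tags=["commercial-property"],
)
```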

Ron: And how much?

Colin: Oh, if we take a walking-around number for a classifier that we use for data extraction: if you want to pull a single element like a policy number out of context, you can do that on as few as 200 examples, but it works better when you have, you know, 1,000 examples. We know from testing we’ve done that we can pull a policy number off a particular carrier’s document about 85% of the time with a little more than 200 examples – 200 well-labeled examples. If we want to get more accurate than that, or if we want better generalization over a broad set of carriers, we need many more examples than just the 200.
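[Editor’s note: a rough sketch of the relationship Colin describes – accuracy as a function of how many labeled examples you train on. The task (is this candidate string a policy number?) and the data are synthetic and far cleaner than real documents; Chisel AI’s actual models are not shown.]

```python
# Train the same simple classifier on 200, 500, and 1,000 labeled examples
# and compare held-out accuracy.
import random

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

random.seed(0)

def fake_policy_number() -> str:   # positives, e.g. "POL-8231945"
    return "POL-" + "".join(random.choice("0123456789") for _ in range(7))

def fake_other_token() -> str:     # negatives: dates, amounts, place names
    return random.choice(["2020-01-15", "$1,250,000", "Toronto", "liability"])

examples = [(fake_policy_number(), 1) for _ in range(600)] + \
           [(fake_other_token(), 0) for _ in range(600)]
random.shuffle(examples)
texts, labels = zip(*examples)
test_texts, test_labels = texts[1000:], labels[1000:]  # held-out test set

for n_train in (200, 500, 1000):   # the "walking-around" numbers
    model = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(1, 3)),
        LogisticRegression(max_iter=1000),
    )
    model.fit(texts[:n_train], labels[:n_train])
    acc = accuracy_score(test_labels, model.predict(test_texts))
    print(f"trained on {n_train:4d} examples -> test accuracy {acc:.2%}")
```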

Ron: Love it. I think 200 is a great start. Obviously, 1,000 or more is a better answer. When we talk about the data, there are two types of data – training data and testing data – and there’s actually a third type, which is the production data that comes in once the model is actually put out there. How should people think about that data? What value does that data provide?

Colin: Well, this is one of the key values of having a learning-based system. When you talk about training and testing data, that’s really the start. It’s the cold start for the feedback loop that you want to build for a system that learns continuously over time. This is really the best quality of AI. Lots of different approaches exist where you can take a corpus of data and build static rules or other things on top of it, but those will only ever work as well as they did on the day they were deployed. The important thing about AI is that continuous feedback loop that allows it to get better over time. So, you have to design your systems with that loop in mind. You have to think about the opportunity to give an expert end user the chance to interact with the system to get their job done, and to capture the additional feedback that the system needs to improve.

So, if we think about, in our case, where we use natural language understanding techniques, we have a corpus of labeled data we use, we train our classifiers on that corpus of labeled data and that allows it to reliably extract key insurance elements from documents. Occasionally, it misses. That’s expected because these models are summaries. They’re generalizations over a set of patterns. When it misses, it’s our privilege to have the user inside of that interaction to say, “Here’s what you classified incorrectly and here’s the item that you should have classified.” We capture that feedback and that goes back into our training corpus. It helps our machine improve.
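[Editor’s note: a minimal sketch of the correction-capture step Colin describes – an expert’s fix is appended to the training corpus for the next retraining cycle. The file path and record shape are illustrative, not Chisel AI’s actual pipeline.]

```python
# Append each expert correction as a new labeled example.
import json
from pathlib import Path

FEEDBACK_LOG = Path("feedback_corpus.jsonl")  # hypothetical location

def record_correction(doc_id: str, field: str, predicted: str, corrected: str) -> None:
    """Log one correction: what the model said vs. what the expert says."""
    example = {
        "doc_id": doc_id,
        "field": field,          # e.g. "policy_number"
        "predicted": predicted,  # what the model extracted
        "label": corrected,      # the expert's correction, used for retraining
    }
    with FEEDBACK_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(example) + "\n")

# A reviewer fixes one miss; it becomes training data in the next cycle.
record_correction("doc-0001", "policy_number", "POL-823194", "POL-8231945")
```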

Ron: As you were talking, an interesting question popped into my head, which I know a lot of people listening probably think about, which is who should own the models that come out of this?

Colin: This is a great question. Who owns Tesla's self-driving model? When you purchase a Tesla, it comes with sensor packages and it’s connected to a platform, and that platform is constantly gathering telemetry data that feeds into Tesla’s algorithms that teach the machine to drive. The benefit to you as a customer is that when you get into the Tesla on day one, it has already learned from the thousands, even millions, of miles that other Teslas have driven. This improves its algorithm and improves its end users’ experience. Our position is that we want that benefit to accrue to all the adopters of the Chisel AI platform, so the ownership of that stays inside the Chisel AI platform.

The dividend that gets paid out to the customer is in better performance. You can see similar systems in place with Alexa at Amazon or Siri with Apple where all the usage drives continuous improvement, continuous accuracy and more innovation with respect to the actions that can be taken on the models. That's the kind of innovation we want to build in the core of our platform, and that we want our insurance customers and partners to benefit from.

Ron: When you think about the rollout strategy – and even before the rollout strategy, when you don’t yet have the ability to test – one of the hurdles is the cold start problem. For those who don’t know, the cold start problem in AI is that you need a labeled training set before you can build your first classifier, which then feeds the reinforcement learning loop. That cycle can take months if you’re fast, and sometimes years, to get over the cold start problem. When do you know you are out of the cold start phase? One of the things I’ve heard a lot of people say is, “I don’t know if I have enough data.”

Colin: So, there are emerging techniques, like few-shot learning, that can help a sparse dataset produce some signal that gives you a reasonable action you can take. How do you know if you’re out of the cold start? Well, it’s a series of cold starts. For each problem that you tackle – for us, for example, it’s information extraction – accuracy continually climbs as you gather more examples of the patterns you need to recognize, and applicability to the problem you’re solving climbs with the breadth of patterns you successfully recognize. So, for us, as soon as it’s useful, we’re through the cold start.

“As soon as it is as good at recognizing a policy number in a document as an average human on their best day, that’s a good mark. You know, 85% to 90% of the time it sees it correctly, and that says it’s as good as an average human on their best day – and it will always be that good. It will never degrade. It will only get better over time.”

And that, for us, is the signal that on that particular problem we’re through our cold start. Then we step into the next problem. The Tesla analogy works well here: it might hold its lane first, or it might do adaptive braking for how closely it’s following first, and that might be the only feature they unlock in the software they provision for you.

And, you know, they’re gathering more data and learning more, but as soon as they get to a safe level, they know it works well enough. For us, we have to know that we’re at a productive level – a level that is driving some efficiency for our customers – and get the feedback from them that this is an improvement. In the submission intake challenge, that might be lower than 85% or 90%. There might be sufficient advantages in speed and efficiency to have that classifier be right even 50% of the time.

Ron: Yup. SIC codes – Standard Industrial Classification codes – might be an interesting problem that matches that.

Colin: That’s right. Something like making sure we’re getting the industry coding to full resolution on a document. There might be an advantage to resolving only part of the SIC code so that it becomes easier for a human being to finish that last part. We gather the information from the labeling that way, and they get efficiencies.

“So, we work closely with our customers to figure out, “Are we at a level that solves the problem for them?” Because we know that’s our baseline for them to go to production and to get some return on the effort to deploy.”
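[Editor’s note: a hedged sketch of the partial-resolution idea above – if the model isn’t confident about a full four-digit SIC code, fall back to the two-digit major group and let a human finish. The codes, probabilities, and threshold are made up for illustration.]

```python
# Resolve a SIC code fully, partially, or not at all, based on confidence.
from collections import defaultdict

def resolve_sic(code_probs: dict[str, float], threshold: float = 0.85) -> str:
    """Return a full code, a 2-digit prefix, or a flag for manual coding."""
    best_code = max(code_probs, key=code_probs.get)
    if code_probs[best_code] >= threshold:
        return best_code                   # confident full resolution

    prefix_probs = defaultdict(float)      # aggregate mass by major group
    for code, p in code_probs.items():
        prefix_probs[code[:2]] += p
    best_prefix = max(prefix_probs, key=prefix_probs.get)
    if prefix_probs[best_prefix] >= threshold:
        return best_prefix + "??"          # human fills in the last digits

    return "????"                          # route to fully manual coding

# The model is torn between two codes in the same major group, so the
# two-digit prefix is still a confident call: prints "64??".
print(resolve_sic({"6411": 0.55, "6419": 0.35, "7389": 0.10}))
```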

Ron: And what about the people who are sitting there thinking, “Like you said, ideally 1,000 – I have 800, so I shouldn’t even think about AI”? What do you say to those people?

Colin: You try it. I mean, these are heuristics. Everything about AI is probabilistically based. So, whether you match 85% of the time on 800 examples or 85% of the time on 200, you have to make the attempt based on the data that you have and see what the threshold is – what the confidence interval is on what you’re matching.

“When the confidence interval is high enough, then you’re going to get some efficiencies from it, and you should try it. There’s not a lot of benefit to waiting.”

We’re at a stage now where the compute power, the data availability, and the techniques are democratizing and easily available. So, there’s very little cost. Historically, cost is at the lowest point it’s ever been, and it’s going to get lower. And then, on the other side, with platforms like ours, as we build them out, you will benefit from that learning effect – just like the next Tesla owner benefits from the learning effect with their car. So, there isn’t really a reason not to try it out.
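[Editor’s note: a minimal sketch of the “see what the confidence interval is” step – a Wilson score interval on accuracy measured over a labeled sample. If even the lower bound clears the level that is useful for your process, trying beats waiting.]

```python
# Wilson score interval for classifier accuracy on n labeled examples.
import math

def wilson_interval(correct: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% Wilson score interval for a binomial proportion."""
    p = correct / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return center - half, center + half

# 170 correct out of 200 labeled examples: observed accuracy 85%.
low, high = wilson_interval(170, 200)
print(f"accuracy 85%, 95% CI roughly [{low:.1%}, {high:.1%}]")  # ~[79.4%, 89.3%]
# If the process is useful at, say, 75% accuracy, even the lower bound
# clears the bar -- worth deploying and improving, rather than waiting.
```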

Ron: So, what I’m hearing is you should buy the Tesla truck because it has already driven a million miles?

Colin: Well, apart from the aesthetics of that truck – I mean, it’s a controversial opinion, and I don’t know where I fall on that particular vehicle. [Laughs] But the self-driving platform that they’ve built, yes, I’d expect it finds its way into all their vehicles, whether it’s the semi-truck they’re building or the Model 3.

Ron: So, we’re going to take a 20-second break to tell you where you can find more information and insights about insurance innovation. We’ll be right back.

[If you liked this episode of AI Wisdom, subscribe to our blog, Writing the Future: AI in Commercial Insurance at www.chisel.ai/blog for feature articles, interviews, opinions, and more.]

Ron: We’re back with our featured guest, Colin Toal, CTO of Chisel AI. Let’s jump right into the next question. As we talk about this AI learning effect, can you share your perspective and insights on how, specifically, commercial insurance brokers and carriers can benefit?

Colin: Yes. The benefit of the learning effect built into the platform is that the early majority that adopts our software is going to see accuracy and precision from the models that is a function of the work we’ve done to date. They’re going to see accuracy and precision across the universe of carriers for which we have trained classifiers that work with those carriers’ documents. In addition, there are transfer effects that come from working with submissions across brokerages and policy documents across carriers; insurance is a more constrained, tighter domain for us, which helps us find these transfer opportunities. I hesitate to say it’s easy, but it is a more constrained problem than general document extraction. So, they’ll see the benefit from that, and they’ll get to start on the millionth mile rather than the first mile. And I think that’s a good benefit for them.

Ron: Let’s talk a little bit about buy versus build versus outsource versus license?

Colin: Sure.

Ron: I guess maybe let's start with should insurance companies hire data scientists?

Colin: The short answer is yes. Should they hire a ton of them? I don’t believe they need to build the equivalent of Amazon’s data science team, or of the Vector Institute here in Toronto, internally. Machine learning scientists and data scientists carry a couple of different perspectives. Part of the field is fundamental research into solving fundamentally hard problems, and I have tremendous respect for, and depend on, those talented people to do great work. But they’re probably not the right investment for insurance companies except in their actuarial or risk management areas.

“In terms of their operational needs, a data scientist can help an insurance company understand how to get the most from their data, understand what new opportunities they have to instrument and gather information beyond what they’re getting as a by-product of their operation, and help them understand how to properly develop the taxonomies and catalogs and ontologies on top of the semi-structured and unstructured assets that they use.”

As a software company that works directly with insurance companies regularly, we find that having a sophisticated data science cohort inside those operations makes our job easier. It makes it easier for us to deliver more value for those companies. Having said that, I don’t think it makes a ton of sense to replicate everything that’s being done by platform providers inside those companies. There’s value in understanding how to use the software well. Probably the best analogy I could give is that it’s important for any enterprise that uses an Oracle database to have an Oracle DBA – somebody who understands how to get the most from it and how to properly apply it to their problems. It is not as important that they have the Oracle database engineering team internally, because those folks build value that accrues across the industry, which companies can leverage instead of having to recreate fundamental data storage or, in our case, fundamental classification technologies on their own.

Ron: And what about building versus outsourcing? Sometimes it’s easier, or feels less risky, to still have humans do the work but ship it overseas to a lower-cost center. Any thoughts on building versus outsourcing?

Colin: Well, in terms of actually building the software, I think the highest leverage for everybody is always to buy software that’s already built rather than to commission something completely custom, whether that’s done with in-house hiring or with a partner. With respect to the tasks that are part of insurance operations, there is a temptation to think that a Mechanical Turk approach to information extraction, for example, is just as good as a sophisticated AI approach. The economics may look that way in the short term. In the long term, that cost is not going to diminish to zero.

The cost of sophisticated AI techniques will eventually diminish toward zero. In terms of near-term adoption, there may be some hybridization. There may be folks who offer a lower-cost solution for extracting information, and customers may pair them with our learning-effect system, which uses the information they extract to train an AI-based classifier.

“Those AI-based classifiers over time are going to outperform human beings in speed, certainly, as well as match them in precision and accuracy, and approach a cost that is based on the price of electrons and not on the price of human beings’ lives. I think that’s a more humane approach. I think that’s a more sophisticated approach and I think it’s an approach that long-term drives better economics.”

So, you know, near-term there’s probably some overlap. Mid-term to long-term, you’ll see that these systems will dominate because of that economic difference and that humane difference.

Ron: One of the things that’s said a lot in the industry – at least in 2019 – is this whole notion that insurtechs aren’t vendors but they’re partners. And at least with AI, because of potentially the cold start problem and the need to invest in a relationship and provide data up front, that’s a lot more like a partnership than a traditional vendor relationship.

Colin: Well, insurtech often is a label that’s applied to early-stage companies. And you know, I can say conclusively from experience that I love our customers that are with us in the early stage. They, of course, get additional support and they get additional responsiveness but they partner with us because they believe in the vision and they’re doing something fundamentally important for themselves and giving themselves an important advantage. They’re doing something important for the industry and, of course, they’re doing something really important for us.

“So, I love those early-stage partners, those early-stage customers, that are with us. They set our charter, and I do believe that we partner with them to figure out how to drive the best level of automation into their operation, to drive the most efficiency into how they work, and to give them the most direct value out of the relationship with us. That’s hugely beneficial for both sides and it helps us build the value that the rest of the industry can then avail themselves of.”

So, insurtech is a label that often applies to early tech companies. It’s when they grow past that stage that they become a software vendor. It’s hard to imagine looking at a company like Guidewire, with its robust, reliable, successful operation, and referring to it as an insurtech. They’re an insurance software company. The earlier stages often get the insurtech label, and I think that’s why people think of their customers more as partners – because those customers set a charter for them.

Ron: I love that. I’m gonna steal that. I like that definition. So, looking into the looking glass – and I think the obvious answer to this question is going to be yes – but is there something that comes after AI? Just like, you know, there came an industrial revolution with electricity, and now we’re sort of in the internet revolution and the AI revolution. Is there a fifth, sixth, seventh revolution? Forget the timeline.

Colin: I tend to be wrong if I try to prognosticate too much on giant trends. I’ve been fortunate in my career to see client/server turn into N-tier web, and N-tier web evolve into internet services that consumers buy but don’t provision, and to see that evolve now into services that also get smarter the more you use them. And I think that’s what we’re seeing right now with this AI revolution.

“If you boil it down to common consumer experience, we used to buy music on CDs, we used to have our own CD player, then we switched to buying music online with iTunes where we bought one song at a time, and then we switched to subscription music, and now we’ve switched to, you know, asking Alexa just to put some music on and having Alexa magically pick the music that matches our tastes.”

This progression has been amazing to watch. The next step, the challenge here, is that there’s much being written about the future of work – about what kinds of roles human beings will play in commerce and in the workforce as more of the rote, pattern-match-based work is taken on by sophisticated algorithms on top of cloud computing. So, the question is, does that drive a trend where we see a renaissance in art and expression, where we see people pursuing their livelihood and success for their families through less inhuman rote work? You mentioned the industrial revolution. There was a lot of manual farm work and manual manufacturing work that was replaced by industrial machinery, and now robotic automation inside manufacturing has replaced even more of it.

“Now we’re seeing white-collar rote work start to be replaced and become more efficient. So, will that free people up for a better, more human experience? That’s the question.”

If you think about those large-scale movements in technology, the next stage for us, I think, is more powerful tools that allow people who have creative and inventive ideas to have the leverage of skilled craftspeople without the millions of hours of practice of those skilled craftspeople because that practice will be baked into the machinery. People will be able to express themselves with a level of sophistication and execution that traditionally would take thousands of hours of practice and motor skill development. They’ll have the ability to do that now.

“The kinds of sculptures that young people will invent in the future on the back of 3D printing will be incredible. The level of architecture and art that will be computer-assisted in the future, I think, will be fascinating to witness. This is the transfer of those rote skills and fine motor abilities into robotics and automation. It will be an amazing thing to see.”

Ron: I love it. If you had to summarize everything we've just talked about to a 15-second pitch where somebody’s sitting on the other line listening right now and they’re trying to convince somebody higher up in the organization to look at AI, what would you say?

Colin: Ron, you work with me. You know that if you ask me what time it is, I’ll talk about the history of watch building! [Laughs.]

Ok, 15 seconds, let’s see: The compute and storage power is available today for less than it’s ever been. The risk to take it on, especially with a modest volume of data, is small. The potential upside benefit to efficiency through automation – and efficiency is a big word, but it really comes down to consistent quality and faster execution – is compelling. And it will give your highly valuable employees more time back to use their brains on the hardest problems versus the rote problems, so that’s a better experience for employees, a better experience for customers, and it just makes a ton of sense.

Ron: Folks, you heard it here first. Colin, thank you so much for joining us. Folks, thank you so much for tuning in.

That's a wrap for this episode of “AI Wisdom” hosted by Chisel AI and me, Ron Glozman. Thanks for listening. If you like our podcast and want to hear more, check us out at www.chisel.ai or tune in and subscribe wherever you get your podcasts: SoundCloud, iTunes, or Stitcher. Join us next time for more expert insights and straight talk on how AI and insurtech innovations are transforming the insurance value chain. See you on the next episode!

 
