On this episode of the “AI Wisdom – Talking Innovation in Insurance” podcast, host Ron Glozman speaks with Daniel Faggella, CEO and Head of Research at Emerj AI Research, about the impact of Artificial Intelligence on the insurance industry and how AI is digitally transforming insurance. Click the play button to listen or read the full transcript below.
Ron Glozman: Hello and welcome to “AI Wisdom - Talking Innovation in Insurance.” On this podcast, we talk to business and InsurTech leaders about how artificial intelligence is transforming the way we buy and sell insurance. I’m your host Ron Glozman, Founder and CEO of Chisel AI and a strong believer in the power of AI to help people work smart and enrich their lives. So, let’s get into it.
Ron Glozman: There’s a lot of buzz around AI in the market, which can be confusing and easy to misconstrue. On this episode, we can tackle some fundamental questions like: what is AI? How has AI revolutionized insurance? What are the use cases for AI? And what practical guidance can help insurance organizations spearhead successful AI strategies? I’m super excited to have Dan Faggella, CEO and Head of Research at Emerj AI Research join me today. Called upon by the United Nations, World Bank, Interpol, and leading enterprises, Daniel or Dan is a globally sought-after expert on the competitive, strategic implications of AI for business and government leaders. Welcome, Dan, I’m super excited to have you on the show. Can you please tell us a little bit about yourself and maybe share your journey on how you became such a recognized leader in AI?
Daniel: Sure. Ron, I’m really glad to be here. So, I’m happy to give the quick backdrop, just the fast description of Emerj. People can probably best think about us like a very boutique Forrester or Gartner. People who are listening in are familiar with market research companies; that’s certainly our world, but we don’t focus on technology generally, we focus on the ROI of artificial intelligence. So, a rather narrow focus on trends, impacts, and use cases of AI, mostly in finance. So, this is obviously an insurance show, and insurance, banking, and wealth management are a large bulk of where we focus. So that’s us here at Emerj. We’re generally going to be called on when an enterprise firm is focused on one of three things: that could be building an AI strategy, it could be selecting high ROI projects, or it could be selecting a vendor. And occasionally they’ll come with their own custom questions, like competitive research, for example: what are our top seven competitors doing with AI? How are those investments working out? Things like that. But that’s essentially how we get called on. You want me to go a little bit into how this thing got started, Ron?
Ron: Would love to hear.
Daniel: So back in, this is a long time ago now. I got out of graduate school in 2011 and I went to the University of Pennsylvania for skill development and skill acquisition. So, this is a little bit of human learning here. So, what’s the neuroscience and kind of the cognitive models of how humans come to learn? And oddly enough, around the same time in the Ivy Leagues, there were a lot of rustles in the breeze about what machine learning was able to do. So, this brand-new thing of machine learning, this was the early days of ImageNet when computers were, you know, getting better at labeling images of butterflies and dogs than human beings were. It was a pretty exciting time and by the time I graduated, I sort of came to the conclusion that between NLP and there was some interesting kind of Twitter NLP projects going on at UPenn at the time. Between NLP and Computer Vision and the future of AI, this was really going to be something that was going to shake up basically every industry and potentially shake up kind of the global balance of power and that ultimately, I wanted to focus on that.
So, while I had this realization, Ron, I was running actually a full-time Mixed Martial Arts Academy. So, training fighters, that’s how I paid for graduate school, I never really had a job I just trained people to fight in cages and trained other people on self-defense. So, I had a gym, and I had to kind of grow and sell that business. And that ended up turning into an eCommerce company that we got into the Inc. 5000 company called Science of Skill, basically, an eCommerce business selling self-defense and self-protection instruction. And when both of those companies were sold, I got to focus full-time on AI and then focus on kind of my dream. So, all along this has been what I’ve wanted to do. I’ve been doing interviews with, basically AI researchers and people deploying this stuff in the industry since 2012, which is way before it was popular. And finally got to do it full-time about three years ago and now we’ve been growing this business. So, a little bit of a strange trajectory into here, but certainly my passion.
Ron: Love it. We’re very excited to hear what some of the research shows. So maybe let’s start off with that, I’m curious to hear, you’ve been doing research since 2012. Curious to hear what your latest research shows around some of the AI trends may be that you’re most excited for, in general, and then maybe as it relates to insurance as well.
Daniel: Yeah, I’ll talk about financial services generally and then I’ll talk about insurance at sort of a deeper level. So, the fact of the matter is, and you’re aware of this, Ron, a lot of artificial intelligence solutions involve a pretty strong amount of kind of core transformation in terms of how businesses operate. So not every AI solution is going to imply that we jackhammer our existing systems, but certainly overhauling the way that we store data, the way that our teams operate. There are some real opportunities to mature there. But they could also be seen by firms, especially if firms don’t understand the value of that maturity, as challenges and as hurdles. If there’s one big trend as to where a lot of the near-term value lies across financial services broadly, banking, insurance, wealth management, it could pretty well be summed up in the words “anomaly detection.” So, if I give you an example here, Ron, let’s just say we look in the banking space. We could talk about fraudulent insurance claims, we could talk about, you know, underwriting, we could go in a lot of directions, but let’s just talk about banking, for example.
If we look at banking fraud, anomaly detection is great for a couple of very succinct reasons. One is that it’s a nice natural fit for machine learning, so often we can sort of see our results relatively quickly. Secondly, if we can apply it properly, we can often measure the return on investment really quickly, or relatively quickly compared to other kinds of AI solutions. So, I’ll give you two hypothetical examples of AI in financial services and I’ll sort of explain why the anomaly detection example is a succinct and nice one for businesses to kind of sink their teeth into. So, let’s just say in the banking space, where we’ve done just as much work as we have in insurance, we have an application that detects fraudulent money laundering instances, so money transfers. Which of these money transfers are potentially laundering or potentially fraud or potentially criminal in some way, breaking some kind of regulatory rule? Then there’s another AI application where we want to send email communications and reminders to our individual kind of wealth management folks, to communicate with our wealth management clients to maybe remind them to buy or sell some Disney stock. Or give them maybe some helpful tips given what we know their portfolio looks like, and things like this.
If we take that second example of prompting our wealth management clients and prompting the people that manage them, so you know, whoever their point of contact, their agent is, their broker or what have you. If we want to look at that example and ask the question, “Well, when does that deliver value to the company?” The answer would be, “Geez, we’d need like years of experimentation to figure that out.” To figure out if we can lift the customer lifetime value of a wealth management customer over time, not just can we get them to trade more actively for six months and then leave us? If we want to know if they’re ultimately worth more money to the company. We would need a really long time to figure out if that kind of core transformation would ultimately be valuable in terms of the way that we interact with and talk with our wealth management folks.
On the fraud side, if we can hypothetically reduce the number of false positives, that is to say, of the fraud instances that we put in front of our experts, fewer of them are false yeses. In other words, flagged instances are more likely to actually be fraud than under our existing system. And also, we have fewer instances that we miss, that is to say we pick up on more potential instances that actually were laundering, so we can catch this stuff. We can remain compliant; we can prevent crime from happening in our bank if we’re Santander or Citibank or something like that. Within a few months, often we can get a sense of how the old system performed versus the new. AI pattern recognition is a really nice, snug fit, and there are a million instances of that, whether it be in CyberSec or in payment fraud. We could talk about claims, for example, where anomaly detection just requires a lot less aggregate big-picture transformation and often is quite measurable in terms of return. So, in terms of patterns of what we’re seeing, we’re seeing a lot of adoption, a lot of money, go into things that have that in common. I’m happy to go into more detail if you’d like.
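[Editor’s note: the false-positive and false-negative trade-off Daniel describes can be made concrete with a small sketch. The data, threshold, and z-score rule below are invented for illustration; a real fraud model would be far more sophisticated, but the measurement logic, comparing flagged transfers against known outcomes to compute precision and recall, is the same.]

```python
# Minimal anomaly-detection sketch: flag transfers whose amount is far
# from the mean, then measure precision (how many flags were real fraud)
# and recall (how much real fraud we caught). Hypothetical data only.
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Flag amounts more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [abs(a - mu) / sigma > threshold for a in amounts]

def precision_recall(flags, labels):
    """Compare flagged transfers against known fraud labels."""
    tp = sum(f and l for f, l in zip(flags, labels))        # caught fraud
    fp = sum(f and not l for f, l in zip(flags, labels))    # false alarms
    fn = sum(l and not f for f, l in zip(flags, labels))    # missed fraud
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Nine routine transfers and one large outlier that is known fraud.
amounts = [120, 95, 130, 110, 105, 90, 125, 115, 100, 5000]
labels = [False] * 9 + [True]

flags = flag_anomalies(amounts)
print(precision_recall(flags, labels))
```

Because both numbers come straight from comparing the system’s flags to outcomes the bank already records, the before-and-after comparison Daniel mentions can be run within months rather than waiting years for a lifetime-value signal.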
Ron: I love that example. That is such a powerful example, and to your point, the calculation of an LTV can take a long time. The whole point is it’s a lifetime value calculation. So, if you can just prove that you have a better detection rate on fraud, that’s an immediate impact. I love it. What do you see as some of the practical applications of that similar technology in insurance? You talked a little bit about fraudulent insurance claims, which I think is probably the closest and most similar, so I would love to hear you talk about that. And if you see any other use cases as well, would love to hear.
Daniel: We certainly see a lot of use cases, Ron. So, the reason your company is on our radar is because, we do what we call the AI opportunity landscape. So, across the major chunks of financial services, so wealth management, banking, and insurance, we’re essentially tracking all of the credible AI vendors that sell into that space. And we’re also taking a look at what the global top 20 firms in those sectors are doing in terms of their investments and are doing in terms of their own internal initiatives. So, what are the global top 20 doing? And then what is the entire landscape globally of viable, reputable AI vendors actually doing in that domain? So, who’s raising money? Who’s closing deals with bigger clients, etc.?
So, I can get into as much detail as we’d like. But broadly speaking, if we just look at vendors that serve insurance relatively specifically, so they either serve just insurance or insurance among, let’s say, maybe only two or three other sectors. If we narrow ourselves to those kinds of companies, it’s pretty clear to see that the bulk of the action here falls in either claims or in underwriting. So, when I say the bulk of the action, by the way, I’m talking about slightly over 75% of the funds raised by companies can be broken into those two functions.
So, when we look at a space, Ron, the AI opportunity landscape sort of research approach that we take is we look at the investments of the big companies, the enterprise. We look at the startup ecosystem, and we look through two lenses. One is called functions, that is to say, departments of the business. So marketing, customer service, legal, claims, underwriting, things like this; this is one view. And the second is capabilities. A capability is a new verb that AI enables. So, what’s a new thing that I can do because of AI? So, capabilities is one lens, functions is another. Functions is great because everybody understands it intuitively; they already know these departments. So, we’re talking about over 75% of the funds raised from vendors being in either claims or in underwriting. And there’s a lot of traction within the big companies in those two spaces as well. There are some areas like cybersecurity where we don’t see nearly as much action as in claims or underwriting, but which are still a large chunk. CyberSec vendors often are not insurance-specific, and so they wouldn’t have shown up in this particular pie chart. But if we look at what’s specific to insurance, claims and underwriting are, by and large, where the bulk of the action is, each gobbling up something like 40% of the funds raised of all companies in this space. Happy to get into what that means or kind of the subcategories of that; there’s so much to dig into here.
Ron: That’s a very, very interesting statistic. And I love the two lenses. I haven’t quite seen any other firm and I guess that’s what makes you guys so good at what you do. It makes you different than, like you talked about, Forrester and Gartner. They don’t necessarily have that lens and I think that’s so powerful.
Ron: I’m curious. What do you think is driving some of the dynamic change in the financial services market? So, you’ve been doing research since 2012, do you think it’s different than it was then? What do you think is different and maybe if you can hypothesize around why?
Daniel: So, to be clear, a couple of things. I’ve been interviewing sort of folks applying AI in different sectors, public and private, since 2012; actual full-blown research products were a few years after that. And to be honest, the market for market research on AI in the enterprise borderline didn’t exist until 2014, 2015, when AI actually became relevant outside of the little rustles in the breeze around ImageNet and whatnot. You know, the GEICOs of the world weren’t doing cartwheels and backflips about artificial intelligence in 2012. But yeah, so you’re essentially asking what are the factors that are encouraging folks to adopt this, or what might be the best way to frame it, because I can take it in whatever direction you want.
Ron: Yeah, I mean, one interesting statistic, not to be the researcher here, one interesting statistic that I got really excited about was KPMG did a study in October of last year, and 97%, nine seven percent, of insurance executives said that they know and believe that AI is going to change and have a big impact on their business. And 87%, eight seven, said they were going to take charge today under their current leadership; they’re not waiting for the next person to come around. I don’t have any statistical data to say that was different in 2015 or ’16. But my gut and my sort of experience, without having numbers to put behind it, tell me that those numbers are drastically higher now than they were. Like you said, GEICO wasn’t doing cartwheels, and I would love to see Warren Buffett do a cartwheel.
Daniel: Yes. So would I, although to be honest, I might be a little bit afraid; he’s a little old these days to do that. But yeah, I would suspect those numbers to be astronomically above where they were even four years ago, even three years ago. What I’ll say is it does seem self-evident that AI is going to find a fit in almost any sector. Financial services, I think, is more aware of this than, let’s say, manufacturing, is more aware of this than, let’s say, logistics, is more aware of this even than, let’s say, Life Sciences. And a lot of the money’s going into this space. And one of the reasons we know that is because we track the vendors who are raising big money and who are closing deals.
The other way that we know that is who’s asking for rather expensive market research to guide AI strategies? And as it turns out, the finance space is leading the herd there in many regards: insurance and banking in particular. So, it doesn’t surprise me at all that that’s the case. I will say that that doesn’t necessarily mean that the money cannons are out there shooting at AI vendors left and right.
So COVID, I think, has had both positive and negative impacts on artificial intelligence adoption broadly. I think there’s a lot of excitement. I don’t think that that necessarily translates to dollars in the hands of vendors.
The other factor here is Gartner has their number of 80% of AI initiatives kind of flopping. Frankly, you know, we think that might even be a little bit understated if we look at all the various pilot projects, all the silly toy AI projects, all the little sandbox, willy-nilly vendor initiatives that flashed in the pan. So, there’s been a lot of flops, mainly due to misconceptions about what to adopt and how to adopt it, and we can talk about what those core misconceptions are. They essentially squandered the first, geez, we might even say two years of investment in AI in financial services. Not all of it; I’m not that much of a pessimist by any means, but squandered a great deal of it, and kind of put folks a little bit on their heels about which vendors we’re going to throw resources at. So, I certainly think there’s enthusiasm. There’s a little bit of maturity around realizing how much this stuff isn’t magic. But fortunately, I think it’s clearer and clearer that the future will involve AI and we’d better leverage it properly. Now, does that mean that people are over the misconceptions that made them waste all the money in the first place? I certainly don’t think so. I think there’s a lot more education to be done. I think that’s partially our role here at Emerj. But the convincing that this is the future, I think, certainly is the case. Ron, I can agree with you there.
Ron: Love it and I think that’s perfect because the next thing I wanted to get your thoughts on was recommendations to set yourself up for success. So, if you’re an insurance executive who’s listening, which is the majority of our audience, what advice would you give them when they’re considering investing and deploying an AI solution?
Daniel: Yeah. Well, I mean, before thinking about sort of X AI solution or Y AI solution, really what we consider to be kind of the linchpin of effective use of resources with AI, in other words, the linchpin of how we get ROI out of AI, the linchpin of actual value, is executive AI fluency. So, what do I mean by this? Well, let’s talk about what the opposite of it looks like. If leadership, the people cutting the check and the people coming up with the ideas about where AI is to be applied, just aren’t exactly up to snuff on some of the aspects of AI. So, for example, maybe we presume, as is often done, that the common press releases of our competitors are actually the hottest and most interesting areas to invest in; let’s just say that that’s a belief that we have. Or we believe whichever vendor gives us the most, you know, plug-and-play-oriented pitch: oh yeah, we can turn this around in X period of time, we can sort of have this up and running by Y. If we take kind of whoever gives us the most optimistic pitch without a realistic understanding of what the deployment challenges and deployment requirements actually look like. If we don’t have a rounded conception of the range of use cases. In other words, we know what our competitors have done press releases about, but in all frankness, I mean, we don’t really know where we could put this stuff.
Then what ends up happening is we end up following our competitors into kind of toy applications, into things that are somewhat silly, somewhat bloviated, somewhat unrealistic, even in terms of the initial expectations that we jump in on. There’s a dynamic that we talk about here, Ron, called The Lens of Incentives, and I could talk about every single quadrant of financial services and how this shakes out.
The Lens of Incentives basically states that “Companies will reveal and even expand and exaggerate AI use cases that they think will make them look better to their customers and investors. And they will conceal and downplay AI investments that they think are not going to make them look better in front of their customers and their investors.” So, when you’re looking at what the big firms are saying, you’re not actually seeing where the money’s going. And that’s a real problem because, you know, most of the time when we walk into a boardroom, or some head of innovation office, it’s not because anybody’s been misled intentionally, but what they’ve seen is what’s been revealed.
So, Ron in the insurance space, if I were to go by what’s been revealed by the big companies, I would presume that there has been much more money placed on AI applications for conversational interfaces and customer service broadly than for claims fraud altogether, just everything claims fraud versus customer service. I would guess it’s maybe a two to one or three to one for customer service, when in fact, Ron, it’s a two to three to one the other way around. Now, if I’m GEICO, how much do you think I’m going to talk about how I pick up on when you light your car on fire, and I make sure I don’t pay you because I knew you were trying to screw me. How much of a bullhorn do you think I’m going to throw on that?
Ron: Right. I love this. Keep going. I love that.
Daniel: Yeah. So, it’s not a lot. As it turns out, the answer is not a lot. So, it doesn’t really behoove me to let you know about that, right? Similarly, if I’m Santander or I’m Citibank and I spend a lot of money on a cybersecurity application, like let’s say Darktrace or one of the other major vendors. You know, to tell you or some money laundering vendor like I don’t know Feta Ray or something. If I say, “Hey, let’s do a big press release. Let people know that we’re trying to reduce the amount of terrorist dollars that runs through our system every year.” Like, really though, are we ever going to tell anybody that? Of course, we’re not, we’re just never going to throw attention on it. So what happens, Ron, is that initiatives are selected based on what has been expanded and exaggerated by competitors, rather than a realistic look at where could we actually apply this, and what realistically are the deployment challenges of each of these solutions?
So those are two factors, Ron. One is the reasonable use case landscape: what could we do? Not what our competitors told us about, which is literally a violent bending of reality and an absolutely vile and misleading view as to what’s actually delivering value. That’s literally just what they’d like you to know them for; that has nothing to do with value and with the amount of dollars being hurled at these solutions. So, what could we do, and what does it look like to actually deploy it? What are the actual realities of overhauling data infrastructure and of having cross-functional teams work together? Of building a culture of iteration if we need to find some of this value within our data, for detecting fraud or for improving underwriting, etc.?
So, if we have those two bits of understanding, we can make smarter decisions and at the highest level, Ron we’d like to have executive teams that have some degree of an AI transformation vision. In other words, when they look at their digital transformation, and where they want that to take them, let’s say five years, 10 years out, where they want to be in the market. How they want technology to kind of enable their advantage. If they can blend AI into that story, because they understand it, they understand those two things I just told you about, the realistic use case range and what it takes to make it come to life. If they get those two things and they can blend that into their digital transformation story.
Now, more or less, every AI investment that we make at least ties to both near term and long-term value, as opposed to the kind of plug and play, the kind of me too, the kind of that looks neat AI deployments that we’ve seen over the course of the last four years, which have squandered countless millions.
Ron: So well said. So, we’re going to take a quick 20-second break now to tell you where you can find more information and insights about insurance innovation. We’ll be right back.
[If you liked this episode of AI Wisdom, subscribe to our blog, Writing the Future: AI in Commercial Insurance at www.chisel.ai/blog for feature articles, interviews, opinions, and more.]
Ron: We’re back with our featured guest, Dan Faggella. Let’s jump right into the next question. Let’s say you have decided to take the leap, and you’ve done the due diligence and you’ve found a solution. And hopefully, as we just talked about, it’s not based on a competitor’s press release, and you’re not doing it out of fear, but you’re doing it out of innovation and vision. And we didn’t necessarily talk about this, but AI solutions are data-driven. They rely on statistical models or probabilistic models built on large quantities of data. And so, what factors can you, as an executive, keep in mind to set yourself up for a successful POC or deployment of a production solution?
Daniel: Yeah, I mean, there are a lot of factors involving the data and involving skills and involving teams. We have an article called “The Prerequisites of AI Deployment.” So, if anybody Googles Emerj, just E-M-E-R-J, critical capabilities, we have an infographic that essentially describes what these prerequisites are. But there’s a bunch to go into here. So, if I’m an executive, let’s just say, you know, I’m kicking off, I’m working with some vendor or I’m building an in-house solution around underwriting. So, I’d like to be able to come up with some scores and recommendations for our underwriting process, so I can help our underwriters make better decisions faster, make decisions that are really, really well-calibrated to ultimately the business results that we’re looking for. So, we can stay competitive in pricing, but we can also make sure that we’re making the margin that we need to make. So, let’s just say that that kind of scoring for inbound applications in the underwriting process is something we’re working on. Certainly, we’d want to be prepared for the elements and factors that this would realistically involve.
So, we’d want to ask ourselves about the kinds of data that would be necessary, hopefully, for an insurance firm interested in this. The data for these applications, the data that underwriters actually have to deal with, has two things in common. Number one, it’s digital and accessible. So, it’s not coming in on yellow pads. It’s not coming in bungled, half-scanned PDFs that people have a hard time reading, but it comes in a digital format where we could actually pull the information out of those columns and then train an algorithm on it.
And the second thing is that it’s relatively similar throughout the company. So, if we have underwriting forms, some kind of using our older version, some using our newer one. Some of the online forms put everybody’s names in all caps, some of them have them in lowercase, some collect more information than others. Is it somewhat uniform? Is it somewhat harmonized? So, we’d have to take a gander at our data ecosystem and sort of ensure that we’re going to have an environment where we can actually train an algorithm. And that might require finding a way to clean the data after it is entered. So, folks enter the data, it’s a little garbled, it’s not exactly as nice as we’d like it. And so we have to add kind of a layer behind where we collect that data, where we could clean it and make it something that we can actually train an algorithm on to sort of understand: should we say yes or no to this application? What kind of pricing should we offer? Things like that.
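[Editor’s note: the “cleaning layer behind where we collect the data” that Daniel describes can be sketched in a few lines. The field names and normalization rules below are hypothetical, invented for illustration; a real pipeline would be driven by the firm’s actual form schemas.]

```python
# Hypothetical cleanup layer for underwriting form submissions.
# Harmonizes records from different form versions so an algorithm
# can be trained on a uniform schema. Field names are invented.

def clean_record(raw):
    """Normalize one submission: fix name casing, strip stray
    whitespace, and mark fields that older forms never collected."""
    return {
        "name": raw.get("name", "").strip().title(),
        "state": raw.get("state", "").strip().upper(),
        # Older forms didn't collect annual revenue; record it as
        # explicitly missing rather than silently dropping the row.
        "annual_revenue": raw.get("annual_revenue"),
    }

# Two submissions of the same applicant from different form versions.
old_form = {"name": "ACME WIDGETS ", "state": "ny"}
new_form = {"name": "acme widgets", "state": "NY", "annual_revenue": 1_200_000}

print([clean_record(r) for r in (old_form, new_form)])
```

The point of the sketch is that after the cleaning layer, both records share one casing convention and one schema, which is exactly the “somewhat uniform, somewhat harmonized” property Daniel says the training data needs.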
The second thing that might be necessary if we’re looking at a little bit more of a jumbled circumstance for data is, we might have to kind of gradually alter our protocols for collecting that data in the first place. So, we may realize that having X percent of these applications come in, in some PDF format or some other format that’s drastically different from the nice clean digital data that we get sometimes. It’s just not going to be something that’s sustainable and so we actually have to focus on kind of the front of the house and sort of those core processes.
So, having conversations that are that big, Ron, feels very, very weird when people are adopting AI, because you generally don’t have to have conversations that big when you talk about IT. When you talk about IT, it’s just okay, where do I plug in the APIs or whatever? What are the settings we want? What do we want it to do when we click this button? What are the features we want it to have? And we can kind of build it out. In the case of AI we’re talking about kind of overhauling the soil that AI is going to sit in, in this case, the data infrastructure and to some degree that the way the business works.
I should say something here: there are two ways to look at this. One way to look at these core changes is to say, “This is bad. This is a frustration. This is a challenge. And I’m going to cut this AI project or decide not to move forward.” Now, Ron, bajillions of AI projects are canceled when vendors sort of promise the result but don’t talk enough about what it’s going to take to get there, and then all of these hurdles I’m telling you about become a bad surprise to the buyer. And bad surprises often will mean you’re out of here. So, when these come across as a bad surprise, it bodes very poorly for adoption and deployment, bodes very poorly for something turning from a pilot into something that makes its way into production.
The other way to see it, though, outside of a challenge is to see it as an opportunity for us to mature our firm altogether so that we can be more capable moving forward, not just with this AI application, but with any other AI application we want to do in the future.
So, it’s a little bit frustrating for buyers if they don’t understand this space because the mind shift that has to happen here is that yes, this is a bit of a challenge. Yes, having all these people focus on data cleaning, setting up this entire new data pipeline thing, thinking about the way we collect data in the first place. I just wanted to automate my underwriting, what the hell?
But when we realize that some of these prerequisites can become things that streamline our data ecosystem so that we can do other stuff in the future. Maybe we can even have better customer service because some of this application data is just immediately accessible, and we can cross-reference it with our CRM, and our support people can have it on the phone. Maybe we’ll want to do other things with underwriting data moving forward in terms of prediction and modeling. And this is just going to give us a rich ecosystem to be able to do that, to make better predictions and set better prices moving forward.
If we can find the value in the maturity itself, only then can we realistically double down on AI, because we can see the near-term value. But we can also see something other than pain, for why we are to overhaul the way that our teams operate, the way that our data Infrastructure operates, and the way that we kind of think about the business longer term.
So being able to understand maturity means that we can see it as something other than an annoyance and a challenge and see it as something that we can also be investing in, in addition to the near-term results.
So in terms of advice, I’d say the firms that are doing things well understand what they’re signing up for. They know that they’re signing up for some changes, and that those changes can be good in both the near term and the long term. And that’s kind of the mature view. I wish I could tell you it was the norm today. It’s certainly closer to the norm than it was three years ago, but it’s still not the norm today.
Ron: That’s right, and communication is the root of many, many things, and expectation setting as well, as you said. Although, in my experience, most of the time the client, and in this case the client being the company, the carrier, the broker, whatever it might be, isn’t the one who sets up a data cleaning pipeline. It obviously depends on the vendor they work with. But in an ideal world, from my perspective, the input is given in its natural format, just the raw data: .PDF, .EML, whatever files you have. And the labeling, the cleaning, all of that, statistically speaking, in my experience something like 90% of data science is cleaning. So, I don’t expect customers to do that. I don’t expect carriers to do that. That’s what we specialize in, and that’s what I think other AI vendors should be doing. In your experience, is it really common to ask the carrier to set up a data cleaning pipeline?
Daniel: For some applications, that’s certainly something that’s done. What I’ll say is this about it. Whenever possible, vendors often have to absorb, to shock-absorb, the initial challenges of deploying artificial intelligence. So maybe they’ll have this little buffer region where the data they actually take in will be something they can use for whatever application the customer is looking for. And to be frank, Ron, I think in the near term, and when I say near term I mean maybe even another two and a half years, the AI vendors that see the bulk of the value are the vendors that do this. I’m not saying this is good, by the way, but I see it as borderline inevitable.
These are the vendors that cushion their customers from as much change as possible. Let me give you an example from healthcare. This translates precisely to your space as well, but it’s a representative example that I think is really, really strong. Suppose we have the ability to just take the radiology data from all of our chest X-rays, pipe it up into the cloud, label it, put little circles around the areas where there might be cancer, and then pipe that down into the display that the radiologist is actually going to look at, to the point where nothing has changed. We used one API here, one API there, and we’re done. To the point where the customer doesn’t even need to know that there’s AI being done, and there’s no change to the way we’re handling our data. That is a much faster turnaround in terms of the client actually being able to see results, in terms of near-term benefit. It doesn’t fundamentally help them unlock the value of the totality of their data moving forward. It doesn’t give them a broader ability to do more with that data moving forward. But that bigger maturity, that bigger vision, that bigger overhaul and change might just not be something they’re ready for right now anyway; they need to see a lot of instances of the more near-term value.
So, I think the vendors that succeed, especially after COVID, when the willingness to do a lot of upfront heavy lifting is probably reduced significantly in some sectors, are the vendors who can integrate with fewer data streams, who can alleviate as much of the pain as possible of that initial transformation of data or handling of early processes, and who can, for lack of a better term, be a layer of value on top of the business, as opposed to transforming core elements of the business. Those are the vendors that I think gobble up the bulk of the sales in the next two and a half years.
Not all vendors can do that, Ron. For example, from what I gather, you guys train on the actual data of the client. You guys do kind of sink your fingers in there a little bit; you’re not quite as removed as that healthcare example I mentioned. But there are companies all along this gradient. If I’m a Dataiku, I need to train your teams to collaborate on my platform. I need to potentially really help you set up all different parts of your data infrastructure so that you can jack into and leverage my platform for your AI initiatives. There’s a lot of transformation inherent in my sale. If I’m like that company Aidoc, that just pipes up the radiology and pipes it down, then I need to do exactly zero transformation, exactly zero AI maturity work with my client. Then there’s everybody in between. From what you’re saying here, Ron, you guys are trying to stay closer to that end of the spectrum, where as much of that is off the plate of the client as possible.
Ron: Yeah, I would summarize this as change management, and I think everybody will acknowledge that in any project, and it doesn’t have to be AI, change management is one of the most important pieces, and probably the riskiest piece of most projects. If it’s API-driven, it’s less risky, because adoption is not necessarily a factor. But anytime there’s a user using it, you want to minimize the change. And so, to your point, that’s the philosophy we’ve always gone by.
Daniel: Yep. And again, I don’t necessarily think that’s where all AI vendors are going to sit forever. I think the layer-of-value model for AI vendors, and the layer-of-value expectation for buyers, is probably going to be different three or four years from now. I think people are going to realize that the core way we tackle our data, the way we store it, the way we leverage it for our own purposes, not for this vendor or that vendor, but for our own purposes and our own ability to wield and deliver business value from our data, needs to be handled differently. The challenges of cross-functional teams, we just need to get over that. So, a lot of the things that are hurdles today, I think, are going to be seen as capabilities that companies want to build in-house. But right now most people want minimal headache, and they want a damn result. And the more vendors can play at the surface and handle a lot of the maturity things themselves, the better I think they’ll do in the next two and a half years. That’s our guess right now.
Ron: So, let’s talk as we start to wrap up. We’ve danced around the topic of COVID-19 a couple of times. I would love to get your thoughts on what you’ve seen as far as COVID-19 and the adoption of AI: is it slowing, accelerating, anything else?
Daniel: Yeah. You know, I have a couple of takes on this. On the aggregate, COVID-19 has what we could call a potentially beneficial effect, where people are already having to radically change how they do business, if for no other reason than the fact that everybody’s working remote. So, we’re getting to see where our breakpoints are, we’re getting to see maybe where we needed staff and where we didn’t, where the most value was being added, or maybe not. And also, companies that are more digitally mature are going to be doing better in this current environment. So, there’s a bit of encouragement: “Jeepers, maybe we could be a little farther ahead,” and be prepared to be nimble and operate in these different kinds of environments. Ten years from now, I think we’ll see this as kind of a kick in the tail for digital transformation at large.
That said, there are also some factors here that make it a little bit more challenging for companies to deploy artificial intelligence. One of those is that a lot of companies are taking a pretty serious blow financially, and that never bodes well for new projects. There are certainly a lot of AI pilot projects, projects that were one-eighth or one-fifth of the way into becoming something real, that no longer exist because COVID struck, and whoever the buyer was decided that just wasn’t going to be viable for them. So, budgets are pulling back, and there’s maybe a little bit less willingness to buy. The other factor here, Ron, is that if we’re working remote, it’s sometimes tough to pull together the kinds of cross-functional teams and do the kind of brainstorming and integration, with IT, etc., that we would have liked to in the white-glove beginnings of an AI vendor’s interaction with a client. That becomes a little bit harder.
Also, it becomes a little bit harder to get over some of those initial adoption hurdles. So if there are things about setting up the data cleaning pipeline, or about really needing to extract a lot of information from the subject matter experts within the client company and having an ongoing set of iteration and feedback cycles with those folks, then any of those things, if we don’t see them as maturity, if we don’t see them as an investment, only feel like a challenge, only feel like a drawback.
If there are a lot of those things, and if we’re already strapped for budget, then we’re going to have less appetite to endure some of the initial culture, talent, and data hurdles before we actually see this thing turn around and deliver value. So again, I think that’s going to bias toward companies that really make onboarding as smooth as possible, that potentially minimize the number of data types they need to access in order to deliver value, and that can play more on the surface, so their onboarding and their time to value are as short as possible. I don’t think that’s going to be the strategy for vendors forever, but I think it is going to be for now.
I think RPA is going to see a bigger uptick than AI for maybe the next 18 months. But the hope there is that as people think a lot more about digital transformation, start to recover their revenues, and things start to look a little bit better, they’ll have aggregately better soil to grow AI in.
So in the short run, I think it’s going to be aggregately worse for AI adoption and deployment budgets.
I think the vendors that are going to come out of this thing winning are going to be the vendors who absolutely minimize onboarding hurdles and absolutely minimize time to value. And that often means focusing on really narrow problem sets, simplifying onboarding to get to those particular results, and minimizing the amount of AI maturity required. So, near term, there are opportunities for vendors, but there are also a lot of challenges for vendors and companies.
Two years out, I think, hopefully, revenues will be bouncing back and folks will be more open to thinking about transformation than ever. But right now I’d say there are net more challenges than opportunities. The cream will rise, though, Ron, and I’m certainly rooting for you guys.
Ron: Awesome. Thank you. Appreciate that. So, as the second-to-last question, I’m curious to hear: oftentimes there’s fear about AI replacing humans in the workplace. What does your research show? Do you expect that to happen? Or do you expect it to happen but with people redeployed to maybe better, more value-adding jobs? What do you foresee?
Daniel: Well, there are a couple of things here. First, this applies within insurance and even financial services broadly, but let’s just talk about insurance.
Within insurance, we don’t really see big shakeups in terms of AI taking away jobs much at all. In underwriting and claims, AI really seems to be augmenting workflows more than automating folks away.
The jobs that are automated are likely to be the ones that are outsourced. The manual things that we have going on in Pune or Hyderabad in India right now are likely to get automated before the things in London and Chicago do. And there are fewer bad press releases that go out about you if you stop using a BPO than if you fire 300 people in your Claims Department. It’s just not as bad PR. We could argue it’s a similar impact, but for whatever reason, people don’t care as much about Hyderabad.
So, I would suspect that the low-hanging fruit, the basic data entry and process stuff, a lot of what’s being outsourced, is what we’ll be able to automate over the course of the next two years. And there may be some pockets, because, Ron, nobody knows what the future holds. There may be pockets in areas like claims, pockets in areas like customer service, where AI really can be dominant and conquer massive chunks of work that would have required manual human effort.
And we’ll have to see: do those folks all get redeployed in some productive sense, or do they get a little bit left behind in that transition? Over the course of the coming two years, if you were to ask me, I don’t see that big wave of job loss happening anytime soon. I think the first place it’s going to happen is with business process outsourcing folks. And my guess is that if it remains nearly as gradual as it has been, we’ll likely be able to do a pretty good job of reallocating people. If I had a fear of an armageddon of job loss in insurance or banking, I’d have been blowing that clarion call from the rooftops for the last couple of months, but I really don’t right now. So, fortunately, I think if transformation continues at this pace, we might be using fewer outsourcers, but hopefully we’ll have humans who are more augmented in their jobs. I’m not 100% certain on the future there, but I think aggregately that’s what the pattern looks like today.
Ron: Great insight. And so, as we wrap up, what is one piece of wisdom that you would share with our listeners if they only had 30 seconds, and this was the most important thing that they walked away with? What would it be?
Daniel: Yeah, I’d say the easiest way to waste money on AI initiatives, if we’re thinking about longer-term business value, is to measure only the short-term financial benefit of an AI solution. We obviously have to factor that in; it’s got to be one of the most important things. But if there’s no thought about the new capabilities we can build with an AI initiative and what we can learn, and no thought about where this plays into our broader digital transformation journey, where we’re going to be and how we’re going to win in the market moving forward, if there’s literally no consideration of that and we’re just slapping this on like a band-aid, that is an absolutely unsustainable strategy for AI. It’s also the origin of a lot of wasted money. So be sure we’re thinking about maturity, be sure we’re thinking about strategy, and at least factor it into who we work with and what initiatives we pick.
Ron: Great stuff. So, Dan, where can people find out more about yourself and Emerj?
Daniel: Yeah, sure. For folks that are interested in what we do at Emerj, it’s just emerj.com.
That’s the homepage of our market research firm. Probably most of your audience, Ron, is interested in what we do in insurance. We actually have kind of an executive cheat sheet on artificial intelligence use cases in insurance: a free PDF on use cases and trends in insurance broadly. So, if people want to put what we’ve said today in a nutshell, maybe add a little bit to it, and get up to speed a little smarter than their peers, that would probably be the best thing to sink their teeth into. And they can always reach out to us at Emerj if they’re interested in what we do here.
Ron: Awesome. And as always, you can find out more about Chisel at www.chisel.ai. Dan, thank you so much for taking the time and stay safe out there.
Daniel: Yeah, same to you, brother. Thanks for having me on.
That’s a wrap for this episode of “AI Wisdom” hosted by Chisel AI and me, Ron Glozman. Thanks for listening.
Join us next time for more expert insights and straight talk on how AI and insurtech innovations are transforming the insurance value chain. See you on the next episode!