Over the past five years, artificial intelligence (AI) solutions have surged in commercial insurance. Whether deployed within large carriers, brokers, or business process outsourcers, AI is increasingly driving efficiency gains where rote manual effort previously existed.
However, stakeholders continue to ask several important technical questions as they seek to understand the true value they can expect from rolling out AI solutions.
Unlike traditional software products, AI-based solutions are still not widely understood, and they require a considerable amount of change management to recoup the full value of the investment.
In this article, we answer the questions we hear most often during our sales process. These questions and answers largely apply to AI solutions like ours that are designed to extract data from digital insurance documents.
1. How accurate is the solution? What’s your Straight-Through-Processing (STP) rate?
This is by far the most commonly asked question. Typically, it is posed by an executive sponsor seeking to automate most of their existing manual process. It is also one that is difficult to answer without the most clichéd response of all time: “It depends.”
Let’s dispel one big misconception up front: No one can or will ever achieve 100 percent accuracy and 100 percent STP.
The best a machine learning model will ever do is to perform as well as your best person on their average day, every day. This is because the models are mathematical approximations based on the observations of actual people. As such, the model can never be perfect.
When you consider the wide variety of unstructured documents used in commercial insurance today, it’s easy to see why accuracies are lower than 100 percent.
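To see why, consider a rough, back-of-the-envelope calculation. The per-field accuracy and field count below are illustrative assumptions, not Chisel AI benchmarks: even a strong per-field accuracy compounds across the many fields on a single document, so document-level STP is always lower than field-level accuracy.

```python
# Illustrative only: assumes fields fail independently, which is a
# simplification. The numbers are hypothetical, not measured results.
field_accuracy = 0.98   # hypothetical per-field extraction accuracy
fields_per_doc = 30     # hypothetical number of fields on a submission

# A document passes straight through only if every field is correct.
stp_rate = field_accuracy ** fields_per_doc
print(f"Document-level STP: {stp_rate:.1%}")  # ~54.5%
```

Under these assumptions, 98 percent field-level accuracy yields only about 55 percent straight-through processing, which is why accuracy and STP should always be discussed as two separate metrics.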
Do not let perfect be the enemy of good
Starting with automation in the range of 80 percent is still a considerable cost saving and efficiency gain. Of course, if your particular use case involves largely standardized documents, you can achieve much higher extraction accuracies. For example, you might receive a particular broker application 85 percent of the time. At some point, any machine learning model will learn the patterns of that document closely and begin to perform with high accuracy and high consistency. In Chisel AI’s case, we capture feedback from production usage and use external data validation to dramatically improve each customer’s extraction accuracy (see the next point for more information).
2. How quickly will your system improve?
For any solution or implementation to get better with time, your AI vendor must provide some capability for continuous learning. (Some vendors do not offer this feedback loop; others only claim to have it.) This can either be an internal service the vendor runs whenever exceptions occur in the data extraction, or a customer-facing interface that allows your business users to correct the model (like a teacher with their pupil) as they go about their daily tasks. In Chisel AI’s platform, we offer the latter and make it accessible through an intuitive UI in both our Policy Check and Submission Intake applications. We have found this workflow to be a trust-builder with end users who want to review the AI solution’s work.
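As a rough illustration of what such a feedback loop can look like under the hood (the names and structures below are hypothetical, not Chisel AI’s actual API), each user correction is captured alongside the model’s original prediction and queued as a future training example:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Correction:
    """One user correction, stored as a future training example."""
    document_id: str
    field_name: str
    predicted_value: str   # what the model extracted
    corrected_value: str   # what the reviewer entered in the UI
    corrected_at: datetime

def record_correction(queue: list[Correction], document_id: str,
                      field_name: str, predicted: str, corrected: str) -> None:
    # Only log genuine disagreements; confirmations can be sampled separately.
    if predicted != corrected:
        queue.append(Correction(document_id, field_name, predicted,
                                corrected, datetime.now(timezone.utc)))

# Corrections accumulate here until the next retraining run.
training_queue: list[Correction] = []
record_correction(training_queue, "doc-123", "named_insured",
                  "ACME Corp", "ACME Corporation")
```

Capturing the model’s original prediction next to the human’s correction is what lets retraining target exactly the cases where the model disagreed with its teachers.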
Model improvement rates cannot be predicted precisely. They can be estimated from priors and experience, but they largely depend on the complexity of the data you want extracted and the volume of examples available.
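For a sense of how such estimates are made, one common heuristic from the machine learning literature (an assumption on our part here, not a guarantee) is that extraction error shrinks roughly as a power law in the number of labeled examples:

```python
# Power-law learning curve: error(n) ~ b * n**(-c).
# The constants b and c are made up for illustration; in practice they
# would be fit from a vendor's historical training runs.
def projected_error(n_examples: int, b: float = 0.5, c: float = 0.3) -> float:
    return b * n_examples ** (-c)

for n in (100, 1_000, 10_000):
    print(f"{n:>6} examples -> ~{projected_error(n):.1%} error")
```

The same curve makes the dependence on complexity visible: harder extraction tasks correspond to higher starting error and a flatter decay, so they improve more slowly at the same volume.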
3. How much data do we (the customer) need to provide? Can the models be exclusive to us?
Cybersecurity is a top priority for any organization looking to safeguard its customer data. In general, most new customers or adopters of AI tend to be guarded about providing large volumes of data to vendors – and rightly so. However, machine learning models need data samples to improve. One key competitive differentiator between AI vendors is their existing data assets. Nonetheless, most vendors will require some number of documents to test their solution and to further tune the accuracy of their models. This should not disqualify a vendor from consideration. Instead, interrogating their security and data management practices can help ease any concerns about data compromise.
For our Submission Intake product, we typically require less training data (tens of documents) up front from the customer if they are satisfied with the mix of data fields we extract from their email, broker application, loss run, and statement of value documents. Our existing training corpus is diverse enough for immediate production usage, and if extraction accuracy is not yet satisfactory, that production usage itself will increase it.
For our Policy Check product, we work with customers to identify the Lines of Business (LOB) and Issuer documents they would like to review. If our existing training corpus already contains examples of the LOBs and Issuers a new customer needs, then we require much less training data (again, tens of documents) and are able to make use of our existing models. Should a customer require a net-new Line of Business, we work with them to identify how well our existing models “generalize” to their data and look for opportunities to supplement our models with some of their examples. In these cases, we typically ask for several hundred example documents.
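As a rough sketch of what that generalization check can look like (the function and toy extractor below are hypothetical, not our actual evaluation harness), we score the existing model on a small labeled sample of the customer’s documents and use the result to decide how much new training data to request:

```python
from typing import Callable

def generalization_score(extract: Callable[[str], str],
                         labeled_sample: list[tuple[str, str]]) -> float:
    """Fraction of sampled fields the existing model already gets right."""
    correct = sum(extract(doc) == truth for doc, truth in labeled_sample)
    return correct / len(labeled_sample)

# Toy stand-in for a trained extractor, plus a tiny labeled sample.
def toy_extract(doc: str) -> str:
    return doc.split(":")[-1].strip()

sample = [("insured: ACME Corp", "ACME Corp"), ("insured: Globex", "Globex")]

if generalization_score(toy_extract, sample) >= 0.90:
    print("Existing models generalize; request tens of documents.")
else:
    print("Net-new LOB; request several hundred example documents.")
```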
Common Cause
We’ve also spoken to large organizations that have asked that their machine learning models be exclusive to their accounts. These requests are typically made in the same spirit of wanting greater security around data handling, which is understandable. However, they come with a considerable downside: a model’s performance is heavily linked to the diversity of the training examples it sees. There is a great deal of common cause in having shared models; everyone’s system benefits. As such, we generally push back on requests of this nature. To provide greater security, each customer’s documents are isolated in their own storage, which is private to that customer. Furthermore, the models do not contain or leak proprietary information (see diagram below).
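A minimal code sketch of this isolation model (the names are illustrative only, not our actual infrastructure): documents are written to per-customer private storage, while a single shared model, trained on a diverse corpus, serves every tenant.

```python
# Illustrative sketch: storage is isolated per customer; the model is shared.
SHARED_MODEL_ID = "extraction-model-v7"  # hypothetical shared model version

def storage_container(customer_id: str) -> str:
    """Each customer's documents live in their own access-controlled container."""
    return f"docs-{customer_id}"

def store_document(customer_id: str, doc_id: str, contents: bytes) -> str:
    path = f"{storage_container(customer_id)}/{doc_id}"
    # write(path, contents)  # actual storage call elided in this sketch
    return path

# Two customers share one model but never share storage.
print(store_document("acme", "policy-001.pdf", b"..."))    # docs-acme/policy-001.pdf
print(store_document("globex", "policy-042.pdf", b"..."))  # docs-globex/policy-042.pdf
```

The key property is that training contributes to shared model weights, which do not retain retrievable customer documents, while the raw documents themselves never cross tenant boundaries.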
4. How long does it take you to process our documents?
We receive this question frequently from customers who are looking to integrate our solution into their existing workflows via API. The Chisel AI inference platform takes anywhere from seconds to minutes to process a document, depending largely on the number of pages and their complexity. The platform processes each page of a document in parallel and scales elastically to handle larger workloads. In any case, we always work with our customers to meet their SLA requirements.
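To illustrate why processing time tracks page complexity rather than growing linearly with page count (a simplified sketch, not our production pipeline):

```python
# Simplified sketch: pages are processed concurrently, so wall-clock
# time approaches the slowest page rather than the sum of all pages.
from concurrent.futures import ThreadPoolExecutor

def process_page(page_number: int) -> dict:
    """Stand-in for per-page OCR and field extraction."""
    return {"page": page_number, "fields": {}}

def process_document(num_pages: int) -> list[dict]:
    with ThreadPoolExecutor() as pool:
        return list(pool.map(process_page, range(1, num_pages + 1)))

print(f"Processed {len(process_document(12))} pages in parallel")
```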
Final Thoughts
As they evaluate the various AI solutions on the market, we do caution our customers to pay attention to the processing times advertised. If a vendor promises a turnaround time of several hours or days, it is very likely they are employing a manual workforce to conduct or verify some of the extraction. Such a system obviously does not scale to higher volumes and will become a limitation as the solution finds broader adoption within customer organizations.
By providing some clarity around AI-powered products, we hope that commercial insurance customers will feel more informed and empowered to employ such solutions and reap the benefits of intelligent automation. By alleviating the burden of routine, manual tasks that – let’s be honest – computers can do better and faster, Chisel AI’s purpose-built solutions free up skilled staff to focus on customers and growth.
George Hanna, MASc, is Technical Product Manager at Chisel AI. Prior to joining Chisel AI, he founded a startup in Natural Language Processing and worked in a variety of R&D settings, including Nokia, Sunnybrook Hospital, and several successful startups. He has published research in leading venues such as IEEE journals. George completed his Master of Applied Science in Machine Learning and Neuroscience at the University of Toronto.