Machine Learning Approach to Promoting Compliance

Background

Increasing complexity and emerging vulnerabilities expose organizations to various types of risk, such as compliance failures and inefficient business processes. To combat these vulnerabilities, comprehensive policies and governance are required to address risks and inefficiencies and to align with an organization’s broader strategic vision and plans. The Ethics Oversight Board (EOB) of the National Association of Certified Valuators and Analysts (NACVA) was formed to tackle these challenges. As stated in NACVA’s EOB Policies and Procedures Manual, “the EOB’s responsibilities include educating, monitoring and enforcing compliance of NACVA’s members. The EOB’s duties include recommending standards, creating awareness and understanding of NACVA’s Professional Standards, monitoring compliance; and when necessary, investigating and determining whether a member or members have violated NACVA standards.” Standards violations can result in undesirable consequences such as additional expenses, scrutiny, suspension, or even membership termination.


There are five steps any organization can take to drive compliance proactively:


  1. Develop a policy framework

  2. Evaluate the policy on a regular basis

  3. Formalize a governance process

  4. Deploy systematic controls, tracking, and monitoring

  5. Enhance exceptions, accountability, and consequences

This article discusses how to adopt a machine learning (ML) approach to leverage data and technology to make course recommendations to aid and promote ethics and compliance in the workplace. To explain ML, let us take a common use case: educational course recommendations on a shopping site. Suppose you have been tasked with creating the back-end application that will provide course recommendations to NACVA members based on their past purchases. How can one address this need?


One could either go the classical programming route or utilize ML. Developing a prediction (output) from data (input) requires applying some set of rules to the data. In classical programming, those rules are created by humans, based on factors like business requirements and domain knowledge, and this is traditionally how such needs were handled.


Programmers would set up rules such as, “if member A purchased course X in the past, show course Y,” because there was some established relationship between those two courses. While this can occasionally prompt members to make that second purchase, it requires programmers to explicitly define and maintain these rules, and the rules cannot take much additional context about the members or the courses into account. Moreover, members are unique: just because one member was interested in both course X and course Y does not mean most, or even many, others will be interested in both as well. Finally, even after spending the time to develop more complex prediction rules, whenever a recommendation needs to be made, the application has to run through all of the appropriate rules all over again. As more rules are added, the process takes longer to return a result, leaving members waiting for recommendations to load, growing frustrated, and potentially moving past the opportunity to learn the required material.
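The classical, rule-based approach described above can be sketched in a few lines of Python. The course codes and the rule table here are hypothetical illustrations, not actual NACVA data:

```python
# Hand-written rules: "if a member bought the key course, show the mapped courses."
# Every new relationship requires a programmer to add another entry here.
PURCHASE_RULES = {
    "course_X": ["course_Y"],
    "ethics_101": ["ethics_201"],
}

def recommend_classical(purchase_history):
    """Return recommendations by replaying every hand-written rule."""
    recommendations = []
    for purchased in purchase_history:
        for suggestion in PURCHASE_RULES.get(purchased, []):
            if suggestion not in purchase_history and suggestion not in recommendations:
                recommendations.append(suggestion)
    return recommendations

print(recommend_classical(["course_X", "ethics_101"]))  # ['course_Y', 'ethics_201']
```

The loop over every rule on every request is exactly the scaling problem noted above: as the rule table grows, so does the time each recommendation takes.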


Machine Learning




What About ML?

ML, by contrast, lets us use a variety of data collected in the past to automatically derive the patterns hidden in that data. Those patterns are used to create a model, which is then applied to new data to provide a more well-informed and adaptive prediction. In this example, one could use a member’s course purchase history in combination with other enriched features. ML can then identify patterns among past members, sales, compliance, and other factors, and apply those patterns to new members to provide better recommendations.
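To make the contrast concrete, here is a minimal sketch of deriving patterns from past data instead of hand-writing rules. The purchase histories are hypothetical, and the “training” step is deliberately simple (counting which courses were purchased together); a production system would use a richer model, but the shape is the same: patterns come from the data, not from a programmer:

```python
from collections import Counter
from itertools import combinations

# Hypothetical purchase histories; in practice this would come from sales records.
past_members = [
    {"course_X", "course_Y", "ethics_101"},
    {"course_X", "course_Y"},
    {"course_X", "ethics_101"},
]

# "Training": count how often each pair of courses was purchased together.
pair_counts = Counter()
for history in past_members:
    for a, b in combinations(sorted(history), 2):
        pair_counts[(a, b)] += 1

def recommend_learned(history, min_support=2):
    """Suggest courses that frequently co-occurred with ones already purchased."""
    suggestions = set()
    for (a, b), count in pair_counts.items():
        if count >= min_support:
            if a in history and b not in history:
                suggestions.add(b)
            elif b in history and a not in history:
                suggestions.add(a)
    return suggestions

print(recommend_learned({"course_X"}))
```

If the underlying purchase data changes, retraining (recounting) updates the recommendations automatically, with no rules to rewrite.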


So What is a Model, Exactly?

A model in ML is the trained algorithm you use to identify patterns in your data. The key there is that it is trained through the ML process. It is not created manually by programmers setting up rules like in classical programming. Let’s look at a very simple example algorithm:


F(x) = a₀x₀ + a₁x₁ + … + aₙxₙ


This has been simplified from what you might use in a real environment, but it shows the two key components of an algorithm: features and weights.


Features are the parts of your datasets that are identified as important in determining accurate outcomes. For example, with our course recommendation algorithm, the first feature might be whether or not the item is an ethics course. These features have to be expressed mathematically, so in this case our model will convert a Yes into a one and a No into a zero.


But key to ML is this idea of context. This is where weights come in. Weights represent how important an associated feature is to determining the accuracy of the outcome. Therefore, something that has a higher likelihood of accuracy has a higher weight and vice versa. In this case, our model has been trained and has determined that because this customer has taken eight out of 10 available hours of ethics courses in the past, that translates to a weight of 0.8.


Let’s look at the next set. The second feature has to do with whether or not a course is from a particular educator. The course is from that same instructor, so that converts again into a one. The second weight says that since two out of the eight courses this member took in the past were from that educator, that results in a weight of 0.25.


F(x) = 0.8(1) + 0.25(1)

F(x) = 1.05

If F(x) > 1, recommend the course


It is important to note that this is a very simplified version of what really goes on, but with a simple model like this one, this is the calculation one would end up with. Say the standard for recommending a course is a final result greater than one. The model performs the calculation and finds the final value to be 1.05; therefore, the course would be an acceptable recommendation.


Conclusion

Promoting compliant and ethical behavior through educational outreach and awareness can yield multiple benefits for NACVA and its members. First, it provides a more proactive approach to identifying areas of opportunity for education and promoting awareness. Strengthening the brand can help bolster NACVA’s external reputation. Introducing connected technology, data, and processes can enhance and enrich governance, oversight, and decision making. ML is a multifaceted approach to solving today’s real-world problems across various industries, from unsupervised learning for anomaly detection to combating fraud.


If you have questions, please do not hesitate to contact us.


Reneé Fair, CVA, is a Certified Valuation Analyst (CVA) and managing partner and co-founder of Trustee Capital LLC, an independent valuation and analytics automation consulting firm based in Tampa, FL. Trustee Capital LLC specializes in business valuation, data science, data visualization, analytics, and robotic process automation. Ms. Fair was elected by NACVA membership to the National Association of Certified Valuators and Analysts (NACVA) Ethics and Oversight Board (EOB) in 2021.


Ms. Fair can be contacted at (813) 397-3648 or by e-mail to rfair@trusteecap.com.
