
Technology is Opportunity

A practical guide to CRISP-DM.

At some point working in data science, it is common to come across CRISP-DM. I like to irreverently call it the Crispy Process. It is an old concept for data science that’s been around since the mid-1990s. This post is meant as a practical guide to CRISP-DM.

CRISP-DM stands for CRoss Industry Standard Process for Data Mining. The process model spans six phases meant to fully describe the data science life cycle.

  • Business understanding
  • Data understanding
  • Data preparation
  • Modeling
  • Evaluation
  • Deployment

CRISP-DM Process Diagram

This cycle comes off as an abstract process with little meaning if it cannot be grounded into some sort of practical example. That’s what this post is meant to be. The following is going to take a casual scenario appropriate for many Midwestern gardeners about this time of year.

What am I planting in my backyard this year?

It is a vague, urgent question reminiscent of many data science client problems. To answer it, we're going to use the Crispy Process in this practical guide to CRISP-DM.

First, we need a “Business understanding”. What does the business (or in this case, gardener) need to know?

Next, we have to form “Data understanding”. So, what data is going to cover our first needs? What format do we need that data in?

With our data found, we need to do “Data preparation”. The data has to be organized and formatted so it can actually be used for whatever analysis we’re going to use for it.

The fourth phase is the sexy bit of data science, "Modeling". There's information in those hills! Er…data. But we need to apply algorithms to extract that information. I personally find the conventional title of this fourth phase somewhat confusing in contemporary data science. In colloquial conversations among fellow data professionals, I wouldn't use "Modeling" but rather "Algorithm Design" for this part.

"Evaluation" time. We have information. As one of my former team leads would ask at this stage, "You have the what. So what?"

Now that you have something, it needs to be shared with the “Deployment” stage. Don’t ignore the importance of this stage!

I can pick out who is a new professional and who is a veteran by how they feel about this part of a project. Newbies have put so much energy into Modeling and Evaluation that Deployment is like an afterthought. Stop! It's a trap!

For the rest of us, "Deployment" might as well be called "What we're actually being paid for". I cannot stress enough that all the hours, sweat, and frustration of the previous phases will be for nothing if you do not get this part right.

Business understanding: What does the business need to know?

We have a basic question from our gardener.

To get a full understanding of what they need in order to take action and plant their backyard this year, we need to break this question down into more specific, concrete questions.

Whenever I can, I want to learn as much as possible about the context of the client. This does not necessarily mean I want them to answer "What data do you want?" It is also important to steer a client away from preconceived notions about the end result of the project. Hypotheses can dangerously turn into premature predictions and disappointment when reality does not match those expectations.

Rather, it is important to appreciate what kind of piece you’re creating for the greater puzzle your client is putting together.

About the client

I am a Midwestern gardener myself so I’m going to be my own customer.

A gardening hobbyist who wants to understand the plants best suited for a given environment. The ideal environment to cover would be the American Midwest, the client's location. Their favorite color is red and they like the idea of bits of showy red in their backyard. Anything low maintenance is a plus.

For this client, they would prefer to keep the data simple and shareable with other hobbyists. Whatever data we get should be verified for what it does and does not have, as the client is skeptical of any dataset's true objectivity.

Data understanding: What data is going to cover our needs?

One trick I use to try to objectively break down complex scenarios in real life is to sift the business problem for distinct entities and use those to push my data requirements.

We can infer from the scenario that the minimal amount of entities are the gardener and the plant. As the gardener is presumably a hobbyist and probably doesn’t have something like a greenhouse at their disposal, we can also presume that their backyard is another major entity, which is made of dirt and is a location. It is outside, so other entities at play include the weather. That is also dependent on location. Additionally, the client cares about a plant’s hardiness and color.

So we know we have at least the following entities to address:

  • The Gardener
  • The Plant
  • Location (and with it, Dirt and Weather)
  • Plant Hardiness
  • Plant Color

The Gardener is our client and is seeking to gain knowledge about what is outside their person. So we can discard them as an essential data point. 

The plant can be anything. It is also the core of our client question. We should find data that is plant-centric for sure.

Location is essential because that can dictate the other entities I’m considering like Dirt and Weather. Our data should help us figure out these kinds of environmental factors.

Additionally, we need to find data that could help us find the color and hardiness of a plant.

There are many datasets for plants, especially for the US and UK. Our client is American so American-focused datasets will narrow our search. 

USDA Plant Finder page

The USDA's Plant Finder has several issues with our needs, though. One of the most glaring is location. While it does have state information, single states in the United States can be larger than entire countries in other parts of the world. They can cover many geography types, so concerns like weather are not accounted for in this dataset.

Perhaps ironically, the USDA does have a measuring system for handling geographically-based plant growing environments, the USDA Plant Hardiness Zones.

USDA Plant Hardiness Zones are so prevalent that they are what American gardeners typically use to shop for plants. Given that our client is an American gardener, it is going to be important to grab that information. Below is an example of an American plant store describing the hardiness zone for a listed plant.

burpee seeds plant listing

American institutions dedicated to plants and agriculture are not limited to just the federal government. In the Midwest, the Missouri Botanical Garden has its own plant database, which shows great promise.

mobot plant finder page

The way the current data is set up, we could send this on to our client, but we have no way of helping them verify exactly what this dataset does and does not have. We only know what it could have (drought-resistant, flowering, etc), but not how many entries.

We’re going to have to extract this data out of MOBOT’s website and into a format we can explore in something like a Jupyter notebook.

Data preparation: How does the data need to be formatted?

Getting the data.

The clearest first step is that we need to get the data off MOBOT's website.

Using Python, this is a straightforward process using the popular library, Beautiful Soup. The following presumes you are using some version of Python 3.x.

The first function we need is a systematic way of crawling all the individual webpages with plant entries. Luckily, for every letter in the Latin alphabet, MOBOT has web pages that use the following URL pattern:

https://www.missouribotanicalgarden.org/PlantFinder/PlantFinderListResults.aspx?letter=<LETTER>

So for every letter in the Latin alphabet, we can loop through all the links in all the webpages we need.

The following is how I tackled this need. To go straight to the code, follow this link.

import requests
from bs4 import BeautifulSoup

def find_mobot_links():
    alphabet_list = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
                     "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z"]
    for letter in alphabet_list:
        # One CSV of entry links per letter.
        file_name = "link_list_" + letter + ".csv"
        g = open("mobot_entries/" + file_name, 'w')
        url = "https://www.missouribotanicalgarden.org/PlantFinder/PlantFinderListResults.aspx?letter=" + letter
        page = requests.get(url)
        soup = BeautifulSoup(page.content, 'html.parser')
        # Plant entry links all share a predictable id prefix.
        for link in soup.findAll('a', id=lambda x: x and x.startswith("MainContentPlaceHolder_SearchResultsList_TaxonName_")):
            g.write(link.get('href') + "\n")
        g.close()

Now that we have the links we know we need, let’s visit them and extract data from them. Web page scraping is a process of trial and error. Web pages are diverse and often change. The following grabbed the data I needed and wanted from MOBOT but things can always change. 

import re
import time
import requests
from bs4 import BeautifulSoup
from bs4.dammit import EncodingDetector

def scrape_and_save_mobot_links():
    alphabet_list = ["A", "B", "C", "D", "E", "F", "G", "H", "I", "J", "K", "L", "M",
                     "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X", "Y", "Z"]
    for letter in alphabet_list:
        file_name = "link_list_" + letter + ".csv"
        with open("./mobot_entries/" + file_name, 'r') as f:
            for link_path in f:
                url = "https://www.missouribotanicalgarden.org" + link_path.strip()
                html_page = requests.get(url)
                # Prefer the encoding declared in the HTML, fall back to the header.
                http_encoding = html_page.encoding if 'charset' in html_page.headers.get('content-type', '').lower() else None
                html_encoding = EncodingDetector.find_declared_encoding(html_page.content, is_html=True)
                encoding = html_encoding or http_encoding
                soup = BeautifulSoup(html_page.content, 'html.parser', from_encoding=encoding)
                # Name the output file after the page title.
                file_name = str(soup.title.string).replace("  – Plant Finder", "")
                file_name = re.sub(r'\W+', '', file_name)
                g = open("mobot_entries/scraped_results/" + file_name + ".txt", 'w')
                g.write(str(soup.title.string).replace("  – Plant Finder", "") + "\n")
                g.write(str(soup.find("div", {"class": "row"})))
                g.close()
                print("finished " + file_name)
                # Be polite: pause between page requests.
                time.sleep(5)

Side note: A small, basic courtesy is to avoid overloading websites serving the common good like MOBOT with a barrage of activity. That is why the timer is used in between every loop.

Transforming the Data

With the data out and in our hands, we still need to bring it together into one convenient file we can examine all at once using another Python library like pandas. The method is relatively straightforward and is also already on Github if you would like to jump in here.

Because our previous step got us almost everything we could possibly get from MOBOT's Plant Finder, we can pick and choose just which columns we really want to deal with in a simple, flat csv file. You may notice the code allows for the near-constant instances where a data column we want to fill doesn't have a value for a given plant. We just have to work with what we have.

Ultimately, the code pulls Attracts, Bloom Description, Bloom Time, Common Name, Culture, Family, Flower, Formal Name, Fruit, Garden Uses, Height, Invasive, Leaf, Maintenance, Native Range, Noteworthy Characteristics, Other, Problems, Spread, Suggested Use, Sun, Tolerate, Type, Water, and Zone.

That should get us somewhere!
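As a sketch of that consolidation step (the records and columns here are invented stand-ins; the real parsing lives in the linked repository), building the frame from per-plant dicts lets pandas fill any missing column with NaN for free:

```python
import pandas as pd

# Hypothetical parsed records -- in reality these come from the scraped
# MOBOT .txt files; a field the plant page lacks is simply absent here.
records = [
    {"Common Name": "Coneflower", "Bloom Description": "Meteor Red", "Zone": "4 to 9"},
    {"Common Name": "Lily", "Zone": "5 to 8"},
]

# Restrict to the columns we care about; anything missing becomes NaN.
columns = ["Common Name", "Bloom Description", "Zone", "Maintenance"]
df = pd.DataFrame(records, columns=columns)
df.to_csv("mobot_plants.csv", index=False)
```

The flat csv is then ready for the client or for a notebook.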

Modeling: How are we extracting information out of the data?

I am afraid there isn't going to be anything fancy happening here. I do not like doing anything complicated when it can be straightforward. In this case, we can be very straightforward. For the entirety of my data analysis process, I encourage you to go over to my Jupyter Notebook for more: https://github.com/prairie-cybrarian/mobot_plant_finder/blob/master/learn_da_mobot.ipynb
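For a flavor of how straightforward this filtering can be (the table below is a toy stand-in, not the real MOBOT data), pandas string matching covers the client's two asks, red blooms and low maintenance:

```python
import pandas as pd

# A toy slice of the kind of table the scrape produces.
df = pd.DataFrame({
    "Common Name": ["Coneflower", "Lily", "Hosta"],
    "Bloom Description": ["Meteor Red", "White", "Lavender"],
    "Maintenance": ["Low", "Low", "Medium"],
})

# Red-flowering AND low-maintenance plants.
mask = (
    df["Bloom Description"].str.contains("red", case=False, na=False)
    & df["Maintenance"].str.lower().eq("low")
)
print(df.loc[mask, "Common Name"].tolist())  # → ['Coneflower']
```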

The most important part is the results of our extracted information:

  • Chinese Lilac (Syringa chinensis Red Rothomagensis)
  • Common Lilac (Syringa vulgaris Charles Joly)
  • Peony (Paeonia Zhu Sha Pan CINNABAR RED)
  • Butterfly Bush (Buddleja davidii Monum PETITE PLUM)
  • Butterfly Bush (Buddleja davidii PIIBDII FIRST EDITIONS FUNKY …)
  • Blanket Flower (Gaillardia Tizzy)
  • Coneflower (Echinacea Emily Saul BIG SKY AFTER MIDNIGHT)
  • Miscellaneous Tulip (Tulipa Little Beauty)
  • Coneflower (Echinacea Meteor Red)
  • Blanket Flower (Gaillardia Frenzy)
  • Lily (Lilium Barbaresco)

Additionally, we have a simple csv we can hand over to the client. I will admit, as far as clients go, I am easy. It's almost like I can read my mind.

Evaluation: You have the what. So what?

In some cases, this step is simply done. We have answered the client’s question. We have addressed the client’s needs. 

Yet, we can still probably do a little more. In the hands of a solid sales team, this is the time for the upsell. Otherwise, we are in scope-creep territory. 

Since I have a good relationship with my client (me), I’m going to at least suggest the following next steps. 

Things you can now do with these new answers:

  • Cross-reference soil preferences of our listed flowers with the actual location of the garden using the USDA Soil Survey data ( https://websoilsurvey.sc.egov.usda.gov/App/HomePage.htm ).
  • Identify potential consumer needs of the client in order to find and suggest seed or plant sources for them to purchase the listed flowers.

Deployment: Make your findings known

Personal experience has shown me that deployment is largely an exercise in client empathy. Final delivery can look like many things. Maybe it is a giant blog post. Maybe it is a PDF or a PowerPoint. So long as you deliver in a format that works for your user, the medium does not matter. All that matters is that it works.



International Conference on Research Challenges in Information Science

RCIS 2021: Research Challenges in Information Science, pp. 55–71

Adapting the CRISP-DM Data Mining Process: A Case Study in the Financial Services Domain

  • Veronika Plotnikova, Marlon Dumas & Fredrik Milani
  • Conference paper
  • First Online: 08 May 2021


Part of the book series: Lecture Notes in Business Information Processing ((LNBIP,volume 415))

Data mining techniques have gained widespread adoption over the past decades, particularly in the financial services domain. To achieve sustained benefits from these techniques, organizations have adopted standardized processes for managing data mining projects, most notably CRISP-DM. Research has shown that these standardized processes are often not used as prescribed, but instead, they are extended and adapted to address a variety of requirements. To improve the understanding of how standardized data mining processes are extended and adapted in practice, this paper reports on a case study in a financial services organization, aimed at identifying perceived gaps in the CRISP-DM process and characterizing how CRISP-DM is adapted to address these gaps. The case study was conducted based on documentation from a portfolio of data mining projects, complemented by semi-structured interviews with project participants. The results reveal 18 perceived gaps in CRISP-DM alongside their perceived impact and mechanisms employed to address these gaps. The identified gaps are grouped into six categories. The study provides practitioners with a structured set of gaps to be considered when applying CRISP-DM or similar processes in financial services. Also, a number of the identified gaps are generic and applicable to other sectors with similar concerns (e.g. privacy), such as telecom and e-commerce.





Cite this paper: Plotnikova, V., Dumas, M., Milani, F. (2021). Adapting the CRISP-DM Data Mining Process: A Case Study in the Financial Services Domain. In: Cherfi, S., Perini, A., Nurcan, S. (eds) Research Challenges in Information Science. RCIS 2021. Lecture Notes in Business Information Processing, vol 415. Springer, Cham. https://doi.org/10.1007/978-3-030-75018-3_4


Data Science Process Alliance

What is CRISP DM?


The CRoss Industry Standard Process for Data Mining (CRISP-DM) is a process model that serves as the base for a data science process. It has six sequential phases:

  • Business understanding – What does the business need?
  • Data understanding – What data do we have / need? Is it clean?
  • Data preparation – How do we organize the data for modeling?
  • Modeling – What modeling techniques should we apply?
  • Evaluation – Which model best meets the business objectives?
  • Deployment – How do stakeholders access the results?

Published in 1999 to standardize data mining processes across industries, it has since become the most common methodology for data mining, analytics, and data science projects.

Data science teams that combine a loose implementation of CRISP-DM with overarching team-based agile project management approaches will likely see the best results.

What are the 6 CRISP-DM Phases?

I. Business Understanding

Any good project starts with a deep understanding of the customer’s needs. Data mining projects are no exception and CRISP-DM recognizes this.

The Business Understanding phase focuses on understanding the objectives and requirements of the project. Aside from the third task, the three other tasks in this phase are foundational project management activities that are universal to most projects:

  • Determine business objectives: You should first “thoroughly understand, from a business perspective, what the customer really wants to accomplish.” ( CRISP-DM Guide ) and then define business success criteria.
  • Assess situation: Determine resources availability, project requirements, assess risks and contingencies, and conduct a cost-benefit analysis.
  • Determine data mining goals: In addition to defining the business objectives, you should also define what success looks like from a technical data mining perspective.
  • Produce project plan: Select technologies and tools and define detailed plans for each project phase.

While many teams hurry through this phase, establishing a strong business understanding is like building the foundation of a house – absolutely essential.

II. Data Understanding

Next is the Data Understanding phase. Adding to the foundation of Business Understanding , it drives the focus to identify, collect, and analyze the data sets that can help you accomplish the project goals. This phase also has four tasks:

  • Collect initial data: Acquire the necessary data and (if necessary) load it into your analysis tool.
  • Describe data: Examine the data and document its surface properties like data format, number of records, or field identities.
  • Explore data: Dig deeper into the data. Query it, visualize it, and identify relationships among the data.
  • Verify data quality: How clean/dirty is the data? Document any quality issues.
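That last task is easy to make concrete. A minimal sketch of the kind of quality report worth documenting at this stage (the dataset and column names are invented for illustration):

```python
import pandas as pd

# Toy dataset with deliberately planted quality problems.
df = pd.DataFrame({
    "name": ["Ann", "Bo", "Cy", "Cy"],          # one duplicated identity
    "height_cm": [170.0, None, 182.0, 9999.0],  # a gap and an outlier
})

# Surface-level checks: missing values, duplicates, suspect ranges.
missing = int(df["height_cm"].isna().sum())
duplicates = int(df.duplicated(subset="name").sum())
out_of_range = int((df["height_cm"] > 250).sum())
print(missing, duplicates, out_of_range)  # → 1 1 1
```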

III. Data Preparation

A common rule of thumb is that 80% of the project is data preparation.

This phase, which is often referred to as “data munging”, prepares the final data set(s) for modeling. It has five tasks:

  • Select data: Determine which data sets will be used and document reasons for inclusion/exclusion.
  • Clean data: Often this is the lengthiest task. Without it, you’ll likely fall victim to garbage-in, garbage-out. A common practice during this task is to correct, impute, or remove erroneous values.
  • Construct data: Derive new attributes that will be helpful. For example, derive someone’s body mass index from height and weight fields.
  • Integrate data: Create new data sets by combining data from multiple sources.
  • Format data: Re-format data as necessary. For example, you might convert string values that store numbers to numeric values so that you can perform mathematical operations.
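Those tasks map naturally onto a few lines of pandas. A hedged sketch reusing the guide's own BMI example (the field names and values are assumptions):

```python
import pandas as pd

# Raw strings straight from a source system (values invented for illustration).
df = pd.DataFrame({
    "height_m": ["1.70", "1.82", None],
    "weight_kg": ["65", "abc", "80"],
})

# Format: convert strings to numbers; unparseable values become NaN.
df["height_m"] = pd.to_numeric(df["height_m"], errors="coerce")
df["weight_kg"] = pd.to_numeric(df["weight_kg"], errors="coerce")

# Clean: impute remaining gaps with the column median.
df = df.fillna(df.median(numeric_only=True))

# Construct: derive a new attribute, here BMI from height and weight.
df["bmi"] = df["weight_kg"] / df["height_m"] ** 2
```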

IV. Modeling

What is widely regarded as data science’s most exciting work is also often the shortest phase of the project.

Here you’ll likely build and assess various models based on several different modeling techniques. This phase has four tasks:

  • Select modeling techniques: Determine which algorithms to try (e.g. regression, neural net).
  • Generate test design: Depending on your modeling approach, you might need to split the data into training, test, and validation sets.
  • Build model: As glamorous as this might sound, this might just be executing a few lines of code like “reg = LinearRegression().fit(X, y)”.
  • Assess model: Generally, multiple models are competing against each other, and the data scientist needs to interpret the model results based on domain knowledge, the pre-defined success criteria, and the test design.
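To make the build/assess loop concrete, here is a minimal sketch in plain numpy with synthetic data, using ordinary least squares in place of the scikit-learn one-liner quoted above (everything here is illustrative):

```python
import numpy as np

# Synthetic data standing in for a real feature/target pair.
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3.0 * X[:, 0] + 1.0 + rng.normal(0, 0.1, size=100)

# Generate test design: hold out the last 20 rows.
train, test = slice(0, 80), slice(80, 100)

# Build model: ordinary least squares, the idea behind LinearRegression.
A = np.c_[X[train], np.ones(80)]
coef, intercept = np.linalg.lstsq(A, y[train], rcond=None)[0]

# Assess model: error on the held-out split, judged against the
# pre-defined success criteria.
pred = coef * X[test, 0] + intercept
mse = float(np.mean((pred - y[test]) ** 2))
```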

Although the  CRISP-DM Guide  suggests to “iterate model building and assessment until you strongly believe that you have found the best model(s)”,  in practice teams should continue iterating until they find a “good enough” model, proceed through the CRISP-DM lifecycle, then further improve the model in future iterations.

V. Evaluation

Whereas the Assess Model task of the Modeling phase focuses on technical model assessment, the Evaluation phase looks more broadly at which model best meets the business needs and what to do next. This phase has three tasks:

  • Evaluate results: Do the models meet the business success criteria? Which one(s) should we approve for the business?
  • Review process: Review the work accomplished. Was anything overlooked? Were all steps properly executed? Summarize findings and correct anything if needed.
  • Determine next steps: Based on the previous three tasks, determine whether to proceed to deployment, iterate further, or initiate new projects.

VI. Deployment

“Depending on the requirements, the deployment phase can be as simple as generating a report or as complex as implementing a repeatable data mining process across the enterprise.”

– CRISP-DM Guide

A model is not particularly useful unless the customer can access its results. The complexity of this phase varies widely. This final phase has four tasks:

  • Plan deployment: Develop and document a plan for deploying the model.
  • Plan monitoring and maintenance: Develop a thorough monitoring and maintenance plan to avoid issues during the operational phase (or post-project phase) of a model.
  • Produce final report: The project team documents a summary of the project which might include a final presentation of data mining results.
  • Review project: Conduct a project retrospective about what went well, what could have been better, and how to improve in the future.

Your organization’s work might not end there. As a project framework, CRISP-DM does not outline what to do after the project (also known as “operations”). But if the model is going to production, be sure you maintain the model in production. Constant monitoring and occasional model tuning is often required.

Is CRISP-DM Agile or Waterfall?

Some argue that it is flexible and agile, while others see CRISP-DM as rigid. What really matters is how you implement it.

Waterfall: On one hand, many view CRISP-DM as a rigid waterfall process – in part because its reporting requirements are excessive for most projects. Moreover, the guide states in the business understanding phase that "the project plan contains detailed plans for each phase" – a hallmark of traditional waterfall approaches that require detailed, upfront planning.

Indeed, if you follow CRISP-DM precisely (defining detailed plans for each phase at the project start and include every report) and choose not to iterate frequently, then you’re operating more of a waterfall process.

Agile : On the other hand, CRISP-DM indirectly advocates agile principles and practices by stating: “The sequence of the phases is not rigid. Moving back and forth between different phases is always required. The outcome of each phase determines which phase, or particular task of a phase, has to be performed next.”

Thus if you follow CRISP-DM in a more flexible way, iterate quickly, and layer in other agile processes, you’ll wind up with an agile approach.

Example: To illustrate how CRISP-DM could be implemented in either an Agile or waterfall manner, imagine a churn project with three deliverables: a voluntary churn model, a non-pay disconnect churn model, and a propensity to accept a retention-focused offer.

CRISP-DM Waterfall: Horizontal Slicing

Learn more about slicing at Vertical vs Horizontal Slicing Data Science

In a waterfall-style implementation, the team’s work would comprehensively and horizontally span across each deliverable as shown below. The team might infrequently loop back to a lower horizontal layer only if critically needed. One “big bang” deliverable is delivered at the end of the project.

CRISP-DM Agile: Vertical Slicing

Alternatively, in an agile implementation of CRISP-DM, the team would narrowly focus on quickly delivering one vertical slice up the value chain at a time as shown below. They would deliver multiple smaller vertical releases and frequently solicit feedback along the way.

Which is better?

When possible, take an agile approach and slice vertically so that:

  • Stakeholders get value sooner
  • Stakeholders can provide meaningful feedback
  • The data scientists can assess model performance earlier
  • The project team can adjust the plan based on stakeholder feedback

How popular is CRISP-DM?

Definitive research does not exist on how frequently data science teams use different management approaches. So to get an idea on approach popularity, we investigated KDnuggets polls, conducted our own poll, and researched Google search volumes. Each of these views suggests that CRISP-DM is the most commonly used approach for data science projects.

KDnuggets Polls

Bear in mind that the website caters toward data mining, and the data science field has changed a lot since 2014.

KDnuggets is a common source for data mining methodology usage. Each of the polls in 2002, 2004, and 2007 posed the question: "What main methodology are you using for data mining?", and the 2014 poll expanded the question to include "…for analytics, data mining, or data science projects." 150-200 respondents answered each poll.

data science methodology poll

CRISP-DM was the most popular methodology in each poll across the 12 years.

Our 2020 Poll

To learn more about the poll, go to this post .

For a more current look into the popularity of various approaches, we conducted our own poll on this site in August and September 2020.

Note the response options for our poll were different from the KDnuggets polls, and our site attracts a different audience.

most popular data science processes

CRISP-DM was the clear winner, garnering nearly half of the 109 votes.

Google Searches

Given the ambiguity of a searcher’s intent, some searches like “my own” could not be analyzed and others like “tdsp” and “semma” could be misleading.

For yet a third view into CRISP-DM, we turned to the Google Keyword Planner tool, which provided the average monthly search volumes in the USA for select key search terms and related terms (e.g. "crispdm" or "crisp dm data science"). Clearly irrelevant searches like "tdsp electrical charges" or "semma both aagatha" were then removed.

search volume for data science processes

CRISP-DM yet again reigned as king, and this time with a much broader margin.

Should I use CRISP-DM for Data Science?

So CRISP-DM is popular. But should you use it?

Like most answers in data science, it’s kind of complicated. But here’s a quick overview.

Strengths

From today’s data science perspective this seems like common sense. This is exactly the point. The common process is so logical that it has become embedded into all our education, training, and practice.

– William Vorhies, one of CRISP-DM’s authors (from Data Science Central)

  • Generalize-able:  Although designed for data mining, William Vorhies, one of the creators of CRISP-DM, argues that because all data science projects start with business understanding, have data that must be gathered and cleaned, and apply data science algorithms, “CRISP-DM provides strong guidance for even the most advanced of today’s data science activities” ( Vorhies, 2016 ).
  • Common Sense:  When students were asked to do a data science project without project management direction, they “tended toward a CRISP-like methodology and identified the phases and did several iterations.” Moreover, teams which were trained and explicitly told to implement CRISP-DM performed better than teams using other approaches ( Saltz, Shamshurin, & Crowston, 2017 ).
  • Adopt-able:  Like  Kanban , CRISP-DM can be implemented without much training, organizational role changes, or controversy.
  • Right Start:  The initial focus on  Business Understanding  is helpful to align technical work with business needs and to steer data scientists away from jumping into a problem without properly understanding business objectives.
  • Strong Finish:  Its final step  Deployment  likewise addresses important considerations to close out the project and transition to maintenance and operations.
  • Flexible: A loose CRISP-DM implementation can be flexible to provide many of the benefits of agile  principles and practices. By accepting that a project starts with significant unknowns, the user can cycle through steps, each time gaining a deeper understanding of the data and the problem. The empirical knowledge learned from previous cycles can then feed into the following cycles.
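One way to picture that flexibility is as a loop over the six phases with an explicit exit check, rather than a single waterfall pass. The sketch below is purely illustrative; the phase names come from CRISP-DM, but the functions and stopping criterion are hypothetical placeholders:

```python
# Illustrative skeleton of an iterative CRISP-DM cycle.
# Each phase is a placeholder that records which cycles it ran in.

PHASES = [
    "business_understanding", "data_understanding", "data_preparation",
    "modeling", "evaluation", "deployment",
]

def run_cycle(cycle_number, knowledge):
    """Run one pass through all phases, carrying forward prior learnings."""
    for phase in PHASES:
        # In a real project each phase would do actual work; here we just
        # log that the phase ran, using the knowledge accumulated so far.
        knowledge.setdefault(phase, []).append(cycle_number)
    return knowledge

def crisp_dm(max_cycles=3, good_enough=lambda k: len(k["evaluation"]) >= 2):
    """Cycle through the phases until the evaluation criterion is satisfied."""
    knowledge = {}
    for cycle in range(1, max_cycles + 1):
        run_cycle(cycle, knowledge)
        if good_enough(knowledge):  # evaluation decides whether to iterate again
            break
    return knowledge, cycle

knowledge, cycles_used = crisp_dm()
print(cycles_used)  # → 2: the loop stopped once the criterion was met
```

The point of the sketch is the `good_enough` check: accepting up front that the first pass will be incomplete, and letting each cycle's evaluation decide whether another iteration is needed.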

Weaknesses & Challenges

In a controlled experiment, students who used CRISP-DM “were the last to start coding” and “did not fully understand the coding challenges they were going to face.”

– Saltz, Shamshurin, & Crowston, 2017

  • Rigid: On the other hand, some argue that CRISP-DM suffers from the same weaknesses of  Waterfall  and encumbers rapid iteration.
  • Documentation Heavy:  Nearly every task has a documentation step. While documenting one’s work is key in a mature process, CRISP-DM’s documentation requirements might unnecessarily slow the team from actually delivering increments.
  • Not Modern:  Counter to Vorhies’ argument for the sustained relevance of CRISP-DM, others argue that CRISP-DM, as a process that pre-dates big data, “might not be suitable for Big Data projects due to its four V’s” ( Saltz & Shamshurin, 2016 ).
  • Not a Project Management Approach:  Perhaps most significantly, CRISP-DM is not a true project management methodology because it implicitly assumes that its user is a single person or small, tight-knit team and ignores the teamwork coordination necessary for larger projects ( Saltz, Shamshurin, & Connors, 2017 ).

Recommendations

For a more comprehensive view of recommendations view the data science process post .

CRISP-DM is a great starting point for those who are looking to understand the general data science process. Five tips to overcome these weaknesses are:

  • Iterate quickly: Don’t fall into a waterfall trap by exhaustively completing each phase before moving to the next. Rather, think vertically and deliver thin, end-to-end slices of value. Your first deliverable might not be too useful. That’s okay. Iterate.
  • Document enough…but not too much:  If you follow CRISP-DM precisely, you might spend more time documenting than doing anything else. Do what’s reasonable and appropriate but don’t go overboard.
  • Don’t forget modern technology: Add steps to leverage cloud architectures and modern software practices like git version control and CI/CD pipelines to your project plan when appropriate.
  • Set expectations: CRISP-DM lacks communication strategies with stakeholders. So be sure to set expectations and communicate with them frequently.
  • Add a project management layer: CRISP-DM is not a team coordination framework, so pair it with a project management approach such as Data Driven Scrum.

Dive Deeper: Explore key actions to consider for Data Science projects using CRISP-DM

What are other CRISP-DM Alternatives?

A few years prior to the publication of CRISP-DM, SAS developed Sample, Explore, Modify, Model, and Assess ( SEMMA ). Although designed to help guide users through the tools in SAS Enterprise Miner for data mining problems, SEMMA is often considered to be a general data mining methodology. SEMMA’s popularity has waned, with only 1% of respondents in our 2020 poll stating they use it.

Compared to CRISP-DM, SEMMA is even more narrowly focused on the technical steps of data mining. It skips over the initial  Business Understanding  phase from CRISP-DM and instead starts with data sampling processes. SEMMA likewise does not cover the final  Deployment  aspects. Otherwise, its phases somewhat mirror the middle four phases of CRISP-DM. Although potentially useful as a process to follow data mining steps, SEMMA should not be viewed as a comprehensive project management approach.

See the main article for  SEMMA

KDD and KDDS

Dating back to 1989, Knowledge Discovery in Databases (KDD)  is the general process of discovering knowledge in data through  data mining,  or the extraction of patterns and information from large datasets using machine learning, statistics, and database systems. There are different representations of KDD, with perhaps the most common having five phases: Select, Pre-Processing, Transformation, Data Mining, and Interpretation/Evaluation. Like SEMMA, KDD is similar to CRISP-DM but more narrowly focused, excluding the initial Business Understanding and final Deployment phases.
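The five KDD phases compose naturally as a function pipeline. The following is a toy sketch; the records, fields, and derived feature are invented for illustration:

```python
# Toy end-to-end KDD pass: Select -> Pre-Process -> Transform -> Mine -> Interpret.
records = [
    {"age": 34, "spend": 120.0}, {"age": 51, "spend": 80.0},
    {"age": None, "spend": 200.0}, {"age": 29, "spend": 95.0},
]

def select(data):                      # Select: keep only the fields of interest
    return [{"age": r["age"], "spend": r["spend"]} for r in data]

def preprocess(data):                  # Pre-Processing: drop records with missing values
    return [r for r in data if all(v is not None for v in r.values())]

def transform(data):                   # Transformation: derive a spend-per-year feature
    return [dict(r, spend_per_year=r["spend"] / r["age"]) for r in data]

def mine(data):                        # Data Mining: a trivial "pattern" - the mean
    return sum(r["spend_per_year"] for r in data) / len(data)

def interpret(pattern):                # Interpretation/Evaluation: turn it into a finding
    return f"average spend per year of age: {pattern:.2f}"

print(interpret(mine(transform(preprocess(select(records))))))
```

Notice what the pipeline does not contain: there is no step for asking why spend-per-year matters to the business, and no step for deploying the finding, which is precisely the gap CRISP-DM's Business Understanding and Deployment phases fill.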

In 2016, Nancy Grady of SAIC published Knowledge Discovery in Data Science (KDDS) , describing it “as an end-to-end process model from mission needs planning to the delivery of value”. KDDS specifically expands upon KDD and CRISP-DM to address big data problems, and it provides some additional integration with management processes. KDDS defines four distinct phases ( assess, architect, build,  and  improve ) and five process stages ( plan, collect, curate, analyze,  and  act ).

KDD tends to be an older term that is less frequently used. KDDS never had significant adoption.

See the main article for  KDD and Data Mining Process .

Where can I learn more?

  • Blog Post:  What is a Data Science Life Cycle?
  • Blog Post:  What is a Data Science Workflow?
  • Blog Post:  What is the Data Science Process?
  • Blog Post:  Steps to Define an Effective Data Science Process
  • Blog Post:  CRISP-DM for Data Science  – 5 Actions to Consider
  • Blog Post:  CRISP-DM is still the most Popular Framework
  • Blog Post:  Data Science vs Software Engineering
  • (external):  Official CRISP-DM Guide


Last Updated on March 26, 2024


Front Big Data

Interpretability of Machine Learning Solutions in Public Healthcare: The CRISP-ML Approach

Inna Kolyshkina

1 Analytikk Consulting, Sydney, NSW, Australia

Simeon Simoff

2 School of Computer, Data and Mathematical Sciences, Western Sydney University, Sydney, NSW, Australia

3 MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW, Australia

Associated Data

The original contributions presented in the study are included in the article/supplementary material, further inquiries can be directed to the corresponding author/s.

Public healthcare has a history of cautious adoption of artificial intelligence (AI) systems. The rapid growth of data collection and linking capabilities combined with the increasing diversity of the data-driven AI techniques, including machine learning (ML), has brought both ubiquitous opportunities for data analytics projects and increased demands for the regulation and accountability of the outcomes of these projects. As a result, the area of interpretability and explainability of ML is gaining significant research momentum. While there has been some progress in the development of ML methods, the methodological side has shown limited progress. This limits the practicality of using ML in the health domain: the issues with explaining the outcomes of ML algorithms to medical practitioners and policy makers in public health have been a recognized obstacle to the broader adoption of data science approaches in this domain. This study builds on the earlier work which introduced CRISP-ML, a methodology that determines the interpretability level required by stakeholders for a successful real-world solution and then helps in achieving it. CRISP-ML was built on the strengths of CRISP-DM, addressing the gaps in handling interpretability. Its application in the Public Healthcare sector follows its successful deployment in a number of recent real-world projects across several industries and fields, including credit risk, insurance, utilities, and sport. This study elaborates on the CRISP-ML methodology for the determination, measurement, and achievement of the necessary level of interpretability of ML solutions in the Public Healthcare sector. It demonstrates how CRISP-ML addressed the problems with data diversity, the unstructured nature of data, and relatively low linkage between diverse data sets in the healthcare domain.
The characteristics of the case study, used in the study, are typical for healthcare data, and CRISP-ML managed to deliver on these issues, ensuring the required level of interpretability of the ML solutions discussed in the project. The approach used ensured that interpretability requirements were met, taking into account public healthcare specifics, regulatory requirements, project stakeholders, project objectives, and data characteristics. The study concludes with the three main directions for the development of the presented cross-industry standard process.

1. Introduction and Background to the Problem

Contemporary data collection and linking capabilities, combined with the growing diversity of the data-driven artificial intelligence (AI) techniques, including machine learning (ML) techniques, and the broader deployment of these techniques in data science and analytics, have had a profound impact on decision-making across many areas of human endeavors. In this context, public healthcare sets priority requirements toward the robustness, security (Qayyum et al., 2021 ), and interpretability (Stiglic et al., 2020 ) of ML solutions. We use the term solution to denote the algorithmic decision-making scenarios involving ML and AI algorithms (Davenport and Kalakota, 2019 ). While the early AI solutions for healthcare, like expert systems, possessed limited explanatory mechanisms (Darlington, 2011 ), these mechanisms proved to have an important role in clinical decision-making and, hence, made healthcare practitioners, clinicians, health economists, patients, and other stakeholders aware about the need to have such capabilities.

Healthcare domain imposes a broad spectrum of unique challenges to contemporary ML solutions, placing much higher demands with respect to interpretability, comprehensibility, explainability, fidelity, and performance of ML solutions (Ahmad et al., 2018 ). Among these properties of ML solutions, interpretability is particularly important for human-centric areas like healthcare, where it is crucial for the end users to not only have access to an accurate model but also to trust the validity and accuracy of the model, as well as understand how the model works, what recommendation has been made by the model, and why. These aspects have been emphasized by a number of recent studies, most notably in Caruana et al. ( 2015 ) and Holzinger et al. ( 2017 ), and summarized in the study by Ahmad et al. ( 2018 ).

Healthcare, similar to government and business digital services, manufacturing with its industrial internet of things, and the creative industries, experienced the much celebrated manifestations of “big data,” “small data,” “rich data,” and the increased impact of ML solutions operating with these data. Consequently, the interpretability of such solutions and the explainability of the impact of the judgements they assist to make or have made and, where needed, the rationale of recommended actions and behavior are becoming essential requirements of contemporary analytics, especially in society-critical domains of health, medical analysis, automation, defense, security, finance, and planning. This shift has been further accentuated by the growing worldwide commitment of governments, industries, and individual organizations to address their endeavors toward the United Nations Sustainable Development Goals 1 and by the data-dependent scientific and technological challenges faced by the rapid response to the COVID-19 pandemic. The latter challenges highlight and reinforce the central role of healthcare, backed by science, technology, lateral thinking, and innovative solutions, in societal and economic recovery.

Some state-of-the-art overviews, such as Doshi-Velez and Kim ( 2017 ) and Gilpin et al. ( 2019 ) related to interpretability, as well as more method-focused papers, like Lipton ( 2018 ) and Molnar et al. ( 2019 ), tend to use interpretability and explainability interchangeably. They also report that the interpretability of ML solutions and the underlying models is not well-defined. The study related to interpretability is scattered throughout a number of disciplines, such as AI, ML, human-computer interaction (HCI), visualization, cognition, and social sciences (Miller, 2019 ), to name a few of the areas. In addition, the current research seems to focus on particular categories or techniques instead of addressing the overall concept of interpretability.

Recent systematic review studies, Gilpin et al. ( 2018 ) and Mittelstadt et al. ( 2019 ), have clarified some differences and relationships between interpretability and explainability in the context of ML and AI. In these domains, interpretability refers to the degree of human interpretability of a given model, including “black box” models (Mittelstadt et al., 2019 ). Machine interpretability of the outcomes of ML algorithms is treated separately. Explainability refers primarily to the ways to communicate an ML solution to others (Hansen and Rieger, 2019 ), i.e., the “ways of exchanging information about a phenomenon, in this case the functionality of a model or the rationale and criteria for a decision, to different stakeholders.” Both properties of ML solutions are central to the broader adoption of such solutions in diverse high-stake healthcare scenarios, e.g., predicting the risk of complications to the health condition of a patient or the impact of treatment change.

While some authors (for instance, Hansen and Rieger, 2019 ; Mittelstadt et al., 2019 ; Samek and Müller, 2019 ) consider interpretability as an important component of explainability of ML solutions in AI, we view interpretability and explainability as complementary to each other, with interpretability being fundamental in ensuring trust in the results, transparency of the approach, confidence in deploying the results, and, where needed, quality of the maintenance of ML solutions. Further, in this study, we used the term interpretability in a broader sense, which subsumes communication and information exchange aspects of explainability.

We considered two connected aspects of the development of the overall concept of interpretability in ML solutions:

  • methods , which include the range of interpretable ML algorithms and interpretability solutions for AI/ML algorithms;
  • methodologies in data science, which consider explicitly the achievement of the necessary (for the project) interpretability of the ML solutions.

There is a wide collection of interpretable ML methods and methods for the interpretation of ML models. Murdoch et al. ( 2019 ) provide a compact and systematic approach toward their categorization and evaluation. Methods are categorized into model-based and post-hoc interpretation methods. They are evaluated using predictive accuracy, descriptive accuracy, and relevancy, the PDR framework (Murdoch et al., 2019 ), where relevancy is evaluated against human audience. The framework also provides common terminology for practitioners. Guidotti et al. ( 2018 ) and Carvalho et al. ( 2019 ) provide extensive systematic overviews with elaborate frameworks of the state-of-the-art of interpretability methods. Mi et al. ( 2020 ) provide broader taxonomy and comparative experiments, which can help practitioners in selecting suitable models with complementary features for addressing interpretability problems in ML solutions.
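As a concrete (if much simplified) sketch of one post-hoc technique in this space, permutation feature importance scores a feature by how much a model's error grows when that feature's values are scrambled. Everything below (the "fitted" model, the data, and the deterministic shuffle) is invented for illustration:

```python
# Minimal post-hoc interpretation sketch: permutation feature importance.
# The "model" and data are toy stand-ins, not from any real project.

def model(x):
    """Toy fitted model: depends on feature 0 only, ignores feature 1."""
    return 2.0 * x[0] + 0.0 * x[1]

X = [[1.0, 5.0], [2.0, 3.0], [3.0, 8.0], [4.0, 1.0]]
y = [2.0, 4.0, 6.0, 8.0]  # generated exactly by the model above

def mse(X, y):
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Error increase when `feature` is shuffled (here: reversed, for determinism)."""
    baseline = mse(X, y)
    shuffled_col = [row[feature] for row in X][::-1]
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, shuffled_col)]
    return mse(X_perm, y) - baseline

print(permutation_importance(X, y, 0))  # → 20.0: the model relies on feature 0
print(permutation_importance(X, y, 1))  # → 0.0: feature 1 is ignored
```

This is a model-agnostic, post-hoc method in the sense of the PDR categorization: it treats `model` as a black box and probes it only through predictions, which is also why such perturbation-based methods can be fragile when features interact or are correlated.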

Model interpretability and explainability are crucial for clinical and healthcare practice, especially, since not only non-linear models but also inherently more interpretable ones, like decision trees, if large and complex, become difficult to comprehend (Ahmad et al., 2018 ).

On the other hand, working with data in the healthcare domain is complex at every step, starting from establishing and finding the relevant, typically numerous, diverse, and heterogeneous data sources required to address the research objective; integrating and mapping these data sources; identifying and resolving data quality issues; pre-processing and feature engineering without losing information or distorting it; and finally using the resulting high-dimensional, complex, sometimes unstructured, data to build a high-performing interpretable model. This complexity further supports the argument for the development of ML methodologies which explicitly embed interpretability through the data science project life cycle and ensure the achievement of the level of interpretability of ML solutions that had been agreed for the project.

Interpretability of an ML solution can serve a variety of stakeholders, involved in data science projects and related to the implementation of their outcomes in algorithmic decision making (Berendt and Preibusch, 2017 ). For instance, the human-centric visual analytics methodology “Extract-Explain-Generate” for interrogating biomedical data (Kennedy et al., 2008 ) explicitly relates different stakeholders (molecular biologist, clinician, analysts, and managers) with specific areas of knowledge extraction and understanding associated with the management of patients. This study is focused on addressing the methodological challenges and opportunities of broad embedding of interpretability (including the selection of methods of interpretability that are appropriate for a project, given its objectives and constraints).

2. Challenges and Opportunities in Creating Methodologies Which Consistently Embed Interpretability

In order to progress with the adoption of ML in healthcare, a consistent and comprehensive methodology is needed: first, to minimize the risk of project failures, and second, to establish and ensure the needed level of interpretability of the ML solution while addressing the above-discussed diverse requirements to ML solutions. The rationale supporting these needs is built on a broader set of arguments about:

  • – the high proportion of data science project failures, including those in healthcare;
  • – the need to support an agreed level of interpretability and explainability of ML solutions;
  • – the need for consistent measurement and evaluation of interpretability of ML solutions; and
  • – the emerging need for standard methodology, which explicitly embeds mechanisms to manage the achievement of the level of interpretability of ML solutions required by stakeholders through the project.

Further, in this section, we use these arguments as dimensions around which we elaborate the challenges and opportunities for the design of cross-industry data science methodology, which is capable of handling interpretability of ML solutions under the complexity of the healthcare domain.

2.1. High Proportion of Data Science Project Failures

Recent reports, which include healthcare-related organizations, estimate that up to 85% of data science/ML/AI projects do not achieve their stated goals. The latest NewVantage Partners Big Data and AI Executive Survey, based on the responses from C-Executives from 85 blue-chip companies, of which 22% are from Healthcare and Life Sciences, noted that only 39% of companies are managing data as an asset (NewVantage Partners LLC, 2021 ). Fujimaki ( 2020 ) emphasized that “the economic downturn caused by the COVID-19 pandemic has placed increased pressure on data science and BI teams to deliver more with less. In this type of environment, AI/ML project failure is simply not acceptable.” On the other hand, the NewVantage Partners survey (NewVantage Partners LLC, 2021 ) emphasized that, over the 10 years of conducting these surveys, organizations continue to struggle with their transformation into data-driven organizations, with only 29% achieving transformational business outcomes. Only 24% have created a data-driven organization, a decline from 37.8%, and only 24% have forged a data culture (NewVantage Partners LLC, 2021 ), a result which, to a certain extent, is counterintuitive to the overall expectation of the impact of AI technologies on decision-making and the projected benefits from the adoption of such technologies.

A number of sources (e.g., vander Meulen and Thomas, 2018 ; Kaggle, 2020 ; NewVantage Partners LLC, 2021 ) established that a key reason for these failures is linked to the lack of proper process and methodology in areas, such as requirement gathering, realistic project timeline establishment, task coordination, communication, and designing a suitable project management framework (see also Goodwin, 2011 ; Stieglitz, 2012 ; Espinosa and Armour, 2016 ). Earlier works have suggested (see, e.g., Saltz, 2015 ) that improved methodologies are needed as the existing ones do not cover many important aspects and tasks, including those related to interpretability (Mariscal et al., 2010 ). Further, studies have shown that the biased focus on the tools and systems has limited the ability to gain value from the effort of organizational analytics effort (Ransbotham et al., 2015 ) and that data science projects need to increase their focus on process and task coordination (Grady et al., 2014 ; Gao et al., 2015 ; Espinosa and Armour, 2016 ). A recent Gartner Consulting report also emphasizes the role of processes and methodology (Chandler and Oestreich, 2015 ) and practitioners agree with this view (for examples and analyses from diverse practical perspectives see Goodson, 2016 ; Arcidiacono, 2017 ; Roberts, 2017 ; Violino, 2017 ; Jain, 2019 ).

2.2. Support for the Required Level of Interpretability and Explainability of ML Solutions

In parallel with the above-discussed tendencies, there is pressure on the creation of frameworks/methodologies, which can ensure the necessary interpretability for sufficient explainability of the output of the ML solutions. While it has been suggested, in recent years, that it is only a matter of time before ML will be universally used in healthcare, building ML solutions in the health domain proves to be challenging (Ahmad et al., 2018 ). On the one hand, the demands for explainability, model fidelity, and performance in general in healthcare are much higher than in most other domains (Ahmad et al., 2018 ). In order to build the trust in ML solutions and incorporate them in routine clinical and healthcare practice, medical professionals need to clearly understand how and why an ML solution-driven decision has been made (Holzinger et al., 2017 ; Vellido, 2020 ).

This is further affected by the fact that the ML algorithms that achieve a high level of predictive performance, e.g., boosted trees (Chen and Guestrin, 2016 ) or deep neural networks (Goodfellow et al., 2016 ), are quite complex and usually difficult to interpret. In fact, some researchers argue that performance and interpretability of an algorithm are in reverse dependence (Ahmad et al., 2018 ; Molnar et al., 2019 ). Additionally, while there are a number of techniques aiming to explain the output of the models that are not directly interpretable, as many authors note (e.g., Holzinger et al., 2017 ; Gilpin et al., 2019 ; Rudin, 2019 ; Gosiewska et al., 2020 ), current explanatory approaches, while promising, do not seem to be sufficiently mature. Molnar et al. ( 2019 ) found that the reliability of some of these methods deteriorates if the number of features is large or if the level of feature interactions is high, which is often the case in health data. Further, Gosiewska and Biecek ( 2020 ) showed that current popular methods for explaining the output of ML models, like SHAP (Lundberg and Lee, 2017 ) and LIME (Ribeiro et al., 2016 ), produce inconsistent results, while Alvarez-Melis and Jaakkola ( 2018 ) found that the currently popular interpretability frameworks, particularly model-agnostic perturbation-based methods, are often not robust to small changes of the input, which clearly is not acceptable in the health domain.

There is a firm recognition of the impact of ML solutions in economics, including health economics, especially in addressing “predictive policy” problems (Athey, 2019 ). Many authors (e.g., Holzinger et al., 2017 ; Dawson et al., 2019 ; Rudin, 2019 ) note that in the high-stake areas (e.g., medical field, healthcare) solutions, in which the inner workings are not transparent (Weller, 2019 ), can be unfair, unreliable, inaccurate, and even harmful. Such views are reflected in the legislation on data-driven algorithmic decision-making, which affects citizens across the world. The European Union's General Data Protection Regulation (GDPR) (EU, 2016 ), which entered into force in May 2018, is an example of such early legislation. In the context of the emerging algorithmic economy, there are also warnings to policymakers to be aware of the potential impact of legislations like GDPR on the development of new AI and ML solutions (Wallace and Castro, 2018 ).

These developments increased the pressure on creation of frameworks and methodologies, which can ensure sufficient interpretability of ML solutions. In healthcare, such pressure is amplified by the nature of the interactive processes, wherein neither humans nor the algorithms operate with unbiased data (Sun et al., 2020 ).

Major technology developers, including Google, IBM, and Microsoft, recommend responsible interpretability practices (see, e.g., Google, 2019 ), including the development of common design principles for human-interpretable machine learning solutions (Lage et al., 2019 ).

2.3. Consistent Measurement and Evaluation of Interpretability of ML Solutions

While there are a number of suggested approaches to measuring interpretability (Molnar et al., 2019 ), a consensus on the ways of measuring or evaluating the level of interpretability has not been reached. For example, Gilpin et al. ( 2019 ) found that the best type of explanation metrics is not clear. Murdoch et al. ( 2019 ) mentioned that, currently, there is confusion about the interpretability notion and a lack of clarity about how the proposed interpretation approaches can be evaluated and compared against each other and how to choose a suitable interpretation method for a given issue and audience. The PDR framework (Murdoch et al., 2019 ), mentioned earlier, is a step in the direction of developing consistent evaluations. Murdoch et al. ( 2019 ) further note that there is limited guidance on how interpretability can actually be used in data science life cycles.

2.4. The Emerging Need for Standard Methodology for Handling Interpretability

Having a good methodology is important for the success of a data science project. To our knowledge, there is no formal standard for methodology in the data science projects (see Saltz and Shamshurin, 2016 ). Through the years, the CRISP-DM methodology (Shearer, 2000 ) created in the late 1990s has become a de-facto standard, as evidenced from a range of works (see, e.g., Huang et al., 2014 ; Niño et al., 2015 ; Fahmy et al., 2017 ; Pradeep and Kallimani, 2017 ; Abasova et al., 2018 ; Ahmed et al., 2018 ). An important factor of its success is the fact that it is industry, tool, and application agnostic (Mariscal et al., 2010 ). However, the research community has emphasized that, since its creation, CRISP-DM had not been updated to reflect the evolution of the data science process needs (Mariscal et al., 2010 ; Ahmed et al., 2018 ). While various extensions and refined versions of the methodology, including IBM's Analytics Solutions Unified Method for Data Mining (ASUM-DM) and Microsoft's Team Data Science Process (TDSP), were proposed to compensate the weaknesses of CRISP-DM, at this stage, none of them has become the standard. In the more recent years, variations of CRISP-DM tailored for the healthcare (Catley et al., 2009 ) and medical domain, such as CRISP-MED-DM (Niaksu, 2015 ), have been suggested. The majority of organisations that apply a data analysis methodology prefers extensions of CRISP-DM (Schäfer et al., 2018 ). Such extensions are fragmented and either propose additional elements into the data analysis process, or focus on organisational aspects without the necessary integration of domain-related factors (Plotnikova, 2018 ). These might be the reasons for the observed decline of its usage as reported in studies by Piatetsky-Shapiro ( 2014 ), Bhardwaj et al. ( 2015 ), and Saltz and Shamshurin ( 2016 ). 
Finally, while methodologies from related fields, like the agile approach used in software development, are being considered for use in data science projects, it is not yet clear whether they are fully suitable for the purpose, as indicated by Larson and Chang ( 2016 ); therefore, we did not include them in the current scope.

This overall lack of consensus has provided an opportunity to reflect on the philosophy of the CRISP-DM methodology and create a comprehensive data science methodology, through which interpretability is embedded consistently into an ML solution. Such methodology faces a list of requirements:

  • – It has to take into account the different perspectives and aspects of interpretability, including model and process explainability and interpretability;
  • – It has to consider the desiderata of explainable AI (fidelity, understandability, sufficiency, low construction overhead, and efficiency) as summarized in Hansen and Rieger ( 2019 );
  • – It needs to support consistent interaction of local and global interpretability of ML solutions with other established key factors in data science projects, including predictive accuracy, bias, noise, sensitivity, faithfulness, and domain specifics.

In addition, healthcare researchers have indicated that the choice of interpretable models depends on the use case (Ahmad et al., 2018 ).

Some of these requirements have been addressed in the recently proposed CRISP-ML methodology (Kolyshkina and Simoff, 2019), which aims to standardize the expectations for interpretability. In section 3, we briefly discuss the major concepts differentiating the CRISP-ML methodology. The CRISP-ML approach includes the concepts of the necessary level of interpretability (NLI) and the interpretability matrix (IM), described in detail by Kolyshkina and Simoff (2019), and therefore aligns well with the view of health researchers that the choice of interpretable models depends upon the application and use case for which explanations are required (Ahmad et al., 2018). In section 4, we present a use case from the public health field that illustrates the typical challenges met and the ways CRISP-ML helped to address and resolve them.

3. CRISP-ML Methodology—Toward Interpretability-Centric Creation of ML Solutions

The CRISP-ML methodology (Kolyshkina and Simoff, 2019) of building interpretability of an ML solution is based on a revision and update of CRISP-DM to address the opportunities discussed in section 2. It follows the CRISP-DM approach in being industry-, tool-, and application-neutral. CRISP-ML accommodates the elements necessary to work with diverse ML techniques and to create the right level of interpretability throughout the whole process of creating an ML solution. Its seven stages are described in Figure 1, which is an updated version of the CRISP-ML methodology diagram in the study by Kolyshkina and Simoff (2019).

Figure 1. Conceptual framework of CRISP-ML methodology.

Central to CRISP-ML is the concept of the necessary level of interpretability of an ML solution. From this viewpoint, CRISP-ML can be characterized as a methodology for establishing and building the necessary level of interpretability of a business ML solution. In line with Google's guidelines on responsible AI practices in the interpretability area (Google, 2019) and expanding on the approach proposed by Gleicher (2016), we have specified the concept of the minimal necessary level of interpretability of a business ML solution as the combination of the degree of accuracy of the underlying algorithm and the extent of understanding of the inputs, inner workings, outputs, user interface, and deployment aspects of the solution that is required to achieve the project goals. If this level is not achieved, the solution will be inadequate for the purpose. This level needs to be established and documented at the initiation stage of the project as part of requirement collection (see Stage 1 in Figure 1).

We then describe an ML solution as sufficiently interpretable or not based on whether it achieved the required level of interpretability. Obviously, this level will differ from one project to another depending on the business goals. If individuals are directly and strongly affected by the solution-driven decision, e.g., in medical diagnostics or legal settings, then both the ability to understand and trust the internal logic of the model and the ability of the solution to explain individual predictions are of highest priority. In other cases, when an ML solution is used to inform business decisions about policy, strategy, or interventions aimed at improving the business outcome of interest, it is the ability to understand and trust the internal logic of the model that is of most value, while individual predictions are not the focus of the stakeholders. For example, in one of our projects, an Australian state organization wished to establish what factors influenced the proportion of children with developmental issues and what interventions could be undertaken in specific areas of the state to reduce that proportion. The historical, socioeconomic, and geographic data provided for the project were aggregated at a geographic level of high granularity.

In other cases, e.g., an online purchase recommender solution, the overall outcome, such as an increase in sales volume, may be of higher importance than the interpretability of the model. Similar interpretability requirements arose in a project where an organization owned assets that were located in remote areas and were often damaged by bird or animal nests. The organization wished to lower its maintenance and planning costs by identifying as early as possible the assets where such nests were present, instead of carrying out an expensive examination of each asset. This was achieved by building an ML solution that classified Google Earth images of the assets into those with and without nests. In this project, it was important to identify as high a proportion as possible of the assets with nests on them, while misclassifying an individual asset image was not of great concern.

The recently published CRISP-ML(Q) (Studer et al., 2020) proposes an incremental extension of CRISP-DM with monitoring and maintenance phases. While that study mentions "model explainability," referring to the technical aspects of the underlying model, it does not consider interpretability and explainability as systematically as CRISP-ML (Kolyshkina and Simoff, 2019) does. Interpretability is now one of the most important and quickly developing universal requirements, not merely a "best practice" in some industries; it is also a legal requirement. CRISP-ML (Kolyshkina and Simoff, 2019) ensures that the necessary interpretability level is identified at the requirement collection stage. The methodology then ensures that participants establish, for each stakeholder group at each process stage, the activities required to achieve this level. CRISP-ML (Kolyshkina and Simoff, 2019) includes stages 3 and 4 (data predictive potential assessment and data enrichment in Figure 1), which are not present in CRISP-ML(Q) (Studer et al., 2020). As indicated in Kolyshkina and Simoff (2019), skipping these important phases can result in scope creep and even business project failure.

In Kolyshkina and Simoff (2019), the individual stages of the CRISP-ML methodology were presented in detail, and each stage was illustrated with examples drawn from cases in a diverse range of domains. There, the emphasis was on the versatility of CRISP-ML as an industry-neutral methodology, including its approach to interpretability. In this study, we focus on a single case study from the health-related domain in order to present comprehensive coverage of each stage and the connections between the stages, and to provide examples of how the required level of interpretability of the solution is achieved through carefully crafted involvement of the stakeholders as well as decisions made at each stage. This study does not provide a comparative evaluation of the CRISP-ML methodology against CRISP-DM (Shearer, 2000), ASUM-DM (IBM Analytics, 2015), TDSP (Microsoft, 2020), and the other methodologies discussed by Kolyshkina and Simoff (2019). The purpose of the study is to demonstrate, in a robust way, the mechanics of explicit management of interpretability in ML through the project structure and life cycle of a data science methodology. Broader comparative evaluation of the methodology is the subject of a separate study.

The structure of the CRISP-ML process methodology has flexibility embedded in it, indicated by the cycles that link the model-centric stages back to the early data-centric stages, as shown in Figure 1. Changes inevitably occur in any project over the course of its life cycle, and CRISP-ML reflects that. The most typical changes, related to data availability, quality, and analysis findings, occur mostly at stages 2–4 in Figure 1. This is illustrated in our case study and was discussed in detail by Kolyshkina and Simoff (2019). Changes occur less often at stages 5–7 in Figure 1. From experiential observations, such changes are more likely in longer projects, with a volume of work requiring more than 6–8 months for completion. They are usually driven by amendments to project scope and requirements, including the necessary level of interpretability (NLI), caused by factors external to the analytical part of the project. These factors can be global, such as environmental, political, or legislative factors; organization-specific (e.g., updates to the organizational IT structure, the way data are stored, or changes in the stakeholder team); or related to progress in ML and ML-related technical areas (e.g., the advent of a new, better performing predictive algorithm).

In this study, we present the stages of CRISP-ML in a linear manner, around the backbone of the CRISP-ML process represented by the solid black triangle arrows in Figure 1, to maintain the emphasis on the mechanisms for handling interpretability in each of these steps rather than on exploring the iterative nature of the approach. For consistency of the demonstration, we draw all detailed examples throughout the study from the specific public health case study. As a result, we are able to illustrate in more depth how we sustain the level of interpretability through the process structure of the project. This study complements that of Kolyshkina and Simoff (2019), where, through examples drawn from a variety of cases, we demonstrated the versatility of CRISP-ML. The methodological treatment of interpretability in evolving scenarios and options is beyond the scope of this study.

4. Case Study Illustrating the Achievement of the NLI of Machine Learning Solution

In this section, we describe a detailed real-world case study in which, by going through each project stage, we illustrate how CRISP-ML helps data science project stakeholders establish and achieve the necessary level of interpretability of an ML solution.

We would like to emphasize that the specific analytic techniques and tools mentioned in the respective stages of the case study are relevant specifically to this study. They illustrate the approach and the content of the interpretability mechanisms of CRISP-ML; however, many other methods and method combinations could achieve the objectives of this and other projects.

We place a particular focus on the aspects and stages of CRISP-ML from the perspective of demonstrating the flow and impact of interpretability requirements and on how they have been translated into the necessary level of interpretability of the final ML solution. Further, the structure of this section follows the stages of the CRISP-ML process structure in Figure 1. All sensitive data and information have been masked and altered to protect privacy and confidentiality, without loss of the aspects relevant to this presentation.

4.1. Background. High-Level Project Objectives and Data Description

An Australian State Workers Compensation organization sought to predict, at an early stage of a claim, the likelihood of the claim becoming long-term, i.e., a worker staying on income support for 1 year or more from the date of lodgement. A further requirement was that the prediction model should be easily interpretable by the business.

The data that the analysis was to be based upon were identified by the organizational experts, based on the outcomes for about 20,000 claims incurred in recent years, and included the following information:

  • – injured worker attributes, e.g., date of birth, gender, occupation, average weekly earnings, residential address;
  • – injury attributes, e.g., injury date, the information on the nature, location, mechanism, and agency of injury coded according to the National Type of Occurrence Classification System 2 ;
  • – employer attributes (size, industry classification);
  • – details of all worker's income support or similar payments.

4.2. Building the Project Interpretability Matrix: An Overall Approach

The interpretability matrix is usually built at Stage 1 of the project as part of the requirement collection process. Data science practitioners recognize Stage 1 as crucial for the overall project success (see, e.g., PMI, 2017), as well as from the perspective of building solution interpretability (Kolyshkina and Simoff, 2019).

As a structure for capturing and translating interpretability requirements into specific actions and activities, the IM is general; the specific content of its cells, however, depends on the project. Kolyshkina and Simoff (2019) demonstrated the CRISP-ML stages consistently applied to different projects across a number of industries, data sets, and data types.

Stage 1 covers the activities needed to start up the data science project: (a) the identification of key stakeholders; (b) documenting project objectives and scope; (c) collecting requirements; (d) agreeing upon initial data; (e) preparing a detailed scope statement; and (f) developing the project schedule and plan. The deliverable of this stage was a project charter documenting the above activities.

4.2.1. Interpretability-Related Aspects of the Project Charter: Business Objectives, Main Stakeholders, and Interpretability Level

We will describe in more detail the aspects of the project charter that were directly related to this study, specifically the established business objectives, main stakeholders, and the established necessary interpretability requirements.

4.2.1.1. Business objectives and main stakeholders.

The established objectives included:

  • Build an ML system that will explain what factors and to what extent influence the outcome, i.e., claim duration;
  • Allow the organization to derive business insights that will help make data-driven accurate decisions regarding what changes can be done to improve the outcome, i.e., reduce the likelihood of a long claim by a specified percentage;
  • Be accurate, robust, and work with real-world organizational data;
  • Have easy-to-understand outputs that would make sense to the executive team and end users (case managers) and that the end users could trust;
  • Present the output as business rules that are easy for end users to understand and easy to deploy, monitor, and update in organizational data;
  • Ensure that the overall ML solution is easy for the organization's Information Technology (IT) team to understand and implement, and for its Business Intelligence (BI) team to monitor and update.

The main stakeholders were identified as follows: Executive team (E); End Users/Domain Experts, i.e., Case management team (DE); Information Technology team who would implement the solution in the organizational data (IT); Business Intelligence team who would monitor the solution performance and update the underlying model (BI); and Modeling team (M). These abbreviations are used further in the descriptions of the stages of the IM.

4.2.1.2. The established necessary interpretability level.

The necessary interpretability level (Kolyshkina and Simoff, 2019 ) was established as follows.

  • – The E, IT, and DE teams needed to have a clear understanding of all internal and external data inputs to be used: their reliability, quality, and whether the internal inputs were representative of the organizational data that the solution would be deployed on.
  • – The E and DE teams needed to have a clear understanding of the high-level data processing approach (e.g., missing values treatment, aggregation level), as well as high-level modeling approach and its proven validity.
  • – The outputs needed to be provided in the form of easily understandable business rules. The E and DE teams needed to gain a clear understanding of the rules and to be able to assess their business validity and usefulness from the business point of view.
  • – The BI team, who would monitor and update the solution, needed to have a clear understanding of the data processing stage, as well as of the modeling algorithm, its validity, and suitability from the ML point of view, and of how to assess the solution performance and how the solution needs to be audited, monitored, and updated, as well as how often this should occur.
  • – The IT team, who would deploy the solution, needed to have a clear understanding of the format of the output and to confirm that it could be deployed in the organizational data within the existing constraints (e.g., resources, cost) and without disrupting the existing IT systems.

4.2.2. Creating the Project IM: An Overall Approach

The next step is to create and fill out the IM, whose rows show the CRISP-ML stages and whose columns represent the key stakeholders. Each cell of the matrix shows what needs to be done by each stakeholder at each project stage to ensure that the required level of solution interpretability is achieved. Matrix cells can be grouped horizontally when there are common requirements for a group of stakeholders, and vertically when there are common requirements for a specific stakeholder across a number of CRISP-ML stages. This matrix, once completed, becomes part of the business requirements document. The activities it outlines are integrated into the project plan and are reviewed and updated along with the project plan.

4.2.2.1. Definition of stakeholder involvement extent.

We define the extent of involvement of a stakeholder group needed to achieve the necessary interpretability level in a particular project stage as follows:

  • – high extent of involvement—the stakeholder group needs to be directly and actively involved in the solution development process to ensure that the NLI is achieved at the stage;
  • – medium extent of involvement—the stakeholder group needs to receive detailed regular updates on the progress of the stage and get directly involved in the work from time to time to ensure that the NLI is achieved at the stage; for example, DE and IT providing information that helps to better understand the data sources and business processes of the organization;
  • – low extent of involvement—the stakeholder group is kept informed on the general progress of the stage.
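As an illustrative sketch only (the stage labels, stakeholder codes, and involvement levels below are hypothetical simplifications, not the project's actual IM), the stages-by-stakeholders structure with the three involvement levels defined above can be captured in a small lookup structure:

```python
# Minimal illustrative representation of an interpretability matrix (IM):
# rows are CRISP-ML stages, columns are stakeholder groups, and each cell
# records the extent of involvement needed to reach the NLI at that stage.
# All labels and levels here are hypothetical examples.

STAKEHOLDERS = ["E", "DE", "IT", "BI", "M"]

interpretability_matrix = {
    "Stage 1: Initiation and planning": {"E": "high", "DE": "high", "IT": "medium", "BI": "low", "M": "high"},
    "Stage 2: Data audit and cleansing": {"E": "medium", "DE": "high", "IT": "medium", "BI": "medium", "M": "high"},
    "Stage 5: Modeling and evaluation": {"E": "low", "DE": "medium", "IT": "low", "BI": "medium", "M": "high"},
}

def involvement(stage: str, stakeholder: str) -> str:
    """Return the extent of involvement ('high', 'medium', or 'low')."""
    return interpretability_matrix[stage].get(stakeholder, "low")

def highly_involved(stage: str) -> list:
    """Stakeholder groups that must be directly and actively involved at a stage."""
    cells = interpretability_matrix[stage]
    return [s for s in STAKEHOLDERS if cells.get(s) == "high"]
```

Keeping the IM as data rather than as a static figure makes it easy to review and update alongside the project plan, as the methodology requires.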

In Figure 2, a green background indicates a high extent of involvement of a stakeholder group, yellow shows a medium extent of involvement, and cells with no background color show a low extent of involvement. Depending on the project, the coloring of the cells of the IM will vary. For example, if it had not been necessary to provide knowledge transfer ("Ongoing knowledge and skill development") to the BI team, then their involvement in Stages 2–5 would have been low and the respective cells would have been left with no background color.

Figure 2. High-level interpretability matrix for the project.

4.2.2.2. High-level IM diagram.

Figure 2 shows a high-level interpretability matrix for the project.

4.3. Entries to the Project Interpretability Matrix at Each Stage of CRISP-ML

Further, we discuss entries to the project IM at each stage of CRISP-ML.

4.3.1. Stage 1

The content of the interpretability matrix related to the project initiation and planning stage (i.e., the first row of the matrix) has been discussed in detail above and is summarized in Figure 3.

Figure 3. Interpretability matrix content for Stage 1.

4.3.2. Stages 2–4

Stages 2–4 in Figure 1 are mainly data-related and form the data comprehension, cleansing, and enhancement mega-stage. Below, we consider the content of the interpretability matrix for each of these stages, which are represented by the second, third, and fourth rows of the interpretability matrix.

4.3.2.1. Stage 2.

Data audit, exploration, and cleansing played a key role in achieving the interpretability level needed for the project. Figure 4 demonstrates the content of the interpretability matrix at this stage.

Figure 4. Interpretability matrix content for Stage 2.

This stage established that the data contained characteristics that significantly complicated the modeling, such as a large degree of random variation, multicollinearity, and the highly categorical nature of many potentially important predictors. These findings helped guide the selection of the modeling and data pre-processing approach.

Random variation. During workshops with E, DE, and other industry experts, it became clear that certain "truths" pervaded the industry, and we used these to engage with subject matter experts (SMEs) and promote the value of our modeling project. One such "truth" was that claim duration was influenced principally by the nature and location of the injury in combination with the age of the injured worker; specifically, older workers tended to have longer duration claims. Our analysis demonstrated the enormous amount of random variation in the data. For example, age, body location, and injury type explained only 3–7% of the variation in claim duration. There was agreement among the experts that the industry "truths" were insufficient to accurately triage claims and that different approaches were needed.

Our exploratory analysis revealed strong random variation in the data, confirming the prevalent view among the workers' compensation experts that it is the intangible factors, like the injured worker's mindset and relationship with the employer, that play the key role in the speed of recovery and returning to work. The challenge for the modeling, therefore, was to uncover the predictors that represent these intangibles.

Sparseness. Most of the available variables were categorical with large numbers of categories. For example, the variable "Injury Nature" has 143 categories and "Body Location of Injury" has 76. Further, some categories had relatively few observations, which made any analysis involving them potentially unreliable and not statistically valid. Such sparseness presented another data challenge.

Multicollinearity. There was a high degree of multicollinearity among the numerical variables in the data.

Data pre-processing. First, we reduced the sparseness among categories by combining some categorical levels in consultation with SMEs to ensure that the changes made business sense. Second, we used a combination of correlation analysis and advanced clustering and feature selection approaches, e.g., Random Forests (see, e.g., Shi and Horvath, 2006) and the PCAMIX method using an iterative relocation algorithm and ascendant hierarchical clustering (Chavent et al., 2012), to reduce multicollinearity and exclude redundant variables.
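The first pre-processing step can be sketched as follows (a minimal sketch: the count threshold and category names are hypothetical, and in the project such merges were agreed with SMEs rather than driven purely by counts):

```python
from collections import Counter

def combine_sparse_levels(values, min_count=30, other_label="Other"):
    """Collapse categories observed fewer than min_count times into one
    'Other' level. In practice, such merges should be validated with SMEs
    so the grouped categories still make business sense."""
    counts = Counter(values)
    keep = {cat for cat, n in counts.items() if n >= min_count}
    return [v if v in keep else other_label for v in values]

# Hypothetical example: a sparse "Injury Nature" column.
col = ["Sprain"] * 50 + ["Fracture"] * 40 + ["Crush injury"] * 3 + ["Burn"] * 2
cleaned = combine_sparse_levels(col, min_count=10)
```

Correlation analysis and Random Forest or PCAMIX-based feature selection would then run on the reduced category set.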

4.3.2.2. Stage 3.

Figure 5 shows the content of the interpretability matrix related to the evaluation of the predictive potential of the data (i.e., the third row of the matrix). This stage is often either omitted or not stated explicitly in other processes/frameworks (Kolyshkina and Simoff, 2019); however, it is crucial for project success because it establishes whether the information in the data is sufficient for achieving the project goals.

Figure 5. Interpretability matrix content for Stage 3.

To efficiently evaluate what accuracy could be achieved with the initially supplied data, we employed several data science methods that have proven excellent at extracting maximum predictive power from data: Deep Neural Nets, Random Forests, XGBoost, and Elastic Net. The results were consistent across all the methods used and showed that only a small proportion of the variability of claim duration was explained by the information available in the data. The predictive potential of the initially supplied data, containing claim and worker history, was therefore insufficient for the project objectives. Data enrichment was required.
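The predictive-potential check can be sketched as a generic out-of-fold R² harness (stdlib only; the toy data and the trivial mean-predicting baseline are hypothetical stand-ins for the Deep Neural Nets, Random Forests, XGBoost, and Elastic Net models actually used):

```python
class MeanModel:
    """Trivial baseline standing in for RF/XGBoost/etc.: predicts the training mean."""
    def fit(self, X, y):
        self.mean = sum(y) / len(y)
    def predict(self, x):
        return self.mean

def out_of_fold_r2(model_factory, X, y, k=5):
    """k-fold cross-validation returning R^2: the proportion of outcome
    variability explained by out-of-fold predictions."""
    n = len(y)
    preds = [0.0] * n
    for i in range(k):
        lo, hi = i * n // k, (i + 1) * n // k          # fold i is [lo, hi)
        train = [j for j in range(n) if not (lo <= j < hi)]
        model = model_factory()
        model.fit([X[j] for j in train], [y[j] for j in train])
        for j in range(lo, hi):
            preds[j] = model.predict(X[j])
    y_bar = sum(y) / n
    ss_res = sum((y[j] - preds[j]) ** 2 for j in range(n))
    ss_tot = sum((y[j] - y_bar) ** 2 for j in range(n))
    return 1.0 - ss_res / ss_tot

# Hypothetical toy data: claim features and durations (days).
X = [[i % 7] for i in range(100)]
y = [30 + (i % 13) * 5 for i in range(100)]
r2 = out_of_fold_r2(MeanModel, X, y)
```

If even strong learners plugged into this slot leave R² low, the data's predictive potential is insufficient and enrichment is needed, which is the conclusion reached at this stage.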

These findings were discussed with the DE, who were then invited to share their business knowledge about sources that could enrich the predictive power of the initial data.

4.3.2.3. Stage 4.

Data enrichment. Figure 6 shows the content of the interpretability matrix related to the data enrichment stage. Based on the DE feedback and the results of external research, we enriched the data with additional variables, including:

Figure 6. Interpretability matrix content for Stage 4.

  • – lag between injury occurrence and claim lodgement (claim reporting lag);
  • – information on the treatment received (e.g., type of providers visited, number of visits, provider specialty);
  • – information on the use of medications and, specifically, on whether a potent opioid was used.
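Two of the enrichment variables above can be sketched as simple derived features (the field names and the opioid list are hypothetical illustrations, not the project's actual coding):

```python
from datetime import date

def claim_reporting_lag(injury_date: date, lodgement_date: date) -> int:
    """Claim reporting lag in days: the time between injury occurrence
    and claim lodgement, one of the enrichment variables."""
    return (lodgement_date - injury_date).days

def potent_opioid_used(medications,
                       potent_opioids=frozenset({"fentanyl", "oxycodone", "morphine"})):
    """Flag whether any recorded medication appears on a (hypothetical)
    potent-opioid list."""
    return any(m.lower() in potent_opioids for m in medications)

lag = claim_reporting_lag(date(2020, 3, 1), date(2020, 3, 15))   # 14 days
flag = potent_opioid_used(["Ibuprofen", "Oxycodone"])            # True
```

Derived features of this kind are cheap to compute from the enriched records and easy for DE and IT stakeholders to inspect, which supports the interpretability goals of this stage.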

We assessed the predictive value of the enriched data in the same way as before (see section 4.3.2.2) and found a significant increase in the proportion of variability explained by the model. Of particular relevance was the incorporation of the claimants' prior claim history, including previous claim count, type and nature of injury, and any similarity with the current injury.

Further, data enrichment was a key step in building the trust of the DE team. The fact that the model showed that the cost of a claim can depend significantly on the providers a worker visited built further trust in the solution, because it confirmed a hunch that the domain experts had previously lacked the evidence to prove.

4.3.3. Stage 5

Figure 7 shows the content of the interpretability matrix for the model building and evaluation stage. To achieve the right interpretability level, it is crucial that modelers choose a technique that balances the required outcome interpretability with the required level of model accuracy, which is often a challenge (see, e.g., Freitas, 2014), as well as with other requirements/constraints (e.g., the needed functional form of the algorithm). In our case, the model was required to explain at least 70% of the variability.

Figure 7. Interpretability matrix content for Stage 5.

At this stage, the ML techniques to be used for modeling are selected, taking into account the predictive power of the model, its suitability for the domain and the task, and the NLI. The data are pre-processed and modeled, and the model performance is evaluated. The solution output was required to be produced in the form of business rules; therefore, the feature engineering methods and modeling algorithms used included rule-based techniques, e.g., decision trees and association rules-based methods.
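To illustrate how rule-based output becomes readable, here is a sketch that walks a small decision tree (hand-specified and purely hypothetical, not the project's fitted model) and emits each root-to-leaf path as an IF...THEN business rule; a tree fitted by any library could be exported the same way:

```python
def tree_to_rules(node, conditions=()):
    """Recursively convert a decision tree into flat business rules.
    A node is either a leaf string (the predicted segment) or a tuple
    (condition, yes_subtree, no_subtree)."""
    if isinstance(node, str):                      # leaf: emit one rule
        premise = " AND ".join(conditions) or "TRUE"
        return [f"IF {premise} THEN segment = {node}"]
    condition, yes, no = node
    return (tree_to_rules(yes, conditions + (condition,))
            + tree_to_rules(no, conditions + (f"NOT ({condition})",)))

# Hypothetical tree for early claim triage.
tree = ("reporting_lag > 14",
        ("potent_opioid_used", "high risk", "medium risk"),
        "low risk")
rules = tree_to_rules(tree)
```

Each rule is self-contained, so the E and DE teams can assess its business validity directly, which is what the NLI for this project demanded of the model output.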

4.3.4. Stage 6

Figure 8 shows how the interpretability matrix reflects the role of interpretability in the formulation of the business insights necessary to achieve the project goals and in helping the E and DE to understand the derived business insights and to develop trust in them. Modelers, BI, and DEs prepared a detailed presentation for the E, explaining not only the learnings from the solution but also the high-level model structure and its accuracy.

Figure 8. Interpretability matrix content for Stage 6.

4.3.5. Stage 7

The final model provided the mechanism for the organization to allocate claims to risk segments based on the information known at early stages. From the technical point of view, the business rules were confirmed by the E, DE, and IT to be easy to deploy, as they are readily expressed as SQL code. Based on this success, a modified version of claims triage was deployed into production.
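The deployment property mentioned above, business rules being readily expressible as SQL, can be sketched by rendering a list of rules as a CASE expression (column names, segments, and the generator itself are hypothetical illustrations):

```python
def rules_to_sql_case(rules, output_column="risk_segment"):
    """Render (condition, segment) business rules as a SQL CASE expression,
    evaluated in order, suitable for embedding in a scoring query."""
    lines = ["CASE"]
    for condition, segment in rules:
        lines.append(f"  WHEN {condition} THEN '{segment}'")
    lines.append("  ELSE 'unscored'")
    lines.append(f"END AS {output_column}")
    return "\n".join(lines)

sql = rules_to_sql_case([
    ("reporting_lag > 14 AND potent_opioid_used = 1", "high risk"),
    ("reporting_lag > 14", "medium risk"),
])
```

Because the scoring logic is plain SQL, the IT team can deploy it in organizational data without new infrastructure, and the BI team can audit and update it by editing the rule list.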

Figure 9 shows the shift of responsibilities for ensuring the achieved interpretability level is maintained during the future use of the solution. At this stage, the deployment was being scheduled, and the monitoring/updating process and schedule were prepared, based on the technical report provided by the M team that included the project code, the solution manual, and updating and monitoring recommendations.

Figure 9. Interpretability matrix content for Stage 7 includes activities ensuring the achieved interpretability level is maintained during the future utilization of the solution.

5. Conclusions

This study contributes toward addressing the problem of providing organizations with capabilities to ensure that the ML solutions they develop to improve decision-making are transparent and easy to understand and interpret, and that, if needed, the logic behind the decisions can be explained to any external party. Such capability is essential in many areas, especially in health-related fields. It allows the end users to confidently interpret the ML output and use it to make successful evidence-based decisions.

In an earlier study (Kolyshkina and Simoff, 2019), we introduced CRISP-ML, a methodology for determining the interpretability level required for a successful real-world solution and then achieving it via integration of the interpretability aspects into the overall framework, rather than into the algorithm creation stage alone. CRISP-ML combines a practical, common-sense approach with statistical rigor and enables organizations to establish a shared understanding of the solution and its use across all key stakeholders and to build trust in the solution outputs across all relevant parts of the organization. In this study, we illustrated CRISP-ML with a detailed case study of building an ML solution in the public health sector. An Australian state workplace insurer sought to use their data to establish clear business rules that would identify, at an early stage of a claim, those with a high probability of becoming serious/long-term. We showed how the necessary level of solution interpretability was determined and achieved. First, we showed how it was established by working with the key stakeholders (Executive team, end users, IT team, etc.). Then, we described how the activities to be included at each stage of building the ML solution to ensure this level was achieved were determined. Finally, we described how these activities were integrated into each stage.

The study demonstrated how CRISP-ML addressed the problems of data diversity, the unstructured nature of the data, and the relatively low linkage between diverse data sets in the healthcare domain (Catley et al., 2009; Niaksu, 2015). The characteristics of our case study are typical of healthcare data, and CRISP-ML managed to address these issues, ensuring the required interpretability of the ML solution in the project.

While we have not completed a formal evaluation of CRISP-ML, two aspects indicate that the use of this methodology improves the chances of success of data science projects. On the one hand, CRISP-ML builds on the strengths of CRISP-DM, which made it the preferred and effective methodology (Piatetsky-Shapiro, 2014; Saltz et al., 2017), while addressing the limitations identified in previous works (e.g., Mariscal et al., 2010). On the other hand, CRISP-ML has been successfully deployed in a number of recent real-world projects across several industries and fields, including credit risk, insurance, utilities, and sport. It ensured that the interpretability requirements of the organizations were met, regardless of industry specifics, regulatory requirements, types of stakeholders involved, project objectives, and data characteristics such as type (structured as well as unstructured), size, or complexity level.

CRISP-ML is a living methodology and, as such, it responds to the rapid progress in the development of ML algorithms and to the evolution of the legislation governing their adoption. Consequently, CRISP-ML development proceeds in three directions: (i) the development of a richer set of quantitative measures of interpretability features for human-interpretable machine learning, (ii) the development of the methodology and respective protocols for machine interpretation, and (iii) the development of formal process support. The first direction is being extended to provide input to the development and evaluation of common design principles for human-interpretable ML solutions, in line with those described in the study by Lage et al. (2019). This strategic development adds the agility necessary to keep the presented cross-industry standard process relevant.

Data Availability Statement

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Conflict of Interest

IK was employed by the company Analytikk Consulting Services. The remaining author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

1 https://www.un.org/sustainabledevelopment/sustainable-development-goals/ and https://sdgs.un.org/goals.

2 Type of Occurrence Classification System (3rd Edition, Revision 1), Australian Government, Australian Safety and Compensation Council, Canberra, 2008, https://www.safeworkaustralia.gov.au/doc/type-occurrence-classification-system-toocs-3rd-edition-may-2008.

Funding. Elements of the study on interpretability in ML solutions are partially supported by the Australian Research Council Discovery Project (grant no. DP180100893).

  • Abasova J., Janosik J., Simoncicova V., Tanuska P. (2018). Proposal of effective preprocessing techniques of financial data, in 2018 IEEE 22nd International Conference on Intelligent Engineering Systems (INES) (IEEE), 293–298. 10.1109/INES.2018.8523922
  • Ahmad M. A., Eckert C., Teredesai A., McKelvey G. (2018). Interpretable machine learning in healthcare. IEEE Intell. Inform. Bull. 19, 1–7. Available online at: https://www.comp.hkbu.edu.hk/~cib/2018/Aug/iib_vol19no1.pdf
  • Ahmed B., Dannhauser T., Philip N. (2018). A lean design thinking methodology (LDTM) for machine learning and modern data projects, in Proceedings of 2018 10th Computer Science and Electronic Engineering (CEEC) (IEEE), 11–14. 10.1109/CEEC.2018.8674234
  • Alvarez-Melis D., Jaakkola T. S. (2018). Towards robust interpretability with self-explaining neural networks, in Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18 (Red Hook, NY: Curran Associates Inc.), 7786–7795.
  • Arcidiacono G. (2017). Comparative research about high failure rate of IT projects and opportunities to improve. PM World J. VI, 1–10. Available online at: https://pmworldlibrary.net/wp-content/uploads/2017/02/pmwj55-Feb2017-Arcidiacono-high-failure-rate-of-it-projects-featured-paper.pdf
  • Athey S. (2019). The impact of machine learning on economics, in The Economics of Artificial Intelligence: An Agenda, eds Agrawal A., Gans J., Goldfarb A. (Chicago, IL: University of Chicago Press), 507–547.
  • Berendt B., Preibusch S. (2017). Toward accountable discrimination-aware data mining: the importance of keeping the human in the loop and under the looking glass. Big Data 5, 135–152. 10.1089/big.2016.0055
  • Bhardwaj A., Bhattacherjee S., Chavan A., Deshpande A., Elmore A. J., Madden S., et al. (2015). DataHub: collaborative data science & dataset version management at scale, in Proceedings of the 7th Biennial Conference on Innovative Data Systems Research (CIDR'15), January 4–7, 2015 (Asilomar, CA).
  • Caruana R., Lou Y., Gehrke J., Koch P., Sturm M., Elhadad N. (2015). Intelligible models for healthcare: predicting pneumonia risk and hospital 30-day readmission, in Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '15 (New York, NY: Association for Computing Machinery), 1721–1730. 10.1145/2783258.2788613
  • Carvalho D. V., Pereira E. M., Cardoso J. S. (2019). Machine learning interpretability: a survey on methods and metrics. Electronics 8:832. 10.3390/electronics8080832
  • Catley C., Smith K., McGregor C., Tracy M. (2009). Extending CRISP-DM to incorporate temporal data mining of multidimensional medical data streams: a neonatal intensive care unit case study, in 22nd IEEE International Symposium on Computer-Based Medical Systems, 1–5. 10.1109/CBMS.2009.5255394
  • Chandler N., Oestreich T. (2015). Use Analytic Business Processes to Drive Business Performance. Technical report, Gartner.
  • Chavent M., Liquet B., Kuentz V., Saracco J. (2012). ClustOfVar: an R package for the clustering of variables. J. Statist. Softw. 50, 1–16. 10.18637/jss.v050.i13
  • Chen T., Guestrin C. (2016). XGBoost: a scalable tree boosting system, in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '16 (New York, NY: Association for Computing Machinery), 785–794. 10.1145/2939672.2939785
  • Darlington K. W. (2011). Designing for explanation in health care applications of expert systems. SAGE Open 1, 1–9. 10.1177/2158244011408618
  • Davenport T., Kalakota R. (2019). Digital technology: the potential for artificial intelligence in healthcare. Future Healthc. J. 6, 94–98. 10.7861/futurehosp.6-2-94
  • Dawson D., Schleiger E., Horton J., McLaughlin J., Robinson C., Quezada G., et al. (2019). Artificial Intelligence: Australia's Ethics Framework. Technical report, Data61 CSIRO, Australia.
  • Doshi-Velez F., Kim B. (2017). Towards a rigorous science of interpretable machine learning. arXiv 1702.08608.
  • Espinosa J. A., Armour F. (2016). The big data analytics gold rush: a research framework for coordination and governance, in Proceedings of the 49th Hawaii International Conference on System Sciences (HICSS), 1112–1121. 10.1109/HICSS.2016.141
  • EU (2016). General Data Protection Regulation (GDPR). Off. J. Eur. Union L 119.
  • Fahmy A. F., Mohamed H. K., Yousef A. H. (2017). A data mining experimentation framework to improve six sigma projects, in 2017 13th International Computer Engineering Conference (ICENCO), 243–249. 10.1109/ICENCO.2017.8289795
  • Freitas A. A. (2014). Comprehensible classification models: a position paper. SIGKDD Explor. Newslett. 15, 1–10. 10.1145/2594473.2594475
  • Fujimaki R. (2020). Most Data Science Projects Fail, But Yours Doesn't Have To. Datanami. Available online at: https://www.datanami.com/2020/10/01/most-data-science-projects-fail-but-yours-doesnt-have-to/
  • Gao J., Koronios A., Selle S. (2015). Towards a process view on critical success factors in big data analytics projects, in Proceedings of the 21st Americas Conference on Information Systems (AMCIS), 1–14.
  • Gilpin L. H., Bau D., Yuan B. Z., Bajwa A., Specter M., Kagal L. (2018). Explaining explanations: an overview of interpretability of machine learning, in 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), 80–89. 10.1109/DSAA.2018.00018
  • Gilpin L. H., Testart C., Fruchter N., Adebayo J. (2019). Explaining explanations to society. CoRR abs/1901.06560.
  • Gleicher M. (2016). A framework for considering comprehensibility in modeling. Big Data 4, 75–88. 10.1089/big.2016.0007
  • Goodfellow I., Bengio Y., Courville A. (2016). Deep Learning. MIT Press.
  • Goodson M. (2016). Reasons Why Data Projects Fail. KDnuggets.
  • Goodwin B. (2011). Poor Communication to Blame for Business Intelligence Failure, Says Gartner. Computer Weekly.
  • Google (2019). Google AI: Responsible AI Practices–Interpretability. Technical report, Google AI.
  • Gosiewska A., Biecek P. (2020). Do not trust additive explanations. arXiv 1903.11420.
  • Gosiewska A., Woznica K., Biecek P. (2020). Interpretable meta-measure for model performance. arXiv 2006.02293.
  • Grady N. W., Underwood M., Roy A., Chang W. L. (2014). Big data: challenges, practices and technologies: NIST big data public working group workshop at IEEE Big Data 2014, in Proceedings of IEEE International Conference on Big Data (Big Data 2014), 11–15. 10.1109/BigData.2014.7004470
  • Guidotti R., Monreale A., Ruggieri S., Turini F., Giannotti F., Pedreschi D. (2018). A survey of methods for explaining black box models. ACM Comput. Surv. 51, 93:1–93:42. 10.1145/3236009
  • Hansen L. K., Rieger L. (2019). Interpretability in intelligent systems–a new concept?, in Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, Volume 11700 of LNAI, eds Samek W., Montavon G., Vedaldi A., Hansen L. K., Müller K. R. (Springer Nature), 41–49. 10.1007/978-3-030-28954-6_3
  • Holzinger A., Biemann C., Pattichis C. S., Kell D. B. (2017). What do we need to build explainable AI systems for the medical domain? arXiv 1712.09923.
  • Huang W., McGregor C., James A. (2014). A comprehensive framework design for continuous quality improvement within the neonatal intensive care unit: integration of the SPOE, CRISP-DM and PaJMa models, in IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI), 289–292. 10.1109/BHI.2014.6864360
  • IBM Analytics (2015). IBM Analytics Solutions Unified Method (ASUM). Available online at: http://i2t.icesi.edu.co/ASUM-DM_External/index.htm
  • Jain P. (2019). Top 5 Reasons for Data Science Project Failure. Medium: Data Series.
  • Kaggle (2020). State of Machine Learning and Data Science 2020 Survey. Technical report, Kaggle. Available online at: https://www.kaggle.com/c/kaggle-survey-2020
  • Kennedy P., Simoff S. J., Catchpoole D. R., Skillicorn D. B., Ubaudi F., Al-Oqaily A. (2008). Integrative visual data mining of biomedical data: investigating cases in chronic fatigue syndrome and acute lymphoblastic leukaemia, in Visual Data Mining: Theory, Techniques and Tools for Visual Analytics, Volume 4404 of LNCS, eds Simoff S. J., Böhlen M. H., Mazeika A. (Berlin; Heidelberg: Springer), 367–388. 10.1007/978-3-540-71080-6_21
  • Kolyshkina I., Simoff S. (2019). Interpretability of machine learning solutions in industrial decision engineering, in Data Mining, eds Le T. D., Ong K. L., Zhao Y., Jin W. H., Wong S., Liu L., et al. (Singapore: Springer Singapore), 156–170. 10.1007/978-981-15-1699-3_13
  • Lage I., Chen E., He J., Narayanan M., Kim B., Gershman S. J., et al. (2019). Human evaluation of models built for interpretability, in Proceedings of the Seventh AAAI Conference on Human Computation and Crowdsourcing (HCOMP-19), Vol. 7, 59–67.
  • Larson D., Chang V. (2016). A review and future direction of agile, business intelligence, analytics and data science. Int. J. Inform. Manage. 36, 700–710. 10.1016/j.ijinfomgt.2016.04.013
  • Lipton Z. C. (2018). The mythos of model interpretability. ACM Queue 16, 30:31–30:57. 10.1145/3236386.3241340
  • Lundberg S. M., Lee S. I. (2017). A unified approach to interpreting model predictions, in Advances in Neural Information Processing Systems 30, eds Guyon I., Luxburg U. V., Bengio S., Wallach H., Fergus R., Vishwanathan S., et al. (Curran Associates, Inc.), 4765–4774.
  • Mariscal G., Marbán O., Fernández C. (2010). A survey of data mining and knowledge discovery process models and methodologies. Knowl. Eng. Rev. 25, 137–166. 10.1017/S0269888910000032
  • Mi J. X., Li A. D., Zhou L. F. (2020). Review study of interpretation methods for future interpretable machine learning. IEEE Access 8, 191969–191985. 10.1109/ACCESS.2020.3032756
  • Microsoft (2020). Team Data Science Process. Microsoft.
  • Miller T. (2019). Explanation in artificial intelligence: insights from the social sciences. Artif. Intell. 267, 1–38. 10.1016/j.artint.2018.07.007
  • Mittelstadt B., Russell C., Wachter S. (2019). Explaining explanations in AI, in Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT '19 (New York, NY: Association for Computing Machinery), 279–288. 10.1145/3287560.3287574
  • Molnar C., Casalicchio G., Bischl B. (2019). Quantifying interpretability of arbitrary machine learning models through functional decomposition. arXiv 1904.03867.
  • Murdoch W. J., Singh C., Kumbier K., Abbasi-Asl R., Yu B. (2019). Definitions, methods, and applications in interpretable machine learning. Proc. Natl. Acad. Sci. U.S.A. 116, 22071–22080. 10.1073/pnas.1900654116
  • NewVantage Partners LLC (2021). Big Data and AI Executive Survey 2021: The Journey to Becoming Data-Driven: A Progress Report on the State of Corporate Data Initiatives. Technical report. Boston, MA; New York, NY; San Francisco, CA; Raleigh, NC: NewVantage Partners LLC.
  • Niaksu O. (2015). CRISP data mining methodology extension for medical domain. Baltic J. Mod. Comput. 3, 92–109. Available online at: https://www.bjmc.lu.lv/fileadmin/user_upload/lu_portal/projekti/bjmc/Contents/3_2_2_Niaksu.pdf
  • Niño M., Blanco J. M., Illarramendi A. (2015). Business understanding, challenges and issues of Big Data Analytics for the servitization of a capital equipment manufacturer, in 2015 IEEE International Conference on Big Data, Oct 29–Nov 01, 2015, eds Ho H., Ooi B. C., Zaki M. J., Hu X., Haas L., Kumar V., et al. (Santa Clara, CA), 1368–1377.
  • Piatetsky-Shapiro G. (2014). CRISP-DM, still the top methodology for analytics, data mining, or data science projects. KDnuggets News 14.
  • Plotnikova V. (2018). Towards a data mining methodology for the banking domain, in Proceedings of the Doctoral Consortium Papers Presented at the 30th International Conference on Advanced Information Systems Engineering (CAiSE 2018), eds Kirikova M., Lupeikiene A., Teniente E., 46–54.
  • PMI (2017). PMBOK® Guide, 6th Edn. Project Management Institute.
  • Pradeep S., Kallimani J. S. (2017). A survey on various challenges and aspects in handling big data, in Proceedings of the 2017 International Conference on Electrical, Electronics, Communication, Computer, and Optimization Techniques (ICEECCOT), 1–5. 10.1109/ICEECCOT.2017.8284606
  • Qayyum A., Qadir J., Bilal M., Al-Fuqaha A. (2021). Secure and robust machine learning for healthcare: a survey. IEEE Rev. Biomed. Eng. 14, 156–180. 10.1109/RBME.2020.3013489
  • Ransbotham S., Kiron D., Prentice P. K. (2015). Minding the Analytics Gap. MIT Sloan Management Review.
  • Ribeiro M. T., Singh S., Guestrin C. (2016). "Why should I trust you?": explaining the predictions of any classifier, in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD'16) (New York, NY: ACM), 1135–1144. 10.1145/2939672.2939778
  • Roberts J. (2017). 4 Reasons Why Most Data Science Projects Fail. CIO Dive.
  • Rudin C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat. Mach. Intell. 1, 206–215. 10.1038/s42256-019-0048-x
  • Saltz J. S. (2015). The need for new processes, methodologies and tools to support big data teams and improve big data project effectiveness, in Proceedings of 2015 IEEE International Conference on Big Data (Big Data), 2066–2071. 10.1109/BigData.2015.7363988
  • Saltz J. S., Shamshurin I. (2016). Big data team process methodologies: a literature review and the identification of key factors for a project's success, in Proceedings of 2016 IEEE International Conference on Big Data (Big Data), 2872–2879. 10.1109/BigData.2016.7840936
  • Saltz J. S., Shamshurin I., Crowston K. (2017). Comparing data science project management methodologies via a controlled experiment, in HICSS. 10.24251/HICSS.2017.120
  • Samek W., Müller K. R. (2019). Towards explainable artificial intelligence, in Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, Volume 11700 of LNAI, eds Samek W., Montavon G., Vedaldi A., Hansen L. K., Müller K. R. (Springer Nature), 5–22. 10.1007/978-3-030-28954-6_1
  • Schäfer F., Zeiselmair C., Becker J., Otten H. (2018). Synthesizing CRISP-DM and quality management: a data mining approach for production processes, in 2018 IEEE International Conference on Technology Management, Operations and Decisions (ICTMOD), 190–195. 10.1109/ITMC.2018.8691266
  • Shearer C. (2000). The CRISP-DM model: the new blueprint for data mining. J. Data Warehousing 5, 13–22. Available online at: https://mineracaodedados.files.wordpress.com/2012/04/the-crisp-dm-model-the-new-blueprint-for-data-mining-shearer-colin.pdf
  • Shi T., Horvath S. (2006). Unsupervised learning with random forest predictors. J. Comput. Graph. Stat. 15, 118–138. 10.1198/106186006X94072
  • Stieglitz C. (2012). Beginning at the end–requirements gathering lessons from a flowchart junkie, in PMI® Global Congress 2012–North America, Vancouver, British Columbia, Canada (Newtown Square, PA: Project Management Institute). Available online at: https://www.pmi.org/learning/library/requirements-gathering-lessons-flowchart-junkie-5981
  • Stiglic G., Kocbek P., Fijacko N., Zitnik M., Verbert K., Cilar L. (2020). Interpretability of machine learning-based prediction models in healthcare. WIREs Data Mining Knowl. Discov. 10, 1–13. 10.1002/widm.1379
  • Studer S., Bui T. B., Drescher C., Hanuschkin A., Winkler L., Peters S., et al. (2020). Towards CRISP-ML(Q): a machine learning process model with quality assurance methodology. arXiv 2003.05155.
  • Sun W., Nasraoui O., Shafto P. (2020). Evolution and impact of bias in human and machine learning algorithm interaction. PLoS ONE 15:e0235502. 10.1371/journal.pone.0235502
  • van der Meulen R., Thomas M. (2018). Gartner Survey Shows Organizations Are Slow to Advance in Data and Analytics. Gartner Press Release.
  • Vellido A. (2020). The importance of interpretability and visualization in machine learning for applications in medicine and health care. Neural Comput. Appl. 32, 18069–18083. 10.1007/s00521-019-04051-w
  • Violino B. (2017). 7 Sure-Fire Ways to Fail at Data Analytics. CIO.
  • Wallace N., Castro D. (2018). The Impact of the EU's New Data Protection Regulation on AI. Technical report, Center for Data Innovation.
  • Weller A. (2019). Transparency: motivations and challenges, in Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, Volume 11700 of LNAI, eds Samek W., Montavon G., Vedaldi A., Hansen L. K., Müller K. R. (Springer Nature), 23–40. 10.1007/978-3-030-28954-6_2
