
Interpreting Machine Learning Models With SHAP : A Guide With Python Examples And Theory On Shapley Values

English | 2023 | ISBN: NA | 216 Pages | PDF, EPUB + extras | 18.3 MB

Master machine learning interpretability with this comprehensive guide to SHAP – your tool for communicating model insights and building trust in all your machine learning applications.

THE BOOK WILL BE PUBLISHED ON AUGUST 1ST. SIGN UP TO GET AN EARLY BIRD COUPON!

Machine learning is transforming fields from healthcare diagnostics to climate change prediction through its predictive performance. However, complex machine learning models often lack interpretability, which is becoming more essential than ever for debugging, fostering trust, and communicating model insights.

Introducing SHAP, the Swiss army knife of machine learning interpretability

SHAP can be used to explain individual predictions.
By combining explanations of individual predictions, SHAP lets you study the overall model behavior.
SHAP is model-agnostic – it works with any model, from simple linear regression to deep learning.
With its flexibility, SHAP can handle various data formats, whether tabular, image, or text.
The Python package shap makes the application of SHAP for model interpretation easy.
This book will be your comprehensive guide to mastering the theory and application of SHAP. It starts with SHAP's quite fascinating origins in game theory and explores what splitting taxi costs has to do with explaining machine learning predictions. Beginning with SHAP for a simple linear regression model, the book progressively introduces SHAP for more complex models. You'll learn the ins and outs of the most popular explainable AI method and how to apply it using the shap package.
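
To give a flavor of what this looks like in practice, here is a minimal sketch (not taken from the book) of the typical shap workflow on a plain scikit-learn linear regression; the dataset and the particular plots are illustrative choices, not the book's own examples.

    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.linear_model import LinearRegression

    # Fit a simple model on a small tabular dataset
    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = LinearRegression().fit(X, y)

    # Model-agnostic use: pass the predict function plus background data,
    # and shap picks a suitable estimation algorithm
    explainer = shap.Explainer(model.predict, X)
    shap_values = explainer(X.iloc[:100])

    # Explain a single prediction ...
    shap.plots.waterfall(shap_values[0])
    # ... or combine explanations to study overall model behavior
    shap.plots.beeswarm(shap_values)

The same pattern carries over to tree ensembles, neural networks, and image or text models by swapping in the appropriate model and explainer.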

In a world where interpretability is key, this book is your roadmap to mastering SHAP – and to building machine learning models that are not only accurate but also interpretable.

Who This Book Is For
This book is for data scientists, statisticians, machine learning practitioners, and anyone who wants to make machine learning models more interpretable. To get the most out of it, you should already be familiar with machine learning, and you should know your way around Python to follow the code examples.

What's in the Book
Introduction
A Short History of Shapley Values and SHAP
Theory of Shapley Values
From Shapley Values to SHAP
Estimating SHAP Values
SHAP for Linear Models
Classification with Logistic Regression
SHAP for Additive Models
Understanding Feature Interactions with SHAP
The Correlation Problem
Regression Using a Random Forest
Image Classification with Partition Explainer
Image Classification with Deep and Gradient Explainer
Explaining Language Models
Limitations of SHAP
Building SHAP Dashboards with Shapash
Alternatives to the shap Library
Extensions of SHAP
Other Applications of Shapley Values in Machine Learning
SHAP Estimators
The Role of Maskers and Background Data
About me (Christoph Molnar)
Author of the free online book Interpretable Machine Learning. I have a background in both statistics and machine learning and did my Ph.D. in interpretable machine learning. After a mix of data scientist jobs and academia, I'm now a full-time machine learning book author.
