What-If Tool

The What-If Tool (WIT) provides an easy-to-use interface for expanding
understanding of a black-box classification or regression ML model.
With the plugin, you can perform inference on a large set of examples and
immediately visualize the results in a variety of ways.
Additionally, examples can be edited manually or programmatically and re-run
through the model in order to see the results of the changes.
It contains tooling for investigating model performance and fairness over
subsets of a dataset.
The purpose of the tool is to give people a simple, intuitive, and powerful
way to play with a trained ML model on a set of data through a visual interface
with absolutely no code required.
The tool can be accessed through TensorBoard or as an extension in a Jupyter
or
Colab
notebook.

I don’t want to read this document. Can I just play with a demo?

Check out the large set of web and Colab demos in the
demo section of the What-If Tool website.
To build the web demos yourself:

  • Binary classifier for UCI Census dataset salary prediction

    • Dataset: UCI Census
    • Task: Predict whether a person earns more or less than $50k based on their
      census information
    • To build and run the demo from code:
      bazel run wit_dashboard/demo:demoserver
      then navigate to http://localhost:6006/wit-dashboard/demo.html
  • Binary classifier for smile detection in images

    • Dataset: CelebA
    • Task: Predict whether the person in an image is smiling
    • To build and run the demo from code:
      bazel run wit_dashboard/demo:imagedemoserver
      then navigate to http://localhost:6006/wit-dashboard/image_demo.html
  • Multiclass classifier for Iris dataset

    • Dataset: UCI Iris
• Task: Predict which of three iris species a flower belongs to, based on
  4 measurements of the flower
    • To build and run the demo from code:
      bazel run wit_dashboard/demo:irisdemoserver
      then navigate to http://localhost:6006/wit-dashboard/iris_demo.html
  • Regression model for UCI Census dataset age prediction

    • Dataset: UCI Census
    • Task: Predict the age of a person based on their census information
    • To build and run the demo from code:
      bazel run wit_dashboard/demo:agedemoserver
      then navigate to http://localhost:6006/wit-dashboard/age_demo.html
• This demo model returns attribution values in addition to predictions (through the use of vanilla gradients)
  in order to demonstrate how the tool can display attribution values from predictions.

What do I need to use it in a Jupyter or Colab notebook?

You can use the What-If Tool to analyze a classification or regression
TensorFlow Estimator
that takes TensorFlow Example or SequenceExample protos
(data points) as inputs directly in a Jupyter or Colab notebook.
Additionally, the What-If Tool can analyze
AI Platform Prediction-hosted classification
or regression models that take TensorFlow Example protos, SequenceExample protos,
or raw JSON objects as inputs.
You can also use What-If Tool with a custom prediction function that takes
TensorFlow examples and produces predictions. In this mode, you can load any model
(including non-TensorFlow models that don’t use Example protos as inputs) as
long as your custom function’s input and output specifications are correct.
With either AI Platform models or a custom prediction function, the What-If Tool can
display and make use of attribution values for each input feature in relation to each
prediction. See the below section on attribution values for more information.
If you want to train an ML model from a dataset and explore the dataset and
model, check out the What_If_Tool_Notebook_Usage.ipynb notebook in Colab, which starts from a CSV file,
converts the data to tf.Example protos, trains a classifier, and then uses the
What-If Tool to show the classifier performance on the data.
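In the custom-prediction-function mode, the function's input/output contract is what matters: it receives a list of examples and must return one prediction per example. A minimal sketch, where the stand-in scoring logic and the fixed probability are invented for illustration (the WitConfigBuilder wiring in the trailing comment reflects the witwidget API):

```python
# Sketch of a custom prediction function for the What-If Tool.
# The "model" here is a stand-in; only the contract matters: the tool
# passes a list of examples and expects one prediction per example
# back -- for a binary classifier, a list of class probabilities.

def custom_predict_fn(examples):
    """Return [[P(class 0), P(class 1)], ...], one pair per example."""
    predictions = []
    for example in examples:
        # Replace this stand-in score with your real model's output.
        p_positive = 0.5
        predictions.append([1.0 - p_positive, p_positive])
    return predictions

# In a notebook this is wired up roughly as:
#   from witwidget.notebook.visualization import WitConfigBuilder, WitWidget
#   config_builder = WitConfigBuilder(examples).set_custom_predict_fn(custom_predict_fn)
#   WitWidget(config_builder)
```

Because the function sees the raw examples, the loaded model need not be a TensorFlow model at all, as noted above.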

What do I need to use it in TensorBoard?

A walkthrough of using the tool in TensorBoard, including a pretrained model and
test dataset, can be found on the
What-If Tool page on the TensorBoard website.
To use the tool in TensorBoard, only the following information needs to be provided:

  • The model server host and port, served using
    TensorFlow Serving. The model can
    use the TensorFlow Serving Classification, Regression, or Predict API.

    • Information on how to create a saved model with the Estimator API that
will use the appropriate TensorFlow Serving Classification or Regression
      APIs can be found in the saved model documentation
      and in this helpful tutorial.
      Models that use these APIs are the simplest to use with the What-If Tool
      as they require no set-up in the tool beyond setting the model type.
    • If the model uses the Predict API, the input must be serialized tf.Example
or tf.SequenceExample protos and the output must be the following:

      • For classification models, the output must include a 2D float tensor
        containing a list of class probabilities for all possible class
        indices for each inferred example.
      • For regression models, the output must include a float tensor
        containing a single regression score for each inferred example.
    • The What-If Tool queries the served model using the gRPC API, not the
      RESTful API. See the TensorFlow Serving
      [docker documentation](https://www.tensorflow.org/serving/docker) for
      more information on the two APIs. The docker image uses port 8500 for the
      gRPC API, so if using the docker approach, the port to specify in the
      What-If Tool will be 8500.
    • Alternatively, instead of querying a model hosted by TensorFlow Serving,
      you can provide a Python function for model prediction to the tool through
      the “--whatif-use-unsafe-custom-prediction” runtime argument as
      described in more detail below.

    • A TFRecord file of tf.Examples or tf.SequenceExamples to perform inference on
      and the number of examples to load from the file.

      • Can handle up to tens of thousands of examples. The exact amount depends
        on the size of each example (how many features there are and how large the
        feature values are).
      • The file must be in the logdir provided to TensorBoard on startup.
        Alternatively, you can provide another directory to allow file loading
from, through use of the --whatif-data-dir=PATH runtime parameter.
    • An indication if the model is a regression, binary classification or
      multi-class classification model.
    • An optional vocab file for the labels for a classification model. This file
      maps the predicted class indices returned from the model prediction into class
      labels. The text file contains one label per line, corresponding to the class
      indices returned by the model, starting with index 0.

      • If this file is provided, then the dashboard will show the predicted
        labels for a classification model. If not, it will show the predicted
        class indices.
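To make the expected Predict API output shapes and the vocab-file index correspondence concrete, here is a small sketch in plain Python (the numbers and labels are invented; only the shapes follow the description above):

```python
num_examples, num_classes = 2, 3

# Classification: a 2D float tensor -- one probability list per
# inferred example, covering every possible class index.
classification_output = [
    [0.7, 0.2, 0.1],  # probabilities for example 0, class indices 0..2
    [0.1, 0.3, 0.6],  # probabilities for example 1
]
assert len(classification_output) == num_examples
assert all(len(row) == num_classes for row in classification_output)

# Regression: a single float score per inferred example.
regression_output = [37.2, 64.8]
assert len(regression_output) == num_examples

# Vocab file: one label per line; line i names class index i.
vocab_labels = ["setosa", "versicolor", "virginica"]  # invented labels
predicted_index = max(
    range(num_classes), key=lambda i: classification_output[1][i])
assert vocab_labels[predicted_index] == "virginica"
```

With a vocab file supplied, the dashboard shows the label on line `predicted_index`; without one, it shows the bare index.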

Alternatively, the What-If Tool can be used to explore a dataset directly from
a CSV file. See the next section for details.
The information can be provided in the settings dialog screen, which pops up
automatically upon opening the tool and is accessible through the settings
icon button in the top-right of the tool.
The information can also be provided directly through URL parameters.
Changing the settings through the controls automatically updates the URL so that
it can be shared with others for them to view the same data in the What-If Tool.

All I have is a dataset. What can I do in TensorBoard? Where do I start?

If you just want to explore the information in a CSV file using the What-If Tool
in TensorBoard, just set the path to the examples to the file (with a “.csv”
extension) and leave the inference address and model name fields blank.
The first line of the CSV file must contain column names. Each line after that
contains one example from the dataset, with values for each of the columns
defined on the first line. The pipe character (“|”) delimits separate feature
values in a list of feature values for a given feature.
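An invented example of that CSV layout, parsed with the standard library (the column names and values here are made up for illustration):

```python
import csv
import io

# First line holds the column names; "|" separates the values inside
# a list-valued feature ("hobbies" below).
csv_text = """age,hobbies,label
34,reading|hiking,1
27,chess,0
"""

rows = list(csv.DictReader(io.StringIO(csv_text)))

# Split any pipe-delimited field into a list of feature values.
parsed = [
    {key: value.split("|") if "|" in value else value
     for key, value in row.items()}
    for row in rows
]
```

A single-valued feature stays a scalar string, while a pipe-delimited one becomes a list of values for that feature.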
In order to make use of the model understanding features of the tool, you can
have columns in your dataset that contain the output from an ML model. If your
file has a column named “predictions__probabilities” with a…