Decision Trees/Interpretability/Causality

The session Decision Trees/Interpretability/Causality will be held on Tuesday, 2019-09-17, from 16:20 to 18:00, in room 1.011. The session chair is Jilles Vreeken.

Talks

16:20 - 16:40
An Algorithm for Reducing the Number of Distinct Branching Conditions in a Decision Forest (486)
Atsuyoshi Nakamura (Hokkaido University), Kento Sakurada (Hokkaido University)

Given a decision forest, we study the problem of reducing the number of its distinct branching conditions without changing each tree's structure, while preserving classification performance. A decision forest with fewer distinct branching conditions not only has a smaller description length but can also be implemented more efficiently in hardware. To force the modified decision forest to preserve classification performance, we impose the condition that, for a given 0 ≤ σ < 1, the decision paths at each branching node remain unchanged for 100σ% of the given feature vectors passing through the node. Under this condition, we propose an algorithm that minimizes the number of distinct branching conditions by sharing the same condition among multiple branching nodes. According to our experimental results on 13 datasets from the UCI Machine Learning Repository, our algorithm achieved reductions of more than 90% in the number of distinct branching conditions.
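
To make the condition-sharing idea concrete, the following is a minimal per-feature sketch in Python. It is not the authors' algorithm: the helper names are invented here, ties among feature values are ignored, and the tolerance is applied to each node independently, without tracking how upstream changes alter which samples reach downstream nodes. For one feature, each node contributes an interval of thresholds that keeps at least a sigma fraction of its samples on their original side of the split; a greedy pass then picks a minimum set of shared values hitting every interval.

import numpy as np

def threshold_interval(values, theta, sigma):
    # Interval [lo, hi] of thresholds that keeps at least a sigma fraction
    # of the feature values reaching this node on the same side of the
    # split as the original threshold theta (split rule: value <= t).
    v = np.sort(np.asarray(values, dtype=float))
    k = int(np.floor((1.0 - sigma) * len(v)))  # samples allowed to flip
    left, right = v[v <= theta], v[v > theta]
    lo = left[-(k + 1)] if len(left) > k else -np.inf
    if len(right) <= k:
        hi = np.inf       # even flipping every right-routed sample is allowed
    elif k == 0:
        hi = theta        # conservative: the exact bound lies just below right[0]
    else:
        hi = right[k - 1]
    return lo, hi

def shared_thresholds(intervals):
    # Greedy minimum piercing set: scan intervals by right endpoint and
    # keep that endpoint whenever the current point misses an interval.
    # The chosen points become the shared branching thresholds.
    points, current = [], None
    for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
        if current is None or lo > current:
            current = hi
            points.append(hi)
    return points

Collecting one interval per node (for each feature) across the whole forest and replacing each node's threshold with the chosen point inside its interval yields the reduced set of distinct branching conditions; the larger sigma is, the more of the original routing is preserved on the given data.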

16:40 - 17:00
CaDET: Interpretable Parametric Conditional Density Estimation with Decision Trees and Forests (J07)
Cyrus Cousins, Matteo Riondato

17:00 - 17:20
A Flexible Probabilistic Framework for Large-Margin Mixture of Experts (J08)
Archit Sharma, Siddhartha Saxena, Piyush Rai

17:20 - 17:40
Adjustment Criteria for Recovering Causal Effects from Missing Data (122)
Mojdeh Saadati (Iowa State University), Jin Tian (Iowa State University)

Confounding bias, missing data, and selection bias are three common obstacles to valid causal inference in the data sciences. Covariate adjustment is the most pervasive technique for recovering causal effects in the presence of confounding bias. In this paper we introduce a covariate adjustment formulation for controlling confounding bias in the presence of missing-not-at-random data and develop a necessary and sufficient condition for recovering causal effects using the adjustment. We also introduce an adjustment formulation for controlling both confounding and selection biases in the presence of missing data and develop a necessary and sufficient condition for valid adjustment. Furthermore, we present an algorithm that lists all valid adjustment sets and an algorithm that finds a valid adjustment set containing the minimum number of variables; both are useful for researchers interested in selecting adjustment sets with desired properties.
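
For context, the covariate adjustment that the paper generalizes is the classical adjustment formula: when a covariate set Z is a valid adjustment set for the effect of X on Y (for instance, when Z satisfies the back-door criterion), the causal effect is identified as

P(y | do(x)) = Σ_z P(y | x, z) P(z).

The paper characterizes exactly when such an adjustment remains valid once some variables are missing not at random, where the plain formula above can no longer be estimated directly from the observed data.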


17:40 - 18:00
Shrinkage Estimators for Uplift Regression (595)
Krzysztof Rudaś (Warsaw University of Technology; Institute of Computer Science, Polish Academy of Sciences), Szymon Jaroszewicz (Institute of Computer Science, Polish Academy of Sciences)

Uplift modeling is an approach to machine learning which allows for predicting the net effect of an action (with respect to not taking the action). To achieve this, the training population is divided into two parts: the treatment group, which is subjected to the action, and the control group, on which the action is not taken. Our task is to construct a model which will predict the difference between outcomes in the treatment and control groups conditional on individual objects' features. When the group assignment is random, the model admits a causal interpretation. When we assume linear responses in both groups, the simplest way of estimating the net effect of the action on an individual is to build two separate linear ordinary least squares (OLS) regressions on the treatment and control groups and compute the difference between their predictions. In classical linear models, improvements in accuracy can be achieved through the use of so-called shrinkage estimators, such as the well-known James-Stein estimator, which has a provably lower mean squared error than the OLS estimator. In this paper we investigate the use of shrinkage estimators in the uplift modeling problem. Unfortunately, a direct generalization of the James-Stein estimator does not lead to improved predictions, nor does shrinking the treatment and control models separately. Therefore, we propose a new uplift shrinkage method where the estimators in the treatment and control groups are shrunk jointly so as to minimize the error in the predicted net effect of the action. We prove that the proposed estimator does indeed improve on the double regression estimator.
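
As a point of reference, the double regression baseline from the abstract is easy to state in code. The sketch below uses hypothetical helper names, and the scalar shrink factor is purely illustrative; it is not the paper's jointly optimized shrinkage. It fits separate OLS models on the treatment and control groups and predicts their difference:

import numpy as np

def ols_fit(X, y):
    # Ordinary least squares with an explicit intercept column.
    Xb = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return beta

def uplift_predict(beta_t, beta_c, X, shrink=1.0):
    # Predicted net effect tau(x) = f_T(x) - f_C(x); shrink < 1 pulls the
    # predicted uplift toward zero (the paper instead chooses the joint
    # shrinkage so as to minimize the error of this predicted difference).
    Xb = np.column_stack([np.ones(len(X)), X])
    return shrink * (Xb @ beta_t - Xb @ beta_c)

Usage: fit beta_t on the treatment group and beta_c on the control group, then call uplift_predict on new feature vectors; when group assignment is random, the result estimates the net effect of the action conditional on the features.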

