Probabilistic Models 2

The session Probabilistic Models 2 will be held on Thursday, 2019-09-19, from 14:00 to 16:00 in room 1.011. The session chair is Katharina Morik.

Talks

14:20 - 14:40
Neural Control Variates for Monte Carlo Variance Reduction (265)
Ruosi Wan (Peking University), Mingjun Zhong (University of Lincoln), Haoyi Xiong (Baidu Inc.), Zhanxing Zhu (Peking University; Beijing Institute of Big Data Research)

In statistics and machine learning, intractable integrals are often approximated with unbiased Monte Carlo estimators, but in many applications the variance of the estimate is high. Control variate approaches are a well-known way to reduce this variance. These control variates are typically constructed from predefined parametric functions or polynomials, fitted using samples drawn from the relevant distributions. Instead, we propose to construct control variates by learning neural networks, to handle cases where the test functions are complex. In many applications, obtaining a large number of samples for Monte Carlo estimation is expensive, so training a neural network with the original loss function may result in severe overfitting. This issue has not been reported in the literature on control variates with neural networks. We therefore introduce constrained neural-network control variates to alleviate the overfitting issue. We apply the proposed control variates to both toy and real data problems, including a synthetic data problem, Bayesian model evidence evaluation, and Bayesian neural networks. Experimental results demonstrate that our method achieves significant variance reduction compared to other methods.
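The classical idea the abstract builds on can be illustrated with a plain (non-neural) control variate: subtract a correlated function with a known mean, scaled by a near-optimal coefficient. The sketch below is illustrative only, using a toy integrand exp(x) over Uniform(0, 1) with g(x) = x as the control variate; it is not the authors' neural method.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(n):
    # Plain Monte Carlo estimate of E[exp(X)], X ~ Uniform(0, 1)
    x = rng.uniform(size=n)
    return np.exp(x).mean()

def cv_estimate(n):
    # Control variate g(x) = x with known mean E[g] = 0.5
    x = rng.uniform(size=n)
    f, g = np.exp(x), x
    cov = np.cov(f, g)                      # 2x2 sample covariance matrix
    c = cov[0, 1] / cov[1, 1]               # near-optimal coefficient Cov(f, g) / Var(g)
    return (f - c * (g - 0.5)).mean()       # still unbiased for E[f]

# Compare empirical variance of the two estimators over repeated runs
plain = [mc_estimate(1000) for _ in range(200)]
cv = [cv_estimate(1000) for _ in range(200)]
print(np.var(plain) > np.var(cv))           # prints True: the control variate shrinks the variance
```

Because exp(x) and x are highly correlated on [0, 1], the corrected estimator has a much smaller variance while its mean stays at the true value e - 1.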

15:00 - 15:20
Bayesian Generalized Horseshoe Estimation of Generalized Linear Models (902)
Daniel F. Schmidt (Monash University; University of Melbourne), Enes Makalic (University of Melbourne)

Bayesian global-local shrinkage estimation with the generalized horseshoe prior represents the state-of-the-art for Gaussian regression models. The extension to non-Gaussian data, such as binary or Student-t regression, is usually done by exploiting a scale-mixture-of-normals approach. However, many standard distributions, such as the gamma and the Poisson, do not admit such a representation. We contribute two extensions to global-local shrinkage methodology. The first is an adaptation of recent auxiliary gradient-based sampling schemes to the global-local shrinkage framework, which yields simple algorithms for sampling from generalized linear models. We also introduce two new samplers for the hyperparameters in the generalized horseshoe model, one based on an inverse-gamma mixture of inverse-gamma distributions, and the second a rejection sampler. Results show that these new samplers are highly competitive with the No-U-Turn sampler for small numbers of predictors, and potentially perform better for larger numbers of predictors. Results for hyperparameter sampling show that our new inverse-gamma-based sampling scheme outperforms the standard sampler based on a gamma mixture of gamma distributions.
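The inverse-gamma mixture of inverse-gamma distributions mentioned above relates to a known representation of the half-Cauchy scales in the horseshoe prior: if lambda^2 | nu ~ Inv-Gamma(1/2, 1/nu) and nu ~ Inv-Gamma(1/2, 1), then marginally lambda ~ C+(0, 1). A minimal sketch of drawing horseshoe prior samples via this mixture (the setup is illustrative background, not the paper's sampler):

```python
import numpy as np

rng = np.random.default_rng(1)

def half_cauchy_via_invgamma(n):
    # nu ~ Inv-Gamma(1/2, 1): invert a Gamma(shape=1/2, rate=1) draw
    nu = 1.0 / rng.gamma(shape=0.5, scale=1.0, size=n)
    # lambda^2 | nu ~ Inv-Gamma(1/2, 1/nu): invert Gamma(shape=1/2, rate=1/nu),
    # i.e. numpy scale parameter = nu
    lam2 = 1.0 / rng.gamma(shape=0.5, scale=nu, size=n)
    return np.sqrt(lam2)

lam = half_cauchy_via_invgamma(200_000)
beta = rng.normal(0.0, lam)        # horseshoe draws: beta | lambda ~ N(0, lambda^2)
print(np.median(lam))              # approximately 1, the median of half-Cauchy(0, 1)
```

The appeal of this representation is that every conditional is (inverse-)gamma, so Gibbs updates for the hyperparameters stay in closed form.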

Reproducible Research
14:00 - 14:20
Fine-Grained Explanations using Markov Logic (937)
Khan Mohammad Al Farabi (University of Memphis), Somdeb Sarkhel (Adobe Research), Sanorita Dey (University of Illinois at Urbana-Champaign), Deepak Venugopal (University of Memphis)

Explaining the results of machine learning algorithms is crucial given the rapid growth and potential applicability of these methods in critical domains including healthcare, defense, autonomous driving, etc. In this paper, we address this problem in the context of Markov Logic Networks (MLNs), which are highly expressive statistical relational models that combine first-order logic with probabilistic graphical models. MLNs in general are known to be interpretable models, i.e., MLNs can be understood more easily by humans than models learned by approaches such as deep learning. However, at the same time, it is not straightforward to obtain human-understandable explanations specific to an observed inference result (e.g., a marginal probability estimate). This is because the MLN provides a lifted interpretation, one that generalizes to all possible worlds/instantiations and is not query/evidence specific. In this paper, we extract grounded explanations, i.e., explanations defined with respect to specific inference queries and observed evidence. We extract these explanations from importance weights defined over the MLN formulas that encode the contribution of formulas towards the final inference results. We validate our approach on real-world problems related to analyzing reviews from Yelp, and show through user studies that our explanations are richer than state-of-the-art non-relational explainers such as LIME.
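As background, MLN inference weights each possible world by the exponentiated sum of the weights of the formulas that world satisfies. A minimal, hypothetical sketch with two ground formulas over two atoms, computing a marginal by exhaustive enumeration (this illustrates plain MLN semantics, not the paper's explanation method):

```python
import itertools
import math

# Ground formulas as (weight, satisfied?) predicates over a world (a, b)
formulas = [
    (1.5, lambda a, b: (not a) or b),   # a => b, weight 1.5
    (0.5, lambda a, b: a),              # a, weight 0.5
]

def unnorm(a, b):
    # Unnormalized weight of a world: exp(sum of weights of satisfied formulas)
    return math.exp(sum(w for w, sat in formulas if sat(a, b)))

worlds = list(itertools.product([False, True], repeat=2))
z = sum(unnorm(a, b) for a, b in worlds)                 # partition function
p_b = sum(unnorm(a, b) for a, b in worlds if b) / z      # marginal P(b = True)
print(round(p_b, 3))                                     # prints 0.659
```

Exhaustive enumeration is only feasible for toy groundings; the point is that each formula's weight shifts probability mass toward the worlds satisfying it, which is the quantity that query-specific explanations can build on.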

14:40 - 15:00
Grouped Gaussian Processes for Solar Power Prediction (J26)
Astrid Dahl, Edwin V. Bonilla


15:20 - 15:40
Stochastic Gradient Hamiltonian Monte Carlo with Variance reduction for Bayesian inference (J27)
Zhize Li, Tianyi Zhang, Shuyu Cheng, Jun Zhu, Jian Li

