 She has a specific interest in sensor fusion using inertial sensors and magnetometers for human motion capture and for indoor localisation.

CS forum is a seminar series arranged at the CS department, open to everyone free of charge. Coffee is served before the talk begins. Where: Computer Science building. Registration: free entrance.

The event language is English.

Altogether, the equation tells us that, given a probability distribution over the outcomes and an account of how much we desire each outcome, we can pick the decision that does best on average. But the details behind how we model and make predictions about the weather can be immensely complex.

We invest an awful lot of money and time putting up satellites and developing algorithms to find out if, when and where it will rain, and the result is an increasingly useful and precise probabilistic forecast of rain in the next week, day or hour. But the decision itself is easy to make: I multiply the function describing how much I like each outcome by the probability of that outcome, and then I take the action that does best.
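The "multiply utility by probability, then pick the best action" recipe above can be sketched in a few lines. This is a minimal illustration, not from the talk itself; the rain probability and the utility numbers are invented for demonstration.

```python
# Illustrative expected-utility decision: carry an umbrella or not,
# given a probabilistic rain forecast. All numbers are made up.

p_rain = 0.3  # forecast probability of rain

# utility[action][outcome]: how much we like each (action, outcome) pair
utility = {
    "carry umbrella": {"rain": 0.0, "dry": -1.0},   # mild nuisance either way
    "leave it home":  {"rain": -10.0, "dry": 0.0},  # getting soaked is bad
}

def expected_utility(action):
    """Weight the utility of each outcome by its probability and sum."""
    return p_rain * utility[action]["rain"] + (1 - p_rain) * utility[action]["dry"]

best = max(utility, key=expected_utility)
print(best)  # with p_rain = 0.3, carrying the umbrella wins on average
```

All of the modelling difficulty is hidden inside `p_rain`; once the forecast is in hand, the decision is a simple weighted average.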

Machine Learning is good at predicting such simple things. When the action space is large, and the interactions of all the things within it are much more complicated, you need to use probabilities to address the ensuing uncertainty. Most Machine Learning methods are basically number mappers. You input some numbers to a box, run them through some parameters in the middle and then push the resulting numbers out the other end.
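A "number mapper" in this sense can be sketched as a single parameterised layer; the sketch below uses an arbitrary random matrix as the parameters "in the middle" and is purely illustrative.

```python
# A minimal "number mapper": numbers in, parameters in the middle, numbers out.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))   # the parameters "in the middle" (placeholder values)
b = np.zeros(2)

def box(x):
    """Map a 3-vector of inputs to a 2-vector of outputs."""
    return W @ x + b

y = box(np.array([1.0, 2.0, 3.0]))
print(y.shape)  # a 2-vector comes out the other end
```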


Machine learning is all about adapting the parameters in the middle so that the relationship between what goes in and what comes out is satisfactory. For example, X could be some image and Y could be some label or tag; if the tag is the right one for the image, the parameters in the middle are doing their job. But to forecast something much more complicated, to model a whole set of points in a complex system, we need to keep a distribution of all the plausible parameters in a probabilistic model.

We get examples of Xs and Ys and ask: what parameters can plausibly explain the relations between them?
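One way to make "keeping a distribution of all the plausible parameters" concrete is to compute a posterior over a single parameter on a grid. The sketch below fits the slope of a noisy line; the data, noise level, and flat prior are all assumptions made for illustration.

```python
# Given example (x, y) pairs from a noisy line y = w*x + noise, compute a
# posterior over the slope w on a grid instead of a single best-fit value.
import numpy as np

xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = np.array([0.1, 1.9, 4.2, 5.8])   # roughly y = 2x (invented data)
sigma = 0.5                            # assumed observation noise

w_grid = np.linspace(0.0, 4.0, 401)   # candidate slopes
# Log-likelihood of each candidate slope under a Gaussian noise model
log_lik = np.array([-0.5 * np.sum((ys - w * xs) ** 2) / sigma**2 for w in w_grid])
post = np.exp(log_lik - log_lik.max())
post /= post.sum()                     # normalised posterior (flat prior)

w_mean = np.sum(w_grid * post)
print(w_mean)  # plausible slopes cluster near 2
```

Instead of one answer, `post` assigns a weight to every candidate slope, which is exactly the distribution over plausible parameters the text describes.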

## Probabilistic Modelling and Reasoning


Lecture 1 (part 3): Introduction to Probabilistic Modelling and Machine Learning

