Data Mining and Uncertainty Quantification (DMQ)
Advances in sensor technology and high-performance computing enable scientists to collect and generate extremely large data sets, often measured in terabytes or petabytes. These data sets, obtained through observation, experiment, or numerical simulation, are not only very large but also highly complex in structure. Exploring them and discovering patterns and significant structures within them is a critical and highly challenging task that can only be addressed in an interdisciplinary framework combining mathematical modeling, numerical simulation and optimization, statistics, high-performance computing, and scientific visualization.
Besides size and complexity, data quality is another crucial issue in guaranteeing reliable insights into the physical processes under consideration. The associated demands on the quality and reliability of experiments and numerical simulations call for models and methods from mathematics and computer science that can quantify uncertainties in large amounts of data. Such uncertainties may stem, for example, from measurement errors, incomplete knowledge of model parameters, or inaccuracies in data processing.
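As a minimal illustration of what quantifying such uncertainties can mean in practice, the following sketch propagates an uncertain model parameter through a simple model by Monte Carlo sampling. The decay model, the nominal parameter value, and the assumed error distribution are hypothetical and chosen purely for illustration; they are not taken from the group's work.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical model: an observed quantity depends on an uncertain
# parameter k, e.g. y(t) = exp(-k * t) for a simple decay process.
def model(k, t=1.0):
    return np.exp(-k * t)

# Assume k is only known up to measurement error:
# nominally k = 0.5, with a standard deviation of 0.05 (illustrative values).
k_samples = rng.normal(loc=0.5, scale=0.05, size=100_000)

# Propagate the parameter uncertainty through the model (Monte Carlo sampling).
y_samples = model(k_samples)

# Summarize the resulting uncertainty in the model output.
mean = y_samples.mean()
lo, hi = np.percentile(y_samples, [2.5, 97.5])
print(f"E[y] ~ {mean:.4f}, 95% interval ~ [{lo:.4f}, {hi:.4f}]")
```

The same idea scales to more realistic settings, where the forward model is an expensive simulation and the sampling is distributed over a high-performance computing system.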
The Data Mining and Uncertainty Quantification group, headed by Prof. Dr. Vincent Heuveline, started work in May 2013. In this group we use stochastic mathematical models, high-performance computing, and hardware-aware computing to quantify the impact of uncertainties in large data sets and/or the associated mathematical models, and thus help to establish reliable insights in data mining. The current fields of application are medical engineering, biology, and meteorology.