This team is concerned with all methodological and computational aspects of time series analysis.
Nowadays, more and more variables are measured repeatedly and at increasing sampling frequencies to gain information about the evolution of a system over time. Application fields include finance, epidemiology, genetics, engineering, and the environmental sciences, among many others. Important tasks include exploring the possibly complex dependencies among subsequent measurements of the same or of different variables in order to understand the data-generating process, describing trends or seasonal patterns, detecting structural breaks (change-points), predicting future outcomes, and quantifying the inherent uncertainties (volatility estimation).

There is great interest in realistic and sophisticated models for the many modern applications and the often complex mechanisms encountered in practice. This triggers the need for more refined procedures for statistical model building and prediction, and raises many computational issues. Moreover, statistical inference in such contexts cannot rely on asymptotic theory alone: it must be complemented with sound simulation experiments and computational tools for statistical inference, based for example on resampling.

So far, most existing algorithms and software packages for time series analysis are restricted to simple linear methods, or struggle with the high-dimensional and massive data sets now encountered in practice, particularly if oversimplified and overly stringent modelling assumptions are to be avoided. All of this leaves considerable room for improvement and future research.
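To make the idea of resampling-based inference for dependent data concrete, here is a minimal sketch of a moving-block bootstrap for the standard error of a time-series mean. The function name and all parameters are illustrative, not part of any package mentioned here; the sketch assumes blocks of consecutive observations are resampled so that short-range dependence is preserved, unlike an i.i.d. bootstrap.

```python
import numpy as np

def moving_block_bootstrap(x, block_len, n_boot, rng=None):
    """Moving-block bootstrap replicates of the sample mean of a series.

    Resampling whole blocks of consecutive observations preserves
    short-range serial dependence, which an i.i.d. bootstrap destroys.
    """
    rng = np.random.default_rng(rng)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = np.arange(n - block_len + 1)  # admissible block start indices
    means = np.empty(n_boot)
    for b in range(n_boot):
        chosen = rng.choice(starts, size=n_blocks, replace=True)
        series = np.concatenate([x[s:s + block_len] for s in chosen])[:n]
        means[b] = series.mean()
    return means

# Example: simulate an AR(1) series, then compute a bootstrap
# standard error for its sample mean.
rng = np.random.default_rng(0)
n = 500
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + eps[t]

boot_means = moving_block_bootstrap(x, block_len=20, n_boot=1000, rng=1)
print(boot_means.std(ddof=1))  # bootstrap standard error of the mean
```

The block length controls the trade-off between preserving dependence (longer blocks) and resampling variability (shorter blocks); choosing it well is itself a nontrivial methodological question of the kind alluded to above.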