Title: New tools for analyzing doubly truncated data
Authors: Carla Moreira - University of Minho (Portugal) [presenting]
Jacobo de Uña-Álvarez - University of Vigo (Spain)
Rosa Crujeiras - University of Santiago de Compostela (Spain)
Abstract: The analysis of doubly truncated data is relevant in epidemiological applications in which observation of the lifetime of interest is restricted to events occurring between two specific calendar dates. This implies that small and large lifetimes are less likely to be observed, so the estimators must be properly corrected to avoid biased estimates that may lead to wrong conclusions. Given this motivation, the interest of the scientific community in the phenomenon of random double truncation has grown significantly, particularly in fields such as Epidemiology and Survival Analysis, and this has motivated the development of software routines that facilitate a proper analysis of this kind of data. DTDA, built in 2010, was the first R library to disseminate methods for the analysis of doubly truncated data. Since new methods devoted to double truncation are constantly being developed, there is a pressing need for the software to keep pace with the statistical methodology. In addition to the algorithms already implemented for estimating the cumulative distribution function, the renewed DTDA package provides kernel smoothing methods for estimating the density and hazard functions, including bandwidth selection procedures for the density estimator. Several real datasets from different fields are also included.
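To illustrate the kind of correction the abstract refers to, the following is a minimal Python/NumPy sketch of the classical Efron–Petrosian self-consistency iteration for the NPMLE of the lifetime distribution under double truncation. It is not the DTDA package's own R code; the function name and interface are hypothetical, chosen only for illustration. Each subject contributes an observed lifetime `x[i]` together with truncation limits `u[i] <= x[i] <= v[i]`, and the fixed-point update downweights lifetimes that many truncation windows could have captured.

```python
import numpy as np

def efron_petrosian(x, u, v, tol=1e-8, max_iter=1000):
    """NPMLE of a lifetime distribution under random double truncation.

    x : observed lifetimes; u, v : left/right truncation limits, u <= x <= v.
    Returns the point masses f (aligned with x), the sorted lifetimes,
    and the estimated CDF evaluated at those sorted lifetimes.
    """
    x, u, v = map(np.asarray, (x, u, v))
    n = len(x)
    # J[i, j] = 1 if subject i's truncation window would allow X_j to be seen
    J = ((u[:, None] <= x[None, :]) & (x[None, :] <= v[:, None])).astype(float)
    f = np.full(n, 1.0 / n)            # start from the empirical distribution
    for _ in range(max_iter):
        F = J @ f                      # F_i = P(U_i <= X <= V_i) under current f
        f_new = 1.0 / (J.T @ (1.0 / F))  # self-consistency equation f_j = 1 / sum_i J_ij / F_i
        f_new /= f_new.sum()           # renormalise to a probability vector
        if np.max(np.abs(f_new - f)) < tol:
            f = f_new
            break
        f = f_new
    order = np.argsort(x)
    return f, x[order], np.cumsum(f[order])
```

As a sanity check, when the truncation limits are non-binding (u = -inf, v = +inf) the iteration reduces to the ordinary empirical CDF, with mass 1/n on each observation.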