Abstract
Timely availability of good-quality data from clinical studies is both necessary to meet current standards of good clinical practice and critical to successful clinical research. In-process data quality control meets the requirements of data validation while a study is active and allows immediate correction of observed errors and other problems. To accomplish in-process data quality control, all departments involved must synchronize their work: patient data must be made available as soon as possible after assessment; data must be entered directly (locally or centrally); textual data such as unwanted events and remarks must be coded (with computer assistance) using well-accepted dictionaries; and data clarification questions must be generated (with computer assistance). A timely response requires sufficient resources, such as computing power and adequately trained personnel.
This approach also provides early warning of systematic errors, such as protocol noncompliance, and allows their prevention in subsequent activities. It is most effective when emphasis is placed on the preparatory phase of a study, minimizing the risk of problems once the study is active. Thus, case report forms (CRFs), the database, and validation criteria must be well defined, programmed, and tested beforehand. The environment necessary to accomplish in-process data quality control, the process itself, and its positive effects on data quality and reporting time are discussed.
