A N (Ageeth) Rosman

Erasmus Medical Center, Netherlands

Title: The inter-observer reliability of a validated professional checklist between a client self-report version and a professional medical interview

Abstract

Self-report screen-and-advice instruments can be used to expedite the first antenatal visit. Their outcome, in terms of a risk profile and suggested actions, alerts the caregiver and enables efficient and accountable risk management. A key issue is the inter-observer reliability of the outcomes, which is a prerequisite for the efficient use of any checklist. In this study, we compared the outcomes of standard application of a validated, professional interview-based checklist with those of a client self-report adaptation of the same list. We established inter-observer agreement and examined whether disagreement was associated with particular risks.

Pregnant women attending a first antenatal visit at one midwifery practice in Rotterdam, the Netherlands, were asked to complete the self-report checklist (the R4U) on risks for preterm birth, small for gestational age (SGA), low Apgar score and congenital anomalies at home and to return it before the appointment. After the R4U was returned to the researchers, patients were informed that the midwife in charge of the first visit would ask the same questions, unaware of the answers the patient had given. Agreement of 90% or more was defined as reflecting equivalence for practical use (interchangeability). At the outset, some tested variables were judged to require face-to-face confirmation even if concordance was perfect. The primary outcomes were the observed inter-observer agreement, with and without chance correction, and accuracy.

The study showed heterogeneous per-domain and per-item inter-observer (patient vs. professional) reliability. In some domains of agreed high relevance and impact, agreement was unacceptably low in both absolute and relative terms, so validity could not simply be assumed.
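The "agreement with and without chance correction" reported above corresponds to raw percent agreement and a chance-corrected coefficient such as Cohen's kappa. As a minimal sketch of how these two measures relate, the following uses hypothetical yes/no answers (not the study's data) from a patient self-report and a midwife interview on the same item:

```python
from collections import Counter

def percent_agreement(a, b):
    """Raw agreement: fraction of paired items with identical answers."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two raters on the same items."""
    n = len(a)
    po = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    # Expected agreement if both raters answered independently
    # at their observed marginal rates.
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (po - pe) / (1 - pe)

# Hypothetical answers: patient (self-report) vs. midwife (interview).
patient = ["yes", "yes", "no", "no", "yes", "no", "no", "no", "yes", "no"]
midwife = ["yes", "no",  "no", "no", "yes", "no", "yes", "no", "yes", "no"]

po = percent_agreement(patient, midwife)   # 0.8 — below the 90% threshold
kappa = cohens_kappa(patient, midwife)     # lower than po: chance-corrected
```

A high raw agreement can coexist with a low kappa when one answer dominates, which is one reason the study reports both with and without chance correction.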