Harmonisation & mental health research

The volume of data collected in British cohort and longitudinal studies provides unique opportunities to answer key questions about mental health and wellbeing. The potential for larger sample sizes, cross-study comparisons, and fuller use of existing data has led to growing interest in analysing data across multiple studies (Fortier et al. 2017). However, variation in the measures collected, together with differences in study design, sampling and data collection, remains a barrier to comparing and integrating data from multiple cohorts (Fortier et al. 2017).

Harmonisation is the process that allows for, or improves, the comparability of related measures collected by separate studies, in order to facilitate cross-study data integration. Harmonisation projects can be:

  • Prospective: in which the same or equivalent measures and procedures are planned to be included across studies from the outset.
  • Retrospective: in which researchers generate “inferentially equivalent” (Fortier et al. 2017) content across studies in which data has already been collected. A variety of different strategies can be used to achieve this.

The platform aims to facilitate prospective and retrospective harmonisation by providing information about the mental health measures used in British cohort and longitudinal studies. This page describes harmonisation projects that draw on these mental health and wellbeing measures.


PROJECT: HARMONISATION OF MENTAL HEALTH MEASURES IN BRITISH BIRTH COHORTS
Centre for Longitudinal Studies, UCL
George Ploubidis, Praveetha Patalay & Eoin McElroy

Common mental health problems such as anxiety and depression make a substantial contribution to the global burden of disease. Such difficulties often emerge early in childhood and demonstrate considerable continuity across the life-course. Worryingly, recent evidence has suggested that mental health problems are increasing at the population level. In order to address this considerable public health concern, it is important to understand trends and risk factors that are universal across development (i.e. age effects) and those that are specific to individuals who were born at particular points in history (i.e. cohort effects). The British birth cohorts represent a particularly powerful data resource in this regard, as they contain a wealth of information on the mental health of the UK population across multiple generations.

Although a multitude of questionnaires have been used to assess the mental health of participants at different developmental periods, comparing trends both within and across cohorts is not as straightforward as one might imagine. Three main issues hinder such comparisons:

  • Content: Different questionnaires assess different symptoms. Even when similar symptoms are assessed, the wording of questions often differs, which may impact interpretation.
  • Scale: Questionnaires are not always consistent in how they ask respondents to rate their responses (e.g. Likert, binary, visual analogue). The time-frame of reference may also differ (e.g. symptoms rated over the past week, the past month, or in general). Again, this impedes direct comparison and may influence interpretation and responses (a minimal recoding sketch follows this list).
  • Reporter: Different reporters may provide information depending on the age of the individual in question. For example, in the case of young children, parent or teacher-proxy reports are often used, whereas self-reports are typically favoured from adolescence onwards.
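
To illustrate the "Scale" issue, the sketch below recodes two hypothetical low-mood items, one on a 3-point Likert scale and one binary, to a common "symptom present / absent" format so they can be compared across studies. The item names, response labels and cut-point are assumptions made for illustration only, not the project's actual coding decisions.

```python
import pandas as pd

# Illustrative responses: one study uses a 3-point Likert item, another a yes/no item.
df = pd.DataFrame({
    "felt_low_likert": [0, 1, 2, 1, 0],   # 0 = not true, 1 = somewhat true, 2 = certainly true
    "felt_low_yesno":  [0, 1, 1, 0, 0],   # 0 = no, 1 = yes
})

# Harmonise both to 0/1: treat any endorsement above "not true" as symptom present.
# The cut-point is an analytic choice and would need substantive justification.
df["felt_low_likert_harmonised"] = (df["felt_low_likert"] > 0).astype(int)
df["felt_low_yesno_harmonised"] = df["felt_low_yesno"]

print(df)
```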

In order to facilitate maximum comparability of mental health measures across and within the British birth cohorts, this project had the following four aims:

  1. Document the measurement properties of the mental health questionnaires in the British birth cohorts.
  2. Where the same questionnaire (e.g. SDQ) has been administered across multiple assessment waves and/or cohorts, explore the measurement equivalence of these scales.
  3. Where different measures have been administered across multiple assessment waves and/or cohorts, attempt to retrospectively harmonise these instruments.
  4. Compare the reliability and utility of maternal and teacher-proxy reports of child mental health.


The harmonisation process

A full resource report in which we describe our procedures and results is available at https://www.closer.ac.uk/about-the-research-we-fund/data-harmonisation/

In order to retrospectively harmonise the data in the five British birth cohorts (NSHD, NCDS, BCS70, ALSPAC, MCS), two independent raters systematically inspected the content of each questionnaire administered in the cohorts. Unique codes (reflecting core content) were assigned to each individual question, and inter-rater reliability was explored (approximately 89% agreement). These codes were used to produce a pool of overlapping items that had potential for harmonisation. Because there were too many permutations of items to test (the number of harmonisable items varies depending on the cohorts and assessment waves of interest), we inspected the measurement equivalence of selected permutations in childhood and adulthood.
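
As an illustration of the inter-rater step described above, the sketch below computes simple percentage agreement (the statistic reported as approximately 89%) and Cohen's kappa, which additionally corrects for chance agreement, for two raters' content codes. The codes and items are invented for illustration; this is not the project's code.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical content codes assigned to the same six items by two independent raters.
rater_1 = ["low_mood", "worry", "irritability", "sleep", "worry", "low_mood"]
rater_2 = ["low_mood", "worry", "irritability", "sleep", "anhedonia", "low_mood"]

# Simple percentage agreement: share of items given the same code by both raters.
agreement = sum(a == b for a, b in zip(rater_1, rater_2)) / len(rater_1)

# Cohen's kappa corrects for the agreement expected by chance.
kappa = cohen_kappa_score(rater_1, rater_2)

print(f"Percentage agreement: {agreement:.0%}; Cohen's kappa: {kappa:.2f}")
```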

See also:
  • Project page
  • Workshop presenting the project
  • Harmonisation Items