Methodology

Overview

This section describes the research process that followed the scoping review and the definition of priority research questions.  It covers the development and execution of the literature searches, screening and coding of the articles retrieved, assessment of study quality, and definition of the strength of the evidence.  For information on the scoping review, the development of priority research questions, the structure of a narrative synthesis, and the knowledge translation approach, please see Section A.2: Project Background.

Search Process

Priority research questions were identified through the scoping review (see A.2: Project Background).  Following subsequent meetings with knowledge users, the review team narrowed the scope of the review to focus on six chronic diseases: asthma; cardiovascular disease; chronic obstructive pulmonary disease; diabetes; stroke; and renal disease.[1]

The review team worked with a qualified and experienced health librarian to develop a search strategy.  EMBASE, Medline, CINAHL, Web of Science, and PAIS were searched for material relevant to the priority research questions and with a focus on one of the chronic diseases listed above.  The exact search terms used differed in each database in order to take full advantage of variations in indexing methods.  Searches excluded material in languages other than English or French and were restricted to the time period 2005-2010.  Academic publications, white papers, and grey literature were all considered eligible for inclusion, provided an original research element was present. More details on the search process can be found in Appendix A. 

A total of 34,353 citations were retrieved and exported to a basic reference management system. Two members of the research team examined titles and abstracts to exclude material with no apparent relevance to the research questions.  Abstract rating criteria can be seen in Appendix B. After removal of non-relevant items, duplicates, and citations without abstracts, 4611 articles remained.

Screening and Coding

The review team imported the titles and abstracts of the 4611 remaining articles into Eppi-Reviewer, data analysis software designed for literature reviews.  Members of the review team read all abstracts and coded them for relevance, target population, chronic disease, and intervention design. This resulted in:

  • 3536 articles excluded for insufficient relevance to the priority research questions.
  • 591 articles flagged as technology reviews or health technology assessments.  These were retained for reference but excluded from the main body of the review.
  • 29 articles flagged as focusing on a pediatric and/or adolescent population; most examined asthma or type 1 diabetes.  These were retained for reference but excluded from the main body of the review.

After this process, 455 articles remained, and their full text was retrieved for additional data extraction.  In 60 instances, the full text could not be retrieved; these articles were not included in the present review.

Three reviewers divided the literature between themselves and used a standardized code sheet to extract data for the review.  Several rounds of multiple coding and reconciliation were used to achieve acceptable levels of inter-rater agreement.[2] If a reviewer was uncertain of an article’s coding, the article was brought to the group for discussion. See Appendix C for the code sheet.  This detailed full-text analysis led to the exclusion or reclassification of an additional 188 items. In these cases, there was consensus among the review team that (a) the item’s abstract had suggested a higher degree of relevance than the full text demonstrated; or (b) the item was a technology review or health technology assessment.
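For readers checking the numbers, the flow of articles through the screening stages can be reconciled with a few lines of arithmetic.  This is an illustrative tally only; the figures are taken directly from the text, and the variable names are ours, not part of the review workflow:

```python
# Reconcile the screening counts reported above (illustrative only).
abstracts_screened = 4611      # articles imported into Eppi-Reviewer
excluded_irrelevant = 3536     # insufficient relevance
excluded_tech_reviews = 591    # technology reviews / HTAs
excluded_pediatric = 29        # pediatric/adolescent populations

full_text_stage = (abstracts_screened - excluded_irrelevant
                   - excluded_tech_reviews - excluded_pediatric)
assert full_text_stage == 455  # articles sent for full-text retrieval

unretrievable = 60             # full text could not be obtained
excluded_on_full_text = 188    # excluded or reclassified after coding
included = full_text_stage - unretrievable - excluded_on_full_text
print(included)  # 207, matching the total in Table A.3.2
```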

Finally, articles and their code sheets were grouped by chronic disease in order to facilitate the writing process.  Within these groupings, each article was further classified as quantitative, qualitative, or systematic review. A detailed breakdown of the final number of included articles can be seen in Table A.3.2, below.  Please note that some studies yielded several articles; these article clusters were classified as ‘multi-part’ or ‘case series’ studies, depending on design. For this reason, the numbers given in the table do not correspond exactly with the study numbers provided in the main body of the Evidence Companion.

 

| Chronic Disease                                  | Quantitative | Qualitative | Systematic Reviews | Totals |
|--------------------------------------------------|--------------|-------------|--------------------|--------|
| Asthma                                           | 7            | 1           | 3                  | 11     |
| Cardiovascular Disease – Coronary Artery Disease | 11           | 0           | 2                  | 13     |
| Cardiovascular Disease – Heart Failure           | 27           | 1           | 11                 | 39     |
| COPD                                             | 9            | 3           | 3                  | 15     |
| Diabetes, Type 1                                 | 13           | 1           | 5                  | 19     |
| Diabetes, Type 2                                 | 59           | 3           | 13                 | 75     |
| Renal Disease                                    | 2            | 1           | 0                  | 3      |
| Stroke                                           | 9            | 1           | 1                  | 11     |
| Mixed or Unspecified                             | 3            | 0           | 18                 | 21     |
| Totals                                           | 140          | 11          | 56                 | 207    |

Table A.3.2: Publications Retrieved Through Initial Search (01/01/2005-12/31/2010)[3]

 

Assessing the Strength of the Evidence  

This review is a narrative synthesis rather than a formal meta-analysis, and no statistical weightings were attempted.  However, the review team did make use of a structured three-step process to assess the design and execution/reporting quality of the articles retrieved.  This provided reviewers with a systematic way of describing the strength of the evidence for the clinical outcomes reported in the literature, though it was not considered appropriate for non-clinical outcomes (e.g., patient uptake, cost savings).  These ratings were also used to compare the state of the literature within each chronic disease section.

 

Step 1: Oxford Levels of Evidence

The review team used the Oxford Centre for Evidence-Based Medicine’s (OCEBM) 2011 Levels of Evidence[4] to assign a level to each article included in the review. Articles were placed on a scale running from Level 1, considered the highest level of evidence, through to Level 5. See Row 4 of the OCEBM 2011 Levels of Evidence Table (‘Does this intervention help?’) for details. Levels are based primarily on study design, but make provision for incorporating quality of execution and reporting scores.

Step 2: Execution/Reporting Scores

The review team developed an instrument to assess quality of execution and reporting.  This was used in conjunction with the Oxford 2011 Levels of Evidence.  A low execution/reporting score resulted in a one-level downgrade. 
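The combination of the two steps can be expressed as a simple rule.  The sketch below is illustrative only; it assumes that Level 5 is treated as the floor of the scale, and the function and parameter names are ours:

```python
def adjusted_level(oxford_level: int, execution_score: str) -> int:
    """Apply the one-level downgrade for a low execution/reporting
    score.  Oxford Levels run from 1 (highest) to 5 (lowest); Level 5
    is assumed to be the floor.  Illustrative sketch only."""
    if execution_score == "Low":
        return min(oxford_level + 1, 5)
    return oxford_level
```

Under this rule, for example, a Level 2 study with a low execution/reporting score would be treated as Level 3 evidence.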

Development of the quality of execution/reporting instrument drew heavily on the framework presented in Zaza et al. (2000), although time and resource limitations necessitated a considerably abbreviated process.  Sampling method, drop-out rates, blinding, and interpretation of data were among the criteria considered. The instrument used can be seen in Appendix D.

Inter-rater agreement was tested several times.  Scores were initially inconsistent: while two of the three reviewers neared 100% agreement, agreement among all three was unacceptably low (55.6%).  The group discussed areas of difference and took steps to reconcile inconsistencies. Reviewers’ final ratings (Low/Moderate/Strong) were within one degree of each other in 100% of cases.

Authors were not contacted for additional information.  When an article did not provide enough information to determine whether a certain criterion was met, a score of zero was given for that criterion.  In practice, inadequate reporting rarely cost enough points to change an article’s category (Low/Moderate/Strong).  An unexpected advantage of this approach was that it revealed areas in which reporting was consistently inadequate.

Step 3: Synthesizing the Evidence

The strength of the evidence supporting a particular outcome was based on the Oxford Levels of the publications in which those outcomes were reported:

| If the highest Oxford Level for an outcome is . . .                                   | Then the following can be said of the strength of the evidence for that outcome: |
|---------------------------------------------------------------------------------------|----------------------------------------------------------------------------------|
| Level 5 studies, and/or two or fewer Level 4 studies                                  | “Evidence is insufficient”                                                       |
| One Level 3 study, and/or three or more Level 4 studies                               | “There is weak evidence . . .”                                                   |
| One Level 2 study, or two Level 3 studies, or one Level 2 study and one Level 3 study | “There is moderate evidence . . .”                                               |
| Two Level 2 studies, and/or three or more studies of Level 3 or higher                | “There is strong evidence . . .”                                                 |

If there are contradictions within the evidence base: studies that find benefit do not negate studies that find a negative effect, and vice versa.  However, inconsistent findings provide insight into the importance of differences in study design, study execution, and population characteristics.  The extent to which a finding appears to be generalizable should be explicitly stated.

Table A.3.3.2: Strength of Evidence
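The decision rules above can be sketched in code.  This is an illustrative reading of the table only: the review team applied these rules manually, the function name is ours, and edge cases the table does not address (e.g., a single Level 1 study) are not handled:

```python
from collections import Counter

def evidence_strength(oxford_levels):
    """Map the Oxford Levels (1-5) of the studies reporting an
    outcome to a strength-of-evidence statement.  Illustrative
    sketch of the rules in the table above."""
    n = Counter(oxford_levels)
    level_3_or_higher = n[1] + n[2] + n[3]
    if n[2] >= 2 or level_3_or_higher >= 3:
        return "There is strong evidence"
    if n[2] >= 1 or n[3] >= 2:  # covers 'one Level 2 and one Level 3'
        return "There is moderate evidence"
    if n[3] == 1 or n[4] >= 3:
        return "There is weak evidence"
    return "Evidence is insufficient"
```

For example, two Level 3 studies yield a “moderate” rating, while adding a third study of Level 3 or higher raises the rating to “strong”.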

 

Follow-Up Searches

The review team conducted a final follow-up search for highly relevant studies published between the end date of the initial searches (12/31/2010) and the completion of the draft report (07/09/2012).  This search was intended as a ‘last sweep’ to fill gaps in the literature and to incorporate any major developments that had occurred since our initial searches.  It was not as comprehensive as the initial searches and the data extraction process was abbreviated.

The follow-up search was run in the two databases that yielded the highest number of results in the initial search: EMBASE and Web of Science.[5]  The original search terms were left unchanged; however, the date range was set from 01/01/2011 to 07/09/2012.  4839 articles were identified in EMBASE and 4371 in Web of Science. As in the initial search, titles and abstracts were exported to a reference management system for de-duplication and relevance screening.  Due to time and resource limitations, items that were primarily qualitative or focused on a pediatric and/or adolescent population were excluded.  See Table A.3.4 for the distribution of the remaining items.

 

| Chronic Disease                                  | Quantitative | Systematic Reviews | Totals |
|--------------------------------------------------|--------------|--------------------|--------|
| Asthma                                           | 5            | 2                  | 7      |
| Cardiovascular Disease – Coronary Artery Disease | 9            | 0                  | 9      |
| Cardiovascular Disease – Heart Failure           | 26           | 11                 | 37     |
| COPD                                             | 12           | 2                  | 14     |
| Diabetes, Type 1                                 | 3            | 2                  | 5      |
| Diabetes, Type 2                                 | 49           | 7                  | 56     |
| Renal Disease                                    | 2            | 0                  | 2      |
| Stroke                                           | 6            | 1                  | 7      |
| Totals                                           | 112          | 25                 | 137    |

Table A.3.4: Publications Retrieved Through Follow-Up Search (01/01/2011-07/09/2012)[6]

Material from these items was incorporated into the review only if it filled a gap in the literature or substantially changed the review’s findings.  Reviewers provided citations and brief summaries of content in the appropriate sections of the Evidence Companion, but did not attempt a comprehensive analysis.

Involvement of Knowledge Users

An integral aspect of this narrative synthesis was the ongoing involvement of knowledge users.  The rationale behind this approach is described in Section A.2: Project Background.

The core group of knowledge users consisted of seven decision makers from British Columbia’s Regional Health Authorities.  Reviewers met with members of this core group roughly six times over the course of the study.  Attendance varied with availability.  On several occasions, follow-up meetings were held with members who were unable to attend.  At these meetings, knowledge users identified key issues in their work, defined the information needs associated with these issues, and worked with the review team to prioritize research questions and solidify the scope of the review.  The development and implementation of the research strategy was guided by their input.

The research team also sought to involve the wider community of knowledge users by holding two half-day knowledge translation (KT) events.  An open invitation was issued to policy makers, clinicians, patient advocacy groups, and all other interested parties.  To maximize attendance, the KT events were scheduled to coincide with the annual workshops of the BC Alliance for Telehealth Policy and Research (06/14/2011 and 06/21/2012).  Each was attended by roughly 30 people.  The structure was designed to be highly participatory: a brief presentation by the research team was followed by several hours of facilitated group discussion.

These events were highly valuable to the research team.  Participants discussed how the project might be relevant to their own needs, identified perceived omissions in the project design and content, and suggested further avenues to pursue.   KT events were particularly helpful in the area of dissemination.  Participants from diverse backgrounds defined their personal barriers to accessing information (time, format, etc.) and discussed how the findings of this review could best be made accessible.

 


[1] Cardiovascular disease was later broken down into studies focusing on heart failure and studies focusing on managing coronary artery disease, and diabetes was broken down into type 1 and type 2 diabetes.  These decisions were based on close examination of the literature retrieved.

[2] Average percentage agreement was calculated at 80% (range 70-91).

[3] The numbers given in this table correspond to the number of publications retrieved within each category.  The numbers given in the Evidence Companion refer to the number of distinct studies.  Mismatches should therefore be expected.

[4] For a comprehensive overview of this system, please refer to Jeremy Howick, Iain Chalmers, Paul Glasziou, Trish Greenhalgh, Carl Heneghan, Alessandro Liberati, Ivan Moschetti, Bob Phillips, and Hazel Thornton, “Explanation of the 2011 Oxford Centre for Evidence-Based Medicine (OCEBM) Levels of Evidence (Background Document)”, Oxford Centre for Evidence-Based Medicine, http://www.cebm.net/index.aspx?o=5653

[5] First-round yields of 1553, 1833, and 2292, respectively.

[6] The numbers given in this table correspond to the number of publications retrieved within each category.  The numbers given in the Evidence Companion refer to the number of distinct studies.  Mismatches should therefore be expected.
