Participants
We contacted 860 people registered in the iEtD database. The project team sent introductory emails and two reminders in the summer of 2017. Eighty-one registrants responded (9.5% response rate); of these, 61 (7% of those contacted) were excluded because they had not completed an entire framework for a real group decision context or for educational purposes. The remaining 20 participants were considered iEtD users and were invited to interviews; eight agreed to be interviewed.
Participants worked in international or national organizations that develop guidelines (e.g. the World Health Organization, the Australian National Health and Medical Research Council). They used the iEtD mostly for guideline development, but also for educational purposes (i.e., training workshops for panels). Two participants reported expertise in both the GRADE approach and the iEtD; two had attended workshops before starting to use the iEtD, and four had not received any training. All participants were methodologists serving on guideline technical teams, not decision-makers or panelists. Most had sole responsibility within their teams for completing GRADE-EtD frameworks using the iEtD solution.
Main findings
We organized the findings according to users’ general impressions of the iEtD and the specific tasks users carry out with the tool.
Participants’ general impressions
Overall, participants had positive experiences working with the iEtD. They gave several reasons for this, such as the tool’s simplicity, its ease of use, and the fact that it was free. Users liked the way the iEtD is organized, felt that the tool was designed for someone like them, and considered it useful for their organization(s). Regarding interaction with panelists and other members of the guideline development group, they perceived the iEtD as a logical and easy-to-follow tool during meetings:
“Yes. It was really helpful both for the people compiling the evidence-to-decision framework, but also as a way [for us] to share it with the people making the decisions. So…we shared them with the guideline groups, and they used the decision-making frameworks as they were presented in this format”.
Nevertheless, some drawbacks emerged from the interviews. Some participants said that, given the amount of information and the type of evidence available, they had to do additional work to synthesize and present the research evidence (e.g. prepare new tables). Participants working in large groups found it difficult to coordinate framework completion across the group:
“I think the difficulty is using it in a group situation. I think you have to have a very motivated team who have all been trained in using the online version to be able to really use it well. So I think the challenge for us is that we had a big group with quite a number of different people, often from different departments, all developing their evidence profiles. So, lots of different people putting the evidence in. So, if it's a very small team I can see that it's much easier to use the online version compared to a larger team of people who may not be able to use it”.
Getting help to use the tool
Two participants said they would have liked access to online help or support; however, this did not stop them from using the iEtD. Although the help files were one of the least commonly used sections of the iEtD, as reported by participants, some considered them useful:
“Well, first of all the little drop pin boxes that give you instructions are very helpful. So we kept referring back to those”.
Creating GRADE-EtD frameworks
Formulating the question and background
Participants did not report any difficulties with the PICO question section and said that its structure was clear.
Assessment
This section, which includes all the different criteria to be considered by a panel, was the most used section of the iEtD. However, not all teams used all of the criteria, for example when conducting rapid health technology assessments that had no formal health economic analysis. Participants’ general impressions about this section and its structure were positive. Moreover, they appreciated being able to distinguish between research evidence and additional considerations by placing them in separate cells.
Some participants criticized aspects of this section, although we observed that these criticisms were often coupled with basic misunderstandings. For instance, some participants demonstrated a poor understanding of some of the criteria (described below), of the purpose of some features (e.g. the rationale behind the Additional Considerations cells), and, more fundamentally, of the GRADE approach for formulating recommendations that underlies the iEtD’s structure and functionality.
For seven criteria in the Assessment section (Problem, Certainty of the evidence, Balance of effects, Resources required, Cost-effectiveness, Acceptability, and Feasibility), participants reported only positive experiences. For three criteria (Values, Desirable and undesirable effects, and Equity), participants had mixed experiences that we describe below.
Values: Some participants found the term “Values” (how people value outcomes) in the Assessment section menu confusing, and others found the signaling question confusing (Is there important uncertainty about or variability in how much people value the main outcomes?). However, this difficulty did not stop them from using the tool, and no other major problems were identified.
“On the ‘values’, the options are ‘important uncertainty or variability’, ‘possibly important uncertainty or variability’, ‘probably no important uncertainty or variability’, and ‘no important uncertainty or variability’. But the question was: ‘Is there important uncertainty about, or variability in, how much people value the main outcome?’ That is a hard question, and everyone had trouble with reading it.”
“The way the question is phrased is the variability and how much people value it; nearly everybody had problems understanding what that means.”
Desirable and undesirable effects: Although overall feedback was positive, participants consistently expressed a wish to have desirable and undesirable effects in a single section rather than in two separate sections.
Equity: Most participants reported favorable experiences with this criterion. However, some reported no clear understanding of its definition:
"Ah, I think we had trouble with the definition around "equity". The way that is written and defined… and how you define is... it wasn't nicely articulated so people had often difficulties with it. Otherwise, most things were reasonable. "
They pointed out that there is no information about whether this criterion refers to the intervention or the comparison, and that at the time of this judgment the panel does not yet know the direction or strength of the recommendation. To solve this conundrum, two participants suggested moving the recommendation to right before these three criteria. Some of these comments reflected a suboptimal understanding of the GRADE approach.
"I would definitely put "recommendation" under the "desirable" and "undesirable effects". In fact, if it were up to me, I would do desirable effects, undesirable effects, and after that I would put the draft recommendation. And then I worked through values"
Conclusions section
Overall experiences with the Conclusions section were positive.
Embedding tabulated summaries
Some participants found it difficult to insert tables (e.g. Summary of Findings tables) to present the research evidence within the different criteria. This led them to stop using the iEtD and move to Excel.
"So it was an easy way for me to use the tool for tables, to do my own tables. And it was too much work and it was not fitting because we couldn't really... I'm trying to remember exactly what the issue was but I think the problem is that any study... So I decided to frame the table, the evidence-to-summary table as GRADE does, so starting from the outcomes. But then for the same kind of outcome we did too many different studies recording the outcomes in different ways. So even for the same kind of outcome I couldn't put anything. So eventually I decided to use the Excel."
Use of the Export-to-Word function
The iEtD was designed to let users complete GRADE-EtD frameworks online and interactively. The tool was intended to allow people to create tailored templates for making decisions or recommendations, as well as interactive end-user summaries. However, such online use was not common among the participants we interviewed. Many reported completing their work with the GRADE-EtD frameworks in Word rather than online: they logged on to the tool, created a framework, and exported it as a Word document. Overall, participants reported that other members of the guideline development group were satisfied with using the iEtD just as a guide to structure work that then continued in Word.
“So for both of those guidelines we downloaded the sheets and used them in Word format. So we used the tool as a template and that's what we used for both guideline meetings, to fill-in for quite a number of different PICO's”.
"But there are always people that are not confident with online tools. So I asked them, please use the Word file if you want to send me comments"
The main reasons participants preferred to work in Word were a lack of confidence in using a new tool among members of the guideline development group, and their familiarity with Word and its perceived ease of use.
“It was easier to get everyone else in the team to use Word than to use it online”.
“People tended to find [it] very difficult to... they were all experts in the field but they are not necessarily familiar with that sort of platform”
"Honestly there were also technical issues that I had to face. Not everyone is so comfortable working on these things"
This offline use of the iEtD implied extra work for the person in charge of completing the frameworks. One participant said: “I sent them, together with an instruction document explaining how to use the iEtD. Explaining what I did, what we did, and the way they would have to interpret what I did”.
Exporting frameworks
We asked participants about their experience with the (vertical) Word document format that the iEtD generates when a framework is exported. We also showed them an alternative, horizontal format from the GRADEpro system, and asked them to share their experiences with formats they had produced and tailored themselves. Most participants perceived the horizontal format as clearer and more logical, and deemed the vertical format exported from the iEtD harder to read. Moreover, participants said that the vertical format demanded a lot of further formatting once it was in Word:
"It is repetitive; you see the same tables several times...messy"
"It is not friendly,…., and requires too much editing to be able to generate a document that is easily usable and readable by decision makers"
“I think, while the information is the same, it doesn't feel like I can see things so well, but I think it's just because it's all... it feels like it's more text, which is rare because it's the same text, but it's not as appealing to me.”
Tailoring frameworks
Some participants tailored the frameworks. It was common for people to translate and modify the wording, particularly of the judgment options.
“I think it was felt that it was too... introduced too much uncertainty, to have the options as they are... some of them we took out the "various" option, so that we just had, "don't know", "no", "probably no", "probably yes", "yes".”
Participants viewed tailoring as a valuable functionality. It allowed them to adapt the frameworks to their specific needs, such as limiting the number of criteria for rapid health technology assessments or changing the order of the criteria to improve understanding.
Motivations to use the iEtD
Despite some difficulties, participants still expressed motivation to use the iEtD. Some chose it because it is part of the GRADE approach, with which they were familiar. Attendance at iEtD workshops was also highlighted as a facilitator. Participants said that the systematic and comprehensive structure of the iEtD was a crucial factor in their decision to use it. They considered it a suitable tool for producing systematic and transparent guidelines, as it provided a comprehensive overview of the different factors involved. Most participants said they would like to receive further training on the tool.
“We decided to take the iEtD because it was a good match between the dimensions considered in the framework, to assess the effectiveness and feasibility”
"I went through the criteria for the evidence-to-decision framework and I found that it fit quite well with what I was looking for, a kind of framework or methodological system that could allow me to include everything. So criteria-like values, equity, feasibility, acceptability, were all criteria that we were considering in our guidelines. So that was eventually the reason...."
In addition, participants said they chose the iEtD partly because of the online voting function that could be used during meetings.
"Well, we wanted to do real time voting with in the panel meeting and so because that feature was available, and because it was easy to migrate from MAGIC into this, we decided to go with it"
Using iEtD in guideline meetings
In the context of guideline meetings, voting was one of the features participants valued most, and it received positive feedback from most of them. However, the ways groups used the voting function varied. For instance, some collected votes manually outside the iEtD, then compiled the results and entered them into the system.
"We did the voting two ways. We started by asking each panel member to go in and register their vote and comment, and that provided a baseline. We extracted all that information and circulated it to the whole group. Then, we met and put the information up on the screen --and did it live--and we read through and amended it, and then we all voted".
“…only one person in the room had the iEtD framework opened, projected on the screen, and counted out the votes and recorded them in the iEtD”.
A few participants reported connectivity issues when working online.
"The system could not take all ten of us working on the same iEtD, at the same time voting in the same way, so we stopped doing that”
"When we used it live, when everyone was online at the same time and they were all voting together it kept crashing, so what we actually moved into was... we printed the relevant document note, extracted the relevant document, and tables and headings, and send them to people on an email, and they completed the framework. They send it back to us"
Participants’ suggestions
Two main suggestions for improvement emerged from the interviews: 1) provide more guidance, including examples, about what type of information should be included under each of the criteria; and 2) improve the wording of some domain headings and signaling questions, and provide more detailed definitions. We compiled a list of problems and potential suggestions for further improvement of the iEtD tool (Table 1).