To identify current policy-relevant lessons on a sustainable CE and propose a research agenda, the core research team (as specified in the authors’ contributions) conducted an expert survey with 54 key CE academics in a Delphi-inspired process for creating a research agenda together with the survey participants54. This process took place online between April and October 2020 and involved a series of methodological steps, which are described in the following sections. Expert anonymity was ensured in steps 2, 3 and 4, whereas participants interacted in step 5, which included group discussions enabling a unified definition of the research agenda. This Delphi-inspired process is well suited to identifying and prioritizing problems and actions because it enables expert-based brainstorming to consolidate and evaluate a particular problem or question55–57. Expert-based brainstorming and discussion formats have also been applied successfully to develop comprehensive, policy-relevant research agendas58,59.
Step 1: Identification and selection of experts
The study relies on experts who have a deep understanding of the topic under analysis, rather than on statistical representation56. As suggested by the Delphi method, a Knowledge Resource Nomination Worksheet (KRNW) was used to find, identify and finally select these CE academic experts. Three main criteria were applied: 1) publication in relevant scientific peer-reviewed journals, 2) demonstrated CE expertise and 3) coverage of disciplines across the social and natural sciences and engineering.
The core research team consulted the most recent bibliometric results reported by Geissdoerfer et al.6, Merli et al.60 and Prieto-Sandoval et al.61 to identify the main journals publishing CE research. The experts are authors of articles published in 13 different journals. To generate the first list of authors, a search was conducted on Web of Science to find articles published in each journal containing the term “circular economy” in the title, abstract and/or keywords. As of April 9, 2020, 1481 articles were found, 75% of which were published in the Journal of Cleaner Production; Resources, Conservation and Recycling; Sustainability; and Waste Management. Additionally, this list was complemented with high-impact articles that might not have been published in these 13 journals. On Web of Science, articles tagged as “highly cited in the field” and “hot papers in the field” were selected. Using the search string “circular economy”, 117 articles were found.
After compiling the list of authors, their CE expertise was assessed by identifying those with at least 3 publications, taken as indicative of sustained dedication to this research topic. A total of 266 authors met this criterion. However, the use of “circular economy” in an article is not always indicative of research conducive to policy-relevant findings. For instance, some authors focused primarily on experimental research to test new materials or components at the lab scale or to optimize engineering processes. Although equally relevant for a CE, such technical profiles with less emphasis on political and societal processes were excluded. To this end, publications and author profiles were reviewed individually, and 83 authors were excluded.
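The screening step above amounts to a simple frequency filter over the retrieved articles. A minimal sketch in Python, using entirely hypothetical author records (the names and the 3-publication threshold from the text are illustrative; this is not the study’s actual dataset or tooling):

```python
from collections import Counter

# Hypothetical records: each entry lists the authors of one article
# retrieved with the "circular economy" search.
articles = [
    ["Author A", "Author B"],
    ["Author A", "Author C"],
    ["Author A", "Author D"],
    ["Author B", "Author C"],
]

# Count publications per author and keep those with >= 3,
# mirroring the expertise criterion described above.
pub_counts = Counter(name for authors in articles for name in authors)
experts = sorted(name for name, n in pub_counts.items() if n >= 3)
print(experts)  # only "Author A" reaches 3 publications in this toy data
```

A subsequent manual review (as in the study) would then remove profiles whose work is out of scope, something a count-based filter cannot capture.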
Finally, the disciplinary background of the authors was reviewed to ensure the coverage of social and natural sciences and engineering disciplines. Given the predominance of natural science and engineering, additional authors were sought. Members of exemplary third-party funded CE projects with an interdisciplinary approach were included. Experts from underrepresented disciplines, such as anthropology, were found through CE seminars and events. This process added 11 researchers to the pool of experts.
On May 14, 2020, 179 invitations were sent out via email after removing authors whose contact details were not available. Invitations included a detailed calendar of the different steps of the research in which expert involvement was expected and encouraged, and directed the recipients to the survey and later to the subsequent rounds of brainstorming and review.
The survey secured the extensive input of 54 experts. While the recommended size of a Delphi panel is 10 to 18 experts56, we took advantage of the study’s virtual format to extend the discussion to a larger group of experts and, thus, make it more comprehensive. Certainly, the study does not cover all possible scholarly viewpoints, and the participants’ backgrounds unavoidably influenced the results. Nevertheless, the general topics that emerged would likely surface if this process were replicated with a similarly large and diverse group of participants.
Step 2: Collecting learnings and research questions
Upon receipt of the invitation, experts were asked to fill out an online survey by June 5, 2020 (see Supporting Information 4). The questions were divided into two blocks. The first block aimed to collect background information, including the main research focus (i.e. disciplinary, interdisciplinary or transdisciplinary), the main disciplines, the focus areas of the expert’s CE research in the past 5 years and the number of years they had been integrating this concept into their research. The second block asked two open-ended questions:
1) “Based on your research on the circular economy, what are the three most important policy-relevant learnings that facilitate or hinder a socially just and environmentally sustainable transition?” Three scales (local, national and international) were suggested to structure the relevance of the findings.
2) “In your opinion, what are the three most important questions that should guide future policy-relevant research on the circular economy?”
The 54 experts who filled out the survey were invited to consult with their colleagues and asked to report how many colleagues they consulted. As a result, 18 collaborators were consulted. Supporting Information 4 summarizes the participants’ disciplinary profiles and research focus. Interdisciplinary research is the dominant profile, with a strong interaction between environmental science, engineering and social science. The main areas of research include waste, material recycling, resource efficiency, business models and CE conceptualization.
Step 3: Analyzing the learnings
The core research team qualitatively coded the pool of policy-relevant lessons following an interpretive approach62,63. One member of the core research team was responsible for identifying and condensing the lessons formulated by the participants in their answers. Some were excluded because they were not formulated as lessons or were not clear enough. The first list of lessons was discussed within the core research team. A different member of the core research team conducted a second coding round to review the initial text and propose grouping categories. The core research team reviewed the new list and defined a set of categories, thus achieving intersubjective plausibility64.
Since the lessons diverged widely in their stance towards a CE, they were grouped into three narratives: more critical towards the CE and its potential for change (skeptical narrative), cautiously optimistic (reformist narrative) or very optimistic (optimist narrative). Within these three narratives, the lessons were further split into the political, social, economic, environmental and research dimensions involved in CE research (see Supporting Information 1).
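The two-way grouping described above can be pictured as a matrix of narratives by dimensions, with each coded lesson placed in one cell. A minimal Python sketch with hypothetical lesson entries (the texts and labels are illustrative only, not the study’s coded data):

```python
# The three narratives and five dimensions named in the text.
NARRATIVES = ("skeptical", "reformist", "optimist")
DIMENSIONS = ("political", "social", "economic", "environmental", "research")

# Hypothetical coded lessons; in the study these came from survey answers.
lessons = [
    {"text": "Lesson on binding targets", "narrative": "reformist", "dimension": "political"},
    {"text": "Lesson on rebound effects", "narrative": "skeptical", "dimension": "environmental"},
]

# One bucket per (narrative, dimension) cell.
grouped = {(n, d): [] for n in NARRATIVES for d in DIMENSIONS}
for lesson in lessons:
    grouped[(lesson["narrative"], lesson["dimension"])].append(lesson["text"])

print(len(grouped))  # 3 narratives x 5 dimensions = 15 cells
```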
Each lesson was then assigned to a narrative and a dimension. To identify overlaps and shorten the formulations, the lessons were split among the members of the core research team and discussed in a final round to ensure an agreed interpretation. The core research team condensed the lessons and gave structure to the narratives of each position. The experts were then invited to provide feedback on the final summary document, and the core research team implemented changes to the formulations accordingly. Additional content suggested by the participants was not integrated into the final list in order not to jeopardize the validity of the methodological process.
Step 4: Grouping and prioritizing research questions
The survey generated 124 explicit research questions and several normative statements. To reduce interpretive bias, normative statements were excluded from the analysis. Most of the questions were not altered; however, a few showed similarities, and the core research team merged each such set into a single question. This grouping reduced the final list to 78 questions (see Supporting Information 4). Given the wide range of topics and questions collected, the next step was to select a reduced number of research questions in order to define priorities for future policy-relevant and high-impact research. On June 22, 2020, the experts who had previously completed the survey were asked to select 10 questions. To ensure transparency and an agreed understanding with the experts, the regrouped questions were listed in an appendix. Experts were also asked to identify overlapping or similar questions and to provide feedback on the formulations.
The core research team initially created categories to structure the list of questions. However, reaching a consensus was not possible, as various formulations were not clear. The questions were not further grouped into categories because this might have generated a selection bias among the participants influenced by a subjective grouping.
A total of 41 experts participated in this round. The votes cast for each question were tallied to create a ranking. The top 20 questions, which received at least 8 votes each, obtained 52% of the total votes and were thus prioritized for further discussion. However, the experts provided divergent views on the similarities and overlaps among the initial 78 questions. The questions considered to be similar to one or more of the top 20 questions were listed alongside their counterparts to help refine the final list in the group discussions.
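The ranking step is essentially a vote tally with a minimum-vote cutoff. A minimal sketch with hypothetical ballots (the question IDs, ballots and threshold here are illustrative; the study used an 8-vote cutoff over 78 questions):

```python
from collections import Counter

# Hypothetical ballots: each expert selects a set of question IDs.
ballots = [
    [1, 2, 3, 5],
    [1, 2, 4, 6],
    [1, 3, 4, 7],
    [1, 2, 3, 4],
]

votes = Counter(q for ballot in ballots for q in ballot)
MIN_VOTES = 4  # illustrative threshold, analogous to the 8-vote cutoff

# Questions at or above the threshold, ranked by vote count,
# plus their share of all votes cast (52% for the top 20 in the study).
shortlist = [q for q, n in votes.most_common() if n >= MIN_VOTES]
share = sum(votes[q] for q in shortlist) / sum(votes.values())
print(shortlist, share)
```

Keeping the tally separate from the cutoff makes it easy to report both the ranking and the vote share of the prioritized subset.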
Step 5: Refining research questions
On July 13 and 15, 2020, four discussion sessions with the experts were organized (two sessions per day, 1 hour each). The aim was to formulate research questions that could be operationalized in academic research and that would be understandable to experts from different disciplines. Discussions were held as online video conferences. Based on their availability, experts were randomly split into 8 discussion groups (two groups per session). Each group was contacted in advance and provided with a list of 2–3 questions from the top 20 to be discussed during the session. The questions were grouped according to the similarities indicated in the previous round of feedback. 30 experts and the core research team participated in the discussions.
Each 1-hour session had the same structure. Participants introduced themselves and a moderator from the core research team explained the goal of the session. The rules were clarified: 1) unclear questions could be reformulated, 2) if questions mixed several concepts or were too long, participants were allowed to formulate a general research question and a maximum of 3 sub-questions, and 3) if the questions to be discussed showed overlaps, they could be combined. Depending on attendance, each group consisted of up to 5 experts, a moderator and a note-taker, who also took an active role in the redefinition of questions. All participants were able to follow the discussion using a shared document where all changes, comments and suggestions were documented. In some instances, moderators referred to the questions listed as similar in the ranking step to identify supporting concepts. When participants did not finalize their set of questions by the end of a session, the core research team consulted their notes and sent a suggested formulation to the group. Participants either approved or provided feedback on the new question.
Once all questions had been discussed, the core research team reviewed the new list to identify overlaps and potential categories. The final list of agreed-upon questions (Table 2) and sub-questions (Supporting Information 2) was shared with all participants to receive written feedback on the categories and formulations. Finally, the core research team implemented changes to the formulations. As with the lessons (Step 3), comments referring to missing content were not integrated into the final list in order not to alter the validity of the methodological process.
This study has some limitations in the interpretation and development of the three narratives and the elaboration of the research agenda. Only academic experts were included, as the results are meant to characterize current scientific efforts and lessons to further inform future research. Further, this analysis addresses policy-relevant research for a sustainable CE in general, leaving room for adapting this knowledge to specific geographical scales or socio-economic sectors. Research agendas targeting certain sectors (e.g. food waste management) will benefit from integrating knowledge and experiences from stakeholders, including practitioners, policymakers and civil society.
Given the online configuration of this analysis, which was due to COVID-19 restrictions as well as the invitation of a broad set of international experts, the number and duration of the live discussion rounds were limited. To mitigate this limitation, several rounds of written feedback were undertaken to enable extensive feedback on the final list of research questions and the grouping of lessons into narratives. However, longer, in-person workshops would have enabled a more in-depth discussion of the results, ensuring that all participants provided input to fill in existing gaps. For this reason, some dimensions are less prominent in our results, as they were not mentioned in the initial survey. Speculative reasons why certain areas were less addressed include ‘group-think’ dynamics during the discussion rounds, participant bias, the perception that certain topics do not relate to policy or simply that participants do not conduct research on those topics. To account for this limitation, feedback on observations and statements resulting from the survey was integrated into the results and discussion sections of the manuscript but not into the raw data, in order not to alter the original lessons formulated by the experts.