The final coding manual consisted of 12 primary NPT concepts and 16 subsidiary concepts. Working through the procedures described above led to a highly simplified form of NPT. Although it was not intended to do so, the final structure of the coding manual conforms to the Context-Mechanism-Outcomes configuration of realist evaluation studies. It is organized thus:
- Contexts—events in systems unfolding over time within and between settings—in which implementation work is done. (Concepts are described in Table 2: strategic intentions; adaptive execution; negotiating capability; reframing organizational logic.)
- Mechanisms—made visible through collaborative work and collective action—that motivate and shape the work that people do when they participate in implementation processes. (Concepts are described in Table 3: coherence-building; cognitive participation; collective action; reflexive monitoring.)
- Outcomes—the effects of implementation work in context—that make visible how things change as implementation processes proceed. (Concepts are fully described in Table 4: intervention performance; normative restructuring; relational restructuring; sustainment.)
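The Context-Mechanism-Outcome organization above can be sketched as a simple data structure. This is an illustrative assumption, not part of the published manual: the concept names are taken from the lists above (Tables 2, 3, 4), but the dictionary and the `lookup` helper are hypothetical conveniences for keeping codes consistent across a team.

```python
# Sketch only: the coding manual's CMO structure as a Python dictionary.
# Concept names follow Tables 2-4; the structure itself is an assumption.
CODING_MANUAL = {
    "context": [
        "strategic intentions",
        "adaptive execution",
        "negotiating capability",
        "reframing organizational logic",
    ],
    "mechanism": [
        "coherence-building",
        "cognitive participation",
        "collective action",
        "reflexive monitoring",
    ],
    "outcome": [
        "intervention performance",
        "normative restructuring",
        "relational restructuring",
        "sustainment",
    ],
}

def lookup(concept):
    """Return the CMO configuration a primary concept belongs to, or None."""
    for configuration, concepts in CODING_MANUAL.items():
        if concept.lower() in concepts:
            return configuration
    return None
```

A structure like this makes it trivial to check that a code applied to a data item is actually in the manual, and to report which configuration it falls under.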
Coding is a centrally important procedure in qualitative analysis, but it needs to be emphasized that it is only one part of a wider bundle of cognitive processes through which researchers make meaning from the data. For this reason, we recommend a layered approach to applying the coding manual. The coding manual itself is presented in Tables 2, 3, 4 and 5. It can be used in the following way.
Familiarization and concept identification
Familiarization. All qualitative analysis calls on researchers to become completely familiar with that part of the data-set that they wish to analyze. This may consist of any combination of original video or audio-recordings, fieldnotes, transcripts, social media posts, other documents, or already published texts.
Managing data. The organization of data analysis is of paramount importance. A set of agreed procedures needs to be developed for identifying, sharing and retrieving coded data. This may be done using software such as Atlas Ti, NVivo, or Dedoose; or by using a ‘manual’ method such as framework analysis in Microsoft Excel, or commentary using ‘Track Changes’ comment boxes in Microsoft Word. Both software and ‘manual’ methods can provide highly effective scaffolding for qualitative analysis.
Concept identification calls on the researcher to work through individual items in the data-set line by line and to link each item (which may be a phrase, sentence or paragraph in text, or an utterance or scene in untranscribed video or audio), either to one of the concepts included in the coding manual, or to a concept arising from some other source. In NPT studies these may be associated with constructs of the theory, and they may be related to the contexts, mechanisms, or outcomes of processes of adoption, implementation, and sustainment over time and between settings.
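The line-by-line linking step can be sketched programmatically. In this hypothetical sketch, each item has already been assigned a code by the analyst; the helper simply separates items coded to manual concepts from items that will need a concept from some other source. The function name, item format, and example data are all illustrative assumptions.

```python
# Sketch only: separate items linked to manual concepts from items that
# fall outside the manual. Each item is an (excerpt, analyst_code) pair.
def identify_concepts(items, manual_concepts):
    """Return (coded, uncoded): items matched to the manual, and the rest."""
    coded, uncoded = [], []
    for excerpt, analyst_code in items:
        if analyst_code in manual_concepts:
            coded.append((excerpt, analyst_code))
        else:
            uncoded.append(excerpt)  # candidate for coding outside the manual
    return coded, uncoded

# Hypothetical worked example
manual = {"coherence-building", "collective action"}
items = [
    ("We had to make sense of the new workflow together", "coherence-building"),
    ("Staff felt there was never enough time", None),
]
coded, uncoded = identify_concepts(items, manual)
```

Keeping the uncoded residue visible, rather than forcing every item into the manual, is what makes the later step of coding outside the manual possible.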
Characterization and category-building
Category-building calls on the researcher to use the definitions in the coding manual to identify the specific concepts or sub-concepts within each item in the data-set associated with Context, Mechanism, or Outcomes.
Characterization calls on the researcher to consider the relationship between, and significance of, concepts and sub-concepts. For example, are some constructs preconditions or barriers for others? Do others represent a temporal flow of events? Do the data suggest causal processes?
Identification of special cases. Within any qualitative data-set it is likely that there will be special cases. These may be particularly well-formed examples of phenomena that are typical of the data. Or they may be what Glaser and Strauss call ‘deviant cases’, or what Tavory and Timmermans call ‘surprises’: phenomena that are unexpected or very different within a data-set.
Coding outside the manual
Familiarization and line-by-line analysis will inevitably reveal items in a data-set that cannot easily be categorized under the Context-Mechanism-Outcome model set out in the NPT coding manual. Here, new categories can also be created.
Associations between new and prior categories. Where new categories are associated with the categories already set out in the coding manual, it may be possible to treat them as sub-categories of these. For example, participants in implementation studies often talk about time as a personal or corporate resource, but these accounts are often closely associated with material already coded under constructs such as Cognitive Participation or Relational Restructuring.
Differences between topics. Where new categories relate to a different topic, an extended coding manual that includes new categories can be created. These may be driven by engagement with data (e.g. an implementation study shows how the implementation of an intervention leads to experiences of gender-inequity and this is made into a parallel analytic track); or they may be driven by theory (e.g. constructs from other theories can be used to identify and explore the whole range of inequities that can be created through an implementation process).
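The two ways of extending the manual described above can be sketched as follows. This is a hypothetical illustration: the helper functions and the example category names (echoing the time-as-resource and gender-inequity examples in the text) are assumptions, not part of the published manual.

```python
# Sketch only: extending the coding manual, either by nesting a new category
# under an existing construct, or by opening a parallel analytic track.
from collections import defaultdict

extended_manual = defaultdict(list)

def add_subcategory(parent_construct, new_category):
    """Treat a new, data-driven category as a sub-category of an
    existing construct from the coding manual."""
    extended_manual[parent_construct].append(new_category)

def add_parallel_track(track_name, categories):
    """Create a separate analytic track for a genuinely different topic."""
    extended_manual[track_name] = list(categories)

# Hypothetical examples mirroring the text
add_subcategory("cognitive participation", "time as a personal resource")
add_parallel_track("equity impacts", ["gender-inequity"])
```

The design point is simply that the original manual is left intact: extensions are recorded alongside it, so the audit trail from data item to category remains visible.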
Interpretation and explanation
The four NPT concepts related to mechanisms (Coherence-building, Cognitive Participation, Collective Action, Reflexive Monitoring) each possess four associated sub-concepts (described in Table 5). Users of NPT have tended to treat these as discretionary in their coding and analysis [12-14], and their use is not mandatory. They do, however, add a layer of nuance to analysis of implementation mechanisms, and they remain highly usable where the data support interpretation at that level of detail.
The act of coding is descriptive work that is a foundation for the interpretation of data, not a proxy for it. At each stage of coding, theoretical interpretations of the data can be recorded in what Glaser and Strauss called Memos. These are commentaries on interpretations of the data and its contexts that the researcher makes as they perform coding.
It is not the purpose of a coding framework to verify the underpinning theory. The whole purpose of coding, and working abductively within theory, is to build and inform interpretation and understanding. This is not a discrete stage in data analysis but is continuous throughout. Researchers can hold joint ‘data clinics’ to surface these interpretive understandings, make necessary adjustments to the coding rules to capture the specificities of their data, work through their implications, and develop analytic propositions that add to, or extend, theory in relation to the specific phenomena of interest that their studies focus upon.