Analysis of the S1-guideline “Urothelkarzinom”
This guideline provides instructions for the pathological-anatomical diagnostics of tumors of the renal pelvis, the ureter, and the urinary bladder. It is authored by the Bundesverband Deutscher Pathologen e.V. (Professional Association of German Pathologists) and distributed free of charge as a download at www.pathologie.de. For the study presented in this work, relevant terms were identified, collected in a spreadsheet (MS Excel 2016), and grouped into two categories:
· Concepts: Representing clinical questions.
· Values: Representing possible answers.
In this context, terms were considered relevant when they were required for the grading determined by the WHO[5] or for the TNM classification, or when they served to specify the location of the tumor[6], to give information on previous therapies, etc.
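For illustration, the resulting term collection can be thought of as a simple mapping from concepts to their possible values. The following minimal Python sketch uses example entries; the concept and value names shown are illustrative, not the actual spreadsheet contents:

```python
# Illustrative sketch of the term collection: each concept (clinical
# question) maps to its possible values (answers). The entries are
# examples only, not the actual study data.
terms: dict[str, list[str]] = {
    "Klinische T-Kategorie": ["cT1", "cT2", "cT3", "cT4"],  # clinical T category
    "Tumortyp": ["Urothelkarzinom"],                        # tumor type (example value)
}

for concept, values in terms.items():
    print(concept, "->", ", ".join(values))
```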
Mapping
Prior to mapping, all terms were translated into English, and an online search was performed for each term to confirm that the translation was valid, i.e. in standard use in clinical publications or reports. Where an expression was unique to the German language, the most appropriate translation was chosen to the best of the authors’ knowledge and belief.
Mapping to SNOMED CT, ICD-11, and LOINC was then performed online using the browsers provided by SNOMED International, the World Health Organization, and the Regenstrief Institute, respectively[7–9].
Initial mappings were performed by three of the authors, each with a different background regarding terminologies/standards and expertise in pathology/medicine. Author A had experience in using all three terminologies but no significant expertise related to pathology, author B is an expert in pathology with basic knowledge of terminologies, and author C is an expert in terminologies with a laboratory-medicine background. These mappings were used to assess the accessibility of each terminology, i.e. whether the authors independently identified identical codes or differed (this assessment considered only terms for which at least one of the authors proposed a code).
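This accessibility assessment can be sketched as follows. The data layout is hypothetical (one proposed code, or None, per author and term, with placeholder codes rather than real identifiers); the actual assessment was performed manually:

```python
# Sketch: rate a terminology's "accessibility" as the share of terms for
# which all three authors independently proposed the identical code.
# Layout (hypothetical): term -> (code author A, code author B, code author C),
# with None meaning that the author proposed no code for the term.
from typing import Optional

def accessibility(mappings: dict[str, tuple[Optional[str], ...]]) -> float:
    considered = identical = 0
    for codes in mappings.values():
        if all(c is None for c in codes):
            continue  # consider only terms where at least one author proposed a code
        considered += 1
        if len(set(codes)) == 1:  # identical (non-None) code from all authors
            identical += 1
    return identical / considered

example = {
    "term 1": ("123", "123", "123"),  # placeholder codes, not real identifiers
    "term 2": ("123", None, "456"),
}
print(f"{accessibility(example):.2f}")  # 0.50 for the example above
```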
Lastly, the authors reached consensus on the final mapping, which was then used for further analysis.
Equivalence evaluation and inter-rater reliability
In brief, each mapping was assigned a number between 0 and 4 according to its degree of equivalence, considering the determinants specified in the standard ISO/TS 21564 of the International Organization for Standardization. The classification was as follows:
· 0: Exact semantic match (code equals term)
· 1: Complete overlap of the semantic domain (code covers the term, but also more)
· 2: Incomplete overlap of the semantic domain (code partially covers the term)
· 3: Comparison rather than overlap of the semantic domain (code represents a similar domain or term)
· 4: No overlap of the semantic domain (no appropriate code found)
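For reference, this scale can be written down compactly, e.g. as a small Python enumeration mirroring the list above (a sketch for illustration):

```python
from enum import IntEnum

class IsoEquivalence(IntEnum):
    """Equivalence ratings following ISO/TS 21564, as applied in this study."""
    EXACT_MATCH        = 0  # code equals term
    COMPLETE_OVERLAP   = 1  # code covers the term, but also more
    INCOMPLETE_OVERLAP = 2  # code partially covers the term
    COMPARABLE_ONLY    = 3  # code represents a similar domain or term
    NO_OVERLAP         = 4  # no appropriate code found
```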
An example of an exact match (ISO 0) would be the term “Klinische T-Kategorie”, translated as “clinical T category”, for which both SNOMED CT and LOINC provided equivalent codes (“399504009 | cT category (observable entity) |” and “21905-5 Primary tumor.clinical [Class] Cancer”, respectively).
For ISO 1, an example would be “Vorangegangene endoluminale Chemotherapie” (previous endoluminal chemotherapy). Here, LOINC provided the code “81167-9 Cancer treatment --preoperative”, which has a broader scope than the original term.
Furthermore, for “Urothelkarzinom” (urothelial carcinoma), the only applicable code found in LOINC was “66125-6 Urinary bladder Pathology biopsy report”. Since this covers only part of the original term, it was considered ISO 2.
Finally, for “Andere Angaben zum Tumortyp” (other information on the tumor type), a single, partially suitable code was found in LOINC: “52535-2 Other useful information”. As this represents only a comparable concept, it was accordingly classified as ISO 3.
Consequently, the lower the average equivalence rating, the better a terminology’s general usefulness in the respective clinical environment.
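As a minimal sketch, the comparison then reduces to a mean rating per terminology; the values below are hypothetical and do not reproduce the study’s results:

```python
# Hypothetical equivalence ratings (ISO 0-4), one entry per term:
ratings = {
    "SNOMED CT": [0, 0, 1, 2, 0],
    "LOINC":     [0, 1, 2, 3, 4],
    "ICD-11":    [4, 2, 1, 4, 3],
}

# Lower mean = better semantic coverage of the guideline's terms.
for terminology, r in sorted(ratings.items(), key=lambda kv: sum(kv[1]) / len(kv[1])):
    print(f"{terminology}: mean equivalence = {sum(r) / len(r):.2f}")
```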
To ensure the validity of the ISO rating, three of the authors performed the equivalence evaluations independently, and Fleiss’ kappa was calculated for each terminology as a measure of inter-rater reliability (note that only appropriate codes, i.e. ISO 0–3, were considered). Afterwards, terms were balloted for a definitive ISO classification where necessary, i.e. where the evaluations varied between raters.
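Fleiss’ kappa can be computed from a matrix of ratings with one row per term and one column per rater. The following self-contained Python sketch uses a hypothetical rating matrix and drops terms rated ISO 4 beforehand, which is one way of restricting the calculation to appropriate codes:

```python
from collections import Counter

def fleiss_kappa(ratings: list[list[int]], categories: list[int]) -> float:
    """Fleiss' kappa: `ratings` has one row per term, one column per rater;
    entries are category labels (here: ISO classes 0-3)."""
    n_items, n_raters = len(ratings), len(ratings[0])
    # n_ij: number of raters assigning term i to category j
    counts = [[Counter(row)[c] for c in categories] for row in ratings]
    # Per-term observed agreement P_i and its mean P_bar
    p_i = [(sum(n * n for n in row) - n_raters) / (n_raters * (n_raters - 1))
           for row in counts]
    p_bar = sum(p_i) / n_items
    # Chance agreement P_e from the overall category proportions p_j
    p_j = [sum(row[j] for row in counts) / (n_items * n_raters)
           for j in range(len(categories))]
    p_e = sum(p * p for p in p_j)
    return (p_bar - p_e) / (1 - p_e)

# Hypothetical data: 5 terms, 3 raters; terms containing an ISO 4 rating
# ("no appropriate code") are excluded before the calculation.
raw = [[0, 0, 0], [1, 1, 2], [2, 2, 2], [0, 1, 1], [4, 4, 3]]
kept = [row for row in raw if all(r <= 3 for r in row)]
print(f"Fleiss' kappa = {fleiss_kappa(kept, categories=[0, 1, 2, 3]):.3f}")  # 0.500
```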