To enable reliable software releases, automated approaches are increasingly sought as part of the necessary test processes in order to detect potential errors at an early stage. Static code analysis tools (SCATs) are especially suited for this purpose, as these testing tools perform their checks without actually executing the software. Thus, they represent an important part of the test suite. To address the problem of carefully selecting such a tool for project use, this article develops a method for constructing SCAT comparison catalogs that allows different tools to be contrasted and evaluated against derived criteria. The comparison categories are derived from established software quality models while taking the specifics of SCATs into account, and multi-criteria decision-making procedures employing linguistic predicates are used to determine the most suitable test tool for each case. The artifact is demonstrated and evaluated in an artificial SCAT selection process involving FindBugs, Checkstyle, and PMD. Finally, potential extensions of the proposed method are outlined, and its straightforward adaptability to other software-related decision situations is set forth.
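To make the selection idea concrete, the following is a minimal sketch of a weighted-sum multi-criteria evaluation over linguistic predicates. The criteria, weights, linguistic scale, and ratings are illustrative assumptions and are not taken from the article's catalog or its quality-model derivation.

```python
# Minimal sketch of a weighted-sum multi-criteria evaluation with linguistic
# ratings. Criteria, weights, and ratings below are purely illustrative and
# do NOT reproduce the article's comparison catalog.

# Map linguistic predicates to numeric scores on a unit scale.
LINGUISTIC_SCALE = {"poor": 0.0, "fair": 0.33, "good": 0.67, "excellent": 1.0}

# Hypothetical comparison criteria with importance weights (summing to 1.0).
WEIGHTS = {
    "rule_coverage": 0.4,
    "ide_integration": 0.25,
    "configurability": 0.2,
    "report_quality": 0.15,
}

# Hypothetical linguistic ratings per tool and criterion.
RATINGS = {
    "FindBugs":   {"rule_coverage": "good", "ide_integration": "excellent",
                   "configurability": "fair", "report_quality": "good"},
    "Checkstyle": {"rule_coverage": "fair", "ide_integration": "good",
                   "configurability": "excellent", "report_quality": "fair"},
    "PMD":        {"rule_coverage": "good", "ide_integration": "good",
                   "configurability": "good", "report_quality": "good"},
}

def score(tool: str) -> float:
    """Aggregate a tool's linguistic ratings into a weighted overall score."""
    return sum(WEIGHTS[c] * LINGUISTIC_SCALE[RATINGS[tool][c]] for c in WEIGHTS)

if __name__ == "__main__":
    # Rank the candidate tools by their aggregated scores.
    for tool in sorted(RATINGS, key=score, reverse=True):
        print(f"{tool}: {score(tool):.2f}")
```

In this sketch, each linguistic judgment is mapped to a crisp value before aggregation; richer treatments (e.g., fuzzy aggregation of the predicates) would follow the same catalog-then-aggregate structure.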