This is an archived version of the documentation for SonarQube version 4.4.
See https://docs.sonarqube.org/display/SONAR/Documentation for the current documentation.
|Complexity||complexity||It is the cyclomatic complexity, also known as the McCabe metric. Whenever the control flow of a function splits, the complexity counter gets incremented by one. Each function has a minimum complexity of 1.|
|Complexity /class||class_complexity||Average complexity by class.|
|Complexity /file||file_complexity||Average complexity by file.|
|Complexity /method||function_complexity||Average complexity by function.|
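To make the counting rule concrete, here is a hedged Python sketch (not from the original page). The function below has a cyclomatic complexity of 4: 1 for the function itself, plus 1 for the `for` loop and 1 for each of the two `if` branches.

```python
def classify(values):
    """Label a list of numbers. Cyclomatic complexity: 4."""
    if not values:        # control flow splits: +1
        return "empty"
    total = 0
    for v in values:      # control flow splits: +1
        total += v
    if total > 0:         # control flow splits: +1
        return "positive"
    return "non-positive" # the function entry itself counts 1
```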
File cycles
Minimal number of file cycles detected inside a directory that are needed to identify all undesired dependencies. This metric is available at directory level.
File edges weight
Number of file dependencies inside a directory. This metric is available at directory level.
File dependencies to cut
Number of file dependencies to cut in order to remove all cycles between directories. This metric is available at directory, module and project level.
File tangle = Suspect file dependencies
This metric is available at directory level.
File tangle index
File tangle index = 2 * (File tangle / File edges weight) * 100.
This metric is available at directory level.
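A worked numeric sketch of the formula above (the numbers are invented for illustration): 3 suspect file dependencies over 20 file dependencies yields an index of 30%.

```python
def file_tangle_index(file_tangle, file_edges_weight):
    # File tangle index = 2 * (File tangle / File edges weight) * 100
    return 2 * file_tangle * 100 / file_edges_weight

print(file_tangle_index(3, 20))  # → 30.0
```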
Package cycles
Minimal number of directory cycles detected that are needed to identify all undesired dependencies. This metric is available at directory, module and project level.
Package dependencies to cut
Number of directory dependencies to cut in order to remove all cycles between directories. This metric is available at directory, module and project level.
Package tangle index
Level of directory interdependency. Best value (0%) means that there is no cycle and worst value (100%) means that directories are really tangled. This metric is computed with the following formula: 2 * (File dependencies to cut / Number of file dependencies between directories) * 100. This metric is available at directory, module and project level.
Package edges weight
Number of file dependencies between directories. This metric is available at directory, module and project level.
Suspect file dependencies
File dependencies to cut in order to remove cycles between files inside a directory. Note that cycles between files inside a directory does not always mean a bad quality architecture. This metric is available at directory level.
Comment lines
Number of lines containing either comments or commented-out code.
Non-significant comment lines (empty comment lines, comment lines containing only special characters, etc.) do not increase the number of comment lines.
The following piece of code contains 9 comment lines:
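The original sample did not survive archiving; as a hedged stand-in, the Python snippet below likewise contains 9 comment lines. The bare `#` line and the dashes-only line are non-significant and are not counted:

```python
# Returns the nth Fibonacci number,
# computed iteratively.
#
# Parameters:
#   n: a non-negative integer.
def fib(n):
    # ----------------------------
    # Start from the base case.
    a, b = 0, 1
    # Iterate n times,
    # advancing the pair each step.
    for _ in range(n):
        a, b = b, a + b
    # The first element of the pair
    # is the requested value.
    return a
```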
Density of comment lines = Comment lines / (Lines of code + Comment lines) * 100
With such a formula, 50% means that the number of lines of code equals the number of comment lines, and 100% means that the file contains only comment lines.
|Comments in Procedure Divisions||Comments in Procedure divisions (Cobol only)|
|Public documented API (%)||public_documented_api_density||Density of public documented API = (Public API - Public undocumented API) / Public API * 100|
|Public undocumented API||public_undocumented_api||Public API without comments header.|
|Duplicated blocks||duplicated_blocks||Number of duplicated blocks of lines.|
For a block of code to be considered as duplicated:
Non-Java projects: there should be at least 100 successive and duplicated tokens, spread over at least 30 lines of code for COBOL, 20 lines of code for ABAP and 10 lines of code for other languages.
Java projects: there should be at least 10 successive and duplicated statements, whatever the number of tokens and lines.
Differences in indentation as well as in string literals are ignored while detecting duplications.
|Duplicated files||duplicated_files||Number of files involved in a duplication.|
|Duplicated lines||duplicated_lines||Number of lines involved in a duplication.|
|Duplicated lines (%)||duplicated_lines_density||Density of duplication = Duplicated lines / Lines * 100|
New issues
Number of new issues.
New xxxxx issues
Number of new issues with severity xxxxx, xxxxx being blocker, critical, major, minor or info.
Issues
Number of issues.
xxxxx issues
Number of issues with severity xxxxx, xxxxx being blocker, critical, major, minor or info.
|False positive issues||false_positive_issues||Number of false positive issues|
|Open issues||open_issues||Number of issues whose status is Open|
|Confirmed issues||confirmed_issues||Number of issues whose status is Confirmed|
|Reopened issues||reopened_issues||Number of issues whose status is Reopened|
Weighted issues
Sum of the issues weighted by the coefficient associated with each severity (Sum(xxxxx_violations * xxxxx_weight)).
Rules compliance index (RCI) = 100 - (Weighted issues / Lines of code * 100)
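A hedged numeric sketch of the two formulas above (the severity weights and issue counts are invented; in SonarQube the coefficients are configurable): 2 blocker and 5 major issues in a 500-line module.

```python
# Hypothetical severity weights; the values below are invented for illustration.
WEIGHTS = {"blocker": 10, "critical": 5, "major": 3, "minor": 1, "info": 0}

def weighted_issues(counts):
    # Sum(xxxxx_violations * xxxxx_weight) over all severities.
    return sum(counts.get(sev, 0) * weight for sev, weight in WEIGHTS.items())

def rci(counts, ncloc):
    # Rules compliance index = 100 - (Weighted issues / Lines of code * 100)
    return 100 - weighted_issues(counts) * 100 / ncloc

print(weighted_issues({"blocker": 2, "major": 5}))  # → 35
print(rci({"blocker": 2, "major": 5}, 500))         # → 93.0
```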
|Technical debt||sqale_index||Effort to fix all issues. The measure is stored in minutes in the DB.|
|Blocker||Operational/security risk: This issue might make the whole application unstable in production. Ex: calling garbage collector, not closing a socket, etc.|
|Critical||Operational/security risk: This issue might lead to an unexpected behavior in production without impacting the integrity of the whole application. Ex: NullPointerException, badly caught exceptions, lack of unit tests, etc.|
|Major||This issue might have a substantial impact on productivity. Ex: too complex methods, package cycles, etc.|
|Minor||This issue might have a potential and minor impact on productivity. Ex: naming conventions, Finalizer does nothing but call superclass finalizer, etc.|
|Info||Not known or yet well defined security risk or impact on productivity.|
|Accessors||accessors||Number of getter and setter functions used to get (read) or set (write) a class property.|
|Classes||classes||Number of classes (including nested classes, interfaces, enums and annotations).|
|Directories||directories||Number of directories.|
|Files||files||Number of files.|
|Generated lines||generated_lines||Number of lines generated by Cobol code generators like CA-Telon.|
|Generated lines of code||generated_ncloc||Number of lines of code generated by Cobol code generators like CA-Telon.|
|Inside Control Flow Statements||cobol_inside_ctrlflow_statements||Number of inside (intra program) control flow statements (GOBACK, STOP RUN, DISPLAY, CONTINUE, EXIT, RETURN, PERFORM paragraph1 THRU paragraph2).|
|Lines||lines||Number of physical lines (number of carriage returns).|
|Lines of code||ncloc||Number of physical lines that contain at least one character which is neither a whitespace, nor a tabulation, nor part of a comment.|
|LOCs in Data Divisions||cobol_data_division_ncloc||Number of lines of code in Data divisions. Generated lines of code are excluded.|
|LOCs in Procedure Divisions||cobol_procedure_division_ncloc||Number of lines of code in Procedure divisions. Generated lines of code are excluded.|
|Functions||functions||Number of functions. Depending on the language, a function is either a function, a method or a paragraph.|
|Outside Control Flow Statements||cobol_outside_ctrlflow_statements||Number of outside (inter programs) control flow statements (CALL, EXEC CICS LINK, EXEC CICS XCTL, EXEC SQL, EXEC CICS RETURN).|
|Projects||projects||Number of projects in a view.|
|Public API||public_api||Number of public classes + number of public functions + number of public properties.|
|Statements||statements||Number of statements.|
Branch coverage
On each line of code containing boolean expressions, branch coverage simply answers the following question: 'Has each boolean expression been evaluated both to true and to false?'. It is the density of possible branches in flow control structures that have been followed during unit test execution.
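A hedged sketch of the bookkeeping behind this question (not SonarQube's actual implementation): track which boolean values each condition has taken during the tests, then divide by two possible outcomes per condition.

```python
def branch_coverage(outcomes):
    # outcomes: one set per boolean condition, containing the values
    # (True and/or False) that condition was evaluated to during tests.
    seen = sum(len(values) for values in outcomes)
    return seen * 100 / (2 * len(outcomes))

# Two conditions: the first evaluated both ways, the second only to True.
print(branch_coverage([{True, False}, {True}]))  # → 75.0
```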
|Branch coverage on new code||new_branch_coverage||Identical to Branch coverage but restricted to new / updated source code. To be computed, this metric requires the SCM Activity plugin.|
|Branch coverage hits||branch_coverage_hits_data||List of covered branches.|
|Condition coverage||see Branch coverage|
|Conditions by line||conditions_by_line||Number of conditions by line.|
|Covered conditions by line||covered_conditions_by_line||Number of covered conditions by line.|
Coverage
It is a mix of Line coverage and Branch coverage. Its goal is to provide an even more accurate answer to the question: 'How much of the source code has been covered by the unit tests?'.
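One plausible way to combine the two (an assumption, not confirmed by this archive): count each covered line once and each covered condition outcome once, over the lines to cover plus two outcomes per condition.

```python
def combined_coverage(covered_lines, lines_to_cover, ct, cf, conditions):
    # Assumed formula: (LC + CT + CF) / (EL + 2 * B) * 100, where
    # LC = covered lines, EL = lines to cover, B = total conditions,
    # CT / CF = conditions evaluated at least once to true / false.
    return (covered_lines + ct + cf) * 100 / (lines_to_cover + 2 * conditions)

print(round(combined_coverage(90, 100, 8, 6, 10), 1))  # → 86.7
```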
|Coverage on new code||new_coverage||Identical to Coverage but restricted to new / updated source code. To be computed, this metric requires the SCM Activity plugin.|
Line coverage
On a given line of code, line coverage simply answers the following question: 'Has this line of code been executed during the execution of the unit tests?'. It is the density of lines covered by unit tests.
|Line coverage on new code||new_line_coverage||Identical to Line coverage but restricted to new / updated source code.|
|Line coverage hits||coverage_line_hits_data||List of covered lines.|
|Lines to cover||lines_to_cover||Number of lines of code which could be covered by unit tests (for example, blank lines or full comment lines are not considered as lines to cover).|
|Lines to cover on new code||new_lines_to_cover||Identical to Lines to cover but restricted to new / updated source code.|
|Skipped unit tests||skipped_tests||Number of skipped unit tests.|
|Uncovered branches||uncovered_conditions||Number of branches which are not covered by unit tests.|
|Uncovered branches on new code||new_uncovered_conditions||Identical to Uncovered branches but restricted to new / updated source code.|
|Uncovered lines||uncovered_lines||Number of lines of code which are not covered by unit tests.|
|Uncovered lines on new code||new_uncovered_lines||Identical to Uncovered lines but restricted to new / updated source code.|
|Unit tests||tests||Number of unit tests.|
|Unit tests duration||test_execution_time||Time required to execute all the unit tests.|
|Unit test errors||test_errors||Number of unit tests that have failed.|
|Unit test failures||test_failures||Number of unit tests that have failed with an unexpected exception.|
|Unit test success density (%)||test_success_density||Test success density = (Unit tests - (Unit test errors + Unit test failures)) / Unit tests * 100|
The same kinds of metrics exist for Integration test coverage and Overall test coverage (Unit tests + Integration tests).
Metrics on test execution do not exist for Integration tests and Overall tests.