analyze_outcomes: useless ignores are now errors

Change from iterating on all available tests to iterating on tests with
an outcome entry: initially we were iterating on available tests but
immediately ignoring those that were always skipped. That last part
played poorly with the new error (we want to know whether a test ignored
due to a pattern was skipped in the reference component), and once it
was removed we were left with iterating on all available tests and then
skipping those without an outcome entry: this is equivalent to iterating
on tests with an outcome entry, which is more readable.

Also, give an error if the outcome file contains no passing test from
the reference component: this probably means we're using the wrong
outcome file.
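The equivalence argued above can be illustrated with a small standalone sketch. The data below is hypothetical (not the real Mbed TLS outcome structures); it only mimics the `"suite;test case description"` key shape:

```python
# Standalone sketch with hypothetical data (not the real Mbed TLS
# outcome structures): keys are "suite;test case description".
available = ["test_suite_foo.bar;case 1",
             "test_suite_foo.bar;case 2",   # never executed: no outcome entry
             "test_suite_baz;case 3"]
outcomes = {"test_suite_foo.bar;case 1": ["ref;PASS"],
            "test_suite_baz;case 3": ["drv;PASS"]}

# Old shape: iterate on everything, skip tests without an outcome entry.
old_style = {key for key in available if key in outcomes}

# New shape: iterate directly on tests that have an outcome entry.
new_style = set(outcomes)

# Both shapes visit exactly the same set of test cases.
assert old_style == new_style
```

Filtering all available tests down to those with outcome entries visits exactly the same keys as iterating on the outcome entries directly, so the shorter loop loses nothing.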

Signed-off-by: Manuel Pégourié-Gonnard <manuel.pegourie-gonnard@arm.com>
Manuel Pégourié-Gonnard 2023-10-18 12:44:54 +02:00
parent 881ce01db3
commit 371165aec0


@@ -118,25 +118,24 @@ def analyze_driver_vs_reference(results: Results, outcomes,
     - only some specific test inside a test suite, for which the corresponding
       output string is provided
     """
-    available = check_test_cases.collect_available_test_cases()
-    for key in available:
-        # Continue if test was not executed by any component
-        hits = outcomes[key].hits() if key in outcomes else 0
-        if hits == 0:
-            continue
+    seen_reference_passing = False
+    for key in outcomes:
         # key is like "test_suite_foo.bar;Description of test case"
         (full_test_suite, test_string) = key.split(';')
         test_suite = full_test_suite.split('.')[0] # retrieve main part of test suite name
 
-        # Skip fully-ignored test suites
+        # Immediately skip fully-ignored test suites
         if test_suite in ignored_suites or full_test_suite in ignored_suites:
             continue
 
-        # Skip ignored test cases inside test suites
+        # For ignored test cases inside test suites, just remember and:
+        # don't issue an error if they're skipped with drivers,
+        # but issue an error if they're not (means we have a bad entry).
+        ignored = False
         if full_test_suite in ignored_tests:
             for str_or_re in ignored_tests[full_test_suite]:
                 if name_matches_pattern(test_string, str_or_re):
-                    continue
+                    ignored = True
 
         # Search for tests that run in reference component and not in driver component
         driver_test_passed = False
@@ -146,8 +145,16 @@ def analyze_driver_vs_reference(results: Results, outcomes,
                 driver_test_passed = True
             if component_ref in entry:
                 reference_test_passed = True
+                seen_reference_passing = True
         if(reference_test_passed and not driver_test_passed):
-            results.error("Did not pass with driver: {}", key)
+            if not ignored:
+                results.error("PASS -> SKIP/FAIL: {}", key)
+        else:
+            if ignored:
+                results.error("uselessly ignored: {}", key)
+
+    if not seen_reference_passing:
+        results.error("no passing test in reference component: bad outcome file?")
 
 def analyze_outcomes(results: Results, outcomes, args):
     """Run all analyses on the given outcome collection."""