Hi Helen,
On Mon, 2023-09-25 at 16:55 -0300, Helen Koike wrote:
> Hello,
>
> This script has been very handy for me, so I suppose it could be handy
> to others, which is why I'm publishing it in the xfails folder.
>
> Let me know your thoughts.
I have two suggestions and one comment. Before that, I would like to
highlight the importance of tools like this for repetitive, tedious
work; it is great to have this script.
> +def get_xfails_file_path(canonical_name, suffix):
> +    name = canonical_name.replace(":", "-")
> +    script_dir = os.path.dirname(os.path.abspath(__file__))
> +    return os.path.join(script_dir, f"{name}-{suffix}.txt")
I appreciate the correspondence between job names and expectation file
names.
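For illustration, this is how the mapping behaves for a hypothetical job name
(the directory is made a parameter here so the snippet is self-contained; the
patch uses the script's own directory):

```python
import os

# Copy of the helper from the patch, with script_dir as a parameter instead of
# being derived from __file__, purely so this example runs on its own.
def get_xfails_file_path(canonical_name, suffix, script_dir="."):
    # ":" is replaced so the canonical job name is a valid file name component.
    name = canonical_name.replace(":", "-")
    return os.path.join(script_dir, f"{name}-{suffix}.txt")

# A hypothetical canonical name such as "amd:gfx10:test" maps to predictable
# expectation files next to the script:
print(get_xfails_file_path("amd:gfx10:test", "fails"))   # ./amd-gfx10-test-fails.txt
print(get_xfails_file_path("amd:gfx10:test", "flakes"))  # ./amd-gfx10-test-flakes.txt
```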
> +def get_unit_test_name_and_results(unit_test):
> +    if "Artifact results/failures.csv not found" in unit_test:
> +        return None, None
> +    unit_test_name, unit_test_result = unit_test.strip().split(",")
> +    return unit_test_name, unit_test_result
Suggestion: it does not handle empty lines or comments. There are none
right now, but they could appear in the future.
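Something like this sketch would cover those cases while keeping the same
signature (the sentinel check and the comma split come from your patch; the
blank-line and "#" comment handling is the part I am suggesting):

```python
def get_unit_test_name_and_results(unit_test):
    line = unit_test.strip()
    # Skip blank lines, comment lines, and the missing-artifact sentinel
    # before attempting to split "name,result".
    if (not line
            or line.startswith("#")
            or "Artifact results/failures.csv not found" in line):
        return None, None
    # maxsplit=1 also tolerates a stray comma inside the result field.
    unit_test_name, unit_test_result = line.split(",", 1)
    return unit_test_name, unit_test_result

print(get_unit_test_name_and_results("dEQP-GLES2.functional.foo,Fail\n"))
# -> ('dEQP-GLES2.functional.foo', 'Fail')
print(get_unit_test_name_and_results("# a comment"))  # -> (None, None)
```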
> +def main(namespace, project, pipeline_id):
> +    xfails = (
> +        Collate(namespace=namespace, project=project)
> +        .from_pipeline(pipeline_id)
> +        .get_artifact("results/failures.csv")
> +    )
> +
> +    split_unit_test_from_collate(xfails)
> +
> +    for job_name in xfails.keys():
> +        canonical_name = get_canonical_name(job_name)
> +        fails_txt_path = get_xfails_file_path(canonical_name, "fails")
> +        flakes_txt_path = get_xfails_file_path(canonical_name, "flakes")
> +
> +        fails_txt = read_file(fails_txt_path)
> +        flakes_txt = read_file(flakes_txt_path)
> +
> +        for job_id in xfails[job_name].keys():
> +            for unit_test in xfails[job_name][job_id]:
> +                unit_test_name, unit_test_result = get_unit_test_name_and_results(unit_test)
> +
> +                if not unit_test_name:
> +                    continue
> +
> +                if is_test_present_on_file(flakes_txt, unit_test_name):
> +                    remove_unit_test_if_present(flakes_txt, unit_test_name, flakes_txt_path)
> +                    print("WARNING: unit test is on flakes list but a job failed due to it, "
> +                          "is your tree up to date?",
> +                          unit_test_name, "DROPPED FROM", os.path.basename(flakes_txt_path))
> +
> +                if unit_test_result == "UnexpectedPass":
> +                    remove_unit_test_if_present(fails_txt, unit_test_name, fails_txt_path)
> +                    # flake result
> +                    if not is_unit_test_present_in_other_jobs(unit_test, xfails[job_name]):
> +                        add_unit_test_if_not_present(flakes_txt, unit_test_name, flakes_txt_path)
> +                    continue
Suggestion: sometimes tests fail with a different status (for example,
"Fail" becomes "Crash"), and the expectations should be updated with
the newer status.
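As a sketch of what I mean, assuming expectations are kept as "name,status"
lines (the helper and its name are hypothetical, not part of your patch):

```python
def update_test_status(lines, unit_test_name, new_result):
    # Rewrite any "name,status" entry whose status changed; entries for other
    # tests, and entries whose status already matches, pass through unchanged.
    updated = []
    for line in lines:
        name, _, old_result = line.strip().partition(",")
        if name == unit_test_name and old_result != new_result:
            updated.append(f"{name},{new_result}")
        else:
            updated.append(line.strip())
    return updated

# A test previously recorded as Fail that now crashes:
print(update_test_status(["t1,Fail", "t2,Timeout"], "t1", "Crash"))
# -> ['t1,Crash', 't2,Timeout']
```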
> +
> +                # flake result
> +                if not is_unit_test_present_in_other_jobs(unit_test, xfails[job_name]):
> +                    add_unit_test_if_not_present(flakes_txt, unit_test_name, flakes_txt_path)
> +                    continue
> +
> +                # consistent result
> +                add_unit_test_or_update_result_to_fails_if_present(fails_txt, unit_test,
> +                                                                   fails_txt_path)
> +
> +        save_file(fails_txt, fails_txt_path)
> +        save_file(flakes_txt, flakes_txt_path)
Regards,