When running valgrind on an executable that generates aborts (google test "death" tests), the resulting xml output contains redundant `</valgrindoutput>` closing tags. This breaks downstream tools that ingest this xml.

OBSERVED RESULT
$ grep valgrindoutput exec.memcheck
<valgrindoutput>
</valgrindoutput>
</valgrindoutput>
</valgrindoutput>
</valgrindoutput>

EXPECTED RESULT
$ grep valgrindoutput exec.memcheck
<valgrindoutput>
</valgrindoutput>

SOFTWARE/OS VERSIONS
Ubuntu 20.04
Looking at the source there are several calls to VG_(printf_xml)("</valgrindoutput>\n"):

1. m_main.c shutdown_actions_NORETURN. This should exit or panic.
2. m_libcassert.c VG_(assert_fail), panic and VG_(unimplemented). panic doesn't exit immediately (the others should).
3. m_errormgr.c do_actions_on_error and load_one_suppressions_file (x3). do_actions_on_error doesn't exit immediately.

That shouldn't be too hard to debug. Can you provide a small testcase to reproduce?
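For illustration only, here is a minimal, self-contained sketch (not Valgrind code; emit_xml and close_xml_output are made-up stand-ins for VG_(printf_xml) and the various shutdown paths) of a once-only guard those call sites could share, so a single process never prints the closing tag twice:

```
#include <stdio.h>
#include <stdbool.h>

/* Stand-in for VG_(printf_xml); illustrative only. */
static void emit_xml(const char* s) { fputs(s, stdout); }

/* Emit the closing tag at most once per process, whichever shutdown
   path (normal exit, assert failure, panic) reaches it first. */
static void close_xml_output(void)
{
   static bool closed = false;
   if (!closed) {
      closed = true;
      emit_xml("</valgrindoutput>\n");
   }
}

int main(void)
{
   close_xml_output();
   close_xml_output();   /* no-op: the tag was already printed */
   return 0;
}
```

Note this would only de-duplicate within one process; it doesn't help when a forked child writes to the same xml file.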
(In reply to Paul Floyd from comment #1)
[...]
> Can you provide a small testcase to reproduce?

```
#include <unistd.h>
#include <stdlib.h>

int main(int argc, char** argv) {
    if (!fork()) {
        abort();
    }
    malloc(10); // something for memcheck
}
```

$ valgrind --trace-children=yes --xml=yes --xml-file=test.memcheck ./a.out
$ grep valgrindoutput test.memcheck
<valgrindoutput>
</valgrindoutput>
</valgrindoutput>
Hmm. For this example, the </valgrindoutput> in both cases comes from shutdown_actions_NORETURN in m_main.c.

The problem is that both processes write their summaries and </valgrindoutput> to the same file. So in addition to the ill-formed xml end tags there is also one <status> RUNNING and 2 <status> FINISHED, and only 1 <pid> and <ppid>.

The best thing to do is to use %p to have a per-process xml file. It's going to be tricky for the forked Valgrind processes to know which is the last one standing and for only that process to write the end tag.
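For the reproducer above, that would look something like:

$ valgrind --trace-children=yes --xml=yes --xml-file=test.%p.memcheck ./a.out

the aim being that each process writes to its own file (e.g. test.<pid>.memcheck, names illustrative), each containing a single matched <valgrindoutput> pair.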
Is using %p in the filename a satisfactory workaround?
> Is using %p in the filename a satisfactory workaround?

That's probably fine for cases where the multiple xml files can be easily post-processed, but I suspect there are a lot of consumers of this data that expect it to be in one file (Jenkins plugins, for one). Perhaps both options could be provided (legacy [single-file] and new [multi-file]) with a push to get downstream consumers to use the new multi-file output.
%p has been there for a long time. I've used it for nightly / weekend tests with post-process scripts to parse all logs.