For security reasons I close all file descriptors listed in /proc/self/fd except the ones I explicitly whitelist. (Closing every descriptor up to sysconf(_SC_OPEN_MAX) is incorrect, because a program can open a file and then lower its resource limits, leaving valid descriptors above the new limit.) Under Valgrind, my process crashes when it tries to close a descriptor listed in /proc/self/fd that Valgrind won't let it close. I understand if this might be too much work for the developers to fix.

Reproducible: Always

Steps to Reproduce:
1. Read /proc/self/fd
2. Observe that the listing includes descriptors the program never opened

Actual Results: My program reads incorrect data.

Expected Results: /proc/self/fd should list only the files my program opened.
It's not at all clear what files you are referring to, but my guess is that you're trying to close a file descriptor that is reserved for Valgrind (such as its output channel), which it will refuse to let you do because, as far as your program is concerned, those descriptors are not supposed to exist. The descriptor limits reported to the client program when it calls getrlimit are deliberately adjusted to reserve a set of file descriptors for Valgrind's own use. You shouldn't try to touch those at all; if you do, Valgrind will simply pretend the descriptor isn't open and return EBADF to your program. Yes, because we don't virtualise /proc (except for a few things), it is possible to discover that those descriptors exist, but there isn't much we can easily do about that.
> The descriptor limits reported to the client program by valgrind when getrlimit is called are deliberately adjusted to reserve a set of file descriptors for valgrind's use and you shouldn't try and touch those at all and if you do then valgrind will simply pretend the descriptor isn't open and return EBADF to your program.

As I mentioned earlier, using getrlimit to decide which file descriptors to close is a terrible idea, because the file limit can be lowered after descriptors above the new limit have already been allocated. I'm not sure what you are saying that is new. It might actually be a good idea NOT to wrap getrlimit that way, so that insecure and broken programs which misuse getrlimit break under Valgrind instead of giving users a false sense of security. Anyway, I'll take your word that virtualising /proc would be difficult and a lot of work. This limitation doesn't show up in the documentation, though, so I think this is at the very least a documentation bug. So I would suggest that Valgrind document more clearly that it virtualises very little of /proc, and that it consider NOT wrapping getrlimit to report a lower file limit. I'm not sure there is a legitimate use of getrlimit that is served by artificially lowering it, other than possibly being able to run programs like Valgrind itself (which deliberately allocates file descriptors at the top of the limit to hide them) under Valgrind.
Hi. One possible mitigation for this would be to add a Valgrind client request to valgrind.h for testing whether a file descriptor belongs to Valgrind.
Note that the "test" program for bug https://bugs.kde.org/show_bug.cgi?id=337388 makes use of this bug. It would be nice if we could "virtualize" /proc/self/fd and /proc/self/fdinfo. But up to now we have only "virtualized" proc files, not proc directories.