Version: (using Devel)
OS: Linux
Installed from: Compiled sources

Valgrinding chromium on both Mac and Linux, we saw errors of the sort

==74890== Warning: invalid file descriptor 256 in syscall close()

(see http://crbug.com/11412). Turns out we were looping over all open file descriptors, as reported by the system, and some of those were valgrind's! Here's a little test program demonstrating the issue. It would be sweet if valgrinding this test program did not produce any warnings about invalid fds. Valgrind could presumably hide them from being visible via /proc/self/fd or /dev/fd with a little effort. We've now worked around this in our app by ignoring any fds above the system limit for our process.

#include <dirent.h>
#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <unistd.h>

#include <climits>
#include <iostream>
#include <set>

int main(int argc, char **argv) {
#if defined(__linux__)
  static const rlim_t kSystemDefaultMaxFds = 8192;
  static const char fd_dir[] = "/proc/self/fd";
#elif defined(__APPLE__) && defined(__MACH__)
  static const rlim_t kSystemDefaultMaxFds = 256;
  static const char fd_dir[] = "/dev/fd";
#endif
  std::set<int> saved_fds;

  // Get the maximum number of FDs possible.
  struct rlimit nofile;
  rlim_t max_fds;
  if (getrlimit(RLIMIT_NOFILE, &nofile)) {
    // getrlimit failed. Take a best guess.
    max_fds = kSystemDefaultMaxFds;
    std::cerr << "getrlimit(RLIMIT_NOFILE) failed: " << errno << "\n";
  } else {
    max_fds = nofile.rlim_cur;
  }
  std::cout << "max_fds is " << max_fds << "\n";

  if (max_fds > INT_MAX)
    max_fds = INT_MAX;

  // Don't close stdin, stdout and stderr.
  saved_fds.insert(STDIN_FILENO);
  saved_fds.insert(STDOUT_FILENO);
  saved_fds.insert(STDERR_FILENO);

  DIR *dir = opendir(fd_dir);
  if (NULL == dir) {
    std::cerr << "Unable to open " << fd_dir << "\n";
    exit(1);
  }

  struct dirent *ent;
  while ((ent = readdir(dir))) {
    // Skip . and .. entries.
    if (ent->d_name[0] == '.')
      continue;

    char *endptr;
    errno = 0;
    const long int fd = strtol(ent->d_name, &endptr, 10);
    if (ent->d_name[0] == 0 || *endptr || fd < 0 || errno)
      continue;

    // Leave stdin/stdout/stderr alone, and don't pull the rug out from
    // under readdir() by closing the directory stream's own fd.
    if (saved_fds.count(static_cast<int>(fd)))
      continue;
    if (fd == dirfd(dir))
      continue;

    // When running under Valgrind, Valgrind opens several FDs for its
    // own use and will complain if we try to close them. All of
    // these FDs are >= |max_fds|, so we can check against that here
    // before closing.
    if (fd >= static_cast<int>(max_fds))
      std::cout << "Found fd " << fd << " outside the normal range -- is it valgrind's?\n";

    // Don't check -- let valgrind warn.
    close(fd);
  }

  closedir(dir);
  return 0;
}
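For completeness, the workaround in our real app boils down to skipping those high fds instead of closing them -- roughly this, inside the same readdir() loop (a sketch, not the exact code we ship):

    // Anything at or above the rlimit is almost certainly one of valgrind's
    // fds (or otherwise not ours), so leave it alone instead of closing it.
    if (fd >= static_cast<int>(max_fds))
      continue;
    close(fd);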
> Turns out we were looping over all open file descriptors, as reported by the
> system, and some of those were valgrind's!

By "reported by the system", you mean you were looking at /dev/fd? I.e., the problem is that there's an inconsistency between what's seen in /dev/fd (where the Valgrind fds are visible) and what getrlimit() returns (where the Valgrind fds are effectively hidden)?
That's right. If valgrind's fds were hidden from /dev/fd (or /proc/self/fd on Linux) as well, we wouldn't have had to change our program to work around them. It's not like this is a huge deal for us, but if somebody didn't have the freedom to change their code, it could be a real bear.
Masking out directory entries would be a lot of work for fairly small gain, so I'm not sure it's worth it personally. The traditional way of looping over all the file descriptors on Unix systems is simply to loop from zero to the rlimit (which we do adjust, as you say) rather than doing funny business with /dev/fd, which is highly non-portable.
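For reference, a minimal sketch of that traditional approach (the function name, the fallback value, and the cap on an unbounded limit are illustrative, not from any particular codebase):

#include <sys/resource.h>
#include <unistd.h>

// Close every descriptor above stderr without consulting /dev/fd or
// /proc/self/fd. Valgrind's own fds sit above the rlimit it reports,
// so a plain 0-to-rlimit loop never reaches them.
static void close_fds_above_stderr(void) {
  struct rlimit nofile;
  rlim_t max_fds = 256;  // fallback guess if getrlimit() fails
  if (getrlimit(RLIMIT_NOFILE, &nofile) == 0)
    max_fds = nofile.rlim_cur;
  if (max_fds == RLIM_INFINITY || max_fds > 65536)
    max_fds = 65536;  // arbitrary cap so an unbounded limit doesn't stall us
  for (rlim_t fd = STDERR_FILENO + 1; fd < max_fds; ++fd)
    close(static_cast<int>(fd));  // EBADF on fds that aren't open is harmless
}

int main() {
  close_fds_above_stderr();
  return 0;
}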
Then go ahead and mark LATER or WONTFIX or something (I don't know what KDE uses in its bugzilla). Maybe this bug report will help the next guy who stumbles into this.
I'll mark it WONTFIX. Valgrind tries to provide a faithful emulation of the underlying environment, but there will always be some infelicities, and I agree with Tom that the cost/benefit ratio on this one isn't good. Thanks for the report, though.