Bug 191758 - Valgrind's private fds are not hidden from user process
Summary: Valgrind's private fds are not hidden from user process
Status: RESOLVED INTENTIONAL
Alias: None
Product: valgrind
Classification: Developer tools
Component: general
Version: unspecified
Platform: Compiled Sources Linux
Importance: NOR wishlist
Target Milestone: ---
Assignee: Julian Seward
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2009-05-06 00:53 UTC by Dan Kegel
Modified: 2009-05-08 00:46 UTC
2 users

See Also:
Latest Commit:
Version Fixed In:
Sentry Crash Report:


Attachments

Description Dan Kegel 2009-05-06 00:53:58 UTC
Version:            (using Devel)
OS:                Linux
Installed from:    Compiled sources

Valgrinding chromium on both Mac and Linux, we saw errors of the sort

==74890== Warning: invalid file descriptor 256 in syscall close()

(see http://crbug.com/11412).  Turns out we were looping over all
open file descriptors, as reported by the system, and some of
those were valgrind's!  Here's a little test program demonstrating
the issue.  It would be sweet if valgrinding this test program
did not produce any warnings about invalid fds.  Valgrind could
presumably hide them from being visible via /proc/self/fd or /dev/fd
with a little effort.  We've now worked around this in our app
by ignoring any fds above the system limit for our process.

#include <dirent.h>
#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <unistd.h>

#include <climits>  /* for INT_MAX */
#include <iostream>
#include <set>

int main(int argc, char **argv)
{
#if defined(__linux__)
  static const rlim_t kSystemDefaultMaxFds = 8192;
  static const char fd_dir[] = "/proc/self/fd";
#elif defined(__APPLE__) && defined(__MACH__)
  static const rlim_t kSystemDefaultMaxFds = 256;
  static const char fd_dir[] = "/dev/fd";
#endif
  std::set<int> saved_fds;

  // Get the maximum number of FDs possible.
  struct rlimit nofile;
  rlim_t max_fds;
  if (getrlimit(RLIMIT_NOFILE, &nofile)) {
    // getrlimit failed. Take a best guess.
    max_fds = kSystemDefaultMaxFds;
    std::cerr << "getrlimit(RLIMIT_NOFILE) failed: " << errno;
  } else {
    max_fds = nofile.rlim_cur;
  }
  std::cout << "max_fds is " << max_fds << "\n";

  if (max_fds > INT_MAX)
    max_fds = INT_MAX;

  // Don't close stdin, stdout and stderr
  saved_fds.insert(STDIN_FILENO);
  saved_fds.insert(STDOUT_FILENO);
  saved_fds.insert(STDERR_FILENO);

  DIR *dir = opendir(fd_dir);
  if (NULL == dir) {
    std::cerr << "Unable to open " << fd_dir;
    exit(1);
  }

  struct dirent *ent;
  while ((ent = readdir(dir))) {
    // Skip . and .. entries.
    if (ent->d_name[0] == '.')
      continue;

    char *endptr;
    errno = 0;
    const long int fd = strtol(ent->d_name, &endptr, 10);
    if (ent->d_name[0] == 0 || *endptr || fd < 0 || errno)
      continue;

    // When running under Valgrind, Valgrind opens several FDs for its
    // own use and will complain if we try to close them.  All of
    // these FDs are >= |max_fds|, so we can check against that here
    // before closing.
    if (fd >= static_cast<int>(max_fds))
      std::cout << "Found fd " << fd << " outside the normal range -- is it valgrind's?\n";
    // don't check - let valgrind warn
    close(fd);
  }
}
Comment 1 Nicholas Nethercote 2009-05-07 06:06:15 UTC
> Turns out we were looping over all open file descriptors, as reported by the
> system, and some of those were valgrind's!

By "reported by the system", you mean you were looking at /dev/fd?  I.e. the problem is that there's an inconsistency between what's seen in /dev/fd (where the Valgrind fds are visible) and what getrlimit() returns (where the Valgrind fds are effectively hidden)?
Comment 2 Dan Kegel 2009-05-07 06:45:19 UTC
That's right.  If valgrind's fds were hidden from /dev/fd (or /proc/self/fd on linux) as well, we wouldn't have had to change our program to work around them.

It's not like this is a huge deal for us, but if somebody didn't have freedom to change their code, it could be a real bear.
Comment 3 Tom Hughes 2009-05-07 09:41:03 UTC
Masking out directory entries would be a lot of work for fairly small gain, so I'm not sure it's worth it personally.

The traditional way of looping over all the file descriptors on Unix systems is simply to loop from zero up to the rlimit (which we do adjust, as you say) rather than doing funny business with /dev/fd, which is highly non-portable.
Comment 4 Dan Kegel 2009-05-07 15:41:19 UTC
Then go ahead and mark LATER or WONTFIX or something (I don't know what 
KDE uses in its bugzilla).
Maybe this bug report will help the next guy who stumbles into this.
Comment 5 Nicholas Nethercote 2009-05-08 00:46:05 UTC
I'll mark it WONTFIX.  Valgrind tries to provide a faithful emulation of the underlying environment, but there will always be some infelicities, and I agree with Tom that the cost/benefit ratio on this one isn't good.  Thanks for the report, though.