| Summary: | Opening / Loading PDF from Samba share (SMB / CIFS) is very slow | | |
|---|---|---|---|
| Product: | [Applications] okular | Reporter: | Sebastian <sebastian_steiner> |
| Component: | PDF backend | Assignee: | Okular developers <okular-devel> |
| Status: | REPORTED | Resolution: | --- |
| Severity: | normal | CC: | kde |
| Priority: | NOR | | |
| Version First Reported In: | 22.12.3 | | |
| Target Milestone: | --- | | |
| Platform: | Debian stable | | |
| OS: | Linux | | |
| Latest Commit: | | Version Fixed/Implemented In: | |
| Sentry Crash Report: | | | |
| Attachments: | Excerpt from Samba file server log file; Search results from gitlab.freedesktop.org | | |
Description
Sebastian
2025-08-03 08:32:29 UTC

When loading files from a network share, Poppler is very slow: it iterates over the file in small chunks, which causes a lot of network round trips.

Sebastian:
Created attachment 183771 [details]
Search results from gitlab.freedesktop.org

See attachment 2025-08-04_poppler_buffer.png [details] - might this be the critical thing?

Any chance for Okular to have Poppler operate on an in-memory buffer instead of seeking in 256-byte portions over the Samba share?

MuPDF (https://packages.debian.org/bookworm/mupdf) - which is not based on Poppler, as far as I know - opens the same PDF file over the same Samba remote share instantly, in a fraction of a second! No delay! But it lacks so many features of Okular. :(

Sune Vuorela:
(In reply to Sebastian from comment #1)
> Any chance for Okular to have Poppler operate on an in-memory buffer instead
> of seeking in 256 byte portions over the Samba share?

Try changing that constant in Poppler to something fairly larger and see what happens? I do think that working in 256-byte increments might have been the right thing back in the days of floppy drives...?

Sebastian:
First of all, I've filed a bug at Poppler: https://gitlab.freedesktop.org/poppler/poppler/-/issues/1616

Even a command line tool like pdftoppm suffers from the same problem: it takes a minute when reading from the Samba share and only a few seconds when reading from local disk.

(In reply to Sune Vuorela from comment #3)
> Try changing that constant in Poppler to something fairly larger and see
> what happens?

Unfortunately, I don't really know how to build Okular against a custom Poppler library so that it doesn't use the libraries installed from the package repository. In Debian, Okular seems to depend on libpoppler-qt5-1, which in turn depends on libpoppler126, and I guess the latter is the library I would have to patch?

(In reply to Sune Vuorela from comment #3)
> I do think that working in 256-byte increments might have been
> the right thing back in the days of floppy drives ... ?

I totally agree! Even for a local disk, 256 bytes seems very small, given that HDD/SSD sectors nowadays are 4K / 4096 bytes. And it seems to be fatal for remote mounts with some latency.
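To put rough numbers on the round-trip cost discussed above, here is a back-of-the-envelope sketch; the 5 ms per-request latency and the chunk sizes are illustrative assumptions, not measurements from this report:

```cpp
// Back-of-the-envelope model of why tiny read chunks hurt on a
// high-latency mount. All numbers are illustrative assumptions.
#include <cstdio>

int main()
{
    const double fileBytes = 2.4e6;  // the ~2.4 MB PDF from this report
    const double latencySec = 0.005; // assumed 5 ms per network round trip

    for (double chunkBytes : {256.0, 4096.0, 65536.0}) {
        const double requests = fileBytes / chunkBytes;
        std::printf("chunk %7.0f B -> %8.0f requests, ~%7.1f s of latency alone\n",
                    chunkBytes, requests, requests * latencySec);
    }
    return 0;
}
```

With 256-byte reads that comes to roughly 9,400 requests and about 47 seconds of pure waiting; real SMB traffic can be far worse if each read also involves a seek, which may be behind the "almost an hour" figure reported below.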
Sebastian:
Hi, I hope I have solved the problem:
I changed PDFGenerator::loadDocumentWithPassword in Okular's generator_pdf.cpp like this:
```cpp
// BEGIN Generator inherited functions
Okular::Document::OpenResult PDFGenerator::loadDocumentWithPassword(const QString &filePath, QVector<Okular::Page *> &pagesVector, const QString &password)
{
#ifndef NDEBUG
    if (pdfdoc) {
        qCDebug(OkularPdfDebug) << "PDFGenerator: multiple calls to loadDocument. Check it.";
        return Okular::Document::OpenError;
    }
#endif
    // Read the whole file sequentially in one go, instead of letting
    // Poppler seek around the (possibly remote) file in small chunks.
    QFile file(filePath);
    if (!file.open(QIODevice::ReadOnly)) {
        // Error: file not readable.
        return Okular::Document::OpenError;
    }
    QByteArray fileContent = file.readAll();

    // Create the Poppler document from the in-memory buffer.
    pdfdoc = Poppler::Document::loadFromData(fileContent, nullptr, nullptr);
    ...
```
This should load the entire file into a QByteArray at once and then hand that in-memory buffer to Poppler.
With this change, Okular seems to display PDF files from network shares faster by a factor of more than 100, I'd guess. :D Amazing!
Could you maybe implement the change in Okular? <3
I guess it would be reasonable to put some limit on this in-memory loading, e.g. allow it to use up to 20% of the available system memory and otherwise fall back to the current approach?
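A minimal sketch of such a guard, outside of Okular's actual code; the 20% budget, the sysconf() probe (which reports total rather than currently free memory), and the helper names inMemoryBudget / loadPreferringMemory are assumptions for illustration:

```cpp
// Hypothetical sketch: load small files fully into memory, fall back to
// Poppler's own path-based (seek-driven) loading for large ones.
#include <poppler-qt5.h>
#include <QFile>
#include <QFileInfo>
#include <unistd.h>

static qint64 inMemoryBudget()
{
    // Very rough: 20% of total physical memory (Linux-specific probe;
    // total RAM is used here as a stand-in for "available" memory).
    const long pages = sysconf(_SC_PHYS_PAGES);
    const long pageSize = sysconf(_SC_PAGE_SIZE);
    return (pages > 0 && pageSize > 0) ? qint64(pages) * pageSize / 5 : 0;
}

// Returns whatever Poppler's load functions return: std::unique_ptr in
// recent poppler-qt5 releases, a raw pointer in older ones.
static auto loadPreferringMemory(const QString &filePath)
{
    if (QFileInfo(filePath).size() <= inMemoryBudget()) {
        QFile file(filePath);
        if (file.open(QIODevice::ReadOnly)) {
            // One sequential read, then parse from the in-memory buffer.
            return Poppler::Document::loadFromData(file.readAll());
        }
    }
    // Too large (or unreadable): keep the current seek-based behavior.
    return Poppler::Document::load(filePath);
}
```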
While opening a 2.4 MB PDF file from a Samba share could take almost an hour before the change, it now opens within seconds! :D
And nowadays, it should be no problem to load 2.4 MB into memory.
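For reproducing the comparison outside Okular, a small standalone timing harness along these lines should do; it is a sketch assuming a poppler-qt5 recent enough that load() / loadFromData() return std::unique_ptr (older releases return raw pointers, which would then need an explicit delete):

```cpp
// Minimal timing sketch: compare Poppler reading from a path (seek-based)
// with loading the whole file into memory first.
#include <poppler-qt5.h>
#include <QDebug>
#include <QElapsedTimer>
#include <QFile>

int main(int argc, char **argv)
{
    if (argc < 2)
        return 1;
    const QString path = QString::fromLocal8Bit(argv[1]);

    QElapsedTimer timer;

    timer.start();
    auto fromPath = Poppler::Document::load(path);
    qDebug() << "load(path):        " << timer.elapsed() << "ms";

    timer.restart();
    QFile file(path);
    if (!file.open(QIODevice::ReadOnly))
        return 1;
    const QByteArray data = file.readAll();
    auto fromMemory = Poppler::Document::loadFromData(data);
    qDebug() << "loadFromData(data):" << timer.elapsed() << "ms";

    return (fromPath && fromMemory) ? 0 : 1;
}
```

Running it once against a local copy and once against the Samba-mounted path should make the difference between the two code paths directly visible.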