Version: 0.6.3 (using KDE 4.0.3)
Installed from: Ubuntu Packages
When searching for uncommon text using the "Find" function in large PDF files such as:
I experience extreme memory usage.
For example, when searching for the word "abracadabra", virtual and resident memory grow from approximately 100 MB and 32 MB respectively to more than 550 MB and 450 MB. (I stopped the test at that point; otherwise my computer would have become unresponsive.)
SVN commit 803048 by pino:
Internally replace a TextEntity with a "lighter" version that stores the raw UTF-16 data of the text.
This way, we can save about four ints for each text entity; this is not much for small documents,
but for big documents with lots of text (e.g., the PDF specification) we can save a lot (more than 50 MB!).
M +84 -29 textpage.cpp
M +8 -8 textpage_p.h
WebSVN link: http://websvn.kde.org/?view=rev&revision=803048
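A minimal sketch of the idea behind this commit: instead of every text entity owning a full string object plus redundant coordinate data, the "lighter" entity keeps only a pointer into the page's raw UTF-16 buffer and one rectangle. The struct and field names below are illustrative, not Okular's actual types; the point is simply that the compact layout is measurably smaller per entity.

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Hypothetical "heavy" per-entity layout: an owning string object plus
// both a pixel rect and a normalized rect (illustrative, not Okular's
// real TextEntity).
struct HeavyTextEntity {
    std::u16string text;                  // owning UTF-16 string object
    int left, top, right, bottom;         // pixel rect (the ~4 ints saved)
    double nLeft, nTop, nRight, nBottom;  // normalized rect
};

// "Lighter" layout in the spirit of the commit: raw UTF-16 code units
// referenced from a shared per-page buffer, plus only the normalized rect.
struct LightTextEntity {
    const char16_t* utf16;                // raw UTF-16 data, not owned here
    int length;                           // number of code units
    double nLeft, nTop, nRight, nBottom;  // normalized rect only
};
```

On a typical 64-bit build the light struct is tens of bytes smaller than the heavy one; multiplied by the hundreds of thousands of text entities in a document like the PDF specification, that adds up to the tens of megabytes the commit message mentions.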
SVN commit 803949 by aacid:
limit the number of text pages we keep in memory so that searching does not bring your system to its knees
M +46 -0 core/document.cpp
M +5 -0 core/document_p.h
M +18 -2 core/generator.cpp
M +5 -0 core/generator.h
M +13 -3 generators/poppler/generator_pdf.cpp
WebSVN link: http://websvn.kde.org/?view=rev&revision=803949
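The second commit caps how many generated text pages stay resident at once, so a whole-document search no longer accumulates one text page per PDF page. A minimal sketch of that policy, assuming a least-recently-used eviction scheme; the class and method names are hypothetical, not Okular's actual API:

```cpp
#include <cassert>
#include <cstddef>
#include <list>
#include <map>

// Bounded cache of text-page numbers: when the cap is exceeded, the
// least recently used page is evicted (real code would also free that
// page's TextPage object at eviction time).
class TextPageCache {
public:
    explicit TextPageCache(std::size_t maxPages) : m_max(maxPages) {}

    // Record that pageNumber's text page was just generated or used.
    void touch(int pageNumber) {
        auto it = m_pos.find(pageNumber);
        if (it != m_pos.end())
            m_lru.erase(it->second);      // already cached: refresh position
        m_lru.push_front(pageNumber);     // front = most recently used
        m_pos[pageNumber] = m_lru.begin();
        if (m_lru.size() > m_max) {       // over the cap: evict the oldest
            int victim = m_lru.back();
            m_lru.pop_back();
            m_pos.erase(victim);
        }
    }

    bool contains(int pageNumber) const { return m_pos.count(pageNumber) != 0; }
    std::size_t size() const { return m_lru.size(); }

private:
    std::size_t m_max;
    std::list<int> m_lru;                             // MRU order
    std::map<int, std::list<int>::iterator> m_pos;    // page -> LRU position
};
```

With a cap of, say, a few dozen pages, searching a multi-thousand-page PDF touches every page in turn but memory stays bounded by the cap rather than by the document size.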