Bug 388854 - Okular uses a very large amount of RAM for caching
Summary: Okular uses a very large amount of RAM for caching
Status: CLOSED NOT A BUG
Alias: None
Product: okular
Classification: Applications
Component: PDF backend
Version: 1.4.2
Platform: Manjaro Linux
Importance: NOR normal
Target Milestone: ---
Assignee: Okular developers
URL:
Keywords:
Duplicates: 390477 400246 421091
Depends on:
Blocks:
 
Reported: 2018-01-12 11:09 UTC by Sasha
Modified: 2020-05-07 15:24 UTC
CC: 9 users

See Also:
Latest Commit:
Version Fixed In:


Attachments
Here is a file (3.79 MB, application/pdf)
2018-01-14 08:20 UTC, Sasha
Memory chunk, dumped and visualized as raw image in Gimp (69.19 KB, image/png)
2018-07-02 22:30 UTC, Tobias Deiminger

Description Sasha 2018-01-12 11:09:53 UTC
Sorry for my English. A couple of days ago I found that Okular uses more than a gigabyte of RAM. The PDF document was only 3.2 megabytes and consists of about 200 pages. I don't know whether that is normal, so maybe you have a memory leak. I'm sorry if not. Thanks :).
Comment 1 Nate Graham 2018-01-12 23:41:12 UTC
Please attach the PDF file, and also upgrade Okular and see if the issue reproduces. Okular 0.2.5 is pretty old.
Comment 2 Sasha 2018-01-14 08:20:50 UTC
Created attachment 109856 [details]
Here is a file
Comment 3 Nate Graham 2018-01-15 21:00:57 UTC
I can reproduce using Okular 1.1.3 and Poppler 0.57.0. As I use the arrow key to scroll all the way to the end of the document and back three times, Okular's memory usage rapidly rises, topping out at 1.17 GB.
Comment 4 Albert Astals Cid 2018-01-15 22:28:12 UTC
Nate, how is this a bug?

There's 200 rendered pages to cache, so yes obviously okular uses memory.
Comment 5 Nate Graham 2018-01-15 22:36:02 UTC
Yes, but is the amount reasonable?

1.17 GB of RAM / 257 pages = 4.5 MB per page, for a document that's only 3.2 MB on disk. This PDF is mostly text with a few graphics.

When I perform the same procedure in Evince, it tops out at 93 MB and doesn't feel any slower. So Okular is using more than 11 times as much RAM for its caching, but doesn't seem notably faster.
Comment 6 Albert Astals Cid 2018-01-15 22:52:54 UTC
(In reply to Nate Graham from comment #5)
> Yes, but is the amount reasonable?

You have it available; would you prefer the memory to lie around unused?

> 1.17 Gb RAM / 257 pages = 4.5 Mb per page, for a document that's only 3.2 Mb
> on disk. This PDF is mostly text and a few graphics.

First, what does the size on disk matter compared to the size of the actual in-memory representation? https://en.wikipedia.org/wiki/Zip_bomb

And second, we're caching the rendered pixmaps, so the file size tells you nothing about how much the render cache will occupy.

 
> When I perform the same procedure in Evince, it tops out at 93 Mb and
> doesn't feel any slower. So Okular is using 11 times as much RAM for its
> caching, but doesn't seem notably faster.

Sure, but a page may have taken 3 minutes to render. Do you really want to add code that decides whether to cache the rendered pixmap depending on how long it took to render?
Comment 7 Albert Astals Cid 2018-01-15 22:54:00 UTC
I don't agree that this is a bug, so I'm removing the confirmed flag.
Comment 8 Nate Graham 2018-01-15 22:55:48 UTC
If you don't see any problems with Okular's memory usage, then go ahead and close the bug. No reason to leave it open if we don't intend to make any changes.
Comment 9 Albert Astals Cid 2018-01-15 23:08:17 UTC
Question: since you changed the subject of the bug, have you verified that there is actually no memory bug?
Comment 10 Albert Astals Cid 2018-01-15 23:08:36 UTC
(In reply to Albert Astals Cid from comment #9)
> Question since you changed the subject of the bug, have you verified that
> there is actually no memory bug?

memory bug -> memory leak
Comment 11 Nate Graham 2018-01-15 23:13:15 UTC
Yes, I opened Okular with the document and left it open for a few hours. I didn't observe any memory leak; usage was static. So I assumed that the reporter was referring to memory consumed by actually using Okular.
Comment 12 Nate Graham 2018-01-18 15:33:17 UTC
So what are we doing with this bug? Do we consider this to be normal behavior?
Comment 13 Sasha 2018-01-18 20:14:54 UTC
Thank you for responding! I understand the reason for this behavior now. But what will happen if I want to read, for example, a 500-page document, or work with several documents with fewer pages? Are there any limits in the program for this case?
Comment 14 Sasha 2018-01-18 20:22:43 UTC
And something about memory leaks: I tried to move up and down through the document repeatedly, and after a couple of passes there were about 50 additional megabytes in RAM. But a couple of minutes later the memory usage was back to within about 10 MB of normal. So no, I don't think there are any leaks.
Comment 15 Albert Astals Cid 2018-01-21 21:09:27 UTC
(In reply to Sasha from comment #13)
> Thank you for responding! I understand the reason for this behavior now.
> But what will happen if I want to read, for example, a 500-page document,
> or work with several documents with fewer pages? Are there any limits in
> the program for this case?

Yes, the program is well behaved and won't use more memory than you have. If you want it to use less (or more) memory, you can tweak that in the performance settings.
Comment 16 Sasha 2018-01-22 09:26:30 UTC
Thank you!
Comment 17 Nate Graham 2018-02-14 20:58:13 UTC
*** Bug 390477 has been marked as a duplicate of this bug. ***
Comment 18 Krešimir Čohar 2018-06-30 22:06:15 UTC
It's preposterous that a 250-page document should be allowed to take up 1 GB of RAM, especially considering the document itself doesn't take up nearly that much disk space.

If you're not going to call it a bug, then call it a design flaw.

Also @Albert, we've all got RAM to spare, but that doesn't justify inefficiency, especially when Okular is eclipsed in that respect by Evince/GNOME of all things.
Comment 19 Brennan Kinney 2018-07-01 06:24:30 UTC
I can confirm a memory leak on Manjaro KDE with Okular 1.4.2. I've come here from the Reddit discussion: https://www.reddit.com/r/kde/comments/8v4g5y/extremely_high_ram_usage_by_okular/

A 4-page document (90 KB in size, an e-mail with a few small images like logos and icons) uses about 28 MB of RAM when opened.

Repeatedly scrolling up and down this document increases memory usage; I stopped at 60 MB. It's not freeing or reusing existing memory, which doesn't look like proper caching.

I imagine one could continue this process until all RAM is used or something triggers a cleanup. I'm not seeing any noticeable memory or CPU increase while the document is idle/inactive. When scrolling rapidly (dragging the scrollbar from top to bottom), CPU usage went from about 9% initially to 14% by the time I reached 60 MB, rising steadily along with the RAM. This is probably single-threaded, so it would cap at 25% on my machine, taking ever more time for what should be linear-time work.

Additional memory should not be allocated like this; if it were caching properly there would be a limit it reuses, and the CPU activity is likely associated with the memory leak.
Comment 20 Nate Graham 2018-07-02 00:52:12 UTC
Thanks for the additional information, Brennan. However, as previously noted, high memory usage of the sort described by you and others in this ticket is not considered a bug, because:
1. it only happens when there is actually unused memory available
2. Okular should give it up when the system is under memory pressure
3. you can change the memory caching aggressiveness in the settings if this sort of thing unnerves you

If you're seeing that any of the above conditions are not working properly, please file a new bug report to track that issue. Thanks!
Comment 21 Brennan Kinney 2018-07-02 02:35:51 UTC
(In reply to Nate Graham from comment #20)
> Thanks for the additional information, Brennan. However, as previously
> noted, high memory usage of the sort described by you and others in this
> ticket is not considered a bug because:
> 1. it's only done when there's actually unused memory available
> 2. Okular should give it up when the system is under memory pressure
> 3. you can change the memory caching aggressiveness in the settings if this
> sort of thing unnerves you
> 
> If you're seeing that any of the above conditions are not working properly,
> please file a new bug report to track that issue. Thanks!

Nate, it's not just caching; there is obvious leak-like growth. A 90 KB PDF of 4 pages takes 30 MB on open, and memory usage rises steadily as I scroll repeatedly up and down through the pages. The growth only stops once the scrolling does, and resumes when scrolling resumes. The cache is not being used; memory is just being allocated for the content again and again, instead of being reused or freed (until the mentioned memory pressure).

I could seemingly continue that process with the same file and bring the memory usage for this single document from the 60 MB I reached up to 1 GB or 10 GB. That's not good behaviour. Even if the memory is freed at a later point, the application shouldn't balloon like that pointlessly, and if others think that's appropriate, it's a bit of a worry. Don't claim it's working as intended when it's clearly poor memory handling. It's OK to admit that this behaviour is happening and isn't correct, but that you don't see enough value in investing time to correct it. Just don't pretend that constantly allocating hundreds or thousands of MB for a small document while scrolling is an optimization. Okular is not using that memory in its entirety; it's not an optimization, it's a bug.
Comment 22 Krešimir Čohar 2018-07-02 07:12:47 UTC
(In reply to Brennan Kinney from comment #21)
> (In reply to Nate Graham from comment #20)
> > Thanks for the additional information, Brennan. However, as previously
> > noted, high memory usage of the sort described by you and others in this
> > ticket is not considered a bug because:
> > 1. it's only done when there's actually unused memory available
> > 2. Okular should give it up when the system is under memory pressure
> > 3. you can change the memory caching aggressiveness in the settings if this
> > sort of thing unnerves you
> > 
> > If you're seeing that any of the above conditions are not working properly,
> > please file a new bug report to track that issue. Thanks!
> 
> Nate, it's not just caching; there is obvious leak-like growth. A 90 KB
> PDF of 4 pages takes 30 MB on open, and memory usage rises steadily as I
> scroll repeatedly up and down through the pages. The growth only stops
> once the scrolling does, and resumes when scrolling resumes. The cache is
> not being used; memory is just being allocated for the content again and
> again, instead of being reused or freed (until the mentioned memory
> pressure).
> 
> I could seemingly continue that process with the same file and bring the
> memory usage for this single document from the 60 MB I reached up to 1 GB
> or 10 GB. That's not good behaviour. Even if the memory is freed at a
> later point, the application shouldn't balloon like that pointlessly, and
> if others think that's appropriate, it's a bit of a worry. Don't claim
> it's working as intended when it's clearly poor memory handling. It's OK
> to admit that this behaviour is happening and isn't correct, but that you
> don't see enough value in investing time to correct it. Just don't
> pretend that constantly allocating hundreds or thousands of MB for a
> small document while scrolling is an optimization. Okular is not using
> that memory in its entirety; it's not an optimization, it's a bug.

I mostly use Chromium to read very large medical textbooks and medical research with tons of imaging in it; it's absolute torture to have Okular take up most of my RAM to open a 200 MB PDF. This wouldn't be a problem if other software behaved the same way (so yes, it might not be a bug, but it certainly is a design flaw).

If the memory caching settings in Okular are changed, the pages become far less responsive, which makes it that much harder to navigate the document.
Comment 23 Krešimir Čohar 2018-07-02 08:02:12 UTC
(In reply to Brennan Kinney from comment #21)
> [full quote of comment #21 trimmed; see above]

@Brennan, do you think we should report the pages not loading in a timely fashion as a separate bug?
Comment 24 Oliver Sander 2018-07-02 08:11:49 UTC
I don't think that would help. The argument would then be that the "use little memory" option has its price, namely slower loading times.

What is needed is more precise information about what all that memory is actually used for, where it gets allocated, etc. You could try to figure that out using memory profiling, code auditing, experimenting, etc. Only then can one really tell whether we are talking about a bug, a design flaw, or a feature.
Comment 25 Krešimir Čohar 2018-07-02 11:17:06 UTC
(In reply to Oliver Sander from comment #24)
> I don't think that this would help.  The argument would then be that the
> "use little memory"-option has its price, which is slower loading times.
> 
> What is needed is more precise information on what all that memory is
> actually used for, where it gets allocated, etc.  You could try to figure
> that out using memory profiling, code auditing, experimenting etc.  Only
> then one can really tell whether we are talking about a bug, a design flaw,
> or a feature.

The idea that it needs more RAM to load pages faster is a tempting leap, but fairly moot. Chromium and Evince load pages blazingly fast at very little memory cost (compared to Okular at "low", let alone "normal" or "aggressive") while rendering those very same pages (granted, Evince does a stunning job of messing up the font rendering). Okular scrolls faster, but it doesn't render the pages as quickly (regardless of its RAM usage), and when set to "low" it eventually just refuses to render them at all. We can try profiling its memory usage, but regardless of where the RAM goes, it still doesn't pass muster (which is very unlike most KDE software nowadays...).
Comment 26 Oliver Sander 2018-07-02 15:31:03 UTC
I am not saying that the speed and memory consumption of Okular are great and cannot be improved. However, the path from the rather vague "Okular uses lots of memory" to specific improvements in the code is long and winding. It would help a lot to know in more specific terms which memory allocations are problematic, and I was merely suggesting that you could try to help with that.
Comment 27 Krešimir Čohar 2018-07-02 19:38:48 UTC
(In reply to Oliver Sander from comment #26)
> I am not saying that speed/memory consumption of Okular are great and cannot
> be improved. However, the path from to the rather vague "Okular uses lots of
> memory" to specific improvements to the code is long and windy. It would
> help a lot to know in more specific terms which memory allocations are
> problematic, and I was merely suggesting that you could try and help with
> that.

I'll try to check it out, but seeing as my understanding of how exactly Okular works is really poor, I wouldn't hold my breath, sorry :D But I can at the very least tell you that this shouldn't be swept under the rug.
Comment 28 Albert Astals Cid 2018-07-02 21:28:29 UTC
I'm not going to answer further in this bug, because I don't feel compelled to have a discussion with people who say things like "no way a 90 KB PDF can take XX MB of memory" as if they knew what they are talking about.

*BUT* this may actually be a manifestation of "glibc is useless and doesn't actually free memory when you tell it to", which I worked around in https://cgit.kde.org/okular.git/commit/?id=95bc29a76fc1f93eaabe5383d934644067dfc884

If that is the case, one may need to add more malloc_trim calls, not only when the document is closed. But I totally don't feel like being dragged into a discussion with people who have no clue what they're talking about and no respect for people who give them stuff for free.
Comment 29 Tobias Deiminger 2018-07-02 22:30:51 UTC
Created attachment 113727 [details]
Memory chunk, dumped and visualized as raw image in Gimp

Explanation follows...
Comment 30 Tobias Deiminger 2018-07-02 23:11:35 UTC
(In reply to Albert Astals Cid from comment #28)
> *BUT* this may actually be a manifestation of "glibc is useless and doesn't
> actually free memory when you tell it to" that i workarounded at
> https://cgit.kde.org/okular.git/commit/
> ?id=95bc29a76fc1f93eaabe5383d934644067dfc884

That's quite possible. I did some memory forensics with attachment 109856 [details] loaded. The observations could be explained by glibc not freeing memory.

Important note in advance: yes, Okular caches pages, but it stores them as QPixmaps (e.g., see PagePrivate::m_pixmaps). This means that, at least on X11, the image payload is not kept in Okular's heap. QPixmap objects are just wrappers around X11 pixmap handles [0]; the raw image data is stored in memory managed by the X11 server, not in Okular's heap.

Following this theory, I shouldn't be able to find bitmaps in the heap.

I iterated over all malloc chunks in a gdb session (using gdb-gef with some customization) and checked for suspicious chunks (by vtable, by size). There were several chunks of size > 4 MiB flagged as "in use". Such big chunks could be bitmaps, guessing purely by their sheer size. I dumped the memory range of one such chunk into a file, like:

(gdb) dump binary memory /tmp/dump.bin 0x55a80d790560 0x55A80DB71BF0

When opening /tmp/dump.bin in GIMP as a raw image and playing around with the width and height a bit, it looked like attachment 113727 [details]. A quite clear case: this is either the remainder of a page bitmap, or Maxwell's demon did a very good job :)

What does this tell us? IMO it tells us that the inner bitmap of a QImage is hanging around in the heap, which shouldn't happen. QImages are only used temporarily, while they're sent from the rendering thread over a queued connection to the main thread. After being transformed into a QPixmap (in Generator::generatePixmap), they should be freed. If they are not, this could for example mean some QImages are stuck in the queue, or it could be the glibc behavior mentioned by Albert.

Albert, Oliver: If you think I'm on the wrong track, please stop me from digging deeper.

[0] ftp://www.x.org/pub/X11R7.7/doc/man/man3/xcb_create_pixmap.3.xhtml
Comment 31 Krešimir Čohar 2018-07-02 23:36:36 UTC
(In reply to Albert Astals Cid from comment #28)
> I'm not going to answer further in this bug because i don't feel compelled
> to have a discussion with people that saying things like "no way a 90kb PDF
> can take XX Mb of memory" as if they knew anything about they are talking
> about.
> 
> *BUT* this may actually be a manifestation of "glibc is useless and doesn't
> actually free memory when you tell it to" that i workarounded at
> https://cgit.kde.org/okular.git/commit/
> ?id=95bc29a76fc1f93eaabe5383d934644067dfc884
> 
> If this is the case one may need to add more malloc_trim around and not only
> when the document is closed, but i totally don't feel like being dragged to
> discuss with people that have no clue what they're talking about and have no
> respect for people that give them stuff for free.

"no respect for people that give them stuff for free"? "don't know what you're talking about"? for real? i meant no disrespect, i don't even use okular but i was trying to direct you to something that clearly bears closer examination. there are a myriad of other PDF readers out there we could be using but we're still here, reporting bugs/problems in okular. think long and hard about what that means with respect to the appreciation we show for the software you make.
that being said, all of this is a moot point. it's taking up a lot more RAM than any other PDF reader (i'll leave you to decide whether that's appropriate).

i'm sorry i can't help you out more, i know how to report a bug, but the diagnostics is beyond me (not an IT guy).

thanks @Tobias for fleshing it out
Comment 32 Christoph Feck 2018-07-03 02:20:08 UTC
> Following this theory, I shouldn't be able to find bitmaps in heap.

Your theory is wrong. Read the QPixmap source.
Comment 33 Tobias Deiminger 2018-07-03 05:18:46 UTC
(In reply to Christoph Feck from comment #32)
> > Following this theory, I shouldn't be able to find bitmaps in heap.
> 
> Your theory is wrong. Read QPixmap source.

I did, and found http://code.qt.io/cgit/qt/qtbase.git/tree/src/plugins/platforms/xcb/nativepainting/qpixmap_x11.cpp#n1167 (but TBH I'm not sure whether we take that path).

Also, in a gdb session with the same document loaded but no scrolling yet, I can't find memory chunks large enough to hold a contiguous page bitmap, which should be about 4 MB for 1000 x 1000 px at 32-bit depth. The largest chunk there is about 70 kB.

Both made me believe the theory. Could you explain where I'm wrong?
Comment 34 Albert Astals Cid 2018-07-03 09:58:33 UTC
(In reply to Tobias Deiminger from comment #33)
> (In reply to Christoph Feck from comment #32)
> > > Following this theory, I shouldn't be able to find bitmaps in heap.
> > 
> > Your theory is wrong. Read QPixmap source.
> 
> I did and found
> http://code.qt.io/cgit/qt/qtbase.git/tree/src/plugins/platforms/xcb/
> nativepainting/qpixmap_x11.cpp#n1167 (but tbh am not sure if we take that
> path).

That code is only used if you have the QT_XCB_NATIVE_PAINTING environment variable set, which I hope you don't, since it's experimental.
Comment 35 Tobias Deiminger 2018-07-03 23:51:29 UTC
(In reply to Albert Astals Cid from comment #34)
> That code is only used if you have QT_XCB_NATIVE_PAINTING environment
> variable set that i hope you're not since it's experimental
The variable wasn't set, so QPixmap didn't delegate to QX11PlatformPixmap but to QRasterPlatformPixmap.

> I can't find memory chunks large enough to hold contiguous range
> of memory for page bitmaps

Now I know why the heap didn't contain the expected chunks: the inner objects of PagePrivate::PixmapObject::m_pixmap were NOT in the heap, but on the stack.

@Albert: As I understand it, the page QPixmap is meant to be long-lived. Is that correct? If yes, we have a bug here, because stack memory of course isn't long-lived.

Here's what I did; maybe someone can repeat it for confirmation.

void PagePrivate::setPixmap( DocumentObserver *observer, QPixmap *pixmap, const NormalizedRect &rect, bool isPartialPixmap )
{
    if ( m_rotation == Rotation0 ) {
        // ...
        QMap< DocumentObserver*, PagePrivate::PixmapObject >::iterator it = m_pixmaps.find( observer );
        // ...
        it.value().m_pixmap = pixmap;  // break here, and observe inner objects of QPixmap* pixmap
        // ...
    }
}

(gdb) b page.cpp:557
Breakpoint 1 at 0x7fffdac68109: file /home/deiminge/git/okular/core/page.cpp, line 557.
(gdb) c
Thread 1 "okular" hit Breakpoint 1, Okular::PagePrivate::setPixmap (this=0x555555d2a610, observer=0x555555a46ee0, pixmap=0x555555df9d60, rect=..., isPartialPixmap=0x0) at /home/deiminge/git/okular/core/page.cpp:557
557             it.value().m_pixmap = pixmap;
(gdb) p *pixmap
$1 = {
...
  members of QPixmap: 
  data = {
    d = 0x555555d21950
  }
}
(gdb) p *(QRasterPlatformPixmap*)0x555555d21950
$2 = {
...
members of QRasterPlatformPixmap:
  image = {
    members of QImage:
    d = 0x7fffc00100d0
  }
}
(gdb) p *(QImageData*)0x7fffc00100d0
$9 = {
...
  width = 0x4f9, 
  height = 0x673, 
  depth = 0x20, 
  nbytes = 0x80476c, 
  devicePixelRatio = 1, 
...
  data = 0x7fffc6093010  // Here's the large raw pixmap
                         // 0x7fff... is in my stack memory range
...
}
Comment 36 Tobias Deiminger 2018-07-04 08:06:45 UTC
(In reply to Tobias Deiminger from comment #35)
> PagePrivate::PixmapObject::m_pixmap were NOT in heap, but on stack.

>   data = 0x7fffc6093010  // Here's the large raw pixmap
>                          // 0x7fff... is in my stack memory range

After sleeping on it, I can't believe my own words. How could uchar* QImageData::data happen to point to the stack? Give me some time to double-check and correlate with /proc/pid/maps.
Comment 37 Tobias Deiminger 2018-07-05 08:47:50 UTC
(In reply to Tobias Deiminger from comment #36)
> After sleeping over it, I can't believe my own words. How could uchar*
> QImageData::data happen to point to stack? Give me some time to double check
> and correlate with /proc/pid/maps.

The pixmap data was not on the stack, nor in the [heap] mapping from /proc/pid/maps; that's why I couldn't find the chunk. It was actually in a separate anonymous memory mapping.

man 3 malloc gives hints as to why there may be more anonymous mappings (in addition to [heap]):
- "When allocating blocks of memory larger than MMAP_THRESHOLD bytes, the glibc malloc() implementation allocates the memory as a private anonymous mapping... MMAP_THRESHOLD is 128 kB by default"
- "in multithreaded applications, glibc creates additional memory allocation arenas if mutex contention is detected. Each arena is a large region of memory that is internally allocated by the system (using brk(2) or mmap(2))"

So, no bug there.

Regarding the reported memory leak story, it looks like Albert is right.
1) I can observe a fast heap increase until the page pixmap cache is fully populated (~142 MB in my 20-page document test). => Not a bug, but intended caching.
2) I can observe a further continuous heap increase at a slower rate when scrolling after that. This looks like a memory leak at first sight, but when I inject a malloc_trim(0) with gdb in this situation, memory usage immediately goes back to 142 MB. => Not a bug, but libc "smartness".

Looks fine up to here. If we found memory increasing even with repeated malloc_trim(0), that would indeed be worrying. I'll continue testing to see whether anything like that occurs.
Comment 38 Nate Graham 2018-10-24 14:54:18 UTC
*** Bug 400246 has been marked as a duplicate of this bug. ***
Comment 39 Nate Graham 2020-05-07 15:24:57 UTC
*** Bug 421091 has been marked as a duplicate of this bug. ***