Application: kcachegrind (0.5.1kde)
KDE Platform Version: 4.4.1 (KDE 4.4.1)
Qt Version: 4.6.2
Operating System: Linux 2.6.33-1.fc13.i686 i686
Distribution: "Fedora release 13 (Goddard)"

-- Information about the crash:
When I try to open a 2 GB file generated by xdebug, KCachegrind always segfaults.

The crash can be reproduced every time.

-- Backtrace:
Application: KCachegrind (kcachegrind), signal: Segmentation fault
[KCrash Handler]
#6  FixPool::ensureSpace (this=0x9d9ed08, size=32) at /usr/src/debug/kdesdk-4.4.1/kcachegrind/libcore/pool.cpp:104
#7  0x080956ef in FixPool::allocate (this=0x9d9ed08, size=32) at /usr/src/debug/kdesdk-4.4.1/kcachegrind/libcore/pool.cpp:59
#8  0x0809403c in CachegrindLoader::loadTraceInternal (this=0xbfd9e830, part=0x9daa038) at /usr/src/debug/kdesdk-4.4.1/kcachegrind/libcore/cachegrindloader.cpp:1162
#9  0x08094bdb in CachegrindLoader::loadTrace (this=0x9bb2bc8, p=0x9daa038) at /usr/src/debug/kdesdk-4.4.1/kcachegrind/libcore/cachegrindloader.cpp:180
#10 0x080859d2 in TraceData::addPart (this=0x9db1188, dir=..., name=...) at /usr/src/debug/kdesdk-4.4.1/kcachegrind/libcore/tracedata.cpp:3380
#11 0x0808c0af in TraceData::load (this=0x9db1188, base=...) at /usr/src/debug/kdesdk-4.4.1/kcachegrind/libcore/tracedata.cpp:3302
#12 0x0806dde9 in TopLevel::loadTrace (this=0x9bb2db0, file=...) at /usr/src/debug/kdesdk-4.4.1/kcachegrind/kcachegrind/toplevel.cpp:946
#13 0x0806e35e in TopLevel::loadTraceDelayed (this=0x9bb2db0) at /usr/src/debug/kdesdk-4.4.1/kcachegrind/kcachegrind/toplevel.cpp:1013
#14 0x0806f52d in TopLevel::qt_metacall (this=0x9bb2db0, _c=QMetaObject::InvokeMetaMethod, _id=85, _a=0xbfd9ec6c) at /usr/src/debug/kdesdk-4.4.1/i686-redhat-linux-gnu/kcachegrind/kcachegrind/toplevel.moc:323
#15 0x07476efb in QMetaObject::metacall(QObject*, QMetaObject::Call, int, void**) () from /usr/lib/libQtCore.so.4
#16 0x07485d1f in QMetaObject::activate(QObject*, QMetaObject const*, int, void**) () from /usr/lib/libQtCore.so.4
#17 0x0748b7b8 in ?? () from /usr/lib/libQtCore.so.4
#18 0x0748b8dd in ?? () from /usr/lib/libQtCore.so.4
#19 0x074821c4 in QObject::event(QEvent*) () from /usr/lib/libQtCore.so.4
#20 0x0213bddc in QApplicationPrivate::notify_helper(QObject*, QEvent*) () from /usr/lib/libQtGui.so.4
#21 0x02142836 in QApplication::notify(QObject*, QEvent*) () from /usr/lib/libQtGui.so.4
#22 0x07c63c5b in KApplication::notify (this=0xbfd9f508, receiver=0x9cbe238, event=0xbfd9f1a0) at /usr/src/debug/kdelibs-4.4.1/kdeui/kernel/kapplication.cpp:302
#23 0x07472523 in QCoreApplication::notifyInternal(QObject*, QEvent*) () from /usr/lib/libQtCore.so.4
#24 0x0749d45e in ?? () from /usr/lib/libQtCore.so.4
#25 0x0749a9a5 in ?? () from /usr/lib/libQtCore.so.4
#26 0x0076d585 in g_main_dispatch (context=0x9ab6690) at gmain.c:1960
#27 IA__g_main_context_dispatch (context=0x9ab6690) at gmain.c:2513
#28 0x007712c8 in g_main_context_iterate (context=0x6ad810, block=1, dispatch=1, self=0x9ab4480) at gmain.c:2591
#29 0x007714a9 in IA__g_main_context_iteration (context=0x9ab6690, may_block=1) at gmain.c:2654
#30 0x0749a6a6 in QEventDispatcherGlib::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/libQtCore.so.4
#31 0x021ea546 in ?? () from /usr/lib/libQtGui.so.4
#32 0x07470bfa in QEventLoop::processEvents(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/libQtCore.so.4
#33 0x07470f3a in QEventLoop::exec(QFlags<QEventLoop::ProcessEventsFlag>) () from /usr/lib/libQtCore.so.4
#34 0x07473607 in QCoreApplication::exec() () from /usr/lib/libQtCore.so.4
#35 0x0213be88 in QApplication::exec() () from /usr/lib/libQtGui.so.4
#36 0x0806044a in main (argc=1, argv=0xbfd9f654) at /usr/src/debug/kdesdk-4.4.1/kcachegrind/kcachegrind/main.cpp:91

Reported using DrKonqi
Hmm. If it segfaults at pool.cpp:104, a call to malloc() returned 0, i.e. "out of memory".

Some explanation: KCachegrind's data model distinguishes coarse-grained from fine-grained cost information; fine-grained is everything inside of a function. The current strategy for loading fine-grained data is to store it away without any processing, and to parse/postprocess/aggregate the fine-grained information for a given symbol only when that symbol actually is selected. This usually saves memory, but with a huge amount of fine-grained information you can run into exactly your out-of-memory problem. For real profile data, that should never happen.

I suppose that xdebug is actually generating a trace of events instead of a profile, e.g. dumping a line for every executed call? Then it misuses the loading mechanism in KCachegrind to do the conversion to a profile. IMHO you could just as well send a bug report to xdebug... KCachegrind was not designed to load such huge data files.

A work-around: in KCachegrind's sources, in libcore/fixcost.h, change the line "#define USE_FIXCOST 1" to set USE_FIXCOST to 0. That will do the parsing/aggregation of fine-grained profile information already while loading, which probably helps in your case. Can you confirm this?
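For reference, the edit is just this (a sketch against the kdesdk-4.4.1 sources mentioned above; the exact surrounding lines in libcore/fixcost.h may differ):

    // libcore/fixcost.h
    // 1: store raw fine-grained cost records in a FixPool and aggregate
    //    them lazily when a symbol is selected (the default)
    // 0: aggregate fine-grained costs already while loading, so the raw
    //    records do not all have to be kept in memory at once
    #define USE_FIXCOST 0

Note that this is a compile-time switch, not a runtime option, so KCachegrind has to be rebuilt afterwards.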
Oh, Josef, actually I am not sure I really understand what you are saying :( If you wish, I can provide this file, in full or in part. In any case, I think that if KCachegrind can't read a file, it should say so and handle the error properly instead of segfaulting.
Can you confirm that this is an out-of-memory condition, e.g. by running "top" in a terminal?

"if KCachegrind can't read a file, it should say so and handle the error properly instead of segfaulting"

It is not that easy if this is an out-of-memory condition. If malloc() returns 0, the process usually is already screwed and has no other option than to exit right away. Even printing something to the console at that point may fail. And if you have swap space, the machine is probably already thrashing the hard disk for quite some time before malloc() returns 0.
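To illustrate why even the error path is delicate: about the only thing a process can still do safely at that point is write a static message with the raw write() syscall, which allocates nothing, and then exit. A minimal sketch (the function name is made up for illustration; this is not KCachegrind code):

    #include <unistd.h>   // write(), _exit()

    // Hypothetical last-resort handler, called when malloc() returns 0.
    // printf()/qWarning() may themselves try to allocate and fail, so
    // use a static buffer and write() directly, then exit immediately.
    static void dieOutOfMemory()
    {
        static const char msg[] = "kcachegrind: out of memory, aborting\n";
        (void) write(STDERR_FILENO, msg, sizeof(msg) - 1);
        _exit(1);
    }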
I think that if all physical memory and swap were exhausted, the machine should behave differently - heavy swapping should kick in long before that. Also, it is very strange that only KCachegrind would be denied memory. But in any case, how can I verify with top that this is the error here?
> If malloc() returns 0, the process usually is already screwed and has no
> other option than to exit right away. Even printing something to the
> console at that point may fail. And if you have swap space, the machine
> is probably already thrashing the hard disk for quite some time before
> malloc() returns 0.

Hm. The loader does not try to allocate all the memory at once, does it? So if malloc() returns 0, we could at least free some previously allocated memory and then try to allocate a small portion, e.g. to send a standard notification through D-Bus.
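What this suggests is essentially the classic "emergency reserve" pattern: set a block aside at startup and release it when malloc() first fails, so the error path has a little headroom. A sketch of the idea - purely illustrative, with invented names, not code from KCachegrind:

    #include <cstdlib>

    // Illustrative only: a small reserve freed on the first failed
    // allocation, so that an error path (dialog, D-Bus message, ...)
    // still has some memory to work with.
    static void* g_emergencyReserve = std::malloc(256 * 1024);

    void* allocateWithReserve(std::size_t size)
    {
        void* p = std::malloc(size);
        if (!p && g_emergencyReserve) {
            std::free(g_emergencyReserve);   // give the headroom back
            g_emergencyReserve = 0;
            p = std::malloc(size);           // may still fail; caller must check
        }
        return p;
    }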
Hmm... malloc() returning 0 can also just mean that the virtual address space of the process is filled up. How much memory do you have in your system?

Regarding "top": run it in a terminal nearby and observe the memory usage while KCachegrind is loading this huge file. What is the maximum number, e.g. for resident size, directly before KCachegrind crashes?
I have 3 GB physical memory and 4.8 GB swap. Shortly before the crash, top looks like this:

top - 04:08:57 up 7:37, 9 users, load average: 4.59, 3.74, 3.28
Tasks: 1 total, 0 running, 0 sleeping, 1 stopped, 0 zombie
Cpu(s): 25.3%us, 28.5%sy, 0.0%ni, 34.4%id, 8.9%wa, 0.0%hi, 2.9%si, 0.0%st
Mem: 2815812k total, 2733320k used, 82492k free, 22644k buffers
Swap: 4915192k total, 355308k used, 4559884k free, 658796k cached

  PID USER   PR NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
 2991 pasha  20  0 3039m 1.2g 357m T  0.0 45.5 1:04.55  kcachegrind

As you can see, swap is almost entirely free, and there is even some free physical memory.
*** Bug 236441 has been marked as a duplicate of this bug. ***
malloc() returning 0 here obviously just means that the address space is exhausted; it says nothing about the usage of real memory (note the 3039m VIRT in the top output above, which is right at the limit for a 32-bit process). And the solution for that, nowadays, is to run a 64-bit OS...

Just found a nice email about OOM handling by Lennart Poettering: http://article.gmane.org/gmane.comp.audio.jackit/19998 He suggests just writing a short message to the console and quitting; there really is nothing else to do.

There seem to be more people trying to load huge files into KCachegrind, see bug 236441, with the same backtrace. So it seems fine to handle OOM in FixPool::ensureSpace(). I'll keep this bug open as a reminder to do aggregation while loading data (assuming your use case involves millions of identical data points that can be aggregated, rather than millions of different ones - if they are all different, the only way out is a 64-bit OS).
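A sketch of what handling OOM in ensureSpace() could look like - much simplified, with invented member names, not the actual pool.cpp code - is to let allocation failure propagate as a return value instead of crashing on a null chunk pointer:

    #include <cstdlib>

    // Hypothetical, simplified stand-in for KCachegrind's FixPool, only
    // to show the shape of the fix: ensureSpace() reports failure so the
    // loader can abort cleanly instead of segfaulting.
    class FixPool
    {
    public:
        FixPool() : _next(0), _free(0) {}

        bool ensureSpace(std::size_t size)
        {
            if (_free >= size) return true;       // current chunk still fits
            std::size_t chunkSize = (size > kChunk) ? size : kChunk;
            char* chunk = (char*) std::malloc(chunkSize);
            if (!chunk) return false;             // OOM: tell the caller
            _next = chunk;                        // (a real pool would keep a
            _free = chunkSize;                    //  chunk list for cleanup)
            return true;
        }

        void* allocate(std::size_t size)
        {
            if (!ensureSpace(size)) return 0;     // propagate failure upward
            void* p = _next;
            _next += size;
            _free -= size;
            return p;
        }

    private:
        static const std::size_t kChunk = 100000;
        char*       _next;
        std::size_t _free;
    };

CachegrindLoader::loadTraceInternal() would then have to check allocate() for 0 and stop loading with an error message instead of assuming success.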
*** Bug 245548 has been marked as a duplicate of this bug. ***
*** Bug 303073 has been marked as a duplicate of this bug. ***
*** Bug 312486 has been marked as a duplicate of this bug. ***
Thank you for the crash reports. As it has been a while since this was reported, can you please test and confirm whether this issue is still occurring, or whether this bug report can be marked as resolved? I have set the bug status to "needsinfo" pending your response; please change it back to "reported" or to "resolved/worksforme" when you respond. Thank you.
Dear Bug Submitter,

This bug has been in NEEDSINFO status with no change for at least 15 days. Please provide the requested information as soon as possible and set the bug status as REPORTED. Due to regular bug tracker maintenance, if the bug is still in NEEDSINFO status with no change in 30 days the bug will be closed as RESOLVED > WORKSFORME due to lack of needed information.

For more information about our bug triaging procedures please read the wiki located here: https://community.kde.org/Guidelines_and_HOWTOs/Bug_triaging

If you have already provided the requested information, please mark the bug as REPORTED so that the KDE team knows that the bug is ready to be confirmed.

Thank you for helping us make KDE software even better for everyone!
This bug has been in NEEDSINFO status with no change for at least 30 days. The bug is now closed as RESOLVED > WORKSFORME due to lack of needed information. For more information about our bug triaging procedures please read the wiki located here: https://community.kde.org/Guidelines_and_HOWTOs/Bug_triaging Thank you for helping us make KDE software even better for everyone!