Here is an intermediate report of my understanding of (and progress with) this issue.
The high memory usage of the compiler is caused by incremental inlining and optimizations executed after incremental inlining. The impact of incremental inlining on memory usage is around 5X as large as the impact of the optimizations following it. As a workaround, the memory usage of the compiler can be reduced by changing the LiveNodeCountInliningCutoff threshold from 40'000 to 20'000 (the value used by JDK7).
I executed the reproducer as suggested by Vladimir K., using a local OpenJDK build based on a recent checkout from:
For the investigation, I traced the memory usage of the following two methods while they were compiled with C2:
(1) jdk.nashorn.internal.scripts.Script$Recompilation$190$341313AZ$r::L:100$loadLib$L:8141$parse$subscripts (no restrictions on the methods compiled)
(2) jdk.nashorn.internal.scripts.Script$Recompilation$181$345772AA$r::L:100$loadLib$L:8141$prog1 (no restrictions on the methods compiled)
For both methods I observe two "jumps" in memory usage:
(1) while incremental inlining is performed (for the first time), and
(2) after escape analysis, but still in the optimization phase (I have not precisely identified the location yet).
Here are some numbers about the jumps in the memory usage:
       | Usage (RSS in MB)
Method | Before incremental inlining | After incremental inlining | Difference
(1)    | 524                         | 779                        | 255
(2)    | 204                         | 470                        | 266
       | Usage (RSS in MB)
Method | Just after escape analysis | At the end of the compilation | Difference
(1)    | 779                        | 825                           | 46
(2)    | 484                        | 528                           | 44
Jump 1 (the jump during incremental inlining) is around 5X larger than Jump 2 (the jump that happens after escape analysis).
As Vladimir pointed out, the compiler releases the memory it uses. The problem is that most memory pages are returned to the memory allocator, but not to the OS. So it seems we are not leaking (much) memory. Here is some output generated with the mallinfo() call after the last compilation (in CompileBroker::invoke_compiler_on_method):
Looking into mallinfo
Non-mmapped space allocated: 907 407 360 bytes
Space allocated in mmapped regions: 18 440 192 bytes
The total number of bytes in fastbin free blocks: 73 552 bytes
The total number of bytes used by in-use allocations: 51 851 344 bytes
The total number of bytes in free blocks: 855 556 016 bytes
The total amount of releasable free space at the top of the heap: 38 162 144 bytes
There are two problems that must be addressed:
(1) the high memory usage of incremental inlining;
(2) the high memory usage of the optimization phases following escape analysis.
Problems (1) and (2) are closely related. As I understand it, incremental inlining produces a large number of nodes, and the subsequent phases occupy an amount of memory that is proportional to the number of nodes generated by incremental inlining.
Problem (1) is currently tracked by
. The fix proposed earlier by Vladimir (see above) should reduce the impact of Problem (2).
The LiveNodeCountInliningCutoff flag bounds the number of live nodes produced by incremental inlining. In JDK7, the flag defaults to 20'000; in JDK8 and JDK9 it is set to 40'000 (see
I ran the reproducer with 7u80 and 8u60. For 8u60, I used both LiveNodeCountInliningCutoff=20'000 and LiveNodeCountInliningCutoff=40'000. Here are the results:
JDK version | LiveNodeCountInliningCutoff | RSS (MB) | Total runtime | Compilation time
7u80        | 20'000                      | 163      | 1m0s          | 11s
8u60        | 20'000                      | 522      | 2m46s         | 127s
8u60        | 40'000                      | 976      | 6m54s         | 371s
Reducing LiveNodeCountInliningCutoff (e.g., with -XX:LiveNodeCountInliningCutoff=20000) might be a workaround in cases where memory limitations cause the compiler to exit with an out-of-memory error.
I'll continue the investigation by looking into ways to reduce the memory usage of C2.
Thank you and best regards,