JDK-8132849

Increased stop time in cleanup phase because of single-threaded walk of thread stacks in NMethodSweeper::mark_active_nmethods()

    Details

    • Introduced In Version: 8
    • CPU: generic
    • OS: generic

      Description

      One of our services that runs with a few thousand threads recently noticed an increase in safepoint stop times, but not GC times, upon transitioning from JDK 7 to JDK 8.

      Further investigation revealed that most of the delta was related to the so-called pre-gc/vm-op "cleanup" phase, when various book-keeping activities are performed, and more specifically to the portion that walks Java thread stacks single-threaded (!) and updates the hotness counters for the active nmethods. This code appears to be new in JDK 8 (in JDK 7 one would walk the stacks only during code cache sweeps).

      This code should be fixed by either:
      (a) doing these updates by walking thread stacks in multiple worker threads in parallel, or best of all:
      (b) doing these updates when we walk the thread stacks during GC, and skipping this phase entirely
            for non-GC safepoints (with attendant loss in frequency of this update in low GC frequency
            scenarios).

      I also wonder how code cache sweep/eviction quality might be compromised if one were to do these hotness updates at a much lower frequency, as a temporary workaround to the performance problem.

      This issue was discussed at some length on the hotspot-compiler-dev@openjdk.java.net mailing list in the thread starting at:

             http://mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2015-July/018618.html

      where further detail can be obtained.

      The same issue exists in JDK 9, but not in JDK 7.

      More specifically, as stated in one of the email messages in that thread:

      ''
      I noticed the increase even with Initial and Reserved set to the default of 240 MB, but actual usage much lower (less than a quarter).

      Look at this code path. Note that this is invoked at every safepoint (although it says "periodically" in the comment). In the mark_active_nmethods() method, there's a thread iteration in both branches of the if.
      ....
      PS: for comparison, here's data with +TraceSafepointCleanup from JDK 7 (first, where this isn't done)
      vs JDK 8 (where this is done) with a program that has a few thousand threads:



      JDK 7:
      ...
      2827.308: [sweeping nmethods, 0.0000020 secs]
      2828.679: [sweeping nmethods, 0.0000030 secs]
      2829.984: [sweeping nmethods, 0.0000030 secs]
      2830.956: [sweeping nmethods, 0.0000030 secs]
      ...

      JDK 8:
      ...
      7368.634: [mark nmethods, 0.0177030 secs]
      7369.587: [mark nmethods, 0.0178305 secs]
      7370.479: [mark nmethods, 0.0180260 secs]
      7371.503: [mark nmethods, 0.0186494 secs]
      ...
      Here's a snapshot of the code cache related counters:

      sun.ci.codeCacheCapacity=251658240
      sun.ci.codeCacheMaxCapacity=251658240
      sun.ci.codeCacheMethodsReclaimedNum=3450
      sun.ci.codeCacheSweepsTotalNum=58
      sun.ci.codeCacheSweepsTotalTimeMillis=1111
      sun.ci.codeCacheUsed=35888704

      Notice that the code cache usage is less than 35 MB, for the 240 MB capacity, yet it seems we have had 58 sweeps already, and safepoint cleanup says:

      [mark nmethods, 0.0165062 secs]

      Even if the two closures do little or no work, the single-threaded walk over deep stacks of a thousand threads will cost time for applications with many threads, and this is now done at each safepoint irrespective of the sweeper activity as far as I can tell. It seems as if this work should be somehow rolled up (via a suitable injection) into the GC's thread walks that are done in parallel, rather than being done in a pre-GC phase (unless I am missing some reason that the sequencing is necessary, which it doesn't seem to be here).
      ''


      // Scans the stacks of all Java threads and marks activations of not-entrant methods.
      // No need to synchronize access, since 'mark_active_nmethods' is always executed at a
      // safepoint.
      void NMethodSweeper::mark_active_nmethods() {
        assert(SafepointSynchronize::is_at_safepoint(), "must be executed at a safepoint");
        // If we do not want to reclaim not-entrant or zombie methods there is no need
        // to scan stacks
        if (!MethodFlushing) {
          return;
        }

        // Increase time so that we can estimate when to invoke the sweeper again.
        _time_counter++;

        // Check for restart
        assert(CodeCache::find_blob_unsafe(_current) == _current, "Sweeper nmethod cached state invalid");
        if (!sweep_in_progress()) {
          _seen = 0;
          _sweep_fractions_left = NmethodSweepFraction;
          _current = CodeCache::first_nmethod();
          _traversals += 1;
          _total_time_this_sweep = Tickspan();

          if (PrintMethodFlushing) {
            tty->print_cr("### Sweep: stack traversal %d", _traversals);
          }
          Threads::nmethods_do(&mark_activation_closure);

        } else {
          // Only set hotness counter
          Threads::nmethods_do(&set_hotness_closure);
        }

        OrderAccess::storestore();
      }

        Issue Links

          Activity

          thartmann Tobias Hartmann added a comment -
          Yes, that's a good question. It may be worth adding a flag to switch between scanning the entire thread stacks and only looking at the top method, and performing some benchmarking.
          neliasso Nils Eliasson added a comment - - edited
          EDIT: The linked bug is already pushed. We should verify if this regression is gone.
          rkennke Roman Kennke added a comment -
          Nils: very likely not. The linked bug implemented the "framework" for parallelizing safepoint cleanup, but it asks the GC for parallel worker threads to do that. No GC in OpenJDK currently provides a thread pool. There is JDK-8184751 to implement this missing piece, and I have a patch ( http://cr.openjdk.java.net/~rkennke/8184751/webrev.00/ ) but I was told (in private) by some guys that this is not going to get accepted. I'll be happy to re-open the discussion though. The patch seems simple enough, and it's been present and active in Shenandoah for half a year without issues.
          dlong Dean Long added a comment -
          Can we also take advantage of the pool of JavaThreads waiting at safepoints doing nothing?
          rkennke Roman Kennke added a comment -
          Dean: I suppose we could. But hijacking the contexts of those threads seems complicated and risky and not worth it; it should be cheap to keep a dedicated small thread pool for this purpose instead. Depending on how the GC works, we *may* use the existing GC thread pool. In my experiments it didn't seem worth using more than around 4-8 threads for this.

            People

            • Assignee: rkennke Roman Kennke
            • Reporter: ysr Y. Ramakrishna
            • Votes: 0
            • Watchers: 11