JDK-8172393

Calculating count from sized double stream does not perform well in parallel streams

      Description

      FULL PRODUCT VERSION :
      java version "1.8.0_111"
      Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
      Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)


      ADDITIONAL OS VERSION INFORMATION :
      Linux y700 4.8.0-32-generic #34-Ubuntu SMP Tue Dec 13 14:30:43 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux


      A DESCRIPTION OF THE PROBLEM :
      Creating a stream of random values via the java.util.Random class returns a stream whose known size is not exploited.

      As a result, the count() operation on large data sets takes a long time, and using parallel streams gives no performance improvement due to poor stream splitting.

      STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
      DoubleStream doubles = random.doubles(SIZE, 0, 1);

      doubles.count() <- takes a long time to execute
      doubles.parallel().count() <- gives no benefit
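      The steps above can be sketched as a self-contained program (SIZE is not defined in the report; the value below is a hypothetical choice for illustration):

      ```java
      import java.util.Random;

      public class SlowCountDemo {
          public static void main(String[] args) {
              final long SIZE = 50_000_000L; // hypothetical size for illustration
              Random random = new Random();

              long start = System.nanoTime();
              // count() generates and traverses all SIZE elements, even though
              // the stream was created with a known size
              long count = random.doubles(SIZE, 0, 1).count();
              long elapsedMs = (System.nanoTime() - start) / 1_000_000;

              System.out.println("count = " + count + " took " + elapsedMs + " ms");
          }
      }
      ```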



      EXPECTED VERSUS ACTUAL BEHAVIOR :
      EXPECTED -
      For a sized stream, count() could return the size immediately.
      ACTUAL -
      The stream size is known and the SIZED flag is present, but the double pipeline's count() does not consult it; it traverses every element:

      public final long count() {
              return mapToLong(e -> 1L).sum();
      }
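      As a sketch of the expected behaviour: the exact size is recoverable in O(1) from the stream's spliterator without generating any elements (SIZE is again a hypothetical value):

      ```java
      import java.util.Random;
      import java.util.Spliterator;

      public class SizedSpliteratorDemo {
          public static void main(String[] args) {
              final long SIZE = 1_000_000_000L; // hypothetical size
              Spliterator.OfDouble sp =
                      new Random().doubles(SIZE, 0, 1).spliterator();

              // The spliterator reports the SIZED characteristic, so the exact
              // element count is available without traversing the stream
              System.out.println(sp.hasCharacteristics(Spliterator.SIZED));
              System.out.println(sp.getExactSizeIfKnown()); // returns SIZE
          }
      }
      ```

      Since the size is available this cheaply, count() on such a pipeline could short-circuit instead of mapping and summing.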

      REPRODUCIBILITY :
      This bug can be reproduced always.

      ---------- BEGIN SOURCE ----------
      DoubleStream doubles = random.doubles(SIZE, 0, 1);
      long count = doubles.count();
       
      ---------- END SOURCE ----------

      People

      • Assignee:
        Pallavi Sonal (Inactive)
        Reporter:
        Webbug Group