JDK-8229857

scope inference on pointer to pointer is not friendly



    • Type: Bug
    • Status: Closed
    • Priority: P4
    • Resolution: Fixed
    • Affects Version/s: repo-panama
    • Fix Version/s: repo-panama
    • Component/s: tools


      Given a native function that returns pointers through an output buffer, the current scope inference, which gives the dereferenced pointers the same scope as the buffer that received them, is a usability problem.

      Consider the following native functions,

      void allocateDots(int number, point_t** dots);
      int getDotX(point_t *dot);

      where dots is an output buffer that receives multiple pointers to point_t. These are translated to the following Java functions,

      void allocateDots(int number, Pointer<? extends Pointer<OpaquePoint>> dots);
      int getDotX(Pointer<OpaquePoint> dot);
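
      To make the ownership pattern concrete, here is a hypothetical C implementation of the two native functions above (the fields of point_t are an assumption; the report treats it as opaque). The key point is that the receiving buffer and the allocated point_t objects have independent lifetimes: the buffer is scratch space for the call, while the points stay valid until freed individually.

      ```c
      #include <stdio.h>
      #include <stdlib.h>

      /* point_t's fields are assumed for illustration only. */
      typedef struct { int x; int y; } point_t;

      /* Allocate 'number' points and write their addresses into the
       * caller-provided buffer. The buffer itself is not retained. */
      void allocateDots(int number, point_t **dots) {
          for (int i = 0; i < number; i++) {
              dots[i] = malloc(sizeof(point_t));
              dots[i]->x = i;
              dots[i]->y = i * 2;
          }
      }

      int getDotX(point_t *dot) {
          return dot->x;
      }

      int main(void) {
          point_t *saved[3];
          allocateDots(3, saved);      /* saved[] doubles as the buffer */
          for (int i = 0; i < 3; i++) {
              printf("%d\n", getDotX(saved[i]));
              free(saved[i]);          /* each point freed individually */
          }
          return 0;
      }
      ```

      Nothing in the native contract ties the lifetime of the points to the lifetime of the buffer they were delivered through, which is exactly what the inferred scopes assume.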

      Typically, when we receive an opaque pointer from native code, we save it for later use. Thus, the code is likely to look as follows:

              Pointer<TestHelper.OpaquePoint>[] dots = new Pointer[3];
              try (Scope scope = Scope.globalScope().fork()) {
                  Array<Pointer<TestHelper.OpaquePoint>> ar = scope.allocateArray(
                          LayoutType.ofStruct(TestHelper.OpaquePoint.class).pointer(), dots.length);
                  lib.allocateDots(dots.length, ar.elementPointer());
                  for (int i = 0; i < dots.length; i++) {
                      dots[i] = ar.get(i);
                      int x = lib.getDotX(dots[i]);
                  }
              }
              // dots[] still holds the pointers, but their inferred scope
              // has been closed; any later use of dots[i] fails.

      With the current scope inference implementation, dereferenced pointers such as dots[i] are given the same scope as the receiving buffer, so that scope is already closed by the time we try to access them.

      The current implementation provides no means to transfer such a pointer to another scope. As a result, the buffer used to receive the pointers must be allocated in a scope that lives as long as the pointers themselves, which is a waste of resources.
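
      To illustrate why this is wasteful, here is a plain-C sketch of what native callers normally do (allocateDots is stubbed and point_t's fields are assumed, as before): the receiving buffer is released immediately after the call, while the received pointers live on.

      ```c
      #include <assert.h>
      #include <stdio.h>
      #include <stdlib.h>

      typedef struct { int x; int y; } point_t;   /* fields assumed */

      /* Stub standing in for the native allocateDots from the report. */
      static void allocateDots(int number, point_t **dots) {
          for (int i = 0; i < number; i++) {
              dots[i] = malloc(sizeof(point_t));
              dots[i]->x = i;
          }
      }

      int main(void) {
          /* Natively, the receiving buffer is scratch space: copy the
           * pointers out and free it right after the call. */
          point_t **buf = malloc(3 * sizeof *buf);
          allocateDots(3, buf);

          point_t *saved[3];
          for (int i = 0; i < 3; i++) saved[i] = buf[i];
          free(buf);                  /* safe in C; points untouched */

          /* Under the inferred-scope rule, the binding would instead
           * force buf to stay allocated as long as any saved[i] is
           * still in use. */
          assert(saved[2]->x == 2);
          printf("%d\n", saved[2]->x);
          for (int i = 0; i < 3; i++) free(saved[i]);
          return 0;
      }
      ```

      The buffer here is three pointers, so the cost is small; but with a long-lived outer scope and repeated calls, every temporary receiving buffer would be pinned for the scope's entire lifetime.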

      The example above is simplified; the same pattern appears in real APIs, for instance the TensorFlow C API:

      TF_CAPI_EXPORT extern void TF_SessionRun(
          TF_Session* session,
          // RunOptions
          const TF_Buffer* run_options,
          // Input tensors
          const TF_Output* inputs, TF_Tensor* const* input_values, int ninputs,
          // Output tensors
          const TF_Output* outputs, TF_Tensor** output_values, int noutputs,
          // Target operations
          const TF_Operation* const* target_opers, int ntargets,
          // RunMetadata
          TF_Buffer* run_metadata,
    // Output status
    TF_Status* status);



            henryjen Henry Jen