JDK-6625723

Excessive ThreadLocal storage used by ReentrantReadWriteLock




        As of jdk6, each ReentrantReadWriteLock stores a value into a ThreadLocal
        for each Thread that acquires its read lock.
        If m is the number of locks, and n is the number of threads each lock is used with,
        then the memory overhead is O(m*n).
        Although annoying, this may be acceptable when either m or n is small,
        but some users have large values for both m and n, and for them
        this memory overhead is a showstopper. E.g. in


        (rest of description is text of user complaint)

        I'm a happy user of the java 5 concurrency utilities - especially read/write locks. We have a system with hundreds of thousands of objects (each protected by a read/write lock) and hundreds of threads. I tried to upgrade the system to jdk6 today and, to my surprise, most of the memory reported by jmap -histo was used by thread locals and the locks' internal objects...

        As it turns out, in java 5 every lock had just a counter of readers and writers. In java 6, it seems that every lock has a separate thread local for itself - which means that 2 objects are allocated for each lock for each thread that ever touches it... In our case, memory usage has gone up by 600MB just because of that.
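The per-lock thread local the complaint refers to looks roughly like the sketch below. This is a reconstruction from memory of the jdk6-era sources, not a verbatim copy: each (lock, thread) pair costs one HoldCounter plus one ThreadLocalMap entry inside the thread, which are the "2 objects" mentioned above.

```java
// Sketch of jdk6-era per-lock read-hold bookkeeping (reconstruction,
// not verbatim JDK source). Every ReentrantReadWriteLock owns one
// ThreadLocalHoldCounter; every thread that touches the read lock gets
// its own HoldCounter through that ThreadLocal.
public class HoldCounterSketch {
    // One of these is allocated per (lock, thread) pair.
    static final class HoldCounter {
        int count;
        final long tid = Thread.currentThread().getId();
    }

    // Each lock has its own ThreadLocal subclass instance, so entries
    // cannot be shared across locks: hence the O(m*n) footprint.
    static final class ThreadLocalHoldCounter extends ThreadLocal<HoldCounter> {
        @Override
        public HoldCounter initialValue() {
            return new HoldCounter();
        }
    }

    public static void main(String[] args) {
        ThreadLocalHoldCounter readHolds = new ThreadLocalHoldCounter();
        HoldCounter h = readHolds.get();  // allocates HoldCounter + map entry
        h.count++;
        System.out.println(h.count);              // prints 1
        System.out.println(readHolds.get() == h); // prints true: one per thread
    }
}
```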

        I have attached a small test program below. Running it under jdk5 gives the following results:

        Memory at startup 114
        After init 4214
        One thread 4214
        Ten threads 4216

        With jdk6 it is

        Memory at startup 124
        After init 5398
        One thread 8638
        Ten threads 39450
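The jump between one thread and ten threads in the jdk6 numbers is roughly consistent with a back-of-envelope estimate. The per-pair byte count below is an assumption (two small heap objects of about 48 bytes each), not a measured figure:

```java
public class OverheadEstimate {
    public static void main(String[] args) {
        long locks = 50_000;   // m, from the test program below
        long threads = 10;     // n
        // Assumption: each (lock, thread) pair costs ~2 heap objects
        // (a hold counter plus a ThreadLocalMap entry), ~48 bytes each.
        long bytesPerPair = 2 * 48;
        long kb = locks * threads * bytesPerPair / 1024;
        System.out.println(kb + " KB"); // prints 46875 KB, same order of
                                        // magnitude as the growth measured
    }
}
```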

        This problem alone makes jdk6 completely unusable for us. What I'm considering is taking the ReentrantReadWriteLock implementation from JDK5 and using it with the rest of JDK6. There are two basic choices - either renaming it and changing our code to allocate the other class (cleanest from a deployment point of view) or putting a different version on the bootclasspath. Will renaming the class (and moving it to a different package) work correctly with jstack/deadlock detection tools, or do they expect only the JDK implementation of Lock? Is there any code in the new jdk depending on a particular implementation of RRWL?

        Why was this change made, btw? The only reason I can see is to prevent threads from releasing a read lock taken by another thread. This is a nice feature, but is it worth wasting a gigabyte of heap? How would this scale to a really big number of threads?
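For what it's worth, the per-thread hold counts are what let jdk6 and later detect the release-by-another-thread case mentioned above. A minimal demonstration follows; this reflects observed behavior on later JDKs and is an illustration added here, not part of the original report:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Shows the behavior the per-thread hold counters pay for: a thread
// that does not hold the read lock cannot release it.
public class ReadUnlockDemo {
    public static void main(String[] args) throws Exception {
        ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        Thread holder = new Thread(() -> lock.readLock().lock());
        holder.start();
        holder.join(); // read lock is now held by a thread that has exited
        try {
            lock.readLock().unlock(); // main thread never acquired it
            throw new AssertionError("unlock by non-holder unexpectedly succeeded");
        } catch (IllegalMonitorStateException expected) {
            System.out.println("unlock by non-holder rejected");
        }
    }
}
```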

        Test program

        import java.util.concurrent.atomic.AtomicInteger;
        import java.util.concurrent.locks.*;
        // Note: the attachment was truncated in this report; the loop bodies
        // and closing braces below are a plausible reconstruction.
        public class LockTest {
          static AtomicInteger counter = new AtomicInteger(0);
          static Object foreverLock = new Object();
          public static void main(String[] args) throws Exception {
            dumpMemory("Memory at startup ");
            final ReadWriteLock[] locks = new ReadWriteLock[50000];
            for (int i = 0; i < locks.length; i++) {
              locks[i] = new ReentrantReadWriteLock();
            }
            dumpMemory("After init ");
            Runnable run = new Runnable() {
              public void run() {
                // Touch the read lock of every lock once, then park forever
                // so the thread's ThreadLocal map survives the measurement.
                for (int i = 0; i < locks.length; i++) {
                  locks[i].readLock().lock();
                  locks[i].readLock().unlock();
                }
                counter.incrementAndGet();
                synchronized (foreverLock) {
                  try {
                    foreverLock.wait();
                  } catch (InterruptedException e) {
                  }
                }
              }
            };
            new Thread(run).start();
            while (counter.get() != 1) {
              Thread.sleep(10);
            }
            dumpMemory("One thread ");
            for (int i = 0; i < 9; i++) {
              new Thread(run).start();
            }
            while (counter.get() != 10) {
              Thread.sleep(10);
            }
            dumpMemory("Ten threads ");
          }
          private static void dumpMemory(String txt) {
            System.out.println(txt + (Runtime.getRuntime().totalMemory()
                - Runtime.getRuntime().freeMemory()) / 1024);
          }
        }


                • Assignee:
                  Martin Buchholz

