Introduction: The Deceptive Simplicity of Intrinsic Locks
When developers first encounter concurrent programming in Java, the synchronized keyword often appears as a magical solution to all threading anomalies. By simply slapping this keyword onto method declarations or wrapping critical sections in synchronized blocks, it seems that race conditions are instantly eradicated. However, concurrent programming is rarely about merely applying keywords; it is fundamentally about managing access to shared, mutable state [1, 2].
While the synchronized keyword provides an accessible entry point into thread safety, relying on it blindly introduces a labyrinth of hidden complexities. Excessive or improper use of synchronization can lead to severe performance bottlenecks, diminished application scalability, and catastrophic liveness failures such as deadlocks [3]. Because threads share the memory address space of their owning process, they have access to the same variables, which is convenient but highly dangerous without strict coordination protocols [4].
In this comprehensive guide, we will explore the critical pitfalls developers must be careful about when using synchronized in Java. We will dissect the dual nature of intrinsic locks, analyze the architectural nightmares of lock-ordering deadlocks, explore the performance penalties imposed by serialized execution, and discuss advanced optimization strategies like lock splitting and lock striping to help you engineer robust, high-performance concurrent applications.
The Dual Nature of Synchronization: Mutual Exclusion and Memory Visibility
To understand the dangers of synchronized, we must first understand its complete mandate. Many programmers incorrectly assume that synchronization is solely a mechanism for mutual exclusion: preventing an object from being seen in an inconsistent state by one thread while it is being modified by another [5]. While mutual exclusion is a primary function, it is only half of the story.
The Java Memory Model (JMM) dictates how and when changes made by one thread become visible to others. In modern shared-memory multiprocessors, compilers and hardware frequently reorder instructions, cache variables in processor-local registers, and delay writing updates to main memory to optimize execution speed [6]. In the absence of synchronization, a reading thread might see a completely stale value for a shared variable, or worse, observe a partially constructed object [7, 8].
Locking is not just about mutual exclusion; it is also intrinsically about memory visibility [9, 10]. When a thread exits a synchronized block, it establishes a happens-before relationship with any subsequent thread that acquires the exact same lock [11, 12]. This guarantees that all memory operations performed prior to releasing the lock are completely visible to the next thread.
Therefore, a critical pitfall to avoid is inconsistent synchronization. If a variable is guarded by a lock, that same lock must be held every single time the variable is accessed, whether for writing or merely reading [13, 14]. Reading a shared mutable variable without acquiring the appropriate lock forfeits the JMM's visibility guarantees and exposes your application to stale data, which can manifest as infinite loops, corrupted data structures, or inaccurate computations [7, 15].
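To make this concrete, here is a minimal sketch of consistent guarding (the class name GuardedCounter is hypothetical, chosen for illustration): both the write path and the read path synchronize on the same intrinsic lock, so a reader is guaranteed to see the writer's latest update.

```java
// Every access to `count`, reads included, goes through the same
// intrinsic lock ("this"), providing both atomicity and visibility.
class GuardedCounter {
    private long count = 0; // guarded by "this"

    public synchronized void increment() {
        count++; // read-modify-write, atomic while the lock is held
    }

    public synchronized long get() {
        return count; // reading under the same lock guarantees a fresh value
    }
}
```

Note that an unsynchronized get() would compile and usually "work", yet would silently reintroduce the stale-data hazard described above.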
Critical Pitfall 1: Lock-Ordering Deadlocks
One of the most dangerous liveness hazards associated with the synchronized keyword is the lock-ordering deadlock. A deadlock occurs when multiple threads wait forever due to a cyclic locking dependency [16]. For instance, if Thread A holds Lock L and attempts to acquire Lock M, while Thread B holds Lock M and attempts to acquire Lock L, both threads will remain permanently blocked [16, 17].
Unlike database systems that can detect deadlocks and abort transactions to recover, the Java Virtual Machine (JVM) provides no automatic deadlock resolution. When a set of Java threads deadlocks, those threads are permanently out of commission, often requiring a complete application restart to restore functionality [18].
Lock-ordering deadlocks frequently occur in dynamic scenarios where the sequence of lock acquisition is not strictly controlled. Consider a banking application with a method transferMoney(Account fromAccount, Account toAccount, double amount). To ensure thread safety, a developer might synchronize on both account objects before updating their balances. However, if User 1 transfers money to User 2, the thread acquires User 1's lock then User 2's lock. If User 2 simultaneously transfers money to User 1, the second thread acquires the locks in the exact opposite order. This creates a classic dynamic lock-ordering deadlock [19, 20].
To avoid this pitfall, you must ensure that whenever multiple locks are acquired, they are always acquired in a globally consistent order. In the banking example, this can be resolved by inducing a lock ordering based on a unique, immutable identifier, such as the account number or the object's hash code, ensuring that the "lesser" lock is always acquired first regardless of the argument order [21].
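The hash-based ordering idea can be sketched as follows. This is a simplified adaptation of the approach the sources describe; the minimal Account class and method names here are hypothetical stand-ins, and System.identityHashCode is used as the tie-breakable ordering key:

```java
// A minimal account for illustration only.
class Account {
    private long balance;
    Account(long balance) { this.balance = balance; }
    long getBalance() { return balance; }
    void debit(long amount)  { balance -= amount; }
    void credit(long amount) { balance += amount; }
}

class SafeTransfer {
    // Global tie-breaking lock for the rare case of equal hash codes.
    private static final Object tieLock = new Object();

    static void transferMoney(Account from, Account to, long amount) {
        int fromHash = System.identityHashCode(from);
        int toHash = System.identityHashCode(to);

        if (fromHash < toHash) {
            synchronized (from) { synchronized (to) { doTransfer(from, to, amount); } }
        } else if (fromHash > toHash) {
            synchronized (to) { synchronized (from) { doTransfer(from, to, amount); } }
        } else {
            // Hash collision: serialize colliding transfers on the tie lock
            // so the two inner acquisitions cannot interleave in opposite orders.
            synchronized (tieLock) {
                synchronized (from) { synchronized (to) { doTransfer(from, to, amount); } }
            }
        }
    }

    private static void doTransfer(Account from, Account to, long amount) {
        from.debit(amount);
        to.credit(amount);
    }
}
```

Because every thread now acquires the two account locks in the same global order regardless of argument order, the cyclic dependency can never form.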
Critical Pitfall 2: Alien Methods and Hidden Liveness Hazards
Another subtle but catastrophic cause of deadlocks occurs when interacting with "alien methods." An alien method is a method whose behavior is not fully known by the calling class; it might be a method designed to be overridden by a subclass, or a callback provided by a client [3].
A paramount rule in concurrent Java programming is: never call an alien method from within a synchronized region [22]. Because you have no control over what an alien method does, invoking it while holding an intrinsic lock is asking for liveness trouble [23, 24]. The alien method might execute a long-running network I/O operation, putting the thread to sleep and starving all other threads waiting for your lock. Even worse, the alien method might attempt to acquire other locks, inadvertently creating a cyclic locking dependency and triggering a deadlock [23].
This pitfall is notoriously common in the Observer pattern or listener-based GUI frameworks. If a subject synchronizes its internal state while iterating through a list of registered listeners to notify them of an event, it is calling alien methods while holding a lock. If a listener responds to the event by attempting to unregister itself or access another synchronized method on the subject, it can paralyze the system [25].
The solution to this pitfall is to use open calls. An open call is an invocation made without holding any locks. You can achieve this by shrinking the synchronized block so that it only guards the specific shared state operations. If you need to iterate over listeners, synchronize only to make a snapshot copy of the listener list, then release the lock and iterate over the copy to dispatch the notifications [26]. Striving to use open calls throughout your program makes it significantly easier to analyze your architecture for deadlock freedom [27].
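The snapshot-then-notify pattern described above might look like this sketch (EventSource and Listener are hypothetical names for illustration): the lock is held only while copying the listener list, and the alien onEvent callbacks run with no locks held.

```java
import java.util.ArrayList;
import java.util.List;

interface Listener {
    void onEvent(String event);
}

class EventSource {
    private final List<Listener> listeners = new ArrayList<>(); // guarded by "this"

    public synchronized void register(Listener l)   { listeners.add(l); }
    public synchronized void unregister(Listener l) { listeners.remove(l); }

    public void fire(String event) {
        List<Listener> snapshot;
        synchronized (this) {
            // Hold the lock only long enough to copy the list...
            snapshot = new ArrayList<>(listeners);
        }
        // ...then make the alien calls with no locks held: an "open call".
        for (Listener l : snapshot) {
            l.onEvent(event);
        }
    }
}
```

With this structure, a listener that calls unregister from inside onEvent simply reacquires the (now free) lock instead of deadlocking against the notifier.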
Critical Pitfall 3: The Illusion of Compound Actions
A pervasive misconception among Java developers is that using thread-safe collections or synchronizing every individual method of a class automatically renders the entire class safe for concurrent use. This completely ignores the vulnerability of compound actions.
Consider a check-then-act operation, such as a put-if-absent routine on a synchronized Vector or Hashtable. Even though the contains() and add() methods are individually atomic and synchronized, the overall sequence is not [28, 29]. If Thread A checks if an element is absent and prepares to add it, Thread B can easily interleave, check the same condition, and insert the element first. By the time Thread A resumes and adds its element, the collection has been corrupted with a duplicate entry.
To fix this, the entire compound action must be executed atomically. However, developers often make the mistake of synchronizing the compound action on the wrong lock. For instance, if you create a helper class with a synchronized putIfAbsent method that operates on a Collections.synchronizedList, the helper method synchronizes on the helper object's intrinsic lock, not the list's lock [30]. This provides only the illusion of synchronization.
To properly secure compound actions on existing synchronized objects, you must use client-side locking, which entails guarding the client code with the exact same lock that the shared object uses to guard its own internal state [31]. Even better, you should favor object composition, creating a new thread-safe class that encapsulates the collection and provides its own consistent locking protocol, shielding clients from the underlying implementation details [32].
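The composition approach can be sketched as follows (PutIfAbsentList is a hypothetical name; the sources discuss a similar wrapper that implements the full List interface, which is elided here for brevity). The wrapper's own intrinsic lock guards every public method, so the check-then-act sequence is atomic no matter how the underlying list synchronizes internally:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Composition: the wrapper owns the only lock clients ever see, so the
// compound action is atomic regardless of the wrapped list's own locking.
class PutIfAbsentList<T> {
    private final List<T> list;

    PutIfAbsentList(List<T> list) { this.list = list; }

    public synchronized boolean putIfAbsent(T x) {
        boolean absent = !list.contains(x);
        if (absent) {
            list.add(x);     // check and act under one lock: no interleaving
        }
        return absent;
    }

    public synchronized boolean contains(T x) { return list.contains(x); }
}
```

Unlike client-side locking, this design does not break if the underlying list's locking protocol changes, because clients never depend on it.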
The Performance Cost: Serialization and Lock Contention
While correctness and safety are non-negotiable, the indiscriminate use of the synchronized keyword can devastate application performance. When you wrap a block of code in a synchronized statement, you are enforcing serialized execution. Only one thread can navigate that code path at any given moment; all other concurrent requests must wait.
This brings us to the mathematical foundation of scalability: Amdahl's Law. Amdahl's Law dictates that the maximum theoretical speedup of a concurrent program on multiple processors is strictly limited by the fraction of the code that must be executed serially [33]. If 10% of your application's execution time is spent inside synchronized blocks, the maximum speedup you can ever achieve, even with an infinite number of CPU cores, is capped at 10x. Therefore, the principal threat to scalability in concurrent applications is the exclusive resource lock [34].
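Amdahl's bound is easy to compute directly. The small helper below (a hypothetical illustration, not from the sources) evaluates speedup(N) = 1 / (F + (1 - F) / N), where F is the serial fraction and N the processor count; as N grows without bound, the expression approaches 1 / F:

```java
// Amdahl's Law: speedup(N) <= 1 / (F + (1 - F) / N).
// With F = 0.1 (10% serial), the limit as N -> infinity is 1 / 0.1 = 10.
class Amdahl {
    static double speedup(double serialFraction, int processors) {
        return 1.0 / (serialFraction + (1.0 - serialFraction) / processors);
    }
}
```

Plugging in F = 0.1 shows how quickly the curve flattens: 16 cores already yield only about 6.4x, and no core count ever reaches 10x.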
Furthermore, contended synchronization is computationally expensive. When multiple threads compete for the same intrinsic lock, the JVM must rely on the operating system to manage the contention. This typically results in thread suspension, where the losing threads are put to sleep [35, 36]. Suspending and subsequently resuming a thread incurs massive overhead. It forces a context switch, which requires saving the thread's execution state, manipulating OS scheduling data structures, and inevitably causing CPU cache invalidation and a flurry of cache misses when the thread eventually resumes [37].
To optimize performance, you must aggressively reduce lock duration. The longer a lock is held, the higher the probability that another thread will request it, turning uncontended synchronization into heavily contended synchronization. You should never hold locks during lengthy computations or operations at risk of not completing quickly, such as network calls, database queries, or console I/O [2, 38]. Shrinking synchronized blocks so they only encompass the exact instructions that manipulate shared state allows other threads to execute their non-interfering computations in parallel [26].
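As a rough illustration of this lock-narrowing advice, the sketch below (ProfileService, loadProfile, and the cache layout are hypothetical names invented for this example) holds the intrinsic lock only around the map accesses, never around the slow load:

```java
import java.util.HashMap;
import java.util.Map;

class ProfileService {
    private final Map<String, String> cache = new HashMap<>(); // guarded by "this"

    public String profileFor(String user) {
        synchronized (this) {              // lock held only for the map read
            String hit = cache.get(user);
            if (hit != null) return hit;
        }
        String fresh = loadProfile(user);  // slow work done with NO lock held
        synchronized (this) {              // lock held only for the map write
            cache.putIfAbsent(user, fresh);
            return cache.get(user);
        }
    }

    // Hypothetical stand-in for a slow database or network call.
    private String loadProfile(String user) {
        return "profile:" + user;
    }
}
```

The trade-off is that two threads may occasionally load the same profile concurrently; putIfAbsent ensures they still converge on one cached value, which is often an acceptable price for keeping I/O outside the lock.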
Advanced Mitigation: Lock Splitting and Lock Striping
If your application suffers from lock contention even after reducing the scope of your synchronized blocks, you must reevaluate your synchronization policy. A common anti-pattern is using a single, object-level intrinsic lock (e.g., synchronizing entire methods) to guard completely independent state variables.
For instance, consider a ServerStatus object that tracks active user sessions and active database queries using two separate HashSet collections. If both the addUser and addQuery methods are synchronized, a thread adding a query will block a thread trying to add a user, even though they are manipulating completely independent datasets [39, 40].
This can be resolved through lock splitting. Lock splitting involves using separate, dedicated locks for each independent state variable [39]. By declaring private lock objects (e.g., private final Object userLock = new Object();) and synchronizing only on the corresponding lock, you decouple the execution paths. This dramatically reduces the probability that two threads will contend for the same lock, thereby increasing throughput.
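A lock-split version of the ServerStatus example might look like this sketch (method names beyond addUser and addQuery are hypothetical): each set gets its own dedicated lock object, so user operations and query operations never contend with each other.

```java
import java.util.HashSet;
import java.util.Set;

class ServerStatus {
    private final Object userLock  = new Object();
    private final Object queryLock = new Object();

    private final Set<String> users   = new HashSet<>(); // guarded by userLock
    private final Set<String> queries = new HashSet<>(); // guarded by queryLock

    public void addUser(String u)    { synchronized (userLock)  { users.add(u); } }
    public void removeUser(String u) { synchronized (userLock)  { users.remove(u); } }
    public void addQuery(String q)   { synchronized (queryLock) { queries.add(q); } }
    public void removeQuery(String q){ synchronized (queryLock) { queries.remove(q); } }

    public int userCount()  { synchronized (userLock)  { return users.size(); } }
    public int queryCount() { synchronized (queryLock) { return queries.size(); } }
}
```

Note the use of private lock objects rather than synchronized methods: clients cannot accidentally (or maliciously) interfere with the locking protocol, and the two independent state variables no longer share a contention point.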
Taking this concept further, we arrive at lock striping, a technique employed by highly scalable data structures like ConcurrentHashMap. Instead of using one lock for the entire collection, lock striping partitions the data structure into segments, each guarded by its own lock. For example, the original (pre-Java-8) implementation of ConcurrentHashMap internally utilized an array of 16 locks, each guarding 1/16th of the hash buckets [41, 42]. Assuming a good hash distribution, this reduces lock contention by a factor of 16, allowing up to 16 threads to write to the map simultaneously without blocking each other. While lock striping makes exclusive operations (like resizing the entire map) more complex, the massive reduction in routine contention makes it an essential strategy for high-performance systems.
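A toy striped map, loosely modeled on the StripedMap listing the sources excerpt in reference [40], makes the idea tangible. This is an illustrative sketch only (no resizing, no full Map interface): bucket i is guarded by locks[i % N_LOCKS], so threads touching different stripes proceed in parallel.

```java
// A toy striped hash map: 16 locks, bucket i guarded by locks[i % N_LOCKS].
class StripedMap {
    private static final int N_LOCKS = 16;

    private static class Node {
        final Object key;
        Object value;
        Node next;
        Node(Object key, Object value, Node next) {
            this.key = key; this.value = value; this.next = next;
        }
    }

    private final Node[] buckets;
    private final Object[] locks;

    StripedMap(int numBuckets) {
        buckets = new Node[numBuckets];
        locks = new Object[N_LOCKS];
        for (int i = 0; i < N_LOCKS; i++) locks[i] = new Object();
    }

    private int hash(Object key) {
        return Math.floorMod(key.hashCode(), buckets.length);
    }

    Object get(Object key) {
        int h = hash(key);
        synchronized (locks[h % N_LOCKS]) { // only 1/16th of the map is locked
            for (Node m = buckets[h]; m != null; m = m.next)
                if (m.key.equals(key)) return m.value;
        }
        return null;
    }

    void put(Object key, Object value) {
        int h = hash(key);
        synchronized (locks[h % N_LOCKS]) {
            for (Node m = buckets[h]; m != null; m = m.next) {
                if (m.key.equals(key)) { m.value = value; return; }
            }
            buckets[h] = new Node(key, value, buckets[h]);
        }
    }
}
```

An operation spanning the whole structure (clear, resize) would have to acquire all 16 locks, which is exactly the added complexity the paragraph above warns about.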
Knowing When to Leave Synchronized Behind
Since Java 5.0, the java.util.concurrent package has provided powerful alternatives to intrinsic synchronized locks that offer superior flexibility and, in specific scenarios, better performance.
While synchronized enforces block-structured locking (where a lock is automatically released at the end of the block), it severely lacks flexibility for advanced error recovery. If a thread attempts to acquire an intrinsic lock that is currently held by another permanently stalled thread, the requesting thread will block indefinitely.
To circumvent this, developers can utilize ReentrantLock. ReentrantLock implements the Lock interface and provides the exact same memory visibility and mutual exclusion semantics as intrinsic locks, but introduces critical advanced features:
1. Timed Lock Acquisition: You can use tryLock(long timeout, TimeUnit unit) to attempt to acquire a lock only for a specific duration. If the lock is not available within the time budget, the thread can back off, log a failure, or attempt a recovery mechanism rather than deadlocking [43, 44].
2. Interruptible Lock Acquisition: lockInterruptibly() allows a thread waiting for a lock to be interrupted, which is essential for building cleanly cancellable tasks [45].
3. Fairness: While intrinsic locks are nonfair (meaning waiting threads are served in an arbitrary order), ReentrantLock can be configured for fairness, ensuring the longest-waiting thread acquires the lock next [46].
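The timed-acquisition idiom from point 1 might be wrapped like this (TimedTransfer and withTimeBudget are hypothetical names; the tryLock/unlock calls are the real ReentrantLock API). Note the try/finally: unlike synchronized, an explicit Lock is never released automatically.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

class TimedTransfer {
    private final ReentrantLock lock = new ReentrantLock();

    // Returns false instead of blocking forever when the lock cannot be
    // acquired within the time budget.
    boolean withTimeBudget(Runnable action, long timeoutMillis) {
        try {
            if (!lock.tryLock(timeoutMillis, TimeUnit.MILLISECONDS)) {
                return false; // back off: log, retry, or fail gracefully
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // preserve interrupt status
            return false;
        }
        try {
            action.run();
            return true;
        } finally {
            lock.unlock(); // releasing the lock is OUR responsibility
        }
    }
}
```

Applied to the earlier bank-transfer scenario, two threads that would have deadlocked under nested synchronized blocks can instead time out, release what they hold, and retry.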
Furthermore, for read-heavy data structures, synchronized is overly restrictive because it prevents concurrent reads. Using a ReentrantReadWriteLock allows an unlimited number of threads to read the data simultaneously, while still ensuring that write operations have exclusive access [47, 48].
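A minimal read-write wrapper around a map might be sketched as follows (ReadMostlyMap is a hypothetical name; ReentrantReadWriteLock and its readLock()/writeLock() pair are the real API). Any number of threads may hold the read lock at once, while the write lock is exclusive.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class ReadMostlyMap<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public V get(K key) {
        rwLock.readLock().lock();   // many readers may hold this concurrently
        try {
            return map.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void put(K key, V value) {
        rwLock.writeLock().lock();  // writers get exclusive access
        try {
            map.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}
```

This pays off only when reads heavily outnumber writes; for write-heavy workloads, the extra bookkeeping of a read-write lock can make it slower than a plain mutual-exclusion lock.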
For even finer-grained scalability, developers can abandon locking entirely in favor of nonblocking algorithms powered by Atomic variables (e.g., AtomicInteger, AtomicReference). These classes leverage hardware-level Compare-And-Swap (CAS) instructions to safely update values without ever suspending a thread [49, 50]. By bypassing the OS scheduler and eliminating context switches, atomic variables offer blazing-fast atomic operations that gracefully handle moderate contention.
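The CAS retry loop at the heart of these classes can be written out explicitly (CasCounter is a hypothetical illustration; in practice AtomicInteger.incrementAndGet() already does this for you):

```java
import java.util.concurrent.atomic.AtomicInteger;

class CasCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        // A classic CAS retry loop; no thread is ever suspended by this code.
        while (true) {
            int current = value.get();
            int next = current + 1;
            if (value.compareAndSet(current, next)) {
                return next;
            }
            // CAS failed: another thread won the race; re-read and retry.
        }
    }

    public int get() { return value.get(); }
}
```

A losing thread simply loops and retries with a fresh value instead of being parked by the OS scheduler, which is why CAS-based updates degrade gracefully under moderate contention but can spin wastefully when contention is extreme.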
Conclusion: Balancing Safety and Liveness
The synchronized keyword remains a foundational tool in the Java concurrency ecosystem, but its simplicity is an illusion that masks deep architectural responsibilities. Writing high-performance, thread-safe code is not merely about preventing concurrent modification exceptions; it requires a rigorous, mathematical approach to system design.
When utilizing intrinsic locks, you must be acutely aware of the exact variables each lock guards to preserve memory visibility and prevent stale data. You must meticulously orchestrate the order in which multiple locks are acquired to eliminate the possibility of cyclic deadlocks. You must ruthlessly audit your codebase to ensure no alien methods or long-running I/O operations are invoked from within a synchronized region. Finally, you must constantly balance the absolute requirement for atomicity against the performance penalties of serialization mandated by Amdahl's Law.
By understanding the internal mechanics of the Java Memory Model, recognizing the dangers of compound actions, and mastering advanced techniques like lock splitting, lock striping, and ReentrantLock, you transcend basic keyword application. You gain the engineering discipline required to build resilient, highly scalable systems that thrive under the intense pressure of millions of concurrent requests.
References
[1] Java Concurrency in Practice — Informally, an object's state is its data, stored in state variables such as instance or static fields. An object's state may include fields from other, dependent objects; a HashMap's state is partially stored in the HashMap object itself, but also in many Map.Entry objects. An object's state encomp…
[2] Java Concurrency in Practice — Whenever you use locking, you should be aware of what the code in the block is doing and how likely it is to take a long time to execute. Holding a lock for a long time, either because you are doing something compute-intensive or because you execute a potentially blocking operation, introduces the r…
[3] Effective Java, 3rd Edition — Item 79: Avoid excessive synchronization. Item 78 warns of the dangers of insufficient synchronization. This item concerns the opposite problem. Depending on the situation, excessive synchronization can cause reduced performance, deadlock, or even nondeterministic behavior. T…
[4] Java Concurrency in Practice — Threads are sometimes called lightweight processes, and most modern operating systems treat threads, not processes, as the basic units of scheduling. In the absence of explicit coordination, threads execute simultaneously and asynchronously with respect to one another. Since threads share the memo…
[5] Effective Java, 3rd Edition — Item 78: Synchronize access to shared mutable data. The synchronized keyword ensures that only a single thread can execute a method or block at one time. Many programmers think of synchronization solely as a means of mutual exclusion, to prevent an object from being seen in an inconsistent state by…
[6] Java Concurrency in Practice — 16.1 What is a memory model, and why would I want one? Suppose one thread assigns a value to aVariable: aVariable = 3; A memory model addresses the question "Under what conditions does a thread that reads aVariable see the value 3?" This may sound like a dumb question, but in the absence of synchron…
[7] Java Concurrency in Practice — 3.1.1 Stale data. NoVisibility demonstrated one of the ways that insufficiently synchronized programs can cause surprising results: stale data. When the reader thread examines ready, it may see an out-of-date value. Unless synchronization is used every time a variable is accessed, it is possible to …
[8] Java Concurrency in Practice — 16.2.1 Unsafe publication. The possibility of reordering in the absence of a happens-before relationship explains why publishing an object without adequate synchronization can allow another thread to see a partially constructed object (see Section 3.5). Initializing a new object involves writing to…
[9] Java Concurrency in Practice — Locking is not just about mutual exclusion; it is also about memory visibility. To ensure that all threads see the most up-to-date values of shared mutable variables, the reading and writing threads must synchronize on a common lock. 3.1.4 Volatile variables. The Java language also provides an alter…
[10] Java Concurrency in Practice — Locking is not just about mutual exclusion; it is also about memory visibility. To ensure that all threads see the most up-to-date values of shared mutable variables, the reading and writing threads must synchronize on a common lock. 3.1.4 Volatile variables. The Java language also provides an alter…
[11] Java Concurrency in Practice — 5.2 Concurrent collections. Java 5.0 improves on the synchronized collections by providing several concurrent collection classes. Synchronized collections achieve their thread safety by serializing all access to the collection's state. The cost of this approach is poor concurrency; when multiple th…
[12] Java Concurrency in Practice — Figure 16.2 illustrates the happens-before relation when two threads synchronize using a common lock. All the actions within thread A are ordered by the program… Locks and unlocks on explicit Lock objects have the same memory semantics as intrinsic locks. Reads and writes of atomic variables ha…
[13] Java Concurrency in Practice — In SynchronizedFactorizer in Listing 2.6, lastNumber and lastFactors are guarded by the servlet object's intrinsic lock; this is documented by the @GuardedBy annotation. There is no inherent relationship between an object's intrinsic lock and its state; an object's fields need not be guarded by its…
[14] Java Concurrency in Practice — Compound actions on shared state, such as incrementing a hit counter (read-modify-write) or lazy initialization (check-then-act), must be made atomic to avoid race conditions. Holding a lock for the entire duration of a compound action can make that compound action atomic. However, just wrapping the…
[15] Java Concurrency in Practice — 3.1.1 Stale data. NoVisibility demonstrated one of the ways that insufficiently synchronized programs can cause surprising results: stale data. When the reader thread examines ready, it may see an out-of-date value. Unless synchronization is used every time a variable is accessed, it is possible to …
[16] Java Concurrency in Practice — When a thread holds a lock forever, other threads attempting to acquire that lock will block forever waiting. When thread A holds lock L and tries to acquire lock M, but at the same time thread B holds M and tries to acquire L, both threads will wait forever. This situation is the simplest case of d…
[17] Java Concurrency in Practice — When a thread holds a lock forever, other threads attempting to acquire that lock will block forever waiting. When thread A holds lock L and tries to acquire lock M, but at the same time thread B holds M and tries to acquire L, both threads will wait forever. This situation is the simplest case of d…
[18] Java Concurrency in Practice — The JVM is not nearly as helpful in resolving deadlocks as database servers are. When a set of Java threads deadlock, that's the end of the game — those threads are permanently out of commission. Depending on what those threads do, the application may stall completely, or a particular subsystem may st…
[19] Java Concurrency in Practice — 10.1.2 Dynamic lock order deadlocks. Sometimes it is not obvious that you have sufficient control over lock ordering to prevent deadlocks. Consider the harmless-looking code in Listing 10.2 that transfers funds from one account to another. It acquires the locks on both Account objects before executi…
[20] Java Concurrency in Practice — // Warning: deadlock-prone! public void transferMoney(Account fromAccount, Account toAccount, DollarAmount amount) throws InsufficientFundsException { synchronized (fromAccount) { synchronized (toAccount) { if (fromAccount.getBalance().compareTo(amount) < 0)…
[21] Java Concurrency in Practice — if (fromHash < toHash) { synchronized (fromAcct) { synchronized (toAcct) { new Helper().transfer(); } } } else if (fromHash > toHash) { synchronized (toAcct) { synchronized (fromAcct) { new Helper().transfer(); } } } else { synchronized (tieLock) { synchronized (fromAcct) { synchronized (toAcct) { n…
[22] Effective Java, 3rd Edition — to achieve high concurrency, such as lock splitting, lock striping, and nonblocking concurrency control. These techniques are beyond the scope of this book, but they are discussed elsewhere [Goetz06, Herlihy08]. If a method modifies a static field and there is any possibility …
[23] Java Concurrency in Practice — It was easy to spot the deadlock possibility in LeftRightDeadlock or transferMoney by looking for methods that acquire two locks. Spotting the deadlock possibility in Taxi and Dispatcher is a little harder: the warning sign is that an alien method (defined on page 40) is being called while holding …
[24] Java Concurrency in Practice — It was easy to spot the deadlock possibility in LeftRightDeadlock or transferMoney by looking for methods that acquire two locks. Spotting the deadlock possibility in Taxi and Dispatcher is a little harder: the warning sign is that an alien method (defined on page 40) is being called while holding …
[25] Effective Java, 3rd Edition — a background thread to unsubscribe itself, but the problem is real. Invoking alien methods from within synchronized regions has caused many deadlocks in real systems, such as GUI toolkits. In both of the previous examples (the exception and the deadlock) we were lucky. The resource that …
[26] Java Concurrency in Practice — Taxi and Dispatcher in Listing 10.5 can be easily refactored to use open calls and thus eliminate the deadlock risk. This involves shrinking the synchronized blocks to guard only operations that involve shared state, as in Listing 10.6. Very often, the cause of problems like those in Listing 10.5 is…
[27] Java Concurrency in Practice — Strive to use open calls throughout your program. Programs that rely on open calls are far easier to analyze for deadlock-freedom than those that allow calls to alien methods with locks held. Restructuring a synchronized block to allow open calls can sometimes have undesirable consequences, since it…
[28] Java Concurrency in Practice — For every invariant that involves more than one variable, all the variables involved in that invariant must be guarded by the same lock. If synchronization is the cure for race conditions, why not just declare every method synchronized? It turns out that such indiscriminate application of synchroni…
[29] Java Concurrency in Practice — For every invariant that involves more than one variable, all the variables involved in that invariant must be guarded by the same lock. If synchronization is the cure for race conditions, why not just declare every method synchronized? It turns out that such indiscriminate application of synchroni…
[30] Java Concurrency in Practice — Listing 4.14 shows a failed attempt to create a helper class with an atomic put-if-absent operation for operating on a thread-safe List. @NotThreadSafe public class ListHelper<E> { public List<E> list = Collections.synchronizedList(new ArrayList<E>()); ... public synchronized boolean putIfAbsent(E x…
[31] Java Concurrency in Practice — 4.4 Adding functionality to existing thread-safe classes. To make this approach work, we have to use the same lock that the List uses by using client-side locking or external locking. Client-side locking entails guarding client code that uses some object X with the lock X uses to guard its own st…
[32] Java Concurrency in Practice — @ThreadSafe public class ImprovedList<T> implements List<T> { private final List<T> list; public ImprovedList(List<T> list) { this.list = list; } public synchronized boolean putIfAbsent(T x) { boolean contains = list.contains(x); if (contains) list.add(x); return !con…
[33] Java Concurrency in Practice — Summary: Because one of the most common reasons to use threads is to exploit multiple processors, in discussing the performance of concurrent applications, we are usually more concerned with throughput or scalability than we are with raw service time. Amdah…
[34] Java Concurrency in Practice — 11.4 Reducing lock contention. We've seen that serialization hurts scalability and that context switches hurt performance. Contended locking causes both, so reducing lock contention can improve both performance and scalability. Access to resources guarded by an exclusive lock is serialized — only one…
[35] Java Concurrency in Practice — 11.3.3 Blocking. Uncontended synchronization can be handled entirely within the JVM (Bacon et al., 1998); contended synchronization may require OS activity, which adds to the cost. When locking is contended, the losing thread(s) must block. The JVM can impl…
[36] Java Concurrency in Practice — 11.3.3 Blocking. Uncontended synchronization can be handled entirely within the JVM (Bacon et al., 1998); contended synchronization may require OS activity, which adds to the cost. When locking is contended, the losing thread(s) must block. The JVM can impl…
[37] Java Concurrency in Practice — Context switches are not free; thread scheduling requires manipulating shared data structures in the OS and JVM. The OS and JVM use the same CPUs your program does; more CPU time spent in JVM and OS code means less is available for your program. But OS and JVM activity is not the only cost of conte…
[38] Java Concurrency in Practice — Whenever you use locking, you should be aware of what the code in the block is doing and how likely it is to take a long time to execute. Holding a lock for a long time, either because you are doing something compute-intensive or because you execute a potentially blocking operation, introduces the r…
[39] Java Concurrency in Practice — If the JVM performs lock coarsening, it may undo the splitting of synchronized blocks anyway. @ThreadSafe public class ServerStatus { @GuardedBy("this") public final Set<String> users; @GuardedBy("this") public final Set<String> queries; ... public sync…
[40] Java Concurrency in Practice — } public Object get(Object key) { int hash = hash(key); synchronized (locks[hash % N_LOCKS]) { for (Node m = buckets[hash]; m != null; m = m.next) if (m.key.equals(key)) return m.value; } return null; } public void clear() { for (int i = 0; i < buckets.length; i++) { synchronized (locks[i % N_LOCKS]…
[41] Java Concurrency in Practice — Lock splitting can sometimes be extended to partition locking on a variable-sized set of independent objects, in which case it is called lock striping. For example, the implementation of ConcurrentHashMap uses an array of 16 locks, each of which guards 1/16 of the hash buckets; bucket N is guarded …
[42] Java Concurrency in Practice — The major scalability impediment for the synchronized Map implementations is that there is a single lock for the entire map, so only one thread can access the map at a time. On the other hand, ConcurrentHashMap does no locking for most successful read operations, and uses lock striping for write ope…
[43] Java Concurrency in Practice — (Memory visibility is covered in Section 3.1 and in Chapter 16.) And, like synchronized, ReentrantLock offers reentrant locking semantics (see Section 2.3.2). ReentrantLock supports all of the lock-acquisition modes defined by Lock, providing more flexibility for…
[44] Java Concurrency in Practice — ReentrantLock provides the same locking and memory semantics as intrinsic locking, as well as additional features such as timed lock waits, interruptible lock waits, fairness, and the ability to implement non-block-structured locking. The performance of ReentrantLock appears to dominate that of intr…
[45] Java Concurrency in Practice — 13.1.2 Interruptible lock acquisition. Just as timed lock acquisition allows exclusive locking to be used within time-limited activities, interruptible lock acquisition allows locking to be used within cancellable activities. Section 7.1.6 identified several mechanisms, such as acquiring an intrinsi…
[46] Java Concurrency in Practice — Figure 13.1. Intrinsic locking versus ReentrantLock performance on Java 5.0 and Java 6. Performance is a moving target; yesterday's benchmark showing that X is faster than Y may already be out of date today. 13.3 Fairnes…
[47] Java Concurrency in Practice — 13.5 Read-write locks. ReentrantLock implements a standard mutual-exclusion lock: at most one thread at a time can hold a ReentrantLock. But mutual exclusion is frequently a stronger locking discipline than needed to preserve data integrity, and thus limits concurrency more than necessary. Mutual ex…
[48] Java Concurrency in Practice — ReadWriteLock, shown in Listing 13.6, exposes two Lock objects — one for reading and one for writing. To read data guarded by a ReadWriteLock you must first acquire the read lock, and to modify data guarded by a ReadWriteLock you must first acquire the write lock. While there may appear to be two sepa…
[49] Java Concurrency in Practice — Chapter 15: Atomic Variables and Nonblocking Synchronization. Many of the classes in java.util.concurrent, such as Semaphore and ConcurrentLinkedQueue, provide better performance and scalability than alternatives using synchronized. In this chapter, we take a look a…
[50] Java Concurrency in Practice — Nonblocking algorithms are considerably more complicated to design and implement than lock-based alternatives, but they can offer significant scalability and liveness advantages. They coordinate at a finer level of granularity and can greatly reduce scheduling overhead because they don't block when…
