
Java Thread Safety: Concurrency and Memory Model

Date: 2026/04/05
Tags: Java, Concurrency, ThreadSafety, Backend, Multithreading

1. Introduction: The Hidden Perils of Multithreading

If you are a backend developer working on enterprise Java applications, you have likely experienced this frustrating scenario: your application runs perfectly in your local development environment, passing every unit test with flying colors. However, once deployed to a production environment under heavy user traffic, strange anomalies begin to emerge. Shopping cart items disappear, financial balances calculate incorrectly, or an inexplicable NullPointerException crashes a critical background process. When you inspect the logs, the errors are seemingly random and impossible to reproduce consistently.
More often than not, the root cause of these elusive bugs is a failure in concurrency control and thread safety. Modern Java web frameworks, such as Spring, are fundamentally built on the Servlet model. In this architecture, a single instance of a controller or service (a Singleton) is shared and accessed concurrently by multiple threads representing different user requests [1]. If the developers writing the business logic do not meticulously guarantee thread safety, the data integrity of the entire application can be compromised.
Merely slapping a synchronized keyword onto every method is not a viable solution; doing so destroys system throughput and scalability. To build truly robust and high-performing backend systems, developers must understand the profound relationship between the Java Memory Model (JMM), modern multi-core CPU architectures, and concurrent programming paradigms. This comprehensive guide will dissect the fundamental causes of concurrency hazards, provide real-world code examples of common anti-patterns, and equip you with the architectural principles required to write bulletproof, thread-safe Java applications.

2. The Root Cause of Concurrency Issues: Mutable State

When diving into multithreaded programming, developers often focus excessively on the mechanisms of threads, locks, and semaphores. However, Brian Goetz, the primary author of Java Concurrency in Practice, elegantly distills the essence of concurrency problems into a single, profound statement:
> "It's the mutable state, stupid." [2]
Every multithreading disaster is fundamentally caused by multiple threads attempting to access and modify the same "mutable state" (changeable variables) without adequate coordination. Ensuring that a class is thread-safe simply means guaranteeing that its internal invariants remain intact regardless of how many threads access it simultaneously, and regardless of how the operating system's thread scheduler interleaves their execution [3], [4].
If an object contains no state (it has no instance variables and references no other stateful classes), it is inherently thread-safe [5], [2]. Stateless objects execute their logic using only local variables stored within the thread's isolated execution stack. Because threads do not share their local stacks, the execution of one thread cannot possibly influence the outcome of another. However, most valuable enterprise applications must maintain and manipulate state. When state becomes shared and mutable, developers must actively protect it from the three major traps of concurrent programming.
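As a minimal sketch of this idea (the class and method names here are invented for illustration), a service whose methods touch only parameters and local variables can be shared freely across threads:

```java
// A stateless service: no fields, hence no shared mutable state.
// Every invocation works entirely on its own stack frame.
public class StatelessTaxCalculator {

    // All data lives in parameters and local variables, so
    // concurrent calls cannot interfere with one another.
    public long calculateTax(long amountInCents, int ratePercent) {
        long tax = amountInCents * ratePercent / 100; // local only
        return tax;
    }
}
```

Because there is nothing to corrupt, this class needs no synchronization at all.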

3. Unmasking the Three Major Concurrency Pitfalls

To master thread safety, you must learn to recognize the subtle ways in which concurrent execution can corrupt data. Let us examine the most prevalent concurrency anti-patterns found in modern Java codebases.

3.1. The Illusion of Atomic Operations: Read-Modify-Write

Consider a simple counter intended to track the number of visitors to a website or the number of times an API endpoint is hit:
```java
public class UnsafeCounter {
    private int count = 0;

    public void increment() {
        count++; // Danger: this is NOT thread-safe!
    }

    public int getCount() {
        return count;
    }
}
```
Looking at the source code, the count++ statement appears to be a single, compact operation. However, this is a dangerous illusion. At the compiled bytecode and CPU hardware level, the increment operator is not an atomic instruction. It is actually a sequence of three distinct, sequential operations, a classic "Read-Modify-Write" compound action [3], [4]:
1. Fetch: Read the current value of the count variable from main memory into a CPU register.
2. Add: Add 1 to the value stored in the register.
3. Store: Write the newly calculated value back to main memory.
Imagine a scenario where Thread A reads the value 0. Before Thread A can complete its addition and write the result back, the CPU context-switches to Thread B. Thread B also reads the value 0 from memory. Both threads independently add 1 to their respective registers and write the result back. Even though two increments were performed, the final value in memory is 1, not 2. One increment has been permanently lost. This is known as a race condition [6], [7].
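The lost update is easy to reproduce in practice. Here is a hedged sketch that embeds the UnsafeCounter above in a two-thread harness (the thread count and iteration count are arbitrary choices for illustration):

```java
// Sketch: reproducing the lost-update race with UnsafeCounter.
// On multi-core hardware the final count frequently lands below 200_000.
public class RaceDemo {

    static class UnsafeCounter {
        private int count = 0;
        void increment() { count++; } // non-atomic read-modify-write
        int getCount() { return count; }
    }

    public static void main(String[] args) throws InterruptedException {
        UnsafeCounter counter = new UnsafeCounter();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                counter.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Often prints a value below 200000 -- increments were lost.
        System.out.println("Final count: " + counter.getCount());
    }
}
```

Run it a few times; the result varies from run to run, which is exactly why these bugs are so hard to reproduce in testing.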
The Solution: Atomic Variables and CAS
To resolve this, the entire read-modify-write sequence must execute atomically (as a single, indivisible unit). While intrinsic locking (synchronized) works, Java provides a much more performant, non-blocking alternative in the java.util.concurrent.atomic package. Classes like AtomicInteger leverage underlying hardware instructions known as Compare-And-Swap (CAS) to guarantee atomicity without the overhead of suspending threads [8], [9].
```java
import java.util.concurrent.atomic.AtomicInteger;

public class SafeCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    public void increment() {
        count.incrementAndGet(); // Thread-safe, non-blocking increment
    }

    public int getCount() {
        return count.get();
    }
}
```
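Conceptually, incrementAndGet is a CAS retry loop. The following sketch reproduces that loop using only the public AtomicInteger API (the real implementation uses lower-level intrinsics, so this is an approximation of the technique, not the actual JDK code):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the CAS retry loop that incrementAndGet performs conceptually.
public class CasLoopSketch {
    private final AtomicInteger count = new AtomicInteger(0);

    public int casIncrement() {
        for (;;) {
            int current = count.get();  // read the current value
            int next = current + 1;     // compute the new value
            // Atomically install `next` only if the value is still `current`;
            // if another thread won the race, loop and retry with fresh data.
            if (count.compareAndSet(current, next)) {
                return next;
            }
        }
    }

    public int get() {
        return count.get();
    }
}
```

No thread is ever suspended: a losing thread simply retries, which is why CAS-based counters scale so well under moderate contention.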

3.2. The Check-Then-Act Trap: Lazy Initialization

Another notoriously common source of race conditions is the "Check-Then-Act" compound action, frequently seen in the Lazy Initialization pattern. Developers often use this pattern to delay the creation of expensive objects until they are actually needed:
```java
public class LazyInitRace {
    private ExpensiveObject instance = null;

    public ExpensiveObject getInstance() {
        if (instance == null) {               // Check
            instance = new ExpensiveObject(); // Act
        }
        return instance;
    }
}
```
This code makes a decision based on a stale observation. Thread A observes that instance is null and prepares to instantiate the object. However, before it does, Thread B preempts it, also sees that instance is null, and proceeds to create the object. Thread A then resumes and creates a second object. In the best-case scenario, this wastes memory and CPU cycles. In the worst-case scenario, it breaks critical system constraints, leading to inconsistent data or resource leaks [6], [7].
The Solution: Synchronization and Safe Idioms
To fix the Check-Then-Act anti-pattern, the observation and subsequent action must be wrapped in a synchronized block. Alternatively, you can utilize the Lazy Initialization Holder Class idiom, which brilliantly leverages the Java Virtual Machine's (JVM) class loading mechanism to guarantee thread safety without requiring explicit synchronization locks [10], [11].
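Both fixes can be sketched as follows; ExpensiveObject is a placeholder for whatever costly resource is being created lazily:

```java
// Fix 1: make the whole check-then-act sequence atomic with a lock.
class SynchronizedLazyInit {
    private ExpensiveObject instance = null;

    public synchronized ExpensiveObject getInstance() {
        if (instance == null) {
            instance = new ExpensiveObject();
        }
        return instance;
    }
}

// Fix 2: the Lazy Initialization Holder Class idiom. The JVM guarantees
// that Holder is loaded (and INSTANCE constructed) exactly once, on first
// use, with no explicit locking on the fast path.
class HolderLazyInit {
    private static class Holder {
        static final ExpensiveObject INSTANCE = new ExpensiveObject();
    }

    public static ExpensiveObject getInstance() {
        return Holder.INSTANCE;
    }
}

class ExpensiveObject { } // placeholder for the costly resource
```

The holder idiom is generally preferred for static singletons because it combines laziness, safety, and zero locking overhead after initialization.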

3.3. The Stale Data Dilemma: Memory Visibility and Reordering

Perhaps the most perplexing and difficult-to-debug concurrency problems stem from memory visibility and instruction reordering. Consider the following seemingly harmless code:
```java
public class NoVisibility {
    private static boolean ready;
    private static int number;

    public static void main(String[] args) {
        new Thread(() -> {
            while (!ready) {
                Thread.yield();
            }
            System.out.println(number);
        }).start();
        number = 42;
        ready = true;
    }
}
```
What will this program print? Most developers would confidently answer 42. However, in reality, this program might print 0, or it might loop indefinitely and never terminate [12], [5].
This bizarre behavior is a direct consequence of the Java Memory Model (JMM) and modern hardware architecture. To maximize performance, modern multi-core CPUs do not continuously read and write from the main RAM. Instead, each core maintains its own high-speed, local cache (L1/L2 caches). When the main thread updates ready = true, it may only update its local cache. The background thread, executing on a completely different CPU core, may never see the updated value and will continue reading a stale false value from its own cache, causing an infinite loop.
Furthermore, both the Java compiler and the CPU are permitted to reorder instructions to optimize execution pipelines, provided the reordering does not alter the semantics of single-threaded execution. In a multithreaded context, the system might execute ready = true before number = 42. If the background thread observes ready == true but reads number before it has been assigned, it will print the default integer value of 0 [10], [13].
The Solution: The Volatile Keyword
To combat visibility and reordering issues, Java provides the volatile keyword. Declaring a shared variable as volatile instructs the JVM and the hardware that this variable may be modified by other threads. It establishes a "memory barrier" (or memory fence) that strictly forbids the compiler and CPU from reordering operations around the variable, and it guarantees that any write to the variable is immediately flushed to the main memory, making it instantly visible to all other threads [14], [15].
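Applied to the NoVisibility example above, a single volatile modifier on ready restores the expected behavior. This sketch also joins the reader thread so the outcome can be observed deterministically:

```java
// The NoVisibility example, fixed with volatile. The volatile write to
// `ready` happens-before any read that observes it, which also publishes
// the earlier plain write to `number` to the background thread.
public class Visibility {
    private static volatile boolean ready;
    private static int number;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) {
                Thread.yield();
            }
            System.out.println(number); // now guaranteed to print 42
        });
        reader.start();
        number = 42;   // plain write, published by the volatile write below
        ready = true;  // volatile write: visible to the reader, not reordered
        reader.join(); // with volatile, the spin loop is guaranteed to end
    }
}
```

Note that volatile solves visibility and reordering but not atomicity: a volatile count++ would still lose updates, which is why the counter example needed AtomicInteger instead.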

4. Advanced Synchronization and Liveness Failures

When developers realize that multithreading is fraught with danger, their initial reaction is often to over-synchronize. While using the synchronized keyword (intrinsic locking) solves atomicity and visibility issues, naive locking strategies introduce their own severe complications, primarily in the form of liveness failures.

4.1. The Threat of Deadlocks and Lock Ordering

A liveness failure occurs when an application is technically executing but is fundamentally unable to make forward progress. The most infamous liveness failure is a Deadlock.
Deadlocks typically occur when two or more threads attempt to acquire multiple locks in different orders. Imagine a banking application where users can transfer funds between accounts. A naive implementation might lock the source account, then lock the destination account to ensure safety:
```java
public void transferMoney(Account from, Account to, int amount) {
    synchronized (from) {
        synchronized (to) { // Danger: lock-ordering deadlock!
            from.deduct(amount);
            to.add(amount);
        }
    }
}
```
If User A initiates a transfer to User B, Thread 1 acquires the lock on User A's account and waits for User B's lock. Simultaneously, User B initiates a transfer to User A, so Thread 2 acquires the lock on User B's account and waits for User A's lock. Both threads are now blocked permanently, waiting for resources held by the other. The application has frozen [16], [17].
The Solution: Inducing Lock Ordering
To prevent lock-ordering deadlocks, you must guarantee that all threads acquire multiple locks in a globally consistent order. In the banking example, you could enforce an order by comparing the unique numerical Account IDs and always locking the account with the smaller ID first, regardless of whether it is the sender or receiver.
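A sketch of that ordering rule follows; the numeric id field on Account is an assumption made for illustration:

```java
// Sketch: avoiding lock-ordering deadlock by always acquiring account
// locks in ascending id order, regardless of transfer direction.
public class SafeTransfer {

    public static class Account {
        final long id; // assumed unique account identifier
        private int balance;

        public Account(long id, int balance) {
            this.id = id;
            this.balance = balance;
        }

        void deduct(int amount) { balance -= amount; }
        void add(int amount)    { balance += amount; }
        public int getBalance() { return balance; }
    }

    public void transferMoney(Account from, Account to, int amount) {
        // Every thread locks the smaller-id account first, so two
        // opposing transfers can never hold each other's first lock.
        Account first  = from.id < to.id ? from : to;
        Account second = from.id < to.id ? to : from;
        synchronized (first) {
            synchronized (second) {
                from.deduct(amount);
                to.add(amount);
            }
        }
    }
}
```

When accounts have no natural ordering key, System.identityHashCode with a tie-breaking lock is the fallback technique described in Java Concurrency in Practice.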

5. Scaling Concurrent Applications: Escaping the Bottleneck

Once an application is structurally safe from data corruption and deadlocks, the next engineering challenge is achieving scalability. Scalability measures how effectively an application can utilize additional computing resources (like extra CPU cores) to increase its throughput.

5.1. Amdahl's Law and the Cost of Serialization

The mathematical boundary of scalability is defined by Amdahl's Law. It states that the maximum theoretical speedup of a program utilizing multiple processors is strictly limited by the fraction of the code that must be executed serially (sequentially) [18], [19].
The formula is defined as:
Speedup: S(N) ≤ 1 / (F + (1 - F) / N)
Where:
• N is the number of processor cores.
• F is the fraction of the computation that is purely sequential and cannot be parallelized.
Whenever a thread acquires an exclusive lock, it forces all other competing threads into a queue. The code inside the synchronized block represents the serialized fraction (F) of your application. If you synchronize massive, monolithic blocks of code (like an entire controller method), F becomes very large. According to Amdahl's Law, as F increases, throwing more CPU cores at the server yields absolutely zero performance benefit. Your system's throughput hits an unbreakable ceiling.
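Plugging numbers into the formula makes the ceiling concrete. This small sketch evaluates the bound for a program whose serial fraction is 10%:

```java
// Evaluating Amdahl's Law: S(N) = 1 / (F + (1 - F) / N)
public class AmdahlDemo {

    static double speedup(double f, int n) {
        return 1.0 / (f + (1.0 - f) / n);
    }

    public static void main(String[] args) {
        // With 10% serial code, 16 cores deliver at most 6.4x speedup...
        System.out.printf("F=0.10, N=16    -> %.2fx%n", speedup(0.10, 16));
        // ...and even a near-infinite core count cannot exceed 1/F = 10x.
        System.out.printf("F=0.10, N=10^6  -> %.2fx%n", speedup(0.10, 1_000_000));
    }
}
```

The lesson: shrinking F (holding locks for less code, for less time) buys far more scalability than adding hardware.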

5.2. Lock Splitting and Lock Striping

To conquer the mathematical limits of Amdahl's Law, engineers must reduce the scope and duration of their locks. Two powerful techniques for this are Lock Splitting and Lock Striping [16], [17].
Consider Java's legacy Hashtable class. Its synchronized methods all acquire a single intrinsic lock on the table instance, guarding the entire collection. If Thread A is writing data, Thread B cannot read data, even if they are accessing completely different hash buckets. The serialized fraction is huge.
In contrast, concurrent collections like ConcurrentHashMap were designed around Lock Striping. Instead of one monolithic lock, the internal data structure is partitioned into multiple separate segments (or stripes), each protected by its own independent lock. In the classic (pre-Java 8) implementation with 16 stripes, up to 16 different threads could modify the map simultaneously without any lock contention, provided they accessed different segments; since Java 8 the implementation goes further still and synchronizes on individual hash bins. This balances the dual requirements of strict thread safety and multi-core scalability [16], [17].
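As a simplified sketch of the striping idea (deliberately far cruder than ConcurrentHashMap, and invented here purely for illustration), consider a set of counters partitioned into 16 independently locked stripes:

```java
// Sketch of lock striping: the state is partitioned into N stripes,
// each guarded by its own lock, so threads touching different stripes
// never contend with each other.
public class StripedCounters {
    private static final int STRIPES = 16;
    private final Object[] locks = new Object[STRIPES];
    private final long[] counts = new long[STRIPES];

    public StripedCounters() {
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new Object();
        }
    }

    private int stripeFor(Object key) {
        // Map the key's hash onto a stripe index (floorMod avoids negatives).
        return Math.floorMod(key.hashCode(), STRIPES);
    }

    public void increment(Object key) {
        int i = stripeFor(key);
        synchronized (locks[i]) { // lock only this key's stripe
            counts[i]++;
        }
    }

    public long total() {
        long sum = 0;
        for (int i = 0; i < STRIPES; i++) {
            synchronized (locks[i]) { // gather each stripe under its own lock
                sum += counts[i];
            }
        }
        return sum;
    }
}
```

Note the trade-off: per-stripe operations scale well, but whole-structure operations like total() must visit every stripe, which is exactly why size() on striped collections is comparatively expensive.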

6. Essential Best Practices for Developers

To consistently build professional, scalable, and safe Java applications, integrate these fundamental guidelines into your software engineering practices:
1. Maximize Immutability: The simplest way to achieve thread safety is to prevent state from changing after creation. By declaring object fields as final and avoiding setter methods, you create immutable objects. Immutable objects require no synchronization and can be freely shared across thousands of concurrent threads with zero synchronization overhead [12], [20], [11].
2. Utilize Thread Confinement: If you do not share data between threads, you do not need to synchronize it. Whenever possible, confine state to the local execution stack. For data that must be accessed across multiple methods but should remain isolated per thread (like a database connection or a user transaction context), utilize the ThreadLocal class [12], [21].
3. Guard Multivariable Invariants with a Single Lock: If an object has multiple variables that represent a combined logical state (for example, the x and y coordinates of a point on a map), do not protect them with different locks. All variables involved in a logical invariant must be guarded by the same lock to ensure atomicity [20], [2].
4. Document Your Synchronization Policy: A class's thread-safety guarantees are a critical part of its API contract. Use Java concurrency annotations such as @ThreadSafe, @NotThreadSafe, and @GuardedBy to explicitly document which locks protect which variables. This prevents future maintainers from inadvertently breaking your safety protocols during refactoring [22], [2].
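As an example of guideline 2, ThreadLocal gives each thread its own private copy of a helper that is not itself thread-safe, such as the classic SimpleDateFormat; a minimal sketch:

```java
import java.text.SimpleDateFormat;
import java.util.Date;

// Thread confinement via ThreadLocal: every thread lazily receives its
// own SimpleDateFormat (famously NOT thread-safe), so no locking needed.
public class ConfinedFormatter {
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
        ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd"));

    public static String format(Date date) {
        // get() returns this thread's private instance; no sharing occurs.
        return FORMAT.get().format(date);
    }
}
```

In pooled-thread environments (such as application servers), remember to call remove() on ThreadLocal values when a request finishes, or the confined objects can leak across logically unrelated requests.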

7. Conclusion: Building Robust Java Systems

Mastering thread safety in Java is not merely a matter of memorizing API calls; it requires a profound paradigm shift in how you reason about application state, time, and hardware architecture. Every line of backend code you write may be executed simultaneously by dozens of CPU cores, each interacting with localized caches and instruction reordering optimizers.
By recognizing the dangers of read-modify-write operations, avoiding the traps of stale data, meticulously planning your locking strategies, and understanding the scalability constraints imposed by Amdahl's Law, you transcend the level of a typical programmer. You become an architect capable of building incredibly fast, highly concurrent, and fundamentally unbreakable enterprise systems. Treat shared mutable state as toxic, favor immutability wherever possible, and always design for concurrency from the very first line of code.

References

[1]–[22] B. Goetz, T. Peierls, J. Bloch, J. Bowbeer, D. Holmes, and D. Lea, Java Concurrency in Practice. Addison-Wesley, 2006.