Layered Architecture in Java: Beyond the Diagram
A production-focused breakdown of layered architecture in Java — dependency rules, package structures, transaction boundaries, rich vs. anemic domain models, and every trap that academic diagrams never show you.
Every Java developer has seen the diagram. Four boxes stacked vertically: Presentation, Application, Domain, Infrastructure. Arrows pointing downward. Clean, orderly, satisfying.
Then you open a real codebase and find a UserService with 1,400 lines, a UserRepository that contains business logic, a UserController that reaches directly into the database through a static utility class someone added two years ago, and a UserDTO that is actually the JPA entity with @JsonIgnore annotations scattered across it like duct tape.
This is not a story about the diagram. This is a story about what the diagram is trying to protect you from — and what happens when teams follow the shape of layered architecture without internalizing its substance.
This article is the version that is actually useful in production: not a taxonomy, but a set of hard constraints that make systems survivable when things get complicated.
I. What Layered Architecture Is Actually For
The diagram is not the point. The dependency rule is the point.
In a layered architecture, dependencies flow in one direction only: inward.
The presentation layer can depend on the application layer. The application layer can depend on the domain layer. The domain layer depends on nothing except itself. The infrastructure layer depends on the domain but is depended on by nothing above it — it is wired in at runtime through interfaces.
This single rule, consistently enforced, is the entire value proposition of the pattern. Everything else follows from it.
Why does this matter operationally? Because the direction of dependencies determines what you can change independently.
- If your domain layer imports a JPA annotation, you cannot replace JPA without touching domain logic.
- If your service layer takes an HttpServletRequest parameter, you cannot call that service from a background job.
- If your controller calls EntityManager directly, you cannot test the controller without a running database.
Violations of the dependency rule are not style issues. They are structural defects that accumulate silently until the day you need to move fast — and discover you cannot.
II. The Four Layers, Precisely
Before implementation, we need to be precise about what each layer does — and, critically, what it does not do.
The Domain Layer
The core of the system. It contains:
- Entities — Objects with identity and lifecycle: Job, Application, Candidate.
- Value Objects — Immutable, defined by attributes rather than identity: Salary, EmailAddress, DateRange.
- Domain Services — Logic that belongs to the domain but not to a single entity. An ApplicationEligibilityChecker that evaluates whether a candidate can apply given both the job's constraints and their profile.
- Repository Interfaces — Java interfaces that define the persistence contract. The implementation lives in infrastructure.
- Domain Events — Signals that something meaningful occurred: ApplicationSubmitted, JobPublished.
What the domain layer does not contain: Spring annotations, JPA annotations, Jackson annotations, HTTP concepts, or anything from java.sql. If you find org.springframework.* imported in your domain layer, something has gone wrong.
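Value objects are the easiest place to see what framework-free domain code looks like. Here is a minimal sketch of the EmailAddress value object mentioned above — the validation regex and lowercase normalization are illustrative choices, not a prescription:

```java
// A value object: immutable, validated at construction, compared by value.
// Pure Java: no Spring, no JPA, no Jackson.
public record EmailAddress(String value) {

    public EmailAddress {
        if (value == null || !value.matches("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$")) {
            throw new IllegalArgumentException("Invalid email address: " + value);
        }
        value = value.toLowerCase(); // normalize so equality behaves predictably
    }
}
```

Because records derive equals and hashCode from their components, two EmailAddress instances with the same normalized value are interchangeable everywhere in the domain.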
The Application Layer
Where use cases live. Each use case is a discrete operation a user or external system can trigger:
- SubmitApplication
- PublishJob
- RequestCvReview
- ShortlistCandidate
The application layer orchestrates: it retrieves domain objects via repository interfaces, invokes domain logic, persists changes, and emits events. It does not contain business rules — those live in the domain. It coordinates their execution.
The application layer also owns transaction boundaries. In a well-designed system, the beginning and end of a database transaction maps directly to the beginning and end of a use case. Nowhere else.
The Infrastructure Layer
The adapter layer. Everything that talks to the outside world:
- JPA/Hibernate implementations of repository interfaces
- HTTP clients for external APIs
- Email senders, file storage adapters, message queue publishers
- Spring Security configurations
- Database migration files (Flyway or Liquibase)
Infrastructure classes are deliberately tightly coupled to specific technologies. That coupling is contained here, by design. Swapping technologies means rewriting infrastructure adapters — nothing in domain or application logic changes.
The Presentation Layer
Controllers, request/response DTOs, input validation, HTTP error mapping. Nothing more.
A controller that contains a conditional business rule has absorbed domain logic it should not own. That logic belongs in the domain or application layer, where it can be tested, reused, and reasoned about independently of HTTP.
III. The Package Structure That Enforces the Rules
Architecture that exists only in a wiki page is not architecture — it is aspiration. The package structure is the first mechanism that makes the rules real and visible.
com.yourcompany.yourapp/
│
├── domain/
│ ├── model/
│ │ ├── Job.java
│ │ ├── Application.java
│ │ ├── Candidate.java
│ │ ├── ApplicationStatus.java ← enum, belongs in domain
│ │ └── Salary.java ← value object
│ ├── repository/
│ │ ├── JobRepository.java ← interface ONLY
│ │ └── ApplicationRepository.java ← interface ONLY
│ ├── service/
│ │ └── ApplicationEligibilityService.java
│ └── event/
│ ├── ApplicationSubmittedEvent.java
│ └── JobPublishedEvent.java
│
├── application/
│ ├── usecase/
│ │ ├── SubmitApplicationUseCase.java
│ │ ├── PublishJobUseCase.java
│ │ └── RequestCvReviewUseCase.java
│ ├── dto/
│ │ ├── SubmitApplicationCommand.java ← input to use case
│ │ └── ApplicationResult.java ← output from use case
│ └── port/
│ └── out/
│ ├── CvAnalysisPort.java ← interface for AI
│ ├── FileStoragePort.java
│ └── NotificationPort.java
│
├── infrastructure/
│ ├── persistence/
│ │ └── jpa/
│ │ ├── JobJpaRepository.java ← Spring Data interface
│ │ ├── JobRepositoryAdapter.java ← implements domain JobRepository
│ │ ├── JobJpaEntity.java ← JPA entity (NOT the domain entity)
│ │ └── JobJpaMapper.java
│ ├── external/
│ │ ├── gemini/
│ │ │ └── GeminiCvAnalysisAdapter.java
│ │ ├── cloudinary/
│ │ │ └── CloudinaryFileStorageAdapter.java
│ │ └── resend/
│ │ └── ResendNotificationAdapter.java
│ └── config/
│ ├── SecurityConfig.java
│ └── JpaConfig.java
│
└── presentation/
├── api/v1/
│ ├── JobController.java
│ └── ApplicationController.java
├── dto/
│ ├── request/
│ │ └── SubmitApplicationRequest.java
│ └── response/
│ ├── JobResponse.java
│ └── ApplicationResponse.java
├── mapper/
│ └── JobResponseMapper.java
└── exception/
└── GlobalExceptionHandler.java
This structure makes violations immediately visible in code review. If a developer imports org.springframework inside domain/model/, the file is in the wrong package and it is obvious. The directory tree is the first line of enforcement.
The port/out/ directory inside application/ is worth explaining separately. These are interfaces that the application layer depends on but that are implemented in infrastructure. This is the Dependency Inversion Principle applied precisely: the application layer defines what it needs — a CvAnalysisPort that takes a PDF and returns a structured review — and infrastructure delivers an implementation. Swapping AI providers means writing a new adapter. Nothing in the application or domain layer changes.
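To make the inversion concrete, here is a minimal sketch — the port's return type is simplified to String rather than the CvReviewResult used elsewhere in this article. The application layer owns the interface, and a stub (or a different provider) can stand in for the real adapter with no Spring wiring at all:

```java
// The application layer owns this interface (the port).
interface CvAnalysisPort {
    String analyze(byte[] pdfBytes, String candidateName);
}

// Infrastructure provides an implementation...
class StubCvAnalysisAdapter implements CvAnalysisPort {
    @Override
    public String analyze(byte[] pdfBytes, String candidateName) {
        // a real adapter would call the external provider here
        return "review for " + candidateName;
    }
}

// ...and the use case depends only on the interface.
class RequestCvReview {
    private final CvAnalysisPort cvAnalysis;

    RequestCvReview(CvAnalysisPort cvAnalysis) {
        this.cvAnalysis = cvAnalysis;
    }

    String execute(byte[] pdf, String candidateName) {
        return cvAnalysis.analyze(pdf, candidateName);
    }
}
```

The use case compiles against the port alone; which adapter it receives is a wiring decision made entirely outside the application layer.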
IV. The Anemic Domain Model: The Most Expensive Mistake in Java
The most consequential mistake in layered Java systems is the anemic domain model.
An anemic domain model is one where domain objects are pure data containers — fields, getters, setters — and all the logic lives in services. It looks like object-oriented code. It is procedural code wearing object-oriented clothing.
Here is what it looks like in the wild:
```java
// Anemic: the entity is a data bag
@Entity
public class Application {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;
    private String status;
    private LocalDateTime appliedAt;
    private LocalDateTime withdrawnAt;
    private Long candidateId;
    private Long jobId;

    // getters, setters, nothing else
}

// All domain logic leaks into the service
@Service
public class ApplicationService {

    private final ApplicationRepository repo;

    public ApplicationService(ApplicationRepository repo) {
        this.repo = repo;
    }

    public void withdraw(Long applicationId, Long requestingUserId) {
        Application app = repo.findById(applicationId).orElseThrow();
        if (!app.getCandidateId().equals(requestingUserId)) {
            throw new ForbiddenException("Not your application");
        }
        if (app.getStatus().equals("WITHDRAWN") || app.getStatus().equals("HIRED")) {
            throw new IllegalStateException("Cannot withdraw in this status");
        }
        app.setStatus("WITHDRAWN");
        app.setWithdrawnAt(LocalDateTime.now());
        repo.save(app);
    }
}
```
The problem is not that this code is wrong. The problem is that the knowledge of what a valid Application state transition is — the rule that you cannot withdraw a HIRED application — lives in a service class. The entity itself does not know its own invariants. Every service that touches applications must duplicate or call these checks. The invariant is not protected by the object; it is protected by the diligence of every developer who ever writes code that touches it.
A rich domain model moves that knowledge where it belongs:
```java
// Rich: the entity enforces its own rules
public class Application {

    private final ApplicationId id;
    private ApplicationStatus status;
    private final CandidateId candidateId;
    private LocalDateTime appliedAt;
    private LocalDateTime withdrawnAt;

    public void withdraw(CandidateId requestingCandidate) {
        if (!this.candidateId.equals(requestingCandidate)) {
            throw new DomainException("Only the candidate who applied can withdraw");
        }
        if (!status.isWithdrawable()) {
            throw new DomainException("Cannot withdraw an application with status: " + status);
        }
        this.status = ApplicationStatus.WITHDRAWN;
        this.withdrawnAt = LocalDateTime.now();
        // registerEvent is inherited from an aggregate base class that buffers
        // domain events until the repository publishes them
        registerEvent(new ApplicationWithdrawnEvent(this.id, this.candidateId));
    }
}

// ApplicationStatus encapsulates transition rules
public enum ApplicationStatus {
    PENDING, REVIEWED, SHORTLISTED, REJECTED, HIRED, WITHDRAWN;

    public boolean isWithdrawable() {
        return this == PENDING || this == REVIEWED || this == SHORTLISTED;
    }
}
```
The use case becomes thin coordination:
```java
@UseCase
@Transactional
public class WithdrawApplicationUseCase {

    private final ApplicationRepository applicationRepository;

    public WithdrawApplicationUseCase(ApplicationRepository applicationRepository) {
        this.applicationRepository = applicationRepository;
    }

    public void execute(WithdrawApplicationCommand command) {
        Application application = applicationRepository
                .findById(command.applicationId())
                .orElseThrow(() -> new NotFoundException("Application not found"));
        application.withdraw(command.requestingCandidateId());
        applicationRepository.save(application);
    }
}
```
The domain logic is now testable without a database, without Spring, without a running application context. A pure unit test instantiates an Application, calls withdraw(), and asserts on the resulting state. That is the payoff.
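To see how cheap such a test is, here is a framework-free, condensed version of the model — identifiers flattened to long and DomainException replaced by IllegalStateException so the sketch is self-contained:

```java
// Condensed rich model: the status owns its transition rule,
// the entity guards its own invariant. No Spring, no JPA, no database.
enum Status {
    PENDING, REVIEWED, SHORTLISTED, REJECTED, HIRED, WITHDRAWN;

    boolean isWithdrawable() {
        return this == PENDING || this == REVIEWED || this == SHORTLISTED;
    }
}

class App {
    private final long candidateId;
    private Status status;

    App(long candidateId, Status status) {
        this.candidateId = candidateId;
        this.status = status;
    }

    void withdraw(long requestingCandidate) {
        if (candidateId != requestingCandidate) {
            throw new IllegalStateException("Only the applicant can withdraw");
        }
        if (!status.isWithdrawable()) {
            throw new IllegalStateException("Cannot withdraw in status " + status);
        }
        status = Status.WITHDRAWN;
    }

    Status status() { return status; }
}
```

Exercising the invariant is a matter of constructing the object and calling the method — no container startup, no database, no mocks.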
V. Transaction Boundaries: The Rule Nobody Writes Down
Transaction management is where many otherwise well-structured Java applications fall apart.
The rule is simple but hard to follow under deadline pressure:
@Transactional belongs on use case classes. Never on repository methods. Never on domain objects.
When @Transactional is placed on a repository method, you lose the ability to coordinate multiple repository operations atomically. Each call opens and closes its own transaction. If the second call fails, the first has already committed.
```java
// Wrong: separate transactions per repository call
public class ApplicationService {
    public void submitApplication(SubmitApplicationCommand cmd) {
        applicationRepository.save(application);   // tx 1 commits here
        notificationRepository.save(notification); // tx 2 fails — application is already persisted
    }
}

// Correct: one transaction for the entire use case
@Transactional
public class SubmitApplicationUseCase {
    public void execute(SubmitApplicationCommand cmd) {
        applicationRepository.save(application);
        notificationRepository.save(notification);
        // both commit together, or both roll back
    }
}
```
The Self-Invocation Trap
Spring's @Transactional is implemented via proxy. When a method calls another @Transactional method on the same class, it bypasses the proxy entirely. The transaction annotation is silently ignored.
```java
@Service
public class JobService {

    @Transactional
    public void publishJob(Long jobId) {
        // Calls another method on 'this' — bypasses the proxy.
        // The @Transactional below is NOT applied.
        this.notifySubscribers(jobId);
    }

    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void notifySubscribers(Long jobId) {
        // This annotation does nothing when called from publishJob above
    }
}
```
The fix: move the inner operation to a separate Spring-managed bean. Proxies wrap beans, not this references.
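The mechanics are easy to reproduce in plain Java: a proxy wraps the target from the outside, so only calls that enter through the proxy are intercepted, while a call on `this` inside the target never is. A minimal sketch, with invented names for illustration:

```java
import java.util.ArrayList;
import java.util.List;

interface JobPublisher {
    void publishJob();
    void notifySubscribers();
}

// The "bean": calls notifySubscribers() on `this`, not through the proxy
class JobPublisherImpl implements JobPublisher {
    final List<String> log;

    JobPublisherImpl(List<String> log) { this.log = log; }

    public void publishJob() {
        log.add("publishJob body");
        this.notifySubscribers(); // direct call: the wrapper below never sees it
    }

    public void notifySubscribers() {
        log.add("notifySubscribers body");
    }
}

// Stand-in for Spring's transactional proxy
class TransactionalProxy implements JobPublisher {
    final JobPublisher target;
    final List<String> log;

    TransactionalProxy(JobPublisher target, List<String> log) {
        this.target = target;
        this.log = log;
    }

    public void publishJob() {
        log.add("tx: publishJob");
        target.publishJob();
    }

    public void notifySubscribers() {
        log.add("tx: notifySubscribers");
        target.notifySubscribers();
    }
}
```

Calling publishJob through the proxy logs "tx: publishJob" exactly once; "tx: notifySubscribers" never appears, because the inner call never leaves the target. That is precisely why the REQUIRES_NEW annotation above is silently ignored.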
Read-Only Transactions for Queries
Mark query-only use cases explicitly:
```java
@Transactional(readOnly = true)
public class GetJobUseCase {

    private final JobRepository jobRepository;

    public GetJobUseCase(JobRepository jobRepository) {
        this.jobRepository = jobRepository;
    }

    public JobResult execute(JobId jobId) {
        return jobRepository.findById(jobId)
                .map(JobResult::from)
                .orElseThrow(() -> new NotFoundException("Job not found"));
    }
}
```
readOnly = true is not just documentation. Hibernate uses it to skip dirty checking on all loaded entities — the persistence context does not track changes for entities that cannot be modified. At scale, this is a meaningful performance gain.
VI. Separating JPA Entities from Domain Entities
One of the most debated questions in layered Java architecture: should the JPA entity be the domain entity?
The temptation to merge them is understandable. Less code, less mapping, faster to write. In small, stable CRUD applications, it can work.
In systems that evolve — which is every production system — merging the two creates a coupling between your persistence strategy and your domain model. JPA has opinions: it needs a no-arg constructor, it often needs public setters for dirty checking, it carries persistence-lifecycle annotations that have no meaning in business logic. These requirements conflict with the immutability and controlled state transitions that make a rich domain model useful.
The separation:
```java
// Infrastructure: JPA entity — JPA's rules apply here
@Entity
@Table(name = "applications")
public class ApplicationJpaEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Enumerated(EnumType.STRING)
    private ApplicationStatus status;

    @Column(name = "candidate_id")
    private Long candidateId;

    @Column(name = "job_id")
    private Long jobId;

    private LocalDateTime appliedAt;
    private LocalDateTime withdrawnAt;

    protected ApplicationJpaEntity() {} // required by JPA

    // getters
}

// Domain: pure Java — zero framework dependencies
public class Application {

    private final ApplicationId id;
    private ApplicationStatus status;
    private final CandidateId candidateId;
    private final JobId jobId;
    private final LocalDateTime appliedAt;
    private LocalDateTime withdrawnAt;

    public Application(ApplicationId id, CandidateId candidateId, JobId jobId) {
        Objects.requireNonNull(id);
        Objects.requireNonNull(candidateId);
        Objects.requireNonNull(jobId);
        this.id = id;
        this.candidateId = candidateId;
        this.jobId = jobId;
        this.status = ApplicationStatus.PENDING;
        this.appliedAt = LocalDateTime.now();
    }

    // domain methods...
}
```
The repository adapter maps between the two:
```java
@Component
public class ApplicationRepositoryAdapter implements ApplicationRepository {

    private final ApplicationJpaRepository jpaRepository;
    private final ApplicationJpaMapper mapper;

    public ApplicationRepositoryAdapter(ApplicationJpaRepository jpaRepository,
                                        ApplicationJpaMapper mapper) {
        this.jpaRepository = jpaRepository;
        this.mapper = mapper;
    }

    @Override
    public Optional<Application> findById(ApplicationId id) {
        return jpaRepository.findById(id.value())
                .map(mapper::toDomain);
    }

    @Override
    public void save(Application application) {
        ApplicationJpaEntity entity = mapper.toJpa(application);
        jpaRepository.save(entity);
    }
}
```
The mapper is boilerplate. MapStruct generates it at compile time with zero reflection overhead at runtime. The cost is a few extra classes per entity. The benefit is a domain model that is free to evolve independently of the persistence strategy.
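One subtlety the mapper makes visible: the domain constructor shown above always starts a new Application at PENDING, so the mapper cannot use it to rebuild already-persisted state. A common answer is a separate reconstitution factory reserved for the persistence layer — the method names and flattened types in this sketch are illustrative:

```java
// Flattened sketch: two construction paths with different guarantees.
enum AppStatus { PENDING, HIRED }

final class Application {
    private final long id;
    private AppStatus status;

    private Application(long id, AppStatus status) {
        this.id = id;
        this.status = status;
    }

    // Creation path, used by application code: invariants enforced
    static Application submit(long id) {
        return new Application(id, AppStatus.PENDING);
    }

    // Reconstitution path, used only by the persistence mapper:
    // trusts state that was validated when it was first persisted
    static Application reconstitute(long id, AppStatus status) {
        return new Application(id, status);
    }

    AppStatus status() { return status; }
}
```

Application code calls submit; the mapper calls reconstitute. The invariant "new applications start PENDING" stays intact without forcing the mapper through creation-time checks.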
VII. Input Validation: Where It Lives and Why
Input validation has two distinct responsibilities that should not be collapsed into one:
Syntactic validation — is the input structurally correct? Is the email a valid email format? Is the salary a positive number? This belongs in the presentation layer, enforced via Bean Validation on request DTOs.
Semantic validation — is the input valid in the context of domain rules? Has this candidate already applied to this job? Has the job been closed? This is domain logic and belongs in the domain layer.
```java
// Presentation: syntactic validation on the request DTO
public record SubmitApplicationRequest(
        @NotNull UUID jobId,
        @NotBlank @Size(max = 2000) String coverLetter
) {}

// Domain: semantic validation inside the entity
public class Job {

    public void acceptApplication(Candidate candidate) {
        if (this.status != JobStatus.OPEN) {
            throw new DomainException("Cannot apply to a job that is not open");
        }
        if (this.deadline != null && LocalDate.now().isAfter(this.deadline)) {
            throw new DomainException("Application deadline has passed");
        }
    }
}
```
A common mistake is performing semantic validation in the controller using repository calls. The controller then needs to know domain rules to construct the right error response. This makes the controller fat and the domain rules fragmented across layers.
The GlobalExceptionHandler translates domain exceptions to HTTP responses — without letting domain logic leak into the controller:
```java
@RestControllerAdvice
public class GlobalExceptionHandler {

    @ExceptionHandler(DomainException.class)
    public ResponseEntity<ErrorResponse> handleDomainException(DomainException ex) {
        return ResponseEntity
                .status(HttpStatus.UNPROCESSABLE_ENTITY)
                .body(new ErrorResponse(ex.getMessage()));
    }

    @ExceptionHandler(NotFoundException.class)
    public ResponseEntity<ErrorResponse> handleNotFoundException(NotFoundException ex) {
        return ResponseEntity
                .status(HttpStatus.NOT_FOUND)
                .body(new ErrorResponse(ex.getMessage()));
    }
}
```
Domain exceptions propagate up through the stack and are translated at the boundary. The domain layer never knows it is being called over HTTP.
VIII. Integrating External Services Without Polluting the Domain
External integrations — AI APIs, payment providers, file storage — are a common source of architectural decay. The integration code is complex, it is written under pressure, and the temptation is to write it directly into the service that needs it.
The result: your application logic is coupled to a specific SDK. When you switch providers — and you will — the surgery is expensive.
The port/adapter model solves this cleanly. Define what you need in the application layer as a Java interface:
```java
// application/port/out/CvAnalysisPort.java
public interface CvAnalysisPort {
    CvReviewResult analyze(byte[] pdfBytes, String candidateName);
}

// application/port/out/FileStoragePort.java
public interface FileStoragePort {
    String upload(byte[] fileBytes, String fileName, String contentType);
    byte[] download(String fileUrl);
    void delete(String fileUrl);
}
```
Implement in infrastructure:
```java
@Component
public class GeminiCvAnalysisAdapter implements CvAnalysisPort {

    private final GeminiClient geminiClient;

    public GeminiCvAnalysisAdapter(GeminiClient geminiClient) {
        this.geminiClient = geminiClient;
    }

    @Override
    public CvReviewResult analyze(byte[] pdfBytes, String candidateName) {
        // All Gemini-specific code is contained here
        GeminiRequest request = GeminiRequest.builder()
                .model("gemini-2.5-flash")
                .content(pdfBytes)
                .systemPrompt(buildPrompt(candidateName))
                .responseSchema(CvReviewSchema.INSTANCE)
                .build();
        GeminiResponse response = geminiClient.generate(request);
        return mapToResult(response);
    }
}
```
The use case depends only on the interface:
```java
@UseCase
@Transactional
public class RequestCvReviewUseCase {

    private final CandidateRepository candidateRepository;
    private final FileStoragePort fileStorage;
    private final CvAnalysisPort cvAnalysis;
    private final CvReviewRepository reviewRepository;

    // constructor injection of the four dependencies omitted for brevity

    public CvReviewResult execute(RequestCvReviewCommand command) {
        Candidate candidate = candidateRepository
                .findById(command.candidateId())
                .orElseThrow();
        byte[] pdfBytes = fileStorage.download(command.cvUrl());
        CvReviewResult result = cvAnalysis.analyze(pdfBytes, candidate.getFullName());
        CvReview review = CvReview.create(command.candidateId(), command.cvUrl(), result);
        reviewRepository.save(review);
        return result;
    }
}
```
When you want to replace Gemini, you write a new adapter and swap the Spring bean. The use case is untouched. The domain is untouched. This is what the dependency rule buys you in practice.
IX. Ownership-Scoped Queries: Security at the Data Layer
Authorization in a multi-tenant or multi-role system requires more than role checks. A recruiter may be authorized to manage jobs — but not the jobs of another recruiter. Role-based access control answers "can this type of user perform this operation?" Ownership verification answers "can this specific user operate on this specific resource?"
The correct enforcement point is the database query itself:
```java
// Wrong: fetch then check — leaks data before authorization
public Job getJobForEdit(Long jobId, Long recruiterId) {
    Job job = jobRepository.findById(jobId)
            .orElseThrow(() -> new NotFoundException("Job not found"));
    if (!job.getRecruiterId().equals(recruiterId)) {
        throw new ForbiddenException("Not your job");
    }
    return job;
}

// Correct: scope the query to the authenticated identity
public Optional<Job> findByIdAndRecruiter(JobId jobId, RecruiterId recruiterId) {
    return jpaRepository
            .findByIdAndRecruiterId(jobId.value(), recruiterId.value())
            .map(mapper::toDomain);
}

// Spring Data JPA derives this from the method name
public interface JobJpaRepository extends JpaRepository<JobJpaEntity, Long> {
    Optional<JobJpaEntity> findByIdAndRecruiterId(Long id, Long recruiterId);
}
```
If the job does not belong to the requesting recruiter, the repository returns empty. The use case throws NotFoundException. The caller receives a 404, not a 403 — which also avoids leaking the existence of resources the caller has no right to access.
This pattern means that even if the authorization layer were bypassed, the database returns nothing for data that does not belong to the caller. Defense in depth, enforced at the query level.
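The same idea in miniature, framework-free — an in-memory stand-in for the repository, where Spring Data would derive the equivalent SQL WHERE clause from the method name:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

record Job(long id, long recruiterId) {}

// In-memory stand-in for the ownership-scoped repository query
class InMemoryJobs {
    private final Map<Long, Job> rows = new HashMap<>();

    void save(Job job) {
        rows.put(job.id(), job);
    }

    // The ownership filter is part of the lookup itself: a non-owner
    // gets empty, indistinguishable from a job that does not exist.
    Optional<Job> findByIdAndRecruiterId(long id, long recruiterId) {
        return Optional.ofNullable(rows.get(id))
                .filter(job -> job.recruiterId() == recruiterId);
    }
}
```

A caller querying someone else's job and a caller querying a nonexistent job receive the same answer — which is exactly the 404-over-403 behavior described above.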
X. What This Architecture Costs and When to Deviate
Layered architecture with rich domain models, separated JPA entities, and ports/adapters is not free. The costs are real:
Mapping overhead. Every persistence operation passes through a mapper. For a simple lookup table — a list of cities, a reference data table — this is pure cost with no benefit. Use Spring Data JPA projections directly for read-only, structurally simple data.
More classes. A single feature can require a command object, a use case class, a domain entity, a JPA entity, a mapper, a repository interface, and a repository adapter. This is the right trade-off for complex domains. For a settings toggle, it is over-engineering.
Slower initial development. The first few features take longer to structure. This is front-loaded investment — the payoff comes when a new engineer can understand the system in a day, or when a requirement change touches one module instead of five.
The deviation rule is simple: match the depth of the architecture to the complexity of the domain. A bounded context with rich business rules, multiple actors, and evolving requirements deserves the full model. A bounded context that is fundamentally a CRUD interface over a table does not.
XI. Testing Strategy for Layered Systems
The layer boundaries define the testing strategy naturally. Each layer has a corresponding test type.
Domain layer: pure unit tests. No Spring context, no database, no infrastructure mocks. Test that application.withdraw() throws when the status is HIRED. These tests run in milliseconds and are the highest-value tests in the system.
```java
@Test
void should_throw_when_withdrawing_hired_application() {
    Application application = ApplicationTestFactory.hired();

    assertThatThrownBy(() -> application.withdraw(application.getCandidateId()))
            .isInstanceOf(DomainException.class)
            .hasMessageContaining("Cannot withdraw");
}
```
Application layer: integration tests with a real database. Use @SpringBootTest with a Testcontainers PostgreSQL instance. Test the full use case: does SubmitApplicationUseCase persist the application, reject a duplicate, and fail atomically if a side effect fails?
```java
@SpringBootTest
@Transactional
class SubmitApplicationUseCaseIntegrationTest {

    @Autowired
    private SubmitApplicationUseCase useCase;

    @Test
    void should_reject_duplicate_application() {
        SubmitApplicationCommand command = SubmitApplicationCommandFactory.valid();
        useCase.execute(command);

        assertThatThrownBy(() -> useCase.execute(command))
                .isInstanceOf(DomainException.class)
                .hasMessageContaining("already applied");
    }
}
```
Presentation layer: slice tests. Use @WebMvcTest to test controllers in isolation with the application layer mocked. Verify HTTP status codes, request deserialization, response structure, and validation error responses.
```java
@WebMvcTest(ApplicationController.class)
class ApplicationControllerTest {

    @Autowired
    private MockMvc mockMvc;

    @MockBean
    private SubmitApplicationUseCase submitApplicationUseCase;

    @Test
    void should_return_400_when_cover_letter_is_blank() throws Exception {
        mockMvc.perform(post("/api/v1/applications")
                        .contentType(MediaType.APPLICATION_JSON)
                        .content("""
                                { "jobId": "some-id", "coverLetter": "" }
                                """))
                .andExpect(status().isBadRequest());
    }
}
```
Infrastructure adapters: contract tests. Test adapters against the real external dependency — real DB, real Cloudinary sandbox, real queue. These are slower and run in CI rather than on every local build.
XII. Principles That Survive Framework Changes
Frameworks evolve. Spring Boot 2 became 3. Jakarta EE replaced Java EE. Quarkus arrived. The principles behind good layered architecture predate all of them and will outlast whatever comes next.
Design for the failure case first. Ownership-scoped queries, fallback search paths, cleanup ordering in file deletion — these are all expressions of the same instinct: assume the failure will happen, and design the system so the failure mode is recoverable, not catastrophic.
Put invariants in the domain, not in services. If an application cannot be withdrawn after it is hired, that rule should be encoded in the Application entity. Not in a service that could be bypassed, duplicated, or forgotten.
The dependency rule is the architecture. Diagrams and documentation are useful, but the package structure and the import statements in your actual code are the architecture. If the code violates the dependency rule, the rule does not exist.
Own your transaction boundaries. Every @Transactional annotation is a decision about atomicity. Make those decisions explicitly at the use case level, where the business operation is defined — not scattered across repository methods where the intent is invisible.
Interfaces at every external boundary. Not because you will definitely swap providers, but because defining an interface forces you to be precise about what you need from the outside world. That precision is itself valuable, separate from the flexibility it enables.
Conclusion
Layered architecture is not a guarantee of a clean codebase. It is a set of constraints that, when respected, make a codebase survivable as it grows — survivable for the developer who reads it six months from now, survivable for the engineer who inherits it, survivable for the debugging session when something is failing in a way the original diagram never anticipated.
The constraints are not bureaucratic. They are engineering. The domain layer's independence is what makes it testable. The application layer's ownership of transaction boundaries is what makes failures predictable. The infrastructure layer's containment of technology coupling is what makes the system changeable.
The diagram is the summary. The discipline is in the code.