#Why we decided to migrate
Let’s get to the point: our entire public API was in Spanish. Schemas, properties, enums, descriptions, request examples. Everything. And when we started to think seriously about the API as a product - not just as an internal tool - we realized this was a problem.
Not because Spanish is worse than English for documenting software. But because the reality of the development ecosystem is that most developers expect to read an API in English. It doesn’t matter if your product is Spanish, German or Japanese. If your API exposes CrearFacturaRequest with a direccion_calle property, you are adding unnecessary friction to any non-Spanish speaking integrator.
And in Spain there are more of them than you might imagine: thousands of foreign entrepreneurs set up businesses here every year, they need to invoice according to Spanish regulations, and software options with APIs accessible in English are practically non-existent.
ℹ️The real problem
A German developer setting up a startup in Barcelona needs to invoice. He searches for Spanish invoicing APIs, finds documentation only in Spanish, and moves on. It’s not a technical problem, it’s an accessibility problem.
The decision was clear. What wasn’t so clear was how to execute it without breaking half the system - with a team of two.
#The real scope of change
Before touching anything, we took an inventory: our OpenAPI specification ran to about 20,000 lines. Migrating it meant changing:
- Schema names: CrearFacturaRequest → CreateInvoiceRequest
- Properties: direccion_calle → address_street
- Enum values: FISICA/JURIDICA → INDIVIDUAL/LEGAL_ENTITY
- Descriptions and summaries of every endpoint
- All examples of requests/responses
- Error messages and validations
- Downstream Java code: Controllers, Mappers, DTOs
- Integration and unit tests
- Database migrations (email template variables)
A change of this caliber can break the integrity of the spec, generate inconsistencies in the generated code, or introduce subtle bugs that don’t surface until production. But we had something in our favor: the system architecture.
#Why the hex architecture made this viable
BeeL.es is built with hexagonal architecture (Ports & Adapters) combined with DDD. We are not going to explain the pattern from scratch - if you don’t know it, there are resources at the end of the article - but we do want to explain why it was decisive in this migration.
The fundamental idea is simple: everything that is API (controllers, DTOs, specs) lives in the infrastructure layer. The business logic - the use cases that create invoices, validate tax data, calculate taxes - lives in the domain, completely isolated. Between the two layers there is a clear boundary: the mappers.
This meant that we could completely rewrite the public API contract without touching a single line of the domain. And that’s exactly what we did.
#How our system is organized
#The real impact
All the changes were concentrated on four points:
- OpenAPI specification - the public contract
- Controllers - the HTTP input
- DTOs - automatically generated from the spec
- Mappers - the translation boundary between infra and domain
The domain was not touched. Not a line. The use cases, the entities, the business validations, the persistence - all intact. That’s the promise of hexagonal architecture, and in this migration it held to the letter.
Also, by using openapi-generator-maven-plugin to generate the DTOs, the schema rename in the YAML was automatically propagated to the Java code. We only had to update the manual references in Controllers and Mappers.
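We won’t reproduce our exact build file here, but a typical openapi-generator-maven-plugin setup looks roughly like this - the version number and package names are illustrative, not our actual configuration:

```xml
<plugin>
  <groupId>org.openapitools</groupId>
  <artifactId>openapi-generator-maven-plugin</artifactId>
  <version>7.6.0</version>
  <executions>
    <execution>
      <goals>
        <goal>generate</goal>
      </goals>
      <configuration>
        <!-- The spec is the source of truth: DTOs and API interfaces are derived from it -->
        <inputSpec>${project.basedir}/openapi.yaml</inputSpec>
        <generatorName>spring</generatorName>
        <modelPackage>com.example.api.dto</modelPackage>
        <apiPackage>com.example.api</apiPackage>
        <configOptions>
          <interfaceOnly>true</interfaceOnly>
        </configOptions>
      </configuration>
    </execution>
  </executions>
</plugin>
```

With a setup like this, renaming a schema in the YAML renames the generated Java class on the next `mvn clean generate-sources`.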
#The strategy: branching by phase, incremental merge
The first thing we ruled out was doing everything in one giant branch. A 20,000-line diff is unmanageable. If something breaks, you don’t know where. And if another developer needs to merge work in parallel, the conflicts become impossible to resolve.
Instead, we designed a branching strategy by phase. Each phase of the migration lived in its own feature branch, with its own PR and its own validation. The flow went like this:
#Branching strategy
We worked with one branch per phase, all starting from develop. The convention was:
```
feature/api-i18n-glossary     ← Phase 1: glossary of translations
feature/api-i18n-schemas      ← Phase 2: OpenAPI schemas
feature/api-i18n-paths        ← Phase 3: paths, descriptions, examples
feature/api-i18n-codegen      ← Phase 4: regeneration of DTOs
feature/api-i18n-controllers  ← Phase 5: controllers + mappers
feature/api-i18n-tests        ← Phase 6: tests
feature/api-i18n-db           ← Phase 7: Flyway migrations
```

Each branch was merged to develop only when its PR passed code review and CI was green. The next branch always started from the HEAD of develop after the previous merge. This gave us several things:
- Reviewable diffs: each PR had a bounded scope. The schema PR was large (~4,000 lines), but it was YAML only - easy to review.
- Clean bisect: if something broke later, we could run git bisect and locate exactly which phase introduced the problem.
- No blocking each other: while one of us had a phase in review, the other could keep moving forward on develop. Conflicts only appeared at merge points, and being incremental, they were manageable.
⚠️What we didn't do
We did not use a single long-lived feature/api-migration branch with everything together. We also did not do aggressive rebasing between phases. We merged with explicit merge commits, so that the history would tell the real story of how the process went.
#Phase 1: The glossary as a source of truth
Before touching code, we built a glossary of translations - a document with all the correspondences between Spanish and English terms. It sounds basic, but it is what prevented inconsistencies like having customer in one endpoint and client in another, or invoice_number versus invoice_id for the same concept.
| Spanish | English | Context |
| -------------------- | -------------------- | ---------- |
| CrearFacturaRequest | CreateInvoiceRequest | Schema |
| direccion_calle | address_street | Property |
| FISICA | INDIVIDUAL | Enum value |
| JURIDICA | LEGAL_ENTITY | Enum value |
| numero_factura | invoice_number | Property |
| tipo_entidad | entity_type | Property |
Each translation was discussed between the two of us: entity_type or entity_kind? INDIVIDUAL or NATURAL_PERSON? These decisions were made once, documented, and then followed without deviation. The glossary lived in the repo as a markdown file inside /docs and was referenced in every PR.
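The glossary also lends itself to automated enforcement. A minimal sketch of the idea - the entries below come from the glossary, but `leftoverSpanishTerms` is a hypothetical helper, not part of our actual tooling:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class GlossaryCheck {
    // Spanish -> English correspondences, as agreed in the glossary
    static final Map<String, String> GLOSSARY = Map.of(
        "CrearFacturaRequest", "CreateInvoiceRequest",
        "direccion_calle", "address_street",
        "FISICA", "INDIVIDUAL",
        "JURIDICA", "LEGAL_ENTITY"
    );

    // Returns every Spanish term still present in the given spec text
    static List<String> leftoverSpanishTerms(String specSource) {
        List<String> leftovers = new ArrayList<>();
        for (String spanish : GLOSSARY.keySet()) {
            if (specSource.contains(spanish)) {
                leftovers.add(spanish);
            }
        }
        return leftovers;
    }

    public static void main(String[] args) {
        String spec = "CreateInvoiceRequest:\n  properties:\n    direccion_calle: ...";
        System.out.println(leftoverSpanishTerms(spec)); // prints [direccion_calle]
    }
}
```

Run against the spec in CI, a check like this turns the glossary from documentation into an enforced contract.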
#Phase 2-3: Migration of the OpenAPI spec
With the glossary closed, the spec migration was methodical. First the schemas, then the paths and examples. An example of how a schema changed:
```yaml
# Before
CrearFacturaRequest:
  type: object
  properties:
    numero_factura:
      type: string
    direccion_calle:
      type: string
    tipo_entidad:
      enum: [FISICA, JURIDICA]
```

```yaml
# After
CreateInvoiceRequest:
  type: object
  properties:
    invoice_number:
      type: string
    address_street:
      type: string
    entity_type:
      enum: [INDIVIDUAL, LEGAL_ENTITY]
```

After each set of changes we ran redocly lint openapi.yaml. Not at the end - after each significant change. A broken reference in a schema is detected in seconds if you validate on the fly. If you let it slide and migrate 50 more schemas on top of it, the error multiplies and tracking it down is hell.
We integrated Redocly in CI so that no PR could merge with an invalid spec:
```yaml
# In the CI pipeline
- name: Validate OpenAPI
  run: npx @redocly/cli lint openapi.yaml --format=stylish
```

#Phase 4: Regeneration of DTOs
Once the spec was valid in English, a mvn clean generate-sources regenerated all DTOs automatically. The openapi-generator-maven-plugin gave us:
- New DTOs with English names
- Updated API interfaces
- Validation models
This is where the value of code generation comes in. Without it, we would have had to manually rename hundreds of Java classes. With it, it was one command.
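For reference, the generated DTOs use the fluent-setter style that openapi-generator’s Java templates produce. A hand-written approximation - this is a sketch of the shape, not actual generator output:

```java
// Hand-written approximation of a generated DTO (not real generator output)
public class CreateInvoiceRequest {
    private String invoiceNumber;
    private String addressStreet;

    // Fluent setters, in the style of openapi-generator's default templates
    public CreateInvoiceRequest invoiceNumber(String invoiceNumber) {
        this.invoiceNumber = invoiceNumber;
        return this;
    }

    public CreateInvoiceRequest addressStreet(String addressStreet) {
        this.addressStreet = addressStreet;
        return this;
    }

    public String getInvoiceNumber() { return invoiceNumber; }
    public String getAddressStreet() { return addressStreet; }

    public static void main(String[] args) {
        CreateInvoiceRequest r = new CreateInvoiceRequest().invoiceNumber("2026-001");
        System.out.println(r.getInvoiceNumber()); // prints 2026-001
    }
}
```

Because these classes are regenerated from the spec on every build, renaming a property in the YAML is the only change needed - the Java surface follows.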
#Phase 5: Controllers and Mappers - the translation frontier
The controllers went on to use the new DTOs:
```java
// Before
@PostMapping("/invoices")
public InvoiceResponse createInvoice(
        @RequestBody CrearFacturaRequest request) { ... }
```

```java
// After
@PostMapping("/invoices")
public InvoiceResponse createInvoice(
        @RequestBody CreateInvoiceRequest request) { ... }
```

But the most interesting change is in the mappers. That is where the language translation happens in the data flow - the DTO speaks English, the domain entity still speaks Spanish:
```java
// The mapper acts as a translation boundary
Factura toEntity(CreateInvoiceRequest dto) {
    return Factura.builder()
        .numeroFactura(dto.getInvoiceNumber())
        .direccionCalle(dto.getAddressStreet())
        .build();
}
```

This is a detail that deserves attention: the domain is still in Spanish. Factura, NumeroFactura, TipoEntidad - all intact. The mapper translates between the public contract (English) and the internal domain (Spanish). That boundary is clearly defined thanks to the hexagonal architecture, and that is what made this migration possible.
This is the complete flow of a request, with the translation boundary marked:
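That flow can also be sketched as runnable code, with simplified stand-ins for our actual types:

```java
public class RequestFlowDemo {
    // Infrastructure layer: the English-speaking DTO generated from the spec
    record CreateInvoiceRequest(String invoiceNumber, String addressStreet) {}

    // Domain layer: the Spanish-speaking entity, untouched by the migration
    record Factura(String numeroFactura, String direccionCalle) {}

    // Mapper: the only place where the two vocabularies meet
    static Factura toEntity(CreateInvoiceRequest dto) {
        return new Factura(dto.invoiceNumber(), dto.addressStreet());
    }

    public static void main(String[] args) {
        // HTTP request body (English) -> DTO -> mapper -> domain entity (Spanish)
        CreateInvoiceRequest dto = new CreateInvoiceRequest("2026-001", "Calle Mayor 1");
        Factura factura = toEntity(dto);
        System.out.println(factura.numeroFactura()); // prints 2026-001
    }
}
```

Everything on the English side of `toEntity` was rewritten during the migration; everything on the Spanish side was not touched.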
#Phase 6: Tests
Integration and unit tests were updated to use the new DTOs:
```java
@Test
void shouldCreateInvoiceCorrectly() {
    CreateInvoiceRequest request = new CreateInvoiceRequest()
        .invoiceNumber("2026-001")
        .addressStreet("Calle Mayor 1");
    // ...
}
```

We had coverage above 80%, so each phase was validated by running the full test suite. No PR was merged without green CI. This gave us the confidence to make aggressive changes - if we broke something, we knew within minutes, not in production.
#Phase 7: The database - what we almost missed
This is the point that caught us most off guard. We thought the migration was “just code and specs”. But we had email templates in the database with variables like {{numero_factura}} that needed to be migrated to {{invoice_number}}.
We created a Flyway migration to resolve this:
```sql
UPDATE email_templates
SET template_body = REPLACE(
    REPLACE(template_body, '{{numero_factura}}', '{{invoice_number}}'),
    '{{direccion_calle}}', '{{address_street}}'
);
```

Lesson here: audit your data before starting a migration like this. Not just the code. Any persisted data that references the old nomenclature is a point of failure.
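That audit can be automated. A hypothetical sketch that scans a template body for placeholders outside the migrated vocabulary - the expected set and the helper are illustrative, not our actual tooling:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TemplateAudit {
    // Matches {{variable}} placeholders, the syntax used in our email templates
    static final Pattern PLACEHOLDER = Pattern.compile("\\{\\{(\\w+)\\}\\}");

    // English variable names the templates should use after the migration
    static final Set<String> EXPECTED = Set.of("invoice_number", "address_street");

    // Returns placeholders in a template body that are not in the expected set
    static List<String> unknownPlaceholders(String templateBody) {
        List<String> unknown = new ArrayList<>();
        Matcher m = PLACEHOLDER.matcher(templateBody);
        while (m.find()) {
            if (!EXPECTED.contains(m.group(1))) {
                unknown.add(m.group(1));
            }
        }
        return unknown;
    }

    public static void main(String[] args) {
        String body = "Factura {{numero_factura}} para {{address_street}}";
        System.out.println(unknownPlaceholders(body)); // prints [numero_factura]
    }
}
```

Running a check like this over every persisted template before and after the Flyway migration tells you whether the SQL actually covered everything.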
#The tooling we use
Summarizing the tools that made the process feasible:
- Redocly CLI for linting the OpenAPI spec on each commit and CI
- openapi-generator-maven-plugin to regenerate DTOs automatically from the spec
- IntelliJ refactoring (Shift+F6, Structural Search) for safely renaming Java symbols in controllers and mappers
- Flyway for versioned database migrations
- Claude to generate the initial glossary, review translations of technical descriptions, and detect inconsistencies. Useful for the repetitive parts, but every decision was made by us
- Git with branching by phase as already described - incremental PRs, CI mandatory, merge to develop
#What we learned
Some takeaways left after this migration:
Architecture saves you or sinks you. We could have had the best tools in the world - if the architecture had been coupled, this would have been weeks of debugging. The investment in hex paid for itself in this migration.
Code generation is not optional in a serious API. Generating DTOs from spec eliminates an entire category of bugs (spec-code desynchronization) and turns massive refactorings into a Maven command.
Granular branching > one mega-branch. It’s tempting to cram everything into a single branch and do a big merge at the end. Don’t. Small PRs are easier to review, conflicts get resolved before they grow, and if something breaks, git bisect tells you exactly where.
Validate early, validate always. A schema reference error is trivial to fix on the fly. Discovered after 50 more schemas have been migrated on top of it, it becomes a domino effect. Redocly in CI was non-negotiable.
Audit your data, not just your code. It almost caught us by surprise that the database also had Spanish nomenclature embedded in templates. Before starting any migration like this, grep your persisted data.
Document the decisions, not just the result. The glossary was vital. When mid-migration one of us asked “why entity_type and not entity_kind?”, the answer was documented. Without that, we would have had repeated debates and inconsistent decisions.
#If you are faced with something similar
1. Invest in architecture from day 1
If you’re going to build an API that will evolve over time, it’s worth starting with hex. The initial overhead is real, but the first time you need to make a cross-cutting change like this, it pays off.
2. Generate code from the spec, not the other way around
Don’t write DTOs by hand. Use openapi-generator (Java), oapi-codegen (Go), or whatever corresponds to your stack. The spec is the source of truth, the generated code is an artifact.
3. Small branches, frequent merges
One branch per phase. PR with bounded scope. Green CI mandatory before merge. Explicit merge commits to keep history readable. No long-lived branches with 15,000 diff lines.
4. Lint in CI, not as an afterthought
redocly lint or spectral on each push. If the spec does not validate, the PR does not merge. It is as simple as that.
5. Tests as a real safety net
Not decorative tests - tests that cover critical flows. Our >80% coverage gave us the confidence to make aggressive changes knowing that if we broke something, we would find out.
6. If your API already has users, plan the transition
We were lucky that the public API was not yet in production. If yours is: deprecation notices, versioning (v1/v2), transition period. Don’t make breaking changes without notice.
#Conclusion
We migrated ~20,000 lines of OpenAPI spec, regenerated Java DTOs, updated controllers, mappers, tests and data in database. The domain - use cases, entities, business logic - was not touched. Zero lines changed in the core.
That was not luck. It was the result of having invested, from the beginning, in an architecture that separates the domain from the infrastructure. And of having executed the migration with a branching strategy that allowed us to move incrementally, with validation at every step.
If you are building a product that is going to evolve - and every serious product does - architecture is not an expense. It’s the investment that lets you make changes like this without bringing the system down.
#Resources
If you want to go deeper into the topics we have touched on:
Hexagonal Architecture and DDD:
- Alistair Cockburn: Hexagonal Architecture - the original article by the pattern creator
- AWS Prescriptive Guidance: Hexagonal Architecture
- Baeldung: Hexagonal Architecture, DDD, and Spring - practical implementation in Spring Boot
OpenAPI and Code Generation:
If you are looking for invoicing software with an API designed for developers, check out our public API.
