
How We Merged Two Large-Scale AWS Organizations
Thiago Vacare · August 21, 2025 · 13 min read
Migrating 110 AWS accounts is a major challenge. Doing it with zero downtime across multiple business units is the kind of project that gets a platform engineer’s pulse racing. That’s exactly what we faced at AutoScout24 when we needed to move all of Trader’s AWS accounts into our custom landing zone.
Unifying Clouds: The AutoScout24 and Trader Story
This project was a massive undertaking. Here’s a glance at the scope of what we accomplished before we dive into the details:
- 110 AWS accounts migrated across 4 business units
- 4-month timeline from planning to completion
- Zero downtime achieved across all services
- 3 major platforms involved: AutoSync, DealerTrack, and CMS
- Multiple regions with a special focus on Canada-specific requirements
- 0 security incidents during the migration process
AutoScout24
AutoScout24 stands as the largest pan-European online car market, featuring:
- Over 2 million vehicle offers
- Around 30 million users per month
- Over 800 employees
- Operations across Germany, Belgium, Luxembourg, the Netherlands, Italy, France, Austria, Norway, Denmark, Poland, and Sweden
Trader
Trader is the largest automotive marketplace in Canada, having acquired businesses such as DealerTrack, AutoSync, and CMS:
- 25.7 million visits monthly
- Over 1000 employees
- Multiple specialized automotive platforms
The Migration Roadmap

The migration encompassed 110 accounts across different business areas and environments, with a carefully planned timeline. Effective communication was crucial, so we integrated our communication strategy directly into our roadmap.
Timeline Highlights
- March: Initial alignment meetings with Trader, AWS, and AutoScout24 stakeholders
- End of April: AutoSync & Trader Core migrated (50 accounts)
- End of May: DealerTrack migrated (40 accounts)
- End of July: CMS migrated (20 accounts)
The result? Zero downtime and no disruption to developer workflows.
The Communication Playbook
- Stakeholder Identification: We identified key stakeholders across all involved parties:
  - Trader Engineers
  - AutoScout24 Platform team
  - Our AWS Account Team
  - Security
  - Finance
- Expectation Management: We ensured clear communication covered all critical aspects:
  - The impact assessment for each migration phase
  - Upcoming changes to access
  - Security requirements and policies
- Centralized Channels: We established multiple channels for support and documentation:
  - Dedicated Slack channels for support
  - Comprehensive documentation on Confluence and our Platform Docs
  - Clearly defined escalation paths for any issues
The Engineering Toolkit

The migration involved merging two different philosophies of infrastructure management.
Trader’s Setup:
- AWS Control Tower: A service that provides the easiest way to set up and govern a new, secure, multi-account AWS environment.
AS24’s Setup:
- Terraform: An IaC tool for building, changing, and versioning cloud resources. We use it specifically for managing our AWS Organizations, SCPs, accounts, and StackSets.
- AWS StackSets: A feature to deploy CloudFormation stacks across multiple accounts and regions.
- AWS CDK: A framework for defining cloud resources using familiar programming languages. We use it to manage roles and permissions for our AWS accounts based on Active Directory groups.
Putting the Plan into Action

First, it’s important to state a critical fact: AWS doesn’t offer an automated tool to simply move an account from one Organization to another. This isn’t a simple switch. Every step must be properly thought out and carefully executed to avoid breaking dependencies or causing downtime.
Our approach was to perform a deep assessment to map out all cross-account connections, networking, and governance policies before making any moves.
Pre-Migration Intelligence Gathering
The main risk during this migration wasn’t that resources would be deleted, but that a service could stop working due to a hardcoded reference to the old organization or a dependency on a shared resource. We used a series of AWS CLI commands to hunt for these potential issues.
Here are some of the key commands we used:
1. Check for roles referenced across multiple accounts: This helped us identify IAM roles that might have trust policies tied to the old organization’s ID.
```shell
# List roles that might be assumed by other accounts, i.e. roles whose
# trust policy names an AWS-account principal
aws iam list-roles \
  --query "Roles[?AssumeRolePolicyDocument.Statement[?Principal.AWS != null]].[RoleName, Arn]" \
  --output table
```
2. Audit existing Service Control Policies (SCPs): We needed to understand what, if any, guardrails were in place.
```shell
# List all SCPs in the current organization
aws organizations list-policies --filter SERVICE_CONTROL_POLICY

# Check which accounts an SCP is attached to
aws organizations list-targets-for-policy --policy-id <scp-id>
```
3. Identify network connections: This was crucial for mapping out dependencies between VPCs and on-premises networks.
```shell
# Find active VPC peering connections
aws ec2 describe-vpc-peering-connections

# Check for Transit Gateway attachments and configurations
aws ec2 describe-transit-gateway-attachments
aws ec2 describe-transit-gateways
```
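The trust-policy hunt in particular is easy to automate. The sketch below (the organization ID is a hypothetical placeholder, not Trader's real one) scans every IAM role in an account for a hardcoded reference to the old organization:

```python
import json

# Hypothetical org ID -- the real Trader organization ID is not in the post.
OLD_ORG_ID = "o-traderexample"

def references_org_id(policy_doc: dict, org_id: str) -> bool:
    """True if a policy document mentions the given organization ID anywhere,
    e.g. in an aws:PrincipalOrgID condition."""
    return org_id in json.dumps(policy_doc)

def find_org_bound_roles(org_id: str) -> list:
    """Scan every IAM role's trust policy in the current account for
    references to org_id. Needs AWS credentials when actually run."""
    import boto3  # imported lazily so the helper above stays dependency-free
    iam = boto3.client("iam")
    flagged = []
    for page in iam.get_paginator("list_roles").paginate():
        for role in page["Roles"]:
            if references_org_id(role["AssumeRolePolicyDocument"], org_id):
                flagged.append(role["RoleName"])
    return flagged
```

Running `find_org_bound_roles(OLD_ORG_ID)` in each account before the move surfaces trust policies that would silently break once the account leaves the old organization.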
A Key Finding: Unenrolling from Control Tower
One of our biggest initial concerns was what would happen when an account was unenrolled from the Trader Control Tower environment. After testing with two non-production accounts, we confirmed that unenrolling an account from Control Tower does not remove any resources from the account. All the VPCs, EC2 instances, and S3 buckets remain. The only change is that the account is no longer managed by Control Tower’s governance.
This was a huge relief, but it came with a new task: for any resources you want to manage with an IaC tool like Terraform or CDK going forward, you must import them into your tool’s state.
Avoiding IP Address Collisions
AutoScout24 uses a shared VPC model across our organization, so we had to be absolutely sure that no IP ranges in the Trader accounts would conflict with our existing network CIDRs. To automate this check, we used a Python script with boto3 to scan every VPC and subnet CIDR in every region of the Trader accounts and compare them against our reserved IP ranges.
```python
import boto3
from ipaddress import ip_network, IPv4Network, IPv6Network

# Define your organization's primary network ranges to check against.
ORG_RANGES = [ip_network("10.0.0.0/8", strict=False), ip_network("172.16.0.0/12", strict=False)]

def _to_net(x):
    return x if isinstance(x, (IPv4Network, IPv6Network)) else ip_network(x, strict=False)

def overlaps(a, b):
    return _to_net(a).overlaps(_to_net(b))

def main():
    session = boto3.Session()
    regions = session.get_available_regions("ec2")
    conflicts = []
    for region in regions:
        ec2 = session.client("ec2", region_name=region)
        try:
            # Check all VPC CIDRs in the region
            vpcs = ec2.describe_vpcs()["Vpcs"]
            for vpc in vpcs:
                for assoc in vpc.get("CidrBlockAssociationSet", []):
                    cidr = assoc.get("CidrBlock")
                    if cidr:
                        for org_range in ORG_RANGES:
                            if overlaps(cidr, org_range):
                                conflicts.append((region, vpc["VpcId"], cidr, f"conflicts with {org_range}"))
        except Exception as e:
            print(f"[{region}] skipped due to error: {e}")
    if conflicts:
        print("Found overlapping CIDRs:")
        for c in conflicts:
            print(f"  - Region: {c[0]}, VPC: {c[1]}, CIDR: {c[2]}, Details: {c[3]}")
    else:
        print("No network conflicts found.")

if __name__ == "__main__":
    main()
```
Automating Governance at Scale

Once an account is in our organization, it needs to be governed. We manage our entire AWS Organization structure—from OUs to security policies—using Terraform. This “governance-as-code” approach gives us incredible flexibility and makes collaboration seamless.
Our security teams don’t just write documents; they can create Pull Requests (PRs) with SCP changes that we can review and apply in a controlled, automated way.
Defining the Organization Structure in Code
First, we define our company’s structure as a series of Organizational Units (OUs) in Terraform. This makes it easy to see and manage where accounts live.
```hcl
# The root of our entire AWS Organization
resource "aws_organizations_organization" "autoscout24" {
  aws_service_access_principals = [...]
  enabled_policy_types          = [...]
}

# Top-level OUs for each major Business Unit
resource "aws_organizations_organizational_unit" "trader-org-unit" {
  name      = "Trader"
  parent_id = aws_organizations_organization.autoscout24.roots[0].id
}

# We can also create nested OUs for more granular control
resource "aws_organizations_organizational_unit" "trader-autosync-org-unit" {
  name      = "AutoSync"
  parent_id = aws_organizations_organizational_unit.trader-org-unit.id
}
```
Solving the 5-SCP Limit
A common challenge with AWS Organizations is the limit of five Service Control Policies (SCPs) per OU. To work around this, we group multiple logical policies into a single `combined-preventive-controls` policy in Terraform.
```hcl
# We source multiple policy documents...
data "aws_iam_policy_document" "combined-preventive-controls-policy" {
  source_policy_documents = [
    data.aws_iam_policy_document.core-preventive-controls-policy.json,
    data.aws_iam_policy_document.advanced-preventive-controls-policy.json,
  ]
}

# ...and create a single AWS Organizations policy from the combined content.
resource "aws_organizations_policy" "combined-preventive-controls" {
  name    = "combined-preventive-controls"
  content = data.aws_iam_policy_document.combined-preventive-controls-policy.minified_json
}
```
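Under the hood, merging policy documents is mostly a matter of concatenating their Statement arrays. A minimal Python sketch of what `source_policy_documents` produces (the policy fragments here are illustrative, not our real SCPs):

```python
import json

def combine_policies(*policy_docs: dict) -> dict:
    """Concatenate the Statement lists of several policy documents into one,
    which is essentially what Terraform's source_policy_documents does
    (minus its Sid handling)."""
    combined = {"Version": "2012-10-17", "Statement": []}
    for doc in policy_docs:
        statements = doc.get("Statement", [])
        if isinstance(statements, dict):  # a lone statement may be an object
            statements = [statements]
        combined["Statement"].extend(statements)
    return combined

# Illustrative policy fragments, not our real SCPs.
core = {"Version": "2012-10-17", "Statement": [
    {"Sid": "DenyLeaveOrg", "Effect": "Deny",
     "Action": "organizations:LeaveOrganization", "Resource": "*"}]}
advanced = {"Version": "2012-10-17", "Statement": [
    {"Sid": "DenyRootActions", "Effect": "Deny", "Action": "*", "Resource": "*",
     "Condition": {"StringLike": {"aws:PrincipalArn": "arn:aws:iam::*:root"}}}]}

merged = combine_policies(core, advanced)
# Minify, as minified_json does -- SCPs also have a 5,120-character size cap,
# so every stripped whitespace character counts.
print(json.dumps(merged, separators=(",", ":")))
```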
Knowing the power of these SCPs, we implemented a phased rollout, waiting 1-2 weeks after the accounts joined our organization before attaching the SCPs. This was a crucial safety net that prevented unnecessary troubleshooting and delays.
Zero-Touch Permissions with AWS CDK
Beyond the high-level guardrails of SCPs, we use the AWS CDK to run a fully automated permissions management system. Our CDK application connects our corporate identity provider with AWS through IAM Identity Center, allowing us to manage access centrally.
We provision a combination of standard roles (like `AdministratorAccess`) and custom, least-privilege roles for teams with unique requirements. The entire process is automated:
- A Platform team member adds a new account’s details to a central `accounts.yaml` file.
- A pull request is created and approved.
- Once merged, our CDK pipeline automatically provisions all necessary roles and permission sets.
The result is a zero-touch process. Engineers automatically have the correct level of access the moment a new account is created, saving an incredible amount of time.
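Conceptually, the pipeline expands each account entry into group-to-permission-set assignments. The sketch below shows that expansion step; the `accounts.yaml` schema and group names are hypothetical, since the post doesn't show the real file format:

```python
# As would be parsed from a hypothetical accounts.yaml -- schema and group
# names are illustrative, not the real file format.
ACCOUNTS = [
    {
        "id": "111111111111",
        "name": "autosync-prod",
        "ad_groups": {
            "aws-autosync-admins": "AdministratorAccess",
            "aws-autosync-devs": "ReadOnlyAccess",
        },
    },
]

def plan_assignments(accounts):
    """Expand account entries into (account_id, ad_group, permission_set)
    tuples that an Identity Center pipeline would provision."""
    return [
        (acct["id"], group, permission_set)
        for acct in accounts
        for group, permission_set in acct["ad_groups"].items()
    ]

for assignment in plan_assignments(ACCOUNTS):
    print(assignment)
```

In the real system, each tuple becomes an IAM Identity Center account assignment created by the CDK app; the point is that the YAML file is the single source of truth and no human ever clicks through the console.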
More Than Just Tech
A migration’s success isn’t just measured in uptime; it’s also measured in how well you integrate the financial and security operations of the two organizations.
FinOps: Managing the Money
Merging the billing and cost management of two large cloud environments is a major challenge. Our primary focus was on Savings Plans (SPs) and Reserved Instances (RIs).
FinOps Pro Tip: The Purchase Freeze
We froze all new Savings Plan and RI purchases two weeks before migration. Why? These commitments are tied to the management (payer) account of an AWS Organization. If we hadn’t paused, any commitments bought in the Trader ORG right before the move would have been “orphaned”—stuck in the old organization and unable to apply to usage in the new one. This would have been a costly mistake.
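The freeze rule itself is simple enough to encode as a check in a FinOps review script. A minimal sketch, with an illustrative migration date (not one of the post's actual dates):

```python
from datetime import date, timedelta

FREEZE_WEEKS = 2  # we froze purchases two weeks before migration

def violates_freeze(purchase_date: date, migration_date: date,
                    freeze_weeks: int = FREEZE_WEEKS) -> bool:
    """True if a Savings Plan/RI purchase falls inside the pre-migration
    freeze window and would risk being orphaned in the old payer account."""
    freeze_start = migration_date - timedelta(weeks=freeze_weeks)
    return freeze_start <= purchase_date < migration_date

# Illustrative migration date, not from the post.
migration = date(2025, 4, 30)
print(violates_freeze(date(2025, 4, 20), migration))  # inside the window
print(violates_freeze(date(2025, 3, 1), migration))   # well before it
```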
Furthermore, by consolidating our total cloud spend under a single, larger AWS organization, we’ve increased our negotiating power. This positions us for better enterprise agreements and unlocks greater volume discounts, leading to significant long-term cost savings for the entire company.
Security: Centralizing our Defenses
A huge win from this project was bringing all 110 Trader accounts under our centralized security umbrella. This meant we could enforce unified threat detection with Amazon GuardDuty, automated vulnerability scanning with Amazon Inspector, and consistent backup policies with AWS Backup across all accounts.
How We Did It: We used CloudFormation StackSets, managed through Terraform, to automatically deploy the necessary configurations to every new account. This let us enforce a standard baseline of resources that every account needs, such as certificates and the integrations our security scanners depend on.
From Migration to Operation
With the technical migration complete, our focus shifted to the people. A successful project isn’t finished until the teams are comfortable and productive in their new environment.
Our goal was enablement, not just a handover.
We updated our Platform Docs with comprehensive onboarding material specifically for the Trader teams and streamlined all support and permission requests through our central Platform Slack channels. Our Platform Advocates led hands-on workshops and knowledge transfer sessions to ensure a smooth transition.
A key technical step was integrating the Trader teams into our Okta SSO for unified, secure access to all their accounts. On the financial side, we adjusted budgets, cost alerts, and allocation tags to give them full visibility in our FinOps tooling. This ensured every team felt supported and empowered in the new, unified environment from day one.
Wins and Challenges
No project of this scale is perfect, but it’s important to celebrate the victories and learn from the hurdles.
Looking back, a few things went even better than planned:
- Zero Downtime: Our top priority was met thanks to rigorous pre-migration testing, sandbox validation, and a gradual, account-by-account cutover instead of a risky “big bang” approach.
- Constant Alignment: Our proactive communication strategy was built on weekly touchpoints with the main stakeholders. This, combined with our dedicated Slack channels, ensured everyone was aligned and prevented any escalations due to miscommunication.
- Reusable Playbooks: We didn’t just complete a migration; we created a blueprint. We now have a battle-tested set of runbooks and automated scripts that will significantly accelerate any future acquisitions or large-scale migrations.
We also identified a few areas where we could improve our process for the future:
- Automate Discovery: Manually identifying delegated administrators for services like GuardDuty was slow and delayed our initial timeline. Next time, we’ll use an automated script during the initial assessment phase to find these delegations upfront.
- Deployment Speed: Deploying large AWS StackSets to dozens of accounts and regions at once can be slow. However, since we migrated in batches, this did not become a major bottleneck for the project.
- Start FinOps Earlier: The complexity of analyzing and reallocating existing Savings Plans was greater than we anticipated. Our key recommendation is to start the financial analysis at least 8 weeks before the migration and implement a purchase freeze 4 weeks prior.
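The delegated-administrator discovery we wished we had automated is a short script against the Organizations API. A sketch, to be run from the management account:

```python
def summarize(delegations: dict) -> list:
    """Render {account_id: [service_principal, ...]} as sorted report lines."""
    return [f"{acct}: {', '.join(sorted(svcs))}"
            for acct, svcs in sorted(delegations.items())]

def list_delegated_admins() -> dict:
    """Enumerate delegated administrators and the services delegated to
    each, via the Organizations API. Needs management-account credentials
    when actually run."""
    import boto3  # imported lazily so summarize() stays dependency-free
    org = boto3.client("organizations")
    delegations = {}
    for page in org.get_paginator("list_delegated_administrators").paginate():
        for admin in page["DelegatedAdministrators"]:
            pages = org.get_paginator(
                "list_delegated_services_for_account"
            ).paginate(AccountId=admin["Id"])
            delegations[admin["Id"]] = [
                svc["ServicePrincipal"]
                for svc_page in pages
                for svc in svc_page["DelegatedServices"]
            ]
    return delegations
```

Printing `summarize(list_delegated_admins())` during the assessment phase would have surfaced the GuardDuty delegation up front instead of mid-migration.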
Our Wishlist for AWS
This migration gave us a unique, hands-on perspective at scale. While the existing AWS tools are powerful, we identified a few areas where improvements could make large-scale migrations like this much smoother for all customers.
1. A Native Migration Tool for Organizations
Currently, there’s no AWS-native tool for orchestrating a batch account migration between Organizations. We had to build custom scripts for assessment, validation, and tracking. A “Migration Hub for Organizations” with pre-flight checks, automated workflows, and rollback capabilities would be a game-changer.
2. Higher Service Control Policy (SCP) Limits
The limit of 5 SCPs per OU forced us to consolidate multiple logical policies into single, complex JSON documents. This makes them harder to read, maintain, and audit. A higher limit would allow for more modular and manageable governance-as-code.
3. Centralized Region Management
Enabling a new AWS region must be done on a per-account basis. For us, enabling the new Calgary region meant we had to deploy a role to over 100 accounts and then create a script that assumed that role to enable the region for each account. An organization-level control to enable (or disable) regions for specific OUs would save immense operational effort.
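That per-account script boils down to an assume-role loop around the Account Management API's region opt-in call. A sketch, with a hypothetical role name (whatever you pre-deployed via StackSets):

```python
def role_arn(account_id: str, role_name: str) -> str:
    """ARN of the pre-deployed enablement role in a member account."""
    return f"arn:aws:iam::{account_id}:role/{role_name}"

def enable_region_everywhere(account_ids, role_name, region_name="ca-west-1"):
    """Assume the pre-deployed role in each account and opt in to the
    region via the Account Management API's EnableRegion call.
    ca-west-1 is the Calgary region; role_name is hypothetical."""
    import boto3  # lazy import; requires credentials when actually run
    sts = boto3.client("sts")
    for account_id in account_ids:
        creds = sts.assume_role(
            RoleArn=role_arn(account_id, role_name),
            RoleSessionName="enable-region",
        )["Credentials"]
        member = boto3.client(
            "account",
            aws_access_key_id=creds["AccessKeyId"],
            aws_secret_access_key=creds["SecretAccessKey"],
            aws_session_token=creds["SessionToken"],
        )
        member.enable_region(RegionName=region_name)
```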
Key Takeaways for Your Journey

For any organization planning a similar migration, here are our top recommendations, based on what went right and what we’d do differently next time.
1. Over-Communicate Everything. Start stakeholder alignment months before any technical work begins. During the active migration, provide daily updates. After the migration, check in with teams consistently. You cannot communicate too much. Our proactive communication strategy, built on weekly touchpoints and dedicated Slack channels, was a huge win and prevented any escalations.
2. Assess, Test, and Plan to Fail. Spend weeks on a deep technical assessment of every account. Test every migration step in a non-production environment. Most importantly, have a tested and practiced rollback procedure for every single phase. Our zero-downtime success was thanks to this rigorous testing and a gradual, account-by-account cutover.
- Lesson Learned: Automate as much of the discovery phase as possible. Manually identifying delegated administrators for services like GuardDuty was slow. Next time, we’ll use a script to find these upfront.
3. Avoid Costly Financial Surprises. Don’t treat cost management as an afterthought. A clean financial cutover is as important as a technical one.
- Lesson Learned: The complexity of analyzing and reallocating existing Savings Plans was greater than we anticipated. Start the financial analysis at least 8 weeks before the migration and implement a purchase freeze 4 weeks prior.
4. Focus on Enablement, Not Just Handover. The project isn’t done when the accounts are moved. Success is when the teams are productive and comfortable in their new environment. Proactively conduct workshops, provide dedicated support channels, and update your documentation before they even have to ask. We didn’t just complete a migration; we created reusable playbooks and automated scripts that will accelerate any future integrations.
Driving Forward, Together

Migrating 110 AWS accounts with zero downtime wasn’t just a technical achievement—it was a testament to collaboration, planning, and relentless automation. We didn’t just move accounts; we integrated teams, processes, and cultures.
This project was never just about changing an organization’s structure. It was about enabling our combined company to innovate faster and more securely. The unified AWS organization now provides:
- Streamlined governance that scales with our growth.
- Enhanced security through centralized monitoring and policies.
- Greater cost optimization opportunities across all business units.
The patterns and processes we developed are now reusable assets that will accelerate future integrations. We’ve proven that with the right approach, large-scale infrastructure migrations can be executed with precision, minimal risk, and zero disruption.
Migration complete. Let’s drive forward.