From ETL to ELT to reverse ETL: The new data stack
Joey Lee
December 17, 2025
Modern data teams move information in very different ways than they did a decade ago. What used to be a linear pipeline has evolved into a flexible, cloud-native system where data flows continuously between tools and teams. At the center of this new data stack sits the cloud data warehouse, and increasingly, that warehouse is Snowflake.
To understand why Snowflake plays such a central role, it helps to look at how data movement has evolved from ETL to ELT and now to reverse ETL.
The original model: ETL
ETL stands for extract, transform, load. In this traditional model, data is pulled from source systems, transformed in a separate processing layer, and then loaded into a data warehouse.
This approach made sense in the era of on-premises systems. Warehouses had limited compute power, so heavy transformations had to happen before data was loaded. ETL pipelines were often custom-built, fragile, and slow to change.
As data volumes grew and new sources emerged, ETL became a bottleneck. Transformations were hard to update, and reprocessing historical data was expensive and time-consuming.
The shift to ELT in the cloud
Cloud data warehouses changed the economics of compute. With scalable, on-demand processing, it became practical to load raw data first and transform it inside the warehouse.
This approach is known as ELT. Data is extracted from source systems, loaded into the warehouse in its raw form, and transformed using SQL once it is stored centrally.
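As a rough sketch, an ELT flow in Snowflake lands the raw data first and transforms it afterward with SQL. The stage, table, and column names below are illustrative, not a prescribed layout:

```sql
-- Load raw files as-is into a landing table (stage and table names are illustrative).
COPY INTO raw.orders
  FROM @raw_stage/orders/
  FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1);

-- Transform inside the warehouse once the data is stored centrally.
CREATE OR REPLACE TABLE analytics.daily_revenue AS
SELECT
    order_date,
    SUM(order_amount)           AS total_revenue,
    COUNT(DISTINCT customer_id) AS paying_customers
FROM raw.orders
WHERE order_status = 'completed'
GROUP BY order_date;
```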
Snowflake is particularly well-suited to ELT because of its elastic compute and strong SQL performance. Teams can run transformations without worrying about resource contention or infrastructure limits.
Tools like dbt became the standard for managing transformations in this model. dbt allows teams to define transformations as version-controlled SQL models that run directly in Snowflake.
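A dbt model is simply a version-controlled SQL file that compiles and runs in Snowflake. The model, source, and column names in this sketch are hypothetical:

```sql
-- models/marts/customer_revenue.sql (model, ref, and column names are illustrative)
{{ config(materialized='table') }}

SELECT
    c.customer_id,
    c.email,
    SUM(o.order_amount) AS lifetime_revenue,
    MAX(o.order_date)   AS last_order_date
FROM {{ ref('stg_customers') }} AS c
LEFT JOIN {{ ref('stg_orders') }} AS o
    ON o.customer_id = c.customer_id
GROUP BY c.customer_id, c.email
```

When the project runs, dbt compiles the ref() calls into fully qualified Snowflake table names and executes the resulting SQL directly in the warehouse.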
Modern data ingestion with Fivetran
In the ELT world, ingestion tools play a critical role. Instead of building custom pipelines, teams rely on managed connectors to bring data into the warehouse.
Fivetran is one of the most widely used ingestion tools. It extracts data from applications like Salesforce, HubSpot, Google Ads, and many others, then loads it directly into Snowflake with minimal configuration.
Because Snowflake handles scale and performance, ingestion tools can focus on reliability and schema consistency rather than transformation logic.
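For example, a connector might replicate Salesforce objects into their own schema in Snowflake, untouched, and a common first check is simply that the raw data is landing and staying fresh. The schema, table, and metadata column names below follow typical Fivetran conventions but are assumptions rather than a guaranteed layout:

```sql
-- Raw data lands in connector-managed schemas; transformation happens downstream.
-- Schema, table, and metadata column names are illustrative.
SELECT
    COUNT(*)              AS replicated_accounts,
    MAX(_fivetran_synced) AS last_synced_at
FROM salesforce.account;
```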
Reverse ETL and data activation
For years, data movement flowed in one direction: from applications into the warehouse. Reverse ETL flips that model by sending curated data from the warehouse back into operational tools.
Reverse ETL tools like Hightouch and Census let teams define customer segments, metrics, or attributes in Snowflake and sync them to destinations such as CRMs, email service providers (ESPs), and ad platforms.
This enables use cases such as:
Activating customer segments in marketing tools
Keeping CRM fields up to date with analytics data
Powering personalization across channels
Hightouch and Census both position Snowflake as the system of record for business logic and customer data.
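As a sketch, the segment a reverse ETL tool syncs is often just a SQL query defined against the warehouse; the tool runs it on a schedule and maps its columns to fields in the destination. This example assumes the hypothetical customer_revenue model from earlier, and the thresholds are purely illustrative:

```sql
-- A "high-value customers" segment defined once in Snowflake (names and thresholds are illustrative).
-- A reverse ETL tool such as Hightouch or Census can run this query on a schedule
-- and sync the results to a CRM, ESP, or ad platform.
SELECT
    customer_id,
    email,
    lifetime_revenue,
    last_order_date
FROM analytics.customer_revenue
WHERE lifetime_revenue >= 1000
  AND last_order_date >= DATEADD(day, -90, CURRENT_DATE);
```

Because the segment lives in Snowflake rather than in any single marketing tool, every destination receives the same definition.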
Snowflake as the hub of the modern data stack
What ties ingestion, transformation, and activation together is the warehouse itself. Snowflake acts as the central hub where data is unified, transformed, and governed before being used downstream.
This hub-and-spoke model offers several advantages:
A single source of truth for analytics and activation
Consistent definitions across teams and tools
Strong governance and access control
Reduced data duplication and pipeline complexity
Because Snowflake supports high concurrency and independent workloads, analytics queries, transformations, and reverse ETL jobs can all run simultaneously without conflict.
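In practice, that isolation usually comes from giving each workload its own virtual warehouse (compute cluster) and granting access per role. The warehouse names, sizes, and roles below are illustrative:

```sql
-- Separate compute for each workload so jobs never compete for resources
-- (warehouse names, sizes, and roles are illustrative).
CREATE WAREHOUSE IF NOT EXISTS transform_wh  WAREHOUSE_SIZE = 'MEDIUM' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE;
CREATE WAREHOUSE IF NOT EXISTS bi_wh         WAREHOUSE_SIZE = 'SMALL'  AUTO_SUSPEND = 60 AUTO_RESUME = TRUE;
CREATE WAREHOUSE IF NOT EXISTS activation_wh WAREHOUSE_SIZE = 'XSMALL' AUTO_SUSPEND = 60 AUTO_RESUME = TRUE;

-- Scope each tool or team to its own compute.
GRANT USAGE ON WAREHOUSE transform_wh  TO ROLE transformer;
GRANT USAGE ON WAREHOUSE bi_wh         TO ROLE analyst;
GRANT USAGE ON WAREHOUSE activation_wh TO ROLE reverse_etl;
```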
Why this matters for marketing and analytics teams
For marketing ops and analytics leaders, the modern data stack changes how teams work. Business logic lives in Snowflake, not inside individual tools. Segments are defined once and reused everywhere. Data becomes more reliable and easier to audit.
Instead of stitching together reports across disconnected platforms, teams can build on a shared foundation. This leads to faster experimentation, better attribution, and more consistent customer experiences.
Snowflake’s role in this ecosystem is not just storage. It is coordination. It is the place where data becomes usable across the organization.
Looking ahead
The shift from ETL to ELT to reverse ETL reflects a broader trend toward composable data platforms. Instead of monolithic systems, teams assemble best-in-class tools around a powerful central warehouse.
Snowflake’s architecture and ecosystem make it uniquely suited to serve as that center. As data activation and AI-driven workflows continue to expand, the importance of the warehouse as a hub will only grow.
In the next post in this series, we will explore how CDPs, ESPs, and other marketing platforms integrate with Snowflake to turn data into action.