Free Practice Questions for Snowflake DEA-C02 Certification
Study with 627 exam-style practice questions designed to help you prepare for the Snowflake SnowPro Advanced: Data Engineer (DEA-C02) exam. All questions are aligned with the latest exam guide and include detailed explanations to help you master the material.
Start Practicing
Random Questions
Practice with randomly mixed questions from all topics
Domain Mode
Practice questions from a specific topic area
Quiz History
Exam Details
Key information about Snowflake SnowPro Advanced: Data Engineer (DEA-C02)
- Format: Multiple choice
- Level: Advanced
- Recertification: Through the Snowflake Continuing Education (CE) program (eligible ILT courses or a higher-level certification)
- Exam guide updated: August 22, 2025
- Prerequisite: Active SnowPro Core Certified credential
- Intended audience: Data Engineers and Software Engineers with 2+ years of data engineering experience, including practical Snowflake usage, RESTful APIs, SQL, semi-structured data, and cloud-native concepts
- Certification validity: 2 years
Exam Topics & Skills Assessed
Skills measured (from the official study guide)
Domain 1: Data Movement
Subdomain 1.1: Given a data set, load data into Snowflake.
- Outline considerations for data loading
- Define data loading features and potential impacts
Subdomain 1.2: Ingest data of various formats through the mechanics of Snowflake.
- Required file formats
- Schema detection using INFER_SCHEMA for table design and data analysis
- Ingestion of structured, semi-structured, and unstructured data
- Implementation of stages and file formats
- Manage storage integration configurations
- Manage encryption (pre-signed URLs, server-side, or client-side)
- Manage compression and parsing strategies
- Extract metadata from staged files
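As a sketch of the schema-detection item above, INFER_SCHEMA can both inspect staged files and drive table creation; the stage, file format, and table names here are hypothetical:

```sql
-- Hypothetical file format and stage
CREATE OR REPLACE FILE FORMAT my_parquet_ff TYPE = PARQUET;

-- Inspect the column names and types detected in the staged files
SELECT *
FROM TABLE(
  INFER_SCHEMA(
    LOCATION    => '@my_stage/events/',
    FILE_FORMAT => 'my_parquet_ff'));

-- Create the table directly from the detected schema
CREATE TABLE events
  USING TEMPLATE (
    SELECT ARRAY_AGG(OBJECT_CONSTRUCT(*))
    FROM TABLE(
      INFER_SCHEMA(
        LOCATION    => '@my_stage/events/',
        FILE_FORMAT => 'my_parquet_ff')));
```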
Subdomain 1.3: Troubleshoot data ingestion.
- Identify causes of ingestion errors
- Determine resolutions for ingestion errors
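A minimal troubleshooting sketch for the items above, assuming a hypothetical `orders` table and stage: validate before loading, inspect errors after, and choose an error-handling mode.

```sql
-- Dry run: report parsing errors without loading any rows
COPY INTO orders
  FROM @my_stage/orders/
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
  VALIDATION_MODE = RETURN_ERRORS;

-- Inspect errors from the most recent COPY into this table
SELECT * FROM TABLE(VALIDATE(orders, JOB_ID => '_last'));

-- Load while skipping bad rows instead of failing the whole file
COPY INTO orders
  FROM @my_stage/orders/
  FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
  ON_ERROR = CONTINUE;
```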
Subdomain 1.4: Design, build, and troubleshoot continuous data pipelines.
- Stages
- Tasks
- Streams
- Dynamic tables
- Materialized views
- Snowpipe (for example, Auto Ingest compared to the REST API)
- Snowpipe Streaming
  - Snowpipe Streaming compared to the Kafka connector
- Create User-Defined Functions (UDFs)
- Design and use the Snowflake SQL API
- Openflow
- Use Notebooks to run pipelines of stored procedures for data ingestion tasks
- Use Snowflake scripting to develop and automate pipelines
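The stream-plus-task pattern above can be sketched as follows; table, warehouse, and task names are hypothetical:

```sql
-- Capture changes on a source table
CREATE OR REPLACE STREAM raw_orders_stream ON TABLE raw_orders;

-- Task that runs every 5 minutes, but only when the stream has data
CREATE OR REPLACE TASK merge_orders
  WAREHOUSE = etl_wh
  SCHEDULE  = '5 MINUTE'
  WHEN SYSTEM$STREAM_HAS_DATA('RAW_ORDERS_STREAM')
AS
  INSERT INTO orders_clean
  SELECT id, amount, updated_at
  FROM raw_orders_stream
  WHERE METADATA$ACTION = 'INSERT';

-- Tasks are created suspended and must be resumed explicitly
ALTER TASK merge_orders RESUME;
```

Consuming the stream inside the task's DML advances the stream offset, so each change is processed exactly once.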
Subdomain 1.5: Install, configure, and use connectors for Snowflake integration.
- Kafka connectors
- Spark connectors
- Python connectors
- Native connectors
Subdomain 1.6: Design and build data sharing and data consumption solutions.
- Evaluate the use of a data share or a clone
- Implement a data share
- Manage auto-fulfillment
- Create and manage views
- Implement row-level filtering
- Share data using the Snowflake Marketplace
- Share data using a listing
- Use Streamlit to build data applications and interfaces for data consumption
- Create interactive dashboards for data exploration and sharing
- Build self-service data access applications
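A basic data-share sketch combining shares with a secure view, as listed above; database, view, and consumer account names are placeholders:

```sql
CREATE SHARE sales_share;
GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;

-- A secure view limits consumers to non-sensitive columns
CREATE SECURE VIEW sales_db.public.orders_v AS
  SELECT order_id, region, amount
  FROM sales_db.public.orders;
GRANT SELECT ON VIEW sales_db.public.orders_v TO SHARE sales_share;

-- Hypothetical consumer account in org_name.account_name form
ALTER SHARE sales_share ADD ACCOUNTS = my_org.consumer_acct;
```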
Subdomain 1.7: Manage different types of tables and data operations.
- Manage external tables
- Manage Iceberg tables
- Manage hybrid tables
- Perform general table management
- Use Horizon Catalog to federate data from external catalogs
- Manage schema evolution
- Unload data
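To contrast two of the table types above, here is a sketch of an external table and a Snowflake-managed Iceberg table; the stage and external volume names are hypothetical:

```sql
-- External table over Parquet files already in cloud storage
CREATE OR REPLACE EXTERNAL TABLE ext_events
  WITH LOCATION = @ext_stage/events/
  FILE_FORMAT = (TYPE = PARQUET)
  AUTO_REFRESH = TRUE;

-- Snowflake-managed Iceberg table on an external volume
CREATE ICEBERG TABLE iceberg_events (id NUMBER, payload STRING)
  CATALOG = 'SNOWFLAKE'
  EXTERNAL_VOLUME = 'my_ext_vol'
  BASE_LOCATION = 'events/';
```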
Domain 2: Performance Optimization
Subdomain 2.1: Troubleshoot underperforming queries.
- Identify underperforming queries
- Outline telemetry around the operation
- Identify the root cause
- Increase efficiency
Subdomain 2.2: Given a scenario, configure a solution for optimal performance.
- Scale out compared to scale up
- Virtual warehouse properties (for example, size, multi-cluster)
  - Snowpark-optimized virtual warehouses
- Query complexity
- Micro-partitions and the impact of clustering
- Materialized views
- Search optimization service
- Query acceleration service
- Caching features
- Use the ACCOUNT_USAGE schema
- Use warehouse metrics (such as warehouse queues) and configurations:
  - Resource monitors
  - Warehouse constraints on credit consumption
- Balance optimization with credit consumption considerations
- Optimize storage configurations and costs
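Several of the levers above map to short DDL statements. A sketch, with hypothetical warehouse and table names:

```sql
-- Scale up: a larger warehouse for complex single queries
ALTER WAREHOUSE etl_wh SET WAREHOUSE_SIZE = 'LARGE';

-- Scale out: multi-cluster to absorb concurrency spikes
ALTER WAREHOUSE bi_wh SET
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4
  SCALING_POLICY = 'STANDARD';

-- Speed up selective point lookups on a large table
ALTER TABLE events ADD SEARCH OPTIMIZATION ON EQUALITY(user_id);

-- Cap credit consumption with a resource monitor
CREATE RESOURCE MONITOR monthly_cap WITH CREDIT_QUOTA = 100
  TRIGGERS ON 90 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;
ALTER WAREHOUSE bi_wh SET RESOURCE_MONITOR = monthly_cap;
```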
Subdomain 2.3: Monitor continuous data pipelines.
- Snowflake objects
- Tasks
- Snowsight task history dashboards
- Streams
- Snowpipe Streaming
- Alerts
- Dynamic Tables
- Notifications
- Data quality and data metric function monitoring
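Pipeline monitoring often starts with the INFORMATION_SCHEMA table functions; a sketch against a hypothetical `ORDERS` target table:

```sql
-- Recent task runs, including failures and their error messages
SELECT name, state, error_message, scheduled_time
FROM TABLE(INFORMATION_SCHEMA.TASK_HISTORY())
ORDER BY scheduled_time DESC
LIMIT 20;

-- Snowpipe / COPY load activity for the last hour
SELECT *
FROM TABLE(INFORMATION_SCHEMA.COPY_HISTORY(
  TABLE_NAME => 'ORDERS',
  START_TIME => DATEADD('hour', -1, CURRENT_TIMESTAMP())));
```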
Domain 3: Storage & Data Protection
Subdomain 3.1: Implement and manage data recovery features in Snowflake.
- Time Travel
- Impact of streams
- Fail-safe
- Cross-region and cross-cloud replication
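Time Travel in practice is a small set of query clauses and commands; a sketch on a hypothetical `orders` table (the query ID is a placeholder):

```sql
-- Query the table as of 30 minutes ago
SELECT * FROM orders AT(OFFSET => -1800);

-- Query the table as it was before a specific statement ran
SELECT * FROM orders BEFORE(STATEMENT => '<query_id>');

-- Restore a dropped table within its Time Travel retention window
UNDROP TABLE orders;
```

Fail-safe, by contrast, is not queryable: it is a recovery period accessible only through Snowflake support after Time Travel retention expires.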
Subdomain 3.2: Use system functions to analyze micro-partitions.
- Clustering depth
- Cluster keys
- Automatic Clustering features and optimizations
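The system functions above can be sketched as follows, assuming a hypothetical `events` table:

```sql
-- Detailed micro-partition overlap statistics for candidate columns
SELECT SYSTEM$CLUSTERING_INFORMATION('events', '(event_date, user_id)');

-- Average clustering depth (lower means better-clustered)
SELECT SYSTEM$CLUSTERING_DEPTH('events', '(event_date)');

-- Define a clustering key; Automatic Clustering then maintains it
ALTER TABLE events CLUSTER BY (event_date, user_id);
```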
Subdomain 3.3: Use Time Travel and cloning to create new development environments.
- Clone objects
- Permission inheritance
- Validate changes before promoting
- Roll back changes
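Combining cloning with Time Travel gives a point-in-time development environment, as a sketch with hypothetical database names:

```sql
-- Zero-copy clone of production as it existed one hour ago
CREATE DATABASE dev_db CLONE prod_db AT(OFFSET => -3600);

-- Validate changes in dev_db, then discard it when done
DROP DATABASE dev_db;
```

Because the clone shares the source's micro-partitions, it consumes additional storage only for data that subsequently diverges.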
Domain 4: Data Governance
Subdomain 4.1: Monitor data.
- Apply object tagging and classifications
- Use data classification to monitor data
- Manage data lineage and object dependencies
- Monitor data quality
Subdomain 4.2: Establish and maintain data protection.
- Use Horizon Catalog to share and federate data outside of Snowflake
- Implement column-level security
  - Use in conjunction with Dynamic Data Masking
  - Use in conjunction with external tokenization
- Use projection policies
- Use data masking with Role-Based Access Control (RBAC) to secure sensitive data
- Explain the options available to support row-level security using Snowflake row access policies
- Use aggregation policies
- Use DDL to manage Dynamic Data Masking and row access policies
- Use Snowflake Data Clean Rooms to share data
  - Clean room UI
  - Snowflake developer APIs
- Use best practices to create and apply data masking policies
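The masking and row access policy DDL above can be sketched as follows; table, role, and mapping-table names are hypothetical:

```sql
-- Column-level security: mask emails except for a privileged role
CREATE OR REPLACE MASKING POLICY mask_email AS (val STRING)
RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('ANALYST_FULL') THEN val
    ELSE REGEXP_REPLACE(val, '.+@', '*****@')
  END;

ALTER TABLE customers MODIFY COLUMN email
  SET MASKING POLICY mask_email;

-- Row-level security: restrict rows via a role-to-region mapping table
CREATE OR REPLACE ROW ACCESS POLICY region_rap AS (region STRING)
RETURNS BOOLEAN ->
  CURRENT_ROLE() = 'ADMIN'
  OR EXISTS (SELECT 1 FROM region_map m
             WHERE m.role_name = CURRENT_ROLE()
               AND m.region = region);

ALTER TABLE sales ADD ROW ACCESS POLICY region_rap ON (region);
```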
Domain 5: Data Transformation
Subdomain 5.1: Define User-Defined Functions (UDFs) and outline how to use them.
- Snowpark UDFs (for example, Java, Python, Scala)
- Secure UDFs
- SQL UDFs
- JavaScript UDFs
- User-Defined Table Functions (UDTFs)
- User-Defined Aggregate Functions (UDAFs)
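To contrast two of the UDF flavors above, a sketch of a SQL UDF and an inline Python UDF (function names are hypothetical):

```sql
-- SQL UDF: a simple expression
CREATE OR REPLACE FUNCTION area(r FLOAT)
  RETURNS FLOAT
  AS 'PI() * r * r';

-- Python UDF: inline handler code
CREATE OR REPLACE FUNCTION normalize(s STRING)
  RETURNS STRING
  LANGUAGE PYTHON
  RUNTIME_VERSION = '3.10'
  HANDLER = 'normalize'
AS
$$
def normalize(s):
    # Trim and lowercase; pass NULLs through unchanged
    return s.strip().lower() if s else s
$$;
```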
Subdomain 5.2: Define and create external functions.
- Secure external functions
- Work with external functions
Subdomain 5.3: Design, build, and leverage stored procedures.
- Snowpark stored procedures
- SQL Scripting stored procedures
- JavaScript stored procedures
- Transaction management
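A SQL Scripting stored procedure sketch covering variables and exception handling; the tables and procedure name are hypothetical:

```sql
CREATE OR REPLACE PROCEDURE archive_old_orders(days INTEGER)
RETURNS STRING
LANGUAGE SQL
AS
$$
DECLARE
  moved INTEGER DEFAULT 0;
BEGIN
  -- Copy aged rows to the archive, then remove them from the source
  INSERT INTO orders_archive
    SELECT * FROM orders
    WHERE order_date < DATEADD('day', -:days, CURRENT_DATE());
  moved := SQLROWCOUNT;
  DELETE FROM orders
    WHERE order_date < DATEADD('day', -:days, CURRENT_DATE());
  RETURN moved || ' rows archived';
EXCEPTION
  WHEN OTHER THEN
    RETURN 'Failed: ' || SQLERRM;
END;
$$;

CALL archive_old_orders(90);
```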
Subdomain 5.4: Handle and transform semi-structured data.
- Traverse and transform semi-structured data to structured data
- Transform structured data to semi-structured data
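Both directions can be sketched with VARIANT traversal and construction functions; column and table names are hypothetical:

```sql
-- Semi-structured to structured: traverse a VARIANT column and
-- flatten a nested array of line items into rows
SELECT
  o.payload:customer.id::NUMBER AS customer_id,
  li.value:sku::STRING          AS sku,
  li.value:qty::NUMBER          AS qty
FROM raw_orders o,
  LATERAL FLATTEN(input => o.payload:line_items) li;

-- Structured to semi-structured: rebuild JSON objects from columns
SELECT OBJECT_CONSTRUCT('sku', sku, 'qty', qty) AS item_json
FROM order_lines;
```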
Subdomain 5.5: Handle and process unstructured data.
- Use unstructured data
- URL types
- Use directory tables
- Use the REST API
- Use semantic views
- Use Snowflake Cortex features to:
  - Automate data categorization
  - Extract multimedia data
  - Perform semantic data analysis
  - Run text analytics in data pipelines
  - Run multimodal AI workflows within SQL queries
- Use Cortex LLM for cost management
Subdomain 5.6: Implement and manage development workflows and code management.
- Snowsight Workspaces and development environments
- Snowflake Notebooks
- Git integration and version control
- Snowflake dbt Projects management
- Code deployment pipelines
- Testing and validation frameworks
- Environment management strategies
Subdomain 5.7: Use Snowpark for data transformations.
- Understand Snowpark architecture
- Query and filter data using the Snowpark library
- Perform data transformations using Snowpark (for example, aggregations)
- Manipulate Snowpark DataFrames
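A Snowpark Python sketch of the filter/group/aggregate items above; it requires a live Snowflake session, and the connection parameters and table name are placeholders:

```python
# Snowpark sketch: DataFrame operations are built lazily on the client
# and executed as SQL on a Snowflake warehouse.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

# Hypothetical connection parameters (account, user, etc.) go here
session = Session.builder.configs(connection_parameters).create()

orders = session.table("ORDERS")
daily = (
    orders
    .filter(col("STATUS") == "SHIPPED")
    .group_by(col("ORDER_DATE"))
    .agg(sum_(col("AMOUNT")).alias("TOTAL"))
)
daily.show()  # triggers execution of the generated query
```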
Techniques & products