Free Practice Questions for Snowflake DEA-C02 Certification

    Last checked for updates: April 5th, 2026

    Study with 627 exam-style practice questions designed to help you prepare for the Snowflake SnowPro Advanced: Data Engineer (DEA-C02). All questions are aligned with the latest exam guide and include detailed explanations to help you master the material.

    Start Practicing

    Random Questions

    Practice with randomly mixed questions from all topics

    Question Mix: All Topics
    Format: Random Order

    Domain Mode

    Practice questions from a specific topic area

    Quiz History

    Exam Details

    Key information about Snowflake SnowPro Advanced: Data Engineer (DEA-C02)

    Official study guide

    View

    Question formats CertSafari offers
    • Multiple choice
    Level:

    Advanced

    Renewal:

    Through Snowflake Continuing Education (CE) program (eligible ILT courses or higher-level certification)

    Last updated:

    August 22, 2025

    Prerequisites:

    Active SnowPro Core Certified credential

    Target audience:

    Data Engineers and Software Engineers with 2+ years of data engineering experience, including practical Snowflake usage, RESTful APIs, SQL, semi-structured data, and cloud-native concepts

    Certification validity:

    2 years

    Exam Topics & Skills Assessed

    Skills measured (from the official study guide)

    Domain 1: Data Movement

    Subdomain 1.1: Given a data set, load data into Snowflake.

    ● Outline considerations for data loading
    ● Define data loading features and potential impacts

    Subdomain 1.2: Ingest data of various formats through the mechanics of Snowflake.

    ● Required file formats
    ● Schema detection using INFER_SCHEMA for table design and data analysis
    ● Ingestion of structured, semi-structured, and unstructured data
    ● Implementation of stages and file formats
    ○ Manage storage integration configurations
    ○ Manage encryption (pre-scoped URLs, server-side, or client-side)
    ○ Manage compression and parsing strategies
    ● Extract metadata from staged files
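    As a quick illustration of the schema-detection bullet above, here is a minimal sketch of using INFER_SCHEMA against staged Parquet files and creating a table from the result. The stage, file format, and table names are hypothetical.

    ```sql
    -- Illustrative only: stage '@my_stage/sales/' and names are placeholders.
    CREATE FILE FORMAT my_parquet_format TYPE = PARQUET;

    -- Inspect the column names and types Snowflake detects in the staged files
    SELECT *
    FROM TABLE(
      INFER_SCHEMA(
        LOCATION => '@my_stage/sales/',
        FILE_FORMAT => 'my_parquet_format'
      )
    );

    -- Create a table whose columns come directly from the inferred schema
    CREATE TABLE sales_raw
      USING TEMPLATE (
        SELECT ARRAY_AGG(OBJECT_CONSTRUCT(*))
        FROM TABLE(
          INFER_SCHEMA(
            LOCATION => '@my_stage/sales/',
            FILE_FORMAT => 'my_parquet_format'
          )
        )
      );
    ```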

    Subdomain 1.3: Troubleshoot data ingestion.

    ● Identify causes of ingestion errors
    ● Determine resolutions for ingestion errors

    Subdomain 1.4: Design, build, and troubleshoot continuous data pipelines.

    ● Stages
    ● Tasks
    ● Streams
    ● Dynamic tables
    ● Materialized views
    ● Snowpipe (for example, Auto Ingest compared to the REST API)
    ● Snowpipe Streaming
    ○ Snowpipe Streaming compared to the Kafka connector
    ● Create User-Defined Functions (UDFs)
    ● Design and use the Snowflake SQL API
    ● Openflow
    ● Use Notebooks to run pipelines of stored procedures for data ingestion tasks
    ● Use Snowflake Scripting to develop and automate pipelines
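    To make the stream-and-task pattern above concrete, this sketch wires a stream on a landing table to a scheduled task that merges changes into a target table. All object names (orders_raw, orders_clean, etl_wh) are hypothetical.

    ```sql
    -- Illustrative continuous pipeline; table and warehouse names are placeholders.
    CREATE OR REPLACE STREAM orders_stream ON TABLE orders_raw;

    CREATE OR REPLACE TASK merge_orders
      WAREHOUSE = etl_wh
      SCHEDULE = '5 MINUTE'
      -- Only run when the stream has captured new change records
      WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STREAM')
    AS
      MERGE INTO orders_clean t
      USING orders_stream s
        ON t.order_id = s.order_id
      WHEN MATCHED THEN UPDATE SET t.amount = s.amount
      WHEN NOT MATCHED THEN INSERT (order_id, amount) VALUES (s.order_id, s.amount);

    -- Tasks are created suspended; resume to start the schedule
    ALTER TASK merge_orders RESUME;
    ```

    Consuming the stream inside the MERGE advances its offset, so each run processes only the changes since the previous run.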

    Subdomain 1.5: Install, configure, and use connectors for Snowflake integration.

    ● Kafka connectors
    ● Spark connectors
    ● Python connectors
    ● Native connectors

    Subdomain 1.6: Design and build data sharing and data consumption solutions.

    ● Evaluate the use of a data share or a clone
    ● Implement a data share
    ○ Manage auto-fulfillment
    ● Create and manage views
    ● Implement row-level filtering
    ● Share data using the Snowflake Marketplace
    ● Share data using a listing
    ● Use Streamlit to build data applications and interfaces for data consumption
    ○ Create interactive dashboards for data exploration and sharing
    ○ Build self-service data access applications
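    A common way to combine the sharing and row-level-filtering bullets above is a secure view exposed through a share. This sketch is illustrative; the database, view, and consumer account names are placeholders, and the CURRENT_ACCOUNT()-based filter is one simple pattern among several.

    ```sql
    -- Illustrative: filter rows per consumer account inside a secure view
    CREATE OR REPLACE SECURE VIEW sales_db.public.sales_by_region AS
      SELECT s.*
      FROM sales_db.public.sales s
      JOIN sales_db.public.account_region_map m
        ON m.snowflake_account = CURRENT_ACCOUNT()   -- hypothetical mapping table
       AND m.region = s.region;

    -- Expose only the secure view through a share
    CREATE SHARE sales_share;
    GRANT USAGE ON DATABASE sales_db TO SHARE sales_share;
    GRANT USAGE ON SCHEMA sales_db.public TO SHARE sales_share;
    GRANT SELECT ON VIEW sales_db.public.sales_by_region TO SHARE sales_share;
    ALTER SHARE sales_share ADD ACCOUNTS = consumer_org.consumer_account;
    ```

    Because the view is SECURE, consumers cannot see its definition or bypass the filter, which is what makes it suitable for row-level filtering in a share.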

    Subdomain 1.7: Manage different types of tables and data operations.

    ● Manage external tables
    ● Manage Iceberg tables
    ● Manage hybrid tables
    ● Perform general table management
    ● Use Horizon Catalog to federate data from external catalogs
    ● Manage schema evolution
    ● Unload data

    Domain 2: Performance Optimization

    Subdomain 2.1: Troubleshoot underperforming queries.

    ● Identify underperforming queries
    ● Outline telemetry around the operation
    ● Identify the root cause
    ● Increase efficiency

    Subdomain 2.2: Given a scenario, configure a solution for optimal performance.

    ● Scale out compared to scale up
    ● Virtual warehouse properties (for example, size, multi-cluster)
    ○ Snowpark-optimized virtual warehouses
    ● Query complexity
    ● Micro-partitions and the impact of clustering
    ● Materialized views
    ● Search optimization service
    ● Query acceleration service
    ● Caching features
    ● Use the ACCOUNT_USAGE schema
    ● Use warehouse metrics (such as warehouse queues) and configurations:
    ○ Resource monitors
    ○ Warehouse constraints on credit consumption
    ● Balance optimization with credit consumption considerations
    ● Optimize storage configurations and costs
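    The scale-up versus scale-out distinction above can be seen directly in warehouse DDL: size addresses per-query complexity, while multi-cluster settings address concurrency. The warehouse and resource monitor names below are hypothetical, and the quota values are arbitrary.

    ```sql
    -- Illustrative warehouse configuration (names and values are placeholders)
    CREATE OR REPLACE WAREHOUSE reporting_wh
      WAREHOUSE_SIZE = 'MEDIUM'     -- scale up: more compute per query
      MIN_CLUSTER_COUNT = 1
      MAX_CLUSTER_COUNT = 3         -- scale out: more clusters for concurrency
      SCALING_POLICY = 'STANDARD'
      AUTO_SUSPEND = 60             -- suspend after 60 seconds of inactivity
      AUTO_RESUME = TRUE;

    -- Cap credit consumption with a resource monitor
    CREATE RESOURCE MONITOR reporting_rm WITH CREDIT_QUOTA = 100
      TRIGGERS ON 90 PERCENT DO NOTIFY
               ON 100 PERCENT DO SUSPEND;

    ALTER WAREHOUSE reporting_wh SET RESOURCE_MONITOR = reporting_rm;
    ```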

    Subdomain 2.3: Monitor continuous data pipelines.

    ● Snowflake objects
    ○ Tasks
    ■ Snowsight task history dashboards
    ○ Streams
    ○ Snowpipe Streaming
    ○ Alerts
    ○ Dynamic Tables
    ● Notifications
    ● Data quality and data metric function monitoring

    Domain 3: Storage & Data Protection

    Subdomain 3.1: Implement and manage data recovery features in Snowflake.

    ● Time Travel
    ○ Impact of streams
    ● Fail-safe
    ● Cross-region and cross-cloud replication

    Subdomain 3.2: Use system functions to analyze micro-partitions.

    ● Clustering depth
    ● Cluster keys
    ● Automatic Clustering features and optimizations
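    The system functions behind the bullets above can be sketched as follows; the table and column names are hypothetical.

    ```sql
    -- Illustrative: inspect micro-partition clustering for a table
    SELECT SYSTEM$CLUSTERING_INFORMATION('sales', '(order_date)');
    SELECT SYSTEM$CLUSTERING_DEPTH('sales', '(order_date)');

    -- Define a clustering key so Automatic Clustering maintains it over time
    ALTER TABLE sales CLUSTER BY (order_date);
    ```

    Lower clustering depth generally means better pruning; a rising depth on a clustered table is a signal to review the cluster key choice.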

    Subdomain 3.3: Use Time Travel and cloning to create new development environments.

    ● Clone objects
    ○ Permission inheritance
    ● Validate changes before promoting
    ● Roll back changes
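    A minimal sketch of the clone-plus-Time-Travel workflow above: create a dev environment from a point-in-time copy of production, and restore a table to a pre-change state for rollback. Database and table names are hypothetical, and the query ID placeholder must be replaced with a real one.

    ```sql
    -- Illustrative: zero-copy clone of production as it was one hour ago
    CREATE DATABASE dev_db CLONE prod_db
      AT (OFFSET => -3600);

    -- Roll back a bad change by recreating the table from its state
    -- just before a specific statement ran (substitute a real query ID)
    CREATE OR REPLACE TABLE prod_db.public.orders
      CLONE prod_db.public.orders
      BEFORE (STATEMENT => '<query_id>');
    ```

    Zero-copy cloning shares micro-partitions with the source, so the dev environment is cheap to create until data diverges.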

    Domain 4: Data Governance

    Subdomain 4.1: Monitor data.

    ● Apply object tagging and classifications
    ● Use data classification to monitor data
    ● Manage data lineage and object dependencies
    ● Monitor data quality

    Subdomain 4.2: Establish and maintain data protection.

    ● Use Horizon Catalog to share and federate data outside of Snowflake
    ● Implement column-level security
    ○ Use in conjunction with Dynamic Data Masking
    ○ Use in conjunction with external tokenization
    ○ Use projection policies
    ● Use data masking with Role-Based Access Control (RBAC) to secure sensitive data
    ● Explain the options available to support row-level security using Snowflake row access policies
    ○ Use aggregation policies
    ● Use DDL to manage Dynamic Data Masking and row access policies
    ● Use best practices to create and apply data masking policies
    ● Use Snowflake Data Clean Rooms to share data
    ○ Clean room UI
    ○ Snowflake developer APIs
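    The masking and row-access-policy DDL mentioned above looks roughly like this sketch. Policy, table, column, and role names are all hypothetical, and the role logic is intentionally simplistic.

    ```sql
    -- Illustrative Dynamic Data Masking policy, gated by role (RBAC)
    CREATE OR REPLACE MASKING POLICY email_mask AS (val STRING) RETURNS STRING ->
      CASE
        WHEN CURRENT_ROLE() IN ('ANALYST_FULL') THEN val
        ELSE '*** MASKED ***'
      END;

    ALTER TABLE customers MODIFY COLUMN email
      SET MASKING POLICY email_mask;

    -- Illustrative row access policy for row-level security
    CREATE OR REPLACE ROW ACCESS POLICY region_policy AS (region STRING) RETURNS BOOLEAN ->
      CURRENT_ROLE() = 'ADMIN' OR region = 'EMEA';

    ALTER TABLE customers ADD ROW ACCESS POLICY region_policy ON (region);
    ```

    Masking policies rewrite column values at query time, while row access policies filter which rows a query can see at all; the two are commonly layered on the same table.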

    Domain 5: Data Transformation

    Subdomain 5.1: Define User-Defined Functions (UDFs) and outline how to use them.

    ● Snowpark UDFs (for example, Java, Python, Scala)
    ● Secure UDFs
    ● SQL UDFs
    ● JavaScript UDFs
    ● User-Defined Table Functions (UDTFs)
    ● User-Defined Aggregate Functions (UDAFs)
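    As one concrete instance of the UDF types above, here is a sketch of an inline Python UDF defined in SQL. The function name and logic are hypothetical.

    ```sql
    -- Illustrative inline Python UDF (name and logic are placeholders)
    CREATE OR REPLACE FUNCTION normalize_phone(p STRING)
      RETURNS STRING
      LANGUAGE PYTHON
      RUNTIME_VERSION = '3.10'
      HANDLER = 'normalize'
    AS
    $$
    def normalize(p):
        # Keep only digits; return NULL through unchanged
        if p is None:
            return None
        return ''.join(ch for ch in p if ch.isdigit())
    $$;

    SELECT normalize_phone('+1 (555) 123-4567');  -- 15551234567
    ```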

    Subdomain 5.2: Define and create external functions.

    ● Secure external functions
    ● Work with external functions

    Subdomain 5.3: Design, build, and leverage stored procedures.

    ● Snowpark stored procedures
    ● SQL Scripting stored procedures
    ● JavaScript stored procedures
    ● Transaction management

    Subdomain 5.4: Handle and transform semi-structured data.

    ● Traverse and transform semi-structured data to structured data
    ● Transform structured data to semi-structured data
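    Both directions above can be sketched with VARIANT traversal and construction functions. The table and JSON shape are hypothetical.

    ```sql
    -- Illustrative: semi-structured -> structured via path access and FLATTEN
    CREATE OR REPLACE TABLE events (payload VARIANT);

    INSERT INTO events
      SELECT PARSE_JSON('{"user": "ada", "tags": ["a", "b"]}');

    SELECT
      e.payload:user::STRING AS user_name,   -- path traversal with a cast
      t.value::STRING        AS tag          -- one row per array element
    FROM events e,
         LATERAL FLATTEN(INPUT => e.payload:tags) t;

    -- And the reverse: structured -> semi-structured
    SELECT OBJECT_CONSTRUCT(
             'user_name', e.payload:user,
             'tag_count', ARRAY_SIZE(e.payload:tags)
           ) AS doc
    FROM events e;
    ```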

    Subdomain 5.5: Handle and process unstructured data.

    ● Use unstructured data
    ○ URL types
    ● Use directory tables
    ● Use the REST API
    ● Use semantic views
    ● Use Snowflake Cortex features to:
    ○ Automate data categorization
    ○ Extract multimedia data
    ○ Perform semantic data analysis
    ○ Run text analytics in data pipelines
    ○ Run multimodal AI workflows within SQL queries
    ● Use Cortex LLM for cost management

    Subdomain 5.6: Implement and manage development workflows and code management.

    ● Snowsight Workspaces and development environments
    ○ Snowflake Notebooks
    ● Git integration and version control
    ● Snowflake dbt Projects management
    ● Code deployment pipelines
    ● Testing and validation frameworks
    ● Environment management strategies

    Subdomain 5.7: Use Snowpark for data transformations.

    ● Understand Snowpark architecture
    ● Query and filter data using the Snowpark library
    ● Perform data transformations using Snowpark (for example, aggregations)
    ● Manipulate Snowpark DataFrames
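    The filtering and aggregation bullets above map onto Snowpark's DataFrame API roughly as follows. This is a sketch, not a runnable script: it requires the snowflake-snowpark-python package and a live Snowflake account, and the connection parameters, table, and column names are all placeholders.

    ```python
    # Illustrative Snowpark sketch; requires an active Snowflake connection.
    from snowflake.snowpark import Session
    from snowflake.snowpark.functions import col, sum as sum_

    # Connection parameters below are placeholders, not real credentials
    session = Session.builder.configs({
        "account": "<account>", "user": "<user>", "password": "<password>",
        "warehouse": "etl_wh", "database": "sales_db", "schema": "public",
    }).create()

    # DataFrame operations are lazy: Snowflake compiles them into SQL and
    # nothing executes until an action such as .show() or .collect()
    df = (
        session.table("orders")
        .filter(col("amount") > 100)
        .group_by("region")
        .agg(sum_("amount").alias("total_amount"))
    )
    df.show()
    ```

    Because the transformations push down to Snowflake as SQL, the data never leaves the warehouse during filtering and aggregation.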

    Techniques & products

    Snowflake
    Snowpipe
    Snowpipe Streaming
    Stages
    Tasks
    Streams
    User-Defined Functions (UDFs)
    Snowflake SQL API
    Snowpark
    Kafka connectors
    Spark connectors
    Python connectors
    Data sharing
    Snowflake Marketplace
    External tables
    Iceberg tables
    Time Travel
    Fail-safe
    Cross-region replication
    Cross-cloud replication
    Micro-partitions
    Clustering depth
    Cluster keys
    Cloning
    Object tagging
    Data classification
    Data lineage
    Object dependencies
    Data quality
    Column-level security
    Dynamic Data Masking
    External tokenization
    Projection policies
    Role-Based Access Control (RBAC)
    Row-level security
    Aggregation policies
    DDL
    Snowflake Data Clean Rooms
    Snowflake developer API
    External functions
    Stored procedures
    Semi-structured data
    Unstructured data
    Directory tables
    REST API
    Snowpark DataFrames
    Virtual warehouses
    Multi-cluster warehouses
    Materialized views
    Search optimization service
    Query acceleration service
    Snowpark-optimized warehouses
    Caching features
    Alerts
    Notifications
    Data metric functions
    SQL
    Java
    Python
    Scala
    JavaScript
    dbt
    ETL
    ELT

    CertSafari is not affiliated with, endorsed by, or officially connected to Snowflake, Inc. Full disclaimer