Free Practice Questions for the Microsoft Fabric Data Engineer Associate (DP-700) Certification
Study with 300 exam-style practice questions designed to help you prepare for the Microsoft Fabric Data Engineer Associate certification (exam DP-700). All questions are aligned with the latest exam guide and include detailed explanations to help you master the material.
Random Questions
Practice with randomly mixed questions from all topics
Domain Mode
Practice questions from a specific topic area
Exam Information
Exam Details
Key information about the Microsoft Fabric Data Engineer Associate (DP-700) exam
Level: Associate (intermediate)
Candidates for this exam should have subject matter expertise with data loading patterns, data architectures, and orchestration processes. They work closely with analytics engineers, architects, analysts, and administrators to design and deploy data engineering solutions for analytics. Skills in SQL, PySpark, and KQL are essential.
Exam Topics & Skills Assessed
Skills measured (from the official study guide)
Domain 1: Implement and manage an analytics solution
Subdomain 1.1: Configure Microsoft Fabric workspace settings
- Configure Spark workspace settings (see the sketch after this list)
- Configure domain workspace settings
- Configure OneLake workspace settings
- Configure data workflow workspace settings
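Workspace Spark settings are configured in the workspace UI, but several can also be inspected or overridden per notebook session. A minimal PySpark sketch, assuming a Fabric notebook where the `spark` session is preconfigured; exact property names can vary by Fabric runtime version:

```python
# Session-level overrides for settings otherwise managed in the
# workspace Spark settings UI. Runs inside a Fabric notebook, where
# the `spark` session is created for you.

# Enable V-Order writes (a Fabric-specific Parquet write optimization).
spark.conf.set("spark.sql.parquet.vorder.enabled", "true")

# Tune shuffle parallelism for this session only.
spark.conf.set("spark.sql.shuffle.partitions", "200")

# Verify the effective value.
print(spark.conf.get("spark.sql.parquet.vorder.enabled"))
```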
Subdomain 1.2: Implement lifecycle management in Fabric
- Configure version control
- Implement database projects
- Create and configure deployment pipelines
Subdomain 1.3: Configure security and governance
- Implement workspace-level access controls
- Implement item-level access controls
- Implement row-level, column-level, object-level, and folder/file-level access controls
- Implement dynamic data masking
- Apply sensitivity labels to items
- Endorse items
- Implement and use workspace logging
- Configure and implement OneLake security
Subdomain 1.4: Orchestrate processes
- Choose between a Dataflow Gen2, a pipeline, and a notebook
- Design and implement schedules and event-based triggers
- Implement orchestration patterns with notebooks and pipelines, including parameters and dynamic expressions (see the sketch after this list)
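For example, the parameter-driven orchestration pattern can be expressed notebook-to-notebook: a parent notebook calls a child notebook with arguments that override the child's parameter cell. A minimal sketch, assuming Fabric's built-in `notebookutils` helper (preinstalled in Fabric notebooks; `mssparkutils` in older runtimes); the child notebook name and parameters are hypothetical:

```python
# Orchestrating one notebook from another with parameters.
# `notebookutils` is available by default in Fabric notebooks.
import notebookutils

result = notebookutils.notebook.run(
    "nb_load_sales",   # hypothetical child notebook in the same workspace
    90,                # timeout in seconds
    {"load_date": "2024-01-31", "mode": "incremental"},  # parameter cell overrides
)
print(result)  # value returned via notebookutils.notebook.exit(...) in the child
```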
Domain 2: Ingest and transform data
Subdomain 2.1: Design and implement loading patterns
- Design and implement full and incremental data loads (see the sketch after this list)
- Prepare data for loading into a dimensional model
- Design and implement a loading pattern for streaming data
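An incremental load is typically a high-water-mark filter followed by an upsert into the target table. A minimal PySpark sketch using a Delta merge; the table, column, and watermark values are hypothetical, and `spark` is assumed to come from a Fabric lakehouse notebook:

```python
from delta.tables import DeltaTable
from pyspark.sql import functions as F

# 1. Read only rows newer than the last high-water mark.
last_watermark = "2024-01-30"  # normally persisted between runs
incoming = (
    spark.read.table("staging_sales")
    .where(F.col("modified_at") > F.lit(last_watermark))
)

# 2. Upsert into the target table on the business key.
target = DeltaTable.forName(spark, "sales")
(
    target.alias("t")
    .merge(incoming.alias("s"), "t.sale_id = s.sale_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```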
Subdomain 2.2: Ingest and transform batch data
- Choose an appropriate data store
- Choose between dataflows, notebooks, KQL, and T-SQL for data transformation
- Create and manage shortcuts to data
- Implement mirroring
- Ingest data by using pipelines
- Ingest data by using continuous integration from OneLake
- Transform data by using Power Query (M), PySpark, SQL, and KQL
- Denormalize data
- Group and aggregate data
- Handle duplicate, missing, and late-arriving data (see the sketch after this list)
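As an illustration of deduplication plus group-and-aggregate in PySpark (table and column names are hypothetical):

```python
from pyspark.sql import functions as F
from pyspark.sql.window import Window

orders = spark.read.table("raw_orders")

# Keep only the latest record per order_id (dedup by recency).
latest = Window.partitionBy("order_id").orderBy(F.col("ingested_at").desc())
deduped = (
    orders.withColumn("rn", F.row_number().over(latest))
    .where("rn = 1")
    .drop("rn")
)

# Group and aggregate into a denormalized summary table.
summary = deduped.groupBy("customer_id").agg(
    F.count("*").alias("order_count"),
    F.sum("amount").alias("total_amount"),
)
summary.write.mode("overwrite").saveAsTable("customer_order_summary")
```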
Subdomain 2.3: Ingest and transform streaming data
- Choose an appropriate streaming engine
- Choose between native storage, mirrored storage, or shortcuts in Real-Time Intelligence
- Choose between accelerated shortcuts and non-accelerated shortcuts in Real-Time Intelligence
- Process data by using eventstreams
- Process data by using Spark structured streaming (see the sketch after this list)
- Process data by using KQL
- Create windowing functions
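A minimal Spark Structured Streaming sketch with a tumbling window and a watermark for late-arriving events; the table names, column names, and checkpoint path are hypothetical:

```python
from pyspark.sql import functions as F

# Streaming read from a Delta table (e.g., one fed by an eventstream).
events = spark.readStream.table("bronze_events")

# Count events per sensor in 5-minute tumbling windows,
# discarding data that arrives more than 10 minutes late.
counts = (
    events.withWatermark("event_time", "10 minutes")
    .groupBy(F.window("event_time", "5 minutes"), "sensor_id")
    .count()
)

query = (
    counts.writeStream.format("delta")
    .outputMode("append")
    .option("checkpointLocation", "Files/checkpoints/sensor_counts")
    .toTable("silver_sensor_counts")
)
```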
Domain 3: Monitor and optimize an analytics solution
Subdomain 3.1: Monitor Fabric items
- Monitor data ingestion
- Monitor data transformation
- Monitor semantic model refresh
- Configure alerts
Subdomain 3.2: Identify and resolve errors
- Identify and resolve pipeline errors
- Identify and resolve dataflow errors
- Identify and resolve notebook errors
- Identify and resolve eventhouse errors
- Identify and resolve eventstream errors
- Identify and resolve T-SQL errors
- Identify and resolve shortcut errors
Subdomain 3.3: Optimize performance
- Optimize a lakehouse table (see the sketch after this list)
- Optimize a pipeline
- Optimize a data warehouse
- Optimize eventstreams and eventhouses
- Optimize Spark performance
- Optimize query performance
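Lakehouse table optimization usually starts with routine Delta maintenance: compacting small files, then vacuuming unreferenced ones. A minimal sketch using the delta-spark API; the table name and retention window are hypothetical:

```python
from delta.tables import DeltaTable

tbl = DeltaTable.forName(spark, "sales")

# Compact small files (OPTIMIZE); in Fabric this also applies V-Order
# when it is enabled for the session or workspace.
tbl.optimize().executeCompaction()

# Remove files no longer referenced by snapshots older than 7 days.
tbl.vacuum(168)  # retention in hours
```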
Techniques & products