Free Practice Questions for AWS Certified AI Practitioner (AIF-C01) Certification

    🔥 Last checked for updates February 16th, 2026

    Study with 364 exam-style practice questions designed to help you prepare for the AWS Certified AI Practitioner (AIF-C01).

    Start Practicing

    Random Questions

    Practice with randomly mixed questions from all topics

    Question Mix: All Topics
    Format: Random Order

    Domain Mode

    Practice questions from a specific topic area

    Exam Information

    Exam Details

    Key information about AWS Certified AI Practitioner (AIF-C01)

    Official study guide:

    View

    Level:

    Foundational (beginner)

    Exam code:

    AIF-C01

    Exam format:

    Multiple choice, multiple response, ordering, matching

    Passing score:

    700 out of 1000

    Prerequisites:

    Familiarity with core AWS services (Amazon EC2, Amazon S3, AWS Lambda, Amazon Bedrock, Amazon SageMaker AI), AWS shared responsibility model, AWS Identity and Access Management (IAM), and AWS service pricing models.

    Target audience:

    Individuals with up to 6 months of exposure to AI/ML technologies on AWS, who use but do not necessarily build AI/ML solutions.

    Number of questions:

    50 scored questions, plus 15 unscored

    Exam Topics & Skills Assessed

    Skills measured (from the official study guide)

    Domain 1.0: Fundamentals of AI and ML

    1.1 Explain basic AI concepts and terminologies.

    Objectives:

    • Define basic AI terms (for example, AI, ML, deep learning, neural networks, computer vision, natural language processing [NLP], model, algorithm, training and inferencing, bias, fairness, fit, large language models [LLMs]).
    • Describe the similarities and differences between AI, ML, GenAI, and deep learning.
    • Describe various types of inferencing (for example, batch, real-time).
    • Describe the different types of data in AI models (for example, labeled and unlabeled, tabular, time-series, image, text, structured and unstructured).
    • Describe supervised learning, unsupervised learning, and reinforcement learning.

    1.2 Identify practical use cases for AI.

    Objectives:

    • Recognize applications where AI/ML can provide value (for example, assist human decision making, solution scalability, automation).
    • Determine when AI/ML solutions are not appropriate (for example, cost-benefit analyses, situations when a specific outcome is needed instead of a prediction).
    • Select the appropriate ML techniques for specific use cases (for example, regression, classification, clustering).
    • Identify examples of real-world AI applications (for example, computer vision, NLP, speech recognition, recommendation systems, fraud detection, forecasting).
    • Explain the capabilities of AWS managed AI/ML services (for example, Amazon SageMaker AI, Amazon Transcribe, Amazon Translate, Amazon Comprehend, Amazon Lex, Amazon Polly).

    1.3 Describe the ML development lifecycle.

    Objectives:

    • Describe components of an ML pipeline (for example, data collection, exploratory data analysis [EDA], data pre-processing, feature engineering, model training, hyperparameter tuning, evaluation, deployment, monitoring).
    • Describe sources of ML models (for example, open source pre-trained models, training custom models).
    • Describe methods to use a model in production (for example, managed API service, self-hosted API).
    • Identify relevant AWS services and features for each stage of an ML pipeline (for example, SageMaker AI, SageMaker Data Wrangler, SageMaker Feature Store, SageMaker Model Monitor).
    • Describe fundamental concepts of ML operations (MLOps) (for example, experimentation, repeatable processes, scalable systems, managing technical debt, achieving production readiness, model monitoring, model re-training).
    • Describe model performance metrics (for example, accuracy, Area Under the Curve [AUC], F1 score) and business metrics (for example, cost per user, development costs, customer feedback, return on investment [ROI]) to evaluate ML models.
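
To get a feel for the model performance metrics named above, here is a minimal sketch computing accuracy and F1 score from binary predictions. The label lists are made-up example data, not exam content.

```python
# Toy illustration of accuracy and F1 score for binary classification.
# The y_true / y_pred lists below are invented example data.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
print(accuracy(y_true, y_pred))  # 6 of 8 predictions correct -> 0.75
print(f1_score(y_true, y_pred))
```

In practice a library such as scikit-learn would compute these, but the hand-rolled version shows why accuracy alone can mislead on imbalanced data while F1 balances precision and recall.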

    Domain 2.0: Fundamentals of GenAI

    2.1 Explain the basic concepts of GenAI.

    Objectives:

    • Define foundational GenAI concepts (for example, tokens, chunking, embeddings, vectors, prompt engineering, transformer-based LLMs, foundation models [FMs], multimodal models, diffusion models).
    • Identify potential use cases for GenAI models (for example, image, video, and audio generation; summarization; AI assistants; translation; code generation; customer service agents; search; recommendation engines).
    • Describe the foundation model lifecycle (for example, data selection, model selection, pre-training, fine-tuning, evaluation, deployment, feedback).
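
Embeddings and vectors, listed among the concepts above, are easiest to grasp with cosine similarity. The 3-dimensional vectors below are toy values standing in for real embedding-model output.

```python
# Toy sketch: embeddings are vectors, and semantic closeness between two
# embeddings is often measured with cosine similarity. These tiny vectors
# are invented for illustration, not produced by any real embedding model.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

cat = [0.9, 0.1, 0.2]
kitten = [0.85, 0.15, 0.25]
car = [0.1, 0.9, 0.4]
print(cosine_similarity(cat, kitten))  # near 1: similar meaning
print(cosine_similarity(cat, car))     # much smaller: different meaning
```

Real embeddings have hundreds or thousands of dimensions, but the principle is the same: similar meanings map to nearby vectors.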

    2.2 Understand the capabilities and limitations of GenAI for solving business problems.

    Objectives:

    • Describe the advantages of GenAI (for example, adaptability, responsiveness, simplicity).
    • Identify disadvantages of GenAI solutions (for example, hallucinations, interpretability, inaccuracy, nondeterminism).
    • Identify factors to consider when selecting GenAI models (for example, model types, performance requirements, capabilities, constraints, compliance).
    • Determine business value and metrics for GenAI applications (for example, cross-domain performance, efficiency, conversion rate, average revenue per user, accuracy, customer lifetime value).

    2.3 Describe AWS infrastructure and technologies for building GenAI applications.

    Objectives:

    • Identify AWS services and features to develop GenAI applications (for example, Amazon SageMaker JumpStart, Amazon Bedrock, PartyRock (an Amazon Bedrock playground), Amazon Q, Amazon Bedrock Data Automation).
    • Describe the advantages of using AWS GenAI services to build applications (for example, accessibility, lower barrier to entry, efficiency, cost-effectiveness, speed to market, ability to meet business objectives).
    • Describe the benefits of AWS infrastructure for GenAI applications (for example, security, compliance, responsibility, safety).
    • Describe cost tradeoffs of AWS GenAI services (for example, responsiveness, availability, redundancy, performance, regional coverage, token-based pricing, provisioned throughput, custom models).
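
Token-based pricing, one of the cost tradeoffs above, is simple arithmetic. The per-1,000-token rates below are placeholder assumptions for illustration, not actual Amazon Bedrock prices.

```python
# Toy arithmetic for token-based pricing. The rates are placeholder
# assumptions, NOT real Amazon Bedrock prices; check current pricing pages.

input_rate_per_1k = 0.003   # assumed $ per 1,000 input tokens
output_rate_per_1k = 0.015  # assumed $ per 1,000 output tokens

input_tokens = 2000   # e.g., a long prompt with retrieved context
output_tokens = 500   # e.g., a generated summary

cost = (input_tokens / 1000) * input_rate_per_1k \
     + (output_tokens / 1000) * output_rate_per_1k
print(f"${cost:.4f} per request")  # → $0.0135 per request
```

Note that output tokens often cost several times more than input tokens, which is why verbose model responses drive up spend faster than long prompts.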

    Domain 3.0: Applications of Foundation Models

    3.1 Describe design considerations for applications that use foundation models (FMs).

    Objectives:

    • Identify selection criteria to choose pre-trained models (for example, cost, modality, latency, multi-lingual, model size, model complexity, customization, input/output length, prompt caching).
    • Describe the effect of inference parameters on model responses (for example, temperature, input/output length).
    • Define Retrieval Augmented Generation (RAG) and describe its business applications (for example, Amazon Bedrock Knowledge Bases).
    • Identify AWS services that help store embeddings within vector databases (for example, Amazon OpenSearch Service, Amazon Aurora, Amazon Neptune, Amazon RDS for PostgreSQL).
    • Explain the cost tradeoffs of various approaches to FM customization (for example, pre-training, fine-tuning, in-context learning, RAG).
    • Describe the role of agents in multi-step tasks (for example, Amazon Bedrock Agents, agentic AI, model context protocol).
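
The retrieval step of RAG can be sketched in a few lines: rank stored text chunks against a query and prepend the best match to the prompt. Scoring here is simple word overlap standing in for real embedding similarity; the chunk contents and prompt wording are illustrative assumptions.

```python
# Toy sketch of the retrieval step in RAG. Word overlap stands in for
# vector similarity; a real system would embed the query and chunks and
# search a vector database. All text below is invented for illustration.

def score(query, chunk):
    """Word-overlap score: a stand-in for embedding similarity."""
    return len(set(query.lower().split()) & set(chunk.lower().split()))

chunks = [
    "Refunds are processed within 5 business days.",
    "Shipping is free on orders over $50.",
    "Support is available 24/7 via chat.",
]

query = "How long do refunds take?"
best = max(chunks, key=lambda c: score(query, c))
prompt = f"Context: {best}\n\nQuestion: {query}\nAnswer using only the context."
print(prompt)
```

Grounding the prompt in retrieved context is what lets RAG reduce hallucinations without retraining the model.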

    3.2 Choose effective prompt engineering techniques.

    Objectives:

    • Define the concepts and constructs of prompt engineering (for example, context, instruction, negative prompts, model latent space, prompt routing).
    • Define techniques for prompt engineering (for example, chain-of-thought, zero-shot, single-shot, few-shot, prompt templates).
    • Identify and describe the benefits and best practices for prompt engineering (for example, response quality improvement, experimentation, guardrails, discovery, specificity and concision, using multiple comments).
    • Define potential risks and limitations of prompt engineering (for example, exposure, poisoning, hijacking, jailbreaking).
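
A few-shot prompt template, one of the techniques listed above, is just a string with worked examples preceding the real input so the model can infer the task. The sentiment-labeling task and example reviews are illustrative assumptions.

```python
# Toy few-shot prompt template. The task (sentiment labeling) and the
# example reviews are invented for illustration.

FEW_SHOT_TEMPLATE = """Classify the sentiment of each review as Positive or Negative.

Review: The battery lasts all day.
Sentiment: Positive

Review: It broke after one week.
Sentiment: Negative

Review: {review}
Sentiment:"""

prompt = FEW_SHOT_TEMPLATE.format(review="Setup was quick and painless.")
print(prompt)
```

Zero-shot drops the worked examples entirely, and single-shot keeps just one; few-shot trades prompt length (and token cost) for better task adherence.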

    3.3 Describe the training and fine-tuning process for FMs.

    Objectives:

    • Describe the key elements of training an FM (for example, pre-training, fine-tuning, continuous pre-training, distillation).
    • Define methods for fine-tuning an FM (for example, instruction tuning, adapting models for specific domains, transfer learning, continuous pre-training).
    • Describe how to prepare data to fine-tune an FM (for example, data curation, governance, size, labeling, representativeness, reinforcement learning from human feedback [RLHF]).

    3.4 Describe methods to evaluate FM performance.

    Objectives:

    • Determine approaches to evaluate FM performance (for example, human evaluation, benchmark datasets, Amazon Bedrock Model Evaluation).
    • Identify relevant metrics to assess FM performance (for example, Recall-Oriented Understudy for Gisting Evaluation [ROUGE], Bilingual Evaluation Understudy [BLEU], BERTScore).
    • Determine whether an FM effectively meets business objectives (for example, productivity, user engagement, task engineering).
    • Identify approaches to evaluate the performance of applications built with FMs (for example, RAG, agents, workflows).
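
To make the ROUGE metric above concrete, here is a simplified sketch of ROUGE-1 recall: the fraction of reference unigrams that also appear in the candidate text. Real ROUGE implementations add stemming and count clipping; the texts are made-up examples.

```python
# Simplified ROUGE-1 recall: share of reference words found in the
# candidate. Real implementations clip repeated-word counts and may stem;
# the reference/candidate strings are invented examples.

def rouge1_recall(reference, candidate):
    ref_words = reference.lower().split()
    cand_words = set(candidate.lower().split())
    if not ref_words:
        return 0.0
    overlap = sum(w in cand_words for w in ref_words)
    return overlap / len(ref_words)

reference = "the model summarizes long reports"
candidate = "the model summarizes reports quickly"
print(rouge1_recall(reference, candidate))  # 4 of 5 reference words matched -> 0.8
```

ROUGE is recall-oriented (did the summary cover the reference?), whereas BLEU is precision-oriented (is the candidate's wording supported by the reference?), which is why ROUGE suits summarization and BLEU suits translation.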

    Domain 4.0: Guidelines for Responsible AI

    4.1 Explain the development of AI systems that are responsible.

    Objectives:

    • Identify features of responsible AI (for example, bias, fairness, inclusivity, robustness, safety, veracity).
    • Explain how to use tools to identify features of responsible AI (for example, Amazon Bedrock Guardrails).
    • Define responsible practices to select a model (for example, environmental considerations, sustainability).
    • Identify legal risks of working with GenAI (for example, intellectual property infringement claims, biased model outputs, loss of customer trust, end user risk, hallucinations).
    • Identify characteristics of datasets (for example, inclusivity, diversity, curated data sources, balanced datasets).
    • Describe effects of bias and variance (for example, effects on demographic groups, inaccuracy, overfitting, underfitting).
    • Describe tools to detect and monitor bias, trustworthiness, and truthfulness (for example, analyzing label quality, human audits, subgroup analysis, Amazon SageMaker Clarify, SageMaker Model Monitor, Amazon Augmented AI [Amazon A2I]).

    4.2 Recognize the importance of transparent and explainable models.

    Objectives:

    • Describe the differences between models that are transparent and explainable and models that are not transparent and explainable.
    • Describe tools to identify transparent and explainable models (for example, SageMaker Model Cards, open source models, data, licensing).
    • Identify tradeoffs between model safety and transparency (for example, measure interpretability and performance).
    • Describe principles of human-centered design for explainable AI.

    Domain 5.0: Security, Compliance, and Governance for AI Solutions

    5.1 Explain methods to secure AI systems.

    Objectives:

    • Identify AWS services and features to secure AI systems (for example, IAM roles, policies, and permissions; encryption; Amazon Macie; AWS PrivateLink; AWS shared responsibility model).
    • Describe the concept of source citation and documenting data origins (for example, data lineage, data cataloging, Amazon SageMaker Model Cards).
    • Describe best practices for secure data engineering (for example, assessing data quality, implementing privacy-enhancing technologies, data access control, data integrity).
    • Describe security and privacy considerations for AI systems (for example, application security, threat detection, vulnerability management, infrastructure protection, prompt injection, encryption at rest and in transit).

    5.2 Recognize governance and compliance regulations for AI systems.

    Objectives:

    • Identify AWS services and features to assist with governance and regulation compliance (for example, AWS Config, Amazon Inspector, AWS Audit Manager, AWS Artifact, AWS CloudTrail, AWS Trusted Advisor).
    • Describe data governance strategies (for example, data lifecycles, logging, residency, monitoring, observation, retention).
    • Describe processes to follow governance protocols (for example, policies, review cadence, review strategies, governance frameworks such as the Generative AI Security Scoping Matrix, transparency standards, team training requirements).

    Techniques & products

    AWS Data Exchange
    Amazon EMR
    AWS Glue
    AWS Glue DataBrew
    AWS Lake Formation
    Amazon OpenSearch Service
    Amazon QuickSight
    Amazon Redshift
    AWS Budgets
    AWS Cost Explorer
    Amazon EC2
    Amazon Elastic Container Service (Amazon ECS)
    Amazon Elastic Kubernetes Service (Amazon EKS)
    Amazon DocumentDB
    Amazon DynamoDB
    Amazon ElastiCache
    Amazon MemoryDB
    Amazon Neptune
    Amazon RDS
    Amazon Augmented AI (Amazon A2I)
    Amazon Bedrock
    Amazon Comprehend
    Amazon Fraud Detector
    Amazon Kendra
    Amazon Lex
    Amazon Nova
    Amazon Personalize
    Amazon Polly
    Amazon Q Developer
    Amazon Q Business
    Amazon Rekognition
    Amazon SageMaker AI
    Amazon Textract
    Amazon Transcribe
    Amazon Translate
    AWS CloudTrail
    Amazon CloudWatch
    AWS Config
    AWS Trusted Advisor
    AWS Well-Architected Tool
    Amazon CloudFront
    Amazon VPC
    AWS Artifact
    AWS Audit Manager
    AWS Identity and Access Management (IAM)
    Amazon Inspector
    AWS Key Management Service (AWS KMS)
    Amazon Macie
    AWS Secrets Manager
    Amazon S3
    Amazon S3 Glacier
    Deep learning
    Neural networks
    Computer vision
    Natural language processing (NLP)
    Model training
    Inferencing (batch, real-time)
    Bias
    Fairness
    Large language models (LLMs)
    Supervised learning
    Unsupervised learning
    Reinforcement learning
    ML development lifecycle
    Data collection
    Exploratory data analysis (EDA)
    Data pre-processing
    Feature engineering
    Hyperparameter tuning
    Model evaluation
    Model deployment
    Model monitoring
    MLOps
    Accuracy
    Area Under the Curve (AUC)
    F1 score
    Tokens
    Chunking
    Embeddings
    Vectors
    Prompt engineering
    Transformer-based LLMs
    Foundation models (FMs)
    Multimodal models
    Diffusion models
    Hallucinations (GenAI)
    Interpretability (GenAI)
    Retrieval Augmented Generation (RAG)
    Vector databases
    Amazon Bedrock Knowledge Bases
    Amazon Bedrock Agents
    Chain-of-thought prompting
    Zero-shot prompting
    Single-shot prompting
    Few-shot prompting
    Prompt templates
    Pre-training (FMs)
    Fine-tuning (FMs)
    Continuous pre-training
    Distillation
    Instruction tuning
    Transfer learning
    Reinforcement learning from human feedback (RLHF)
    ROUGE (Recall-Oriented Understudy for Gisting Evaluation)
    BLEU (Bilingual Evaluation Understudy)
    BERTScore
    Responsible AI
    Inclusivity
    Robustness
    Safety
    Veracity
    Amazon Bedrock Guardrails
    Intellectual property infringement
    Overfitting
    Underfitting
    Amazon SageMaker Clarify
    Amazon SageMaker Model Monitor
    Transparent models
    Explainable models
    Amazon SageMaker Model Cards
    Human-centered design
    IAM roles, policies, permissions
    Encryption
    AWS PrivateLink
    AWS shared responsibility model
    Source citation
    Data lineage
    Data cataloging
    Data quality
    Privacy-enhancing technologies
    Data access control
    Data integrity
    Application security
    Threat detection
    Vulnerability management
    Infrastructure protection
    Prompt injection
    Encryption at rest and in transit
    Data governance strategies
    Data lifecycles
    Logging
    Residency
    Monitoring
    Observation
    Retention
    Generative AI Security Scoping Matrix

    CertSafari is not affiliated with, endorsed by, or officially connected to Amazon Web Services, Inc. Full disclaimer