Free Practice Questions for AWS Certified AI Practitioner (AIF-C01) Certification
Study with 364 exam-style practice questions designed to help you prepare for the AWS Certified AI Practitioner (AIF-C01).
Start Practicing
Random Questions
Practice with randomly mixed questions from all topics
Domain Mode
Practice questions from a specific topic area
Exam Information
Exam Details
Key information about AWS Certified AI Practitioner (AIF-C01)
Foundational (entry level)
AIF-C01
Multiple choice, multiple response, ordering, matching
700 out of 1000
Familiarity with core AWS services (Amazon EC2, Amazon S3, AWS Lambda, Amazon Bedrock, Amazon SageMaker AI), AWS shared responsibility model, AWS Identity and Access Management (IAM), and AWS service pricing models.
Individuals with up to 6 months of exposure to AI/ML technologies on AWS, who use but do not necessarily build AI/ML solutions.
50 scored questions, plus 15 unscored
Exam Topics & Skills Assessed
Skills measured (from the official study guide)
Domain 1.0: Fundamentals of AI and ML
1.1 Explain basic AI concepts and terminologies.
Objectives:
• Define basic AI terms (for example, AI, ML, deep learning, neural networks, computer vision, natural language processing [NLP], model, algorithm, training and inferencing, bias, fairness, fit, large language models [LLMs]).
• Describe the similarities and differences between AI, ML, GenAI, and deep learning.
• Describe various types of inferencing (for example, batch, real-time).
• Describe the different types of data in AI models (for example, labeled and unlabeled, tabular, time-series, image, text, structured and unstructured).
• Describe supervised learning, unsupervised learning, and reinforcement learning.
1.2 Identify practical use cases for AI.
Objectives:
• Recognize applications where AI/ML can provide value (for example, assist human decision making, solution scalability, automation).
• Determine when AI/ML solutions are not appropriate (for example, cost-benefit analyses, situations when a specific outcome is needed instead of a prediction).
• Select the appropriate ML techniques for specific use cases (for example, regression, classification, clustering).
• Identify examples of real-world AI applications (for example, computer vision, NLP, speech recognition, recommendation systems, fraud detection, forecasting).
• Explain the capabilities of AWS managed AI/ML services (for example, Amazon SageMaker AI, Amazon Transcribe, Amazon Translate, Amazon Comprehend, Amazon Lex, Amazon Polly).
1.3 Describe the ML development lifecycle.
Objectives:
• Describe components of an ML pipeline (for example, data collection, exploratory data analysis [EDA], data pre-processing, feature engineering, model training, hyperparameter tuning, evaluation, deployment, monitoring).
• Describe sources of ML models (for example, open source pre-trained models, training custom models).
• Describe methods to use a model in production (for example, managed API service, self-hosted API).
• Identify relevant AWS services and features for each stage of an ML pipeline (for example, SageMaker AI, SageMaker Data Wrangler, SageMaker Feature Store, SageMaker Model Monitor).
• Describe fundamental concepts of ML operations (MLOps) (for example, experimentation, repeatable processes, scalable systems, managing technical debt, achieving production readiness, model monitoring, model re-training).
• Describe model performance metrics (for example, accuracy, Area Under the Curve [AUC], F1 score) and business metrics (for example, cost per user, development costs, customer feedback, return on investment [ROI]) to evaluate ML models.
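The model performance metrics in the last objective (accuracy, F1 score) follow directly from the confusion-matrix counts. A minimal sketch in plain Python, with toy labels invented for illustration (a real pipeline would use a library such as scikit-learn):

```python
# Toy binary-classification results (illustrative values only).
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# Confusion-matrix counts.
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

accuracy = (tp + tn) / len(y_true)                   # fraction of correct predictions
precision = tp / (tp + fp)                           # correct share of predicted positives
recall = tp / (tp + fn)                              # share of actual positives found
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean of precision and recall

print(accuracy, f1)  # → 0.75 0.75
```

F1 matters on imbalanced data, where accuracy alone can look good while the model misses most positives.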
Domain 2.0: Fundamentals of GenAI
2.1 Explain the basic concepts of GenAI.
Objectives:
• Define foundational GenAI concepts (for example, tokens, chunking, embeddings, vectors, prompt engineering, transformer-based LLMs, foundation models [FMs], multimodal models, diffusion models).
• Identify potential use cases for GenAI models (for example, image, video, and audio generation; summarization; AI assistants; translation; code generation; customer service agents; search; recommendation engines).
• Describe the foundation model lifecycle (for example, data selection, model selection, pre-training, fine-tuning, evaluation, deployment, feedback).
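Chunking, one of the concepts in 2.1, is commonly implemented as a sliding window over the text so that adjacent chunks overlap and no context is lost at the boundaries. A minimal word-based sketch (real pipelines usually chunk by tokens, not words; the function name and parameters are illustrative):

```python
def chunk_words(text: str, size: int = 5, overlap: int = 2) -> list[str]:
    """Split text into overlapping word windows (illustrative chunker)."""
    words = text.split()
    step = size - overlap  # advance by size minus overlap each window
    return [
        " ".join(words[i:i + size])
        for i in range(0, len(words), step)
        if words[i:i + size]
    ]

chunks = chunk_words("the quick brown fox jumps over the lazy dog",
                     size=4, overlap=1)
print(chunks)
# → ['the quick brown fox', 'fox jumps over the', 'the lazy dog']
```

Each chunk would then be converted to an embedding vector and stored for retrieval.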
2.2 Understand the capabilities and limitations of GenAI for solving business problems.
Objectives:
• Describe the advantages of GenAI (for example, adaptability, responsiveness, simplicity).
• Identify disadvantages of GenAI solutions (for example, hallucinations, interpretability, inaccuracy, nondeterminism).
• Identify factors to consider when selecting GenAI models (for example, model types, performance requirements, capabilities, constraints, compliance).
• Determine business value and metrics for GenAI applications (for example, cross-domain performance, efficiency, conversion rate, average revenue per user, accuracy, customer lifetime value).
2.3 Describe AWS infrastructure and technologies for building GenAI applications.
Objectives:
• Identify AWS services and features to develop GenAI applications (for example, Amazon SageMaker JumpStart, Amazon Bedrock, PartyRock, Amazon Q, Amazon Bedrock Data Automation).
• Describe the advantages of using AWS GenAI services to build applications (for example, accessibility, lower barrier to entry, efficiency, cost-effectiveness, speed to market, ability to meet business objectives).
• Describe the benefits of AWS infrastructure for GenAI applications (for example, security, compliance, responsibility, safety).
• Describe cost tradeoffs of AWS GenAI services (for example, responsiveness, availability, redundancy, performance, regional coverage, token-based pricing, provisioned throughput, custom models).
Domain 3.0: Applications of Foundation Models
3.1 Describe design considerations for applications that use foundation models (FMs).
Objectives:
• Identify selection criteria to choose pre-trained models (for example, cost, modality, latency, multi-lingual, model size, model complexity, customization, input/output length, prompt caching).
• Describe the effect of inference parameters on model responses (for example, temperature, input/output length).
• Define Retrieval Augmented Generation (RAG) and describe its business applications (for example, Amazon Bedrock Knowledge Bases).
• Identify AWS services that help store embeddings within vector databases (for example, Amazon OpenSearch Service, Amazon Aurora, Amazon Neptune, Amazon RDS for PostgreSQL).
• Explain the cost tradeoffs of various approaches to FM customization (for example, pre-training, fine-tuning, in-context learning, RAG).
• Describe the role of agents in multi-step tasks (for example, Amazon Bedrock Agents, agentic AI, Model Context Protocol).
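The RAG pattern in 3.1 reduces to embedding the user's query, retrieving the most similar stored chunks from a vector database, and passing them to the model as context. A toy sketch of the retrieval step using made-up 3-dimensional embeddings; a real system would call an embedding model and a service such as Amazon OpenSearch Service:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical pre-computed chunk embeddings (a vector DB would store these).
index = {
    "Bedrock supports provisioned throughput.": [0.9, 0.1, 0.0],
    "S3 stores objects in buckets.":            [0.1, 0.9, 0.2],
    "Knowledge Bases manage RAG retrieval.":    [0.8, 0.2, 0.1],
}

query_embedding = [1.0, 0.0, 0.1]  # pretend output of an embedding model
best = max(index, key=lambda chunk: cosine(query_embedding, index[chunk]))
print(best)  # → Bedrock supports provisioned throughput.
```

The retrieved chunk(s) are then prepended to the prompt, which is why RAG is listed as a customization approach cheaper than fine-tuning: it changes the context, not the model weights.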
3.2 Choose effective prompt engineering techniques.
Objectives:
• Define the concepts and constructs of prompt engineering (for example, context, instruction, negative prompts, model latent space, prompt routing).
• Define techniques for prompt engineering (for example, chain-of-thought, zero-shot, single-shot, few-shot, prompt templates).
• Identify and describe the benefits and best practices for prompt engineering (for example, response quality improvement, experimentation, guardrails, discovery, specificity and concision, using multiple comments).
• Define potential risks and limitations of prompt engineering (for example, exposure, poisoning, hijacking, jailbreaking).
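Few-shot prompting and prompt templates from 3.2 are often combined: the template interleaves a handful of labeled demonstrations with the new input. A minimal sketch; the example reviews and the sentiment task are invented for illustration:

```python
# Hypothetical labeled examples used as few-shot demonstrations.
examples = [
    ("The package arrived broken.", "negative"),
    ("Support resolved my issue fast.", "positive"),
]

def few_shot_prompt(text: str) -> str:
    """Build a sentiment-classification prompt with in-context examples."""
    shots = "\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
    return (
        "Classify the sentiment of each review as positive or negative.\n\n"
        f"{shots}\nReview: {text}\nSentiment:"
    )

prompt = few_shot_prompt("Great value for the price.")
print(prompt)
```

With zero examples this would be zero-shot prompting; with one, single-shot. The trailing "Sentiment:" cue steers the model to complete the pattern with just a label.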
3.3 Describe the training and fine-tuning process for FMs.
Objectives:
• Describe the key elements of training an FM (for example, pre-training, fine-tuning, continuous pre-training, distillation).
• Define methods for fine-tuning an FM (for example, instruction tuning, adapting models for specific domains, transfer learning, continuous pre-training).
• Describe how to prepare data to fine-tune an FM (for example, data curation, governance, size, labeling, representativeness, reinforcement learning from human feedback [RLHF]).
3.4 Describe methods to evaluate FM performance.
Objectives:
• Determine approaches to evaluate FM performance (for example, human evaluation, benchmark datasets, Amazon Bedrock Model Evaluation).
• Identify relevant metrics to assess FM performance (for example, Recall-Oriented Understudy for Gisting Evaluation [ROUGE], Bilingual Evaluation Understudy [BLEU], BERTScore).
• Determine whether an FM effectively meets business objectives (for example, productivity, user engagement, task engineering).
• Identify approaches to evaluate the performance of applications built with FMs (for example, RAG, agents, workflows).
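ROUGE, listed in 3.4, scores n-gram overlap between a generated summary and a reference; ROUGE-1 recall is the fraction of reference unigrams that also appear in the candidate. A toy sketch that ignores the stemming and clipped multiset counting full implementations handle (sentences invented for illustration):

```python
def rouge1_recall(candidate: str, reference: str) -> float:
    """Fraction of unique reference words that also occur in the candidate."""
    cand = set(candidate.lower().split())
    ref = set(reference.lower().split())
    return len(cand & ref) / len(ref)

score = rouge1_recall(
    candidate="the model summarizes the report",
    reference="the model summarizes the quarterly report well",
)
print(round(score, 3))  # → 0.667 (4 of the 6 unique reference words matched)
```

BLEU works in the opposite direction (precision of candidate n-grams against the reference), which is why ROUGE is associated with summarization and BLEU with translation.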
Domain 4.0: Guidelines for Responsible AI
4.1 Explain the development of AI systems that are responsible.
Objectives:
• Identify features of responsible AI (for example, bias, fairness, inclusivity, robustness, safety, veracity).
• Explain how to use tools to identify features of responsible AI (for example, Amazon Bedrock Guardrails).
• Define responsible practices to select a model (for example, environmental considerations, sustainability).
• Identify legal risks of working with GenAI (for example, intellectual property infringement claims, biased model outputs, loss of customer trust, end user risk, hallucinations).
• Identify characteristics of datasets (for example, inclusivity, diversity, curated data sources, balanced datasets).
• Describe effects of bias and variance (for example, effects on demographic groups, inaccuracy, overfitting, underfitting).
• Describe tools to detect and monitor bias, trustworthiness, and truthfulness (for example, analyzing label quality, human audits, subgroup analysis, Amazon SageMaker Clarify, SageMaker Model Monitor, Amazon Augmented AI [Amazon A2I]).
4.2 Recognize the importance of transparent and explainable models.
Objectives:
• Describe the differences between models that are transparent and explainable and models that are not.
• Describe tools to identify transparent and explainable models (for example, SageMaker Model Cards, open source models, data, licensing).
• Identify tradeoffs between model safety and transparency (for example, measuring interpretability and performance).
• Describe principles of human-centered design for explainable AI.
Domain 5.0: Security, Compliance, and Governance for AI Solutions
5.1 Explain methods to secure AI systems.
Objectives:
• Identify AWS services and features to secure AI systems (for example, IAM roles, policies, and permissions; encryption; Amazon Macie; AWS PrivateLink; AWS shared responsibility model).
• Describe the concept of source citation and documenting data origins (for example, data lineage, data cataloging, Amazon SageMaker Model Cards).
• Describe best practices for secure data engineering (for example, assessing data quality, implementing privacy-enhancing technologies, data access control, data integrity).
• Describe security and privacy considerations for AI systems (for example, application security, threat detection, vulnerability management, infrastructure protection, prompt injection, encryption at rest and in transit).
5.2 Recognize governance and compliance regulations for AI systems.
Objectives:
• Identify AWS services and features to assist with governance and regulation compliance (for example, AWS Config, Amazon Inspector, AWS Audit Manager, AWS Artifact, AWS CloudTrail, AWS Trusted Advisor).
• Describe data governance strategies (for example, data lifecycles, logging, residency, monitoring, observation, retention).
• Describe processes to follow governance protocols (for example, policies, review cadence, review strategies, governance frameworks such as the Generative AI Security Scoping Matrix, transparency standards, team training requirements).