The Certified Tester AI Testing (CT-AI) qualification is aimed at people who want to extend their understanding of artificial intelligence and/or deep (machine) learning, specifically the testing of AI-based systems and the use of AI in testing.
What are the entry criteria?
To achieve the CT-AI certification, candidates must hold the ISTQB® Certified Tester Foundation Level (CTFL) certificate.
Contents
The AI Testing syllabus is part of the ISTQB® Specialist stream; its contents are summarized by the business outcomes and learning objectives below.
Business Outcomes
Individuals who hold the ISTQB® Certified Tester AI Testing certification should be able to accomplish the following business outcomes:
Understand the current state and expected trends of AI
Experience the implementation and testing of a ML model and recognize where testers can best influence its quality
Understand the challenges associated with testing AI-based systems, such as their self-learning capabilities, bias, ethics, complexity, non-determinism, transparency and explainability
Contribute to the test strategy for an AI-based system
Design and execute test cases for AI-based systems
Recognize the special requirements for the test infrastructure to support the testing of AI-based systems
Understand how AI can be used to support software testing
Learning Objectives
Individuals who hold the ISTQB® Certified Tester AI Testing certification should be able to demonstrate their skills in the following areas:
Describe the AI effect and show how it influences the definition of AI
Distinguish between narrow AI, general AI, and super AI
Differentiate between AI-based systems and conventional systems
Recognize the different technologies used to implement AI
Identify popular AI development frameworks
Compare the choices available for hardware to implement AI-based systems
Explain the concept of AI as a Service (AIaaS)
Explain the use of pre-trained AI models and the risks associated with them
Describe how standards apply to AI-based systems
Explain the importance of flexibility and adaptability as characteristics of AI-based systems
Explain the relationship between autonomy and AI-based systems
Explain the importance of managing evolution for AI-based systems
Describe the different causes and types of bias for AI-based systems
Discuss the ethical principles that should be respected in the development, deployment and use of AI-based systems
Explain the occurrence of side effects and reward hacking in AI-based systems
Explain how transparency, interpretability and explainability apply to AI-based systems
Recall the characteristics that make it difficult to use AI-based systems in safety-related applications
Describe classification and regression as part of supervised learning
Describe clustering and association as part of unsupervised learning
Describe reinforcement learning
Summarize the workflow used to create an ML system
Given a project scenario, identify an appropriate ML approach (from classification, regression, clustering, association, or reinforcement learning)
Explain the factors involved in the selection of ML algorithms
Summarize the concepts of underfitting and overfitting
Demonstrate underfitting and overfitting
Describe the activities and challenges related to data preparation
Perform data preparation in support of the creation of an ML model
Contrast the use of training, validation and test datasets in the development of an ML model
Identify training and test datasets and create an ML model
Describe typical dataset quality issues
Recognize how poor data quality can cause problems with the resultant ML model
Recall the different approaches to the labelling of data in datasets for supervised learning
Recall reasons for the data in datasets being mislabeled
Calculate the ML functional performance metrics from a given set of confusion matrix data (a worked sketch follows this list)
Contrast and compare the concepts behind the ML functional performance metrics for classification, regression and clustering methods
Summarize the limitations of using ML functional performance metrics to determine the quality of the ML system
Select appropriate ML functional performance metrics and/or their values for a given ML model and scenario
Evaluate the created ML model using selected ML functional performance metrics
Explain the use of benchmark suites in the context of ML
Explain the structure and working of a neural network including a DNN
Experience the implementation of a perceptron (a minimal implementation sketch follows this list)
Describe the different coverage measures for neural networks
Explain how system specifications for AI-based systems can create challenges in testing
Describe how AI-based systems are tested at each test level
Recall those factors associated with test data that can make testing AI-based systems difficult
Explain automation bias and how this affects testing
Describe the documentation of an AI component and understand how documentation supports the testing of AI-based systems
Explain the need for frequently testing the trained model to handle concept drift
For a given scenario, determine a test approach to be followed when developing an ML system
Explain the challenges in testing created by the self-learning of AI-based systems
Explain how autonomous AI-based systems are tested
Explain how to test for bias in an AI-based system
Explain the challenges in testing created by the probabilistic and non-deterministic nature of AI-based systems
Explain the challenges in testing created by the complexity of AI-based systems
Describe how the transparency, interpretability and explainability of AI-based systems can be tested
Use a tool to show how explainability can be used by testers
Explain the challenges in creating test oracles resulting from the specific characteristics of AI-based systems
Select appropriate test objectives and acceptance criteria for the AI-specific quality characteristics of a given AI-based system
Explain how the testing of ML systems can help prevent adversarial attacks and data poisoning
Explain how pairwise testing is used for AI-based systems
Apply pairwise testing to derive and execute test cases for an AI-based system (see the sketch after this list)
Explain how back-to-back testing is used for AI-based systems
Explain how A/B testing is applied to the testing of AI-based systems
Apply metamorphic testing for the testing of AI-based systems
Apply metamorphic testing to derive test cases for a given scenario and execute them (see the example after this list)
Explain how experience-based testing can be applied to the testing of AI-based systems
Apply exploratory testing to an AI-based system
For a given scenario, select appropriate test techniques when testing an AI-based system
Describe the main factors that differentiate the test environments for AI-based systems from those required for conventional systems
Describe the benefits provided by virtual test environments in the testing of AI-based systems
Categorize the AI technologies used in software testing
Discuss, using examples, those activities in testing where AI is less likely to be used
Explain how AI can assist in supporting the analysis of new defects
Explain how AI can assist in test case generation
Explain how AI can assist in optimization of regression test suites
Explain how AI can assist in defect prediction
Implement a simple AI-based defect prediction system (see the sketch after this list)
Explain the use of AI in testing user interfaces
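As a worked illustration of the confusion-matrix objective above, the following minimal Python sketch computes accuracy, precision, recall and F1-score for a binary classifier; the counts are invented example values, not figures from the syllabus.

```python
# Functional performance metrics from a binary confusion matrix.
# The counts below are made-up example values.
tp, fp, fn, tn = 85, 10, 5, 100

accuracy  = (tp + tn) / (tp + tn + fp + fn)   # share of all predictions that are correct
precision = tp / (tp + fp)                    # correct positives among predicted positives
recall    = tp / (tp + fn)                    # correct positives among actual positives
f1_score  = 2 * precision * recall / (precision + recall)

print(f"accuracy={accuracy:.3f}  precision={precision:.3f}  "
      f"recall={recall:.3f}  F1={f1_score:.3f}")
```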
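For the perceptron objective, here is a minimal sketch using only NumPy that trains a single perceptron on the logical AND function; the learning rate and number of epochs are arbitrary illustrative choices.

```python
import numpy as np

# Minimal perceptron: learn the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

weights = np.zeros(2)
bias = 0.0
learning_rate = 0.1            # illustrative value

for epoch in range(10):        # a few passes are enough for AND
    for xi, target in zip(X, y):
        prediction = 1 if np.dot(weights, xi) + bias > 0 else 0
        error = target - prediction
        weights += learning_rate * error * xi   # perceptron update rule
        bias += learning_rate * error

for xi in X:
    print(xi, "->", 1 if np.dot(weights, xi) + bias > 0 else 0)
```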
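The pairwise-testing objectives can be sketched as follows: a greedy algorithm keeps selecting test cases until every pair of parameter values appears in at least one test. The parameters of the hypothetical AI-based image classifier under test are invented for the example.

```python
from itertools import combinations, product

def pairwise_tests(params):
    """Greedily build a small set of test cases covering every value pair."""
    keys = list(params)
    # All (parameter index, value) pairs that must appear together at least once.
    uncovered = set()
    for i, j in combinations(range(len(keys)), 2):
        for vi, vj in product(params[keys[i]], params[keys[j]]):
            uncovered.add(((i, vi), (j, vj)))

    tests = []
    while uncovered:
        # Pick the full combination that covers the most still-uncovered pairs.
        best, best_gain = None, -1
        for combo in product(*(params[k] for k in keys)):
            covered = {((i, combo[i]), (j, combo[j]))
                       for i, j in combinations(range(len(keys)), 2)}
            gain = len(covered & uncovered)
            if gain > best_gain:
                best, best_gain = combo, gain
        tests.append(dict(zip(keys, best)))
        uncovered -= {((i, best[i]), (j, best[j]))
                      for i, j in combinations(range(len(keys)), 2)}
    return tests

# Hypothetical input parameters of an AI-based image classifier under test.
parameters = {
    "image_format": ["jpeg", "png"],
    "resolution": ["low", "high"],
    "lighting": ["day", "night", "indoor"],
}
for case in pairwise_tests(parameters):
    print(case)
```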
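For the metamorphic-testing objectives, the sketch below checks a simple metamorphic relation: a sentiment classifier's verdict should not change when neutral trailing punctuation is appended to the input. The classify_sentiment function is a hypothetical stand-in for the real model under test.

```python
def classify_sentiment(text: str) -> str:
    """Hypothetical stand-in for the ML model under test."""
    positive_words = {"good", "great", "excellent", "love"}
    words = {w.strip(".,!?").lower() for w in text.split()}
    return "positive" if words & positive_words else "negative"

def follow_up_input(text: str) -> str:
    """Metamorphic relation: appending neutral punctuation must not change the verdict."""
    return text + " ."

source_inputs = ["I love this product", "This is terrible", "Great value overall"]

for source in source_inputs:
    follow_up = follow_up_input(source)
    original, transformed = classify_sentiment(source), classify_sentiment(follow_up)
    verdict = "PASS" if original == transformed else "FAIL"
    print(f"{verdict}: '{source}' -> {original}, follow-up -> {transformed}")
```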
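For the defect-prediction objective, the following sketch trains a classifier on per-module code metrics (lines of code, cyclomatic complexity, recent change count) to predict defect-prone modules. Both the feature set and the dataset are synthetic assumptions for illustration; the example relies on scikit-learn and NumPy.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Synthetic per-module metrics (for illustration only).
rng = np.random.default_rng(42)
n = 200
loc = rng.integers(50, 2000, n)           # lines of code
complexity = rng.integers(1, 40, n)       # cyclomatic complexity
changes = rng.integers(0, 30, n)          # recent change count
X = np.column_stack([loc, complexity, changes])
# Synthetic labelling rule: complex, frequently changed modules are defect-prone.
y = ((complexity > 20) & (changes > 10)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```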
Syllabus
The ISTQB® CT-AI certification exam is based on the Certified Tester AI Testing (CT-AI) syllabus.
The International Software Testing Qualifications Board (ISTQB®) provides the syllabus to its Member Boards for them to accredit training providers. The syllabus is also used to derive examination questions (including in local language where available).
Training providers use the syllabus to produce courseware and to determine appropriate teaching methods for accreditation. The syllabus also helps candidates prepare for the examination.
The Specialist stream AI Testing exam consists of 40 multiple-choice questions, with a pass mark of 65%, to be completed within 60 minutes. Participants who take the exam in a language other than their native language receive 25% additional time, i.e. 15 extra minutes, for a total of 75 minutes.
Module        Number of questions    Exam length (minutes)    Exam length +25% (minutes)
AI Testing    40                     60                       75
Accredited training providers
Exams may be taken as part of a course delivered by an Accredited Training Provider or taken independently at an examination center or in a public exam.
The typical duration of the AI Testing Certification Training offered by an Accredited Training Provider is 4 days. Completion of an accredited training course is not a prerequisite for participating in the exam.