AI Project Quality Management

Expert-defined terms from the Professional Certificate in Project Management Methodologies for Artificial Intelligence course at London School of Business and Administration. Free to read, free to share, paired with a globally recognised certification pathway.

Agile Methodology #

Concept #

A project management approach that values flexibility and collaboration, allowing for frequent iterations and adjustments to project requirements.

Explanation #

Agile methodology emphasizes adaptive planning, evolutionary development, early delivery, and continual improvement, and it encourages flexible responses to change. Agile projects are completed in small increments, with each iteration building on the previous one, allowing for quicker response to customer needs and changes in the market.

Agile methodology #

A project management approach that values flexibility and collaboration, allowing for iterative development and continuous improvement. It is commonly used in AI projects to manage changing requirements and technology advancements.

Artificial Intelligence (AI) #

The development of computer systems that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.

AI model #

A mathematical representation of an AI system, designed to learn from data and make predictions or decisions based on new inputs.

AI project governance #

The framework and processes for managing and overseeing AI projects, ensuring alignment with organizational goals, ethical considerations, and risk management.

AI project lifecycle #

The sequence of stages in an AI project, from initial planning to deployment and maintenance, including data preparation, model development, testing, and evaluation.

AI project manager #

A professional responsible for leading and coordinating AI projects, ensuring timely delivery, budget adherence, and quality standards.

AI quality assurance #

The process of verifying and validating AI models and systems to ensure they meet specified functional and non-functional requirements and are free from defects.

AI requirements management #

The process of defining, documenting, and maintaining AI project requirements, ensuring they align with business objectives and stakeholder needs.

AI risk management #

The process of identifying, assessing, and mitigating risks associated with AI projects, including technical, ethical, legal, and reputational risks.

AI stakeholder management #

The process of engaging, communicating, and aligning expectations with AI project stakeholders, including team members, customers, and executives.

Algorithm #

A well-defined set of instructions for solving a problem or performing a task, often used in AI to process and analyze data.

Big Data #

Large, complex datasets that cannot be processed or analyzed using traditional data processing tools, requiring specialized software and hardware.

Computer vision #

A subfield of AI that deals with enabling computers to interpret and understand visual information from the world, such as images and videos.

Deep learning #

A type of machine learning algorithm that uses multiple layers of artificial neural networks to learn and make decisions based on complex data.

Data preprocessing #

The process of cleaning, transforming, and preparing raw data for use in AI models, including data cleaning, normalization, and feature engineering.
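
One normalization step mentioned above can be sketched in plain Python. This is an illustrative example, not the method any particular library uses; min-max scaling rescales numeric values into the [0, 1] range.

```python
# Min-max normalization: rescale values so the smallest maps to 0.0
# and the largest to 1.0. Illustrative sketch only.

def min_max_normalize(values):
    lo, hi = min(values), max(values)
    span = hi - lo
    if span == 0:  # all values identical: map everything to 0.0
        return [0.0 for _ in values]
    return [(v - lo) / span for v in values]

ages = [18, 25, 40, 60]
print(min_max_normalize(ages))  # smallest -> 0.0, largest -> 1.0
```

Real pipelines would typically use library utilities for this, but the arithmetic is exactly as shown.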

Data science #

An interdisciplinary field that uses scientific methods, processes, algorithms, and systems to extract knowledge and insights from structured and unstructured data.

Decision tree #

A type of machine learning algorithm that uses a tree-like model to make decisions based on input features and their values.
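
A trained decision tree behaves like a cascade of feature tests. As a sketch (the feature names and thresholds below are invented for illustration, not learned from data), the tree-like model reduces to nested conditionals:

```python
# A hand-written stand-in for a learned decision tree: each internal
# node tests one feature and routes the input onward until a leaf
# returns a label. Thresholds here are made up for illustration.

def classify_loan(income, debt_ratio):
    if income < 30000:            # root split on income
        return "reject"
    if debt_ratio > 0.4:          # second split on debt-to-income ratio
        return "review"
    return "approve"              # leaf: both checks passed

print(classify_loan(50000, 0.2))  # "approve"
```

A real learning algorithm chooses the split features and thresholds automatically from the training data; only the resulting structure looks like this.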

Evaluation metric #

A quantitative measure used to assess the performance of AI models, such as accuracy, precision, recall, and F1 score.
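
The four metrics named above can be computed by hand from the confusion-matrix counts. A minimal pure-Python sketch for binary labels:

```python
# Accuracy, precision, recall, and F1 computed from true vs.
# predicted binary labels (1 = positive class, 0 = negative class).

def binary_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

m = binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(m)  # accuracy 0.6; precision, recall, and F1 all 2/3 here
```

Choosing which metric to optimize is itself a quality-management decision: precision penalizes false alarms, recall penalizes misses, and F1 balances the two.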

Explainability #

The ability of an AI system to provide clear, understandable explanations for its decisions and actions, important for transparency and accountability.

Feature engineering #

The process of selecting and transforming input variables, or features, to improve AI model performance and interpretability.
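
Two common feature-engineering moves can be sketched in a few lines. The feature names below (a debt-to-income ratio, a color category) are hypothetical examples, not taken from any particular dataset:

```python
# Two feature-engineering sketches: one-hot encoding a categorical
# value, and deriving a ratio feature from two raw inputs.

def one_hot(value, categories):
    """Encode a categorical value as a 0/1 vector over known categories."""
    return [1 if value == c else 0 for c in categories]

def debt_to_income(debt, income):
    """Derived feature: a ratio is often more predictive than raw amounts."""
    return debt / income

print(one_hot("red", ["red", "green", "blue"]))  # [1, 0, 0]
print(debt_to_income(20, 80))                    # 0.25
```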

Generalization #

The ability of an AI model to perform well on new, unseen data, rather than just the data it was trained on.

Hyperparameter tuning #

The process of adjusting the parameters of an AI model to optimize its performance, such as learning rate, regularization strength, and batch size.
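
The simplest tuning strategy, grid search, tries every combination of candidate values and keeps the best-scoring one. In this sketch `train_and_score` is a hypothetical stand-in for fitting and evaluating a real model; here it is a toy function whose optimum is known:

```python
# Grid search over two hyperparameters. In practice train_and_score
# would fit a model and return a validation score; here it is a toy
# surrogate that peaks at lr=0.1, batch_size=32 (an assumption made
# purely so the example has a known answer).
from itertools import product

def train_and_score(learning_rate, batch_size):
    return -abs(learning_rate - 0.1) - abs(batch_size - 32) / 100

def grid_search(learning_rates, batch_sizes):
    best_params, best_score = None, float("-inf")
    for lr, bs in product(learning_rates, batch_sizes):
        score = train_and_score(lr, bs)
        if score > best_score:
            best_params, best_score = (lr, bs), score
    return best_params, best_score

params, _ = grid_search([0.01, 0.1, 1.0], [16, 32, 64])
print(params)  # (0.1, 32)
```

Grid search is exhaustive and therefore expensive; random or Bayesian search are common alternatives when the grid is large.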

Machine learning #

A subfield of AI that focuses on developing algorithms that can learn and improve from data without being explicitly programmed.

Model training #

The process of feeding data into an AI model to adjust its parameters and improve its performance.
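
That parameter-adjustment loop can be shown in miniature with gradient descent on a one-parameter model. This is a pedagogical sketch, not a production training loop:

```python
# Training in miniature: gradient descent adjusts a single weight w
# to minimize mean squared error on (x, y) pairs for the model y ≈ w*x.

def train(data, lr=0.01, epochs=200):
    w = 0.0
    for _ in range(epochs):
        # Gradient of mean squared error with respect to w
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

data = [(1, 2), (2, 4), (3, 6)]   # generated by y = 2x
w = train(data)
print(round(w, 3))  # converges close to 2.0
```

Real models repeat the same idea over millions of parameters, usually on mini-batches of data with automatic differentiation.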

Natural language processing (NLP) #

A subfield of AI that deals with enabling computers to understand, interpret, and generate human language.

Neural network #

A type of machine learning algorithm modeled after the structure and function of the human brain, used for tasks such as image recognition and speech recognition.
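
The smallest building block of such a network is a single artificial neuron: a weighted sum of inputs plus a bias, passed through a nonlinear activation. The weights below are illustrative, not learned:

```python
# One artificial neuron with a sigmoid activation. A neural network
# stacks many of these in layers; the weights here are made up.
import math

def neuron(inputs, weights, bias):
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))   # sigmoid squashes z into (0, 1)

print(neuron([1.0, 0.5], [0.4, -0.6], 0.1))  # sigmoid(0.2) ≈ 0.55
```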

Overfitting #

A situation where an AI model performs well on training data but poorly on new, unseen data due to over-complexity or over-adaptation to the training data.
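
An extreme caricature makes the failure mode concrete: a "model" that simply memorizes its training pairs is perfect on seen data and useless on anything new.

```python
# Overfitting caricature: memorizing the training set gives a perfect
# training score but no ability to generalize to unseen inputs.

def fit_by_memorizing(pairs):
    table = dict(pairs)
    def predict(x, default=0):
        return table.get(x, default)   # unseen input: blind guess
    return predict

training_data = [(1, 1), (2, 4), (3, 9)]   # generated by y = x**2
model = fit_by_memorizing(training_data)
print(model(2))   # 4  (perfect on training data)
print(model(5))   # 0  (fails to generalize; the true answer is 25)
```

Real overfitting is subtler (an over-complex model fitting noise rather than a literal lookup table), but the symptom is the same: a large gap between training and validation performance.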

Predictive analytics #

The use of statistical models and machine learning algorithms to predict future outcomes based on historical data.

Reinforcement learning #

A type of machine learning where an agent learns to make decisions by interacting with an environment and receiving rewards or penalties for its actions.

Regression #

A type of machine learning algorithm used for predicting continuous outcomes, such as the price of a house or the likelihood of a disease.
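
The simplest case, fitting a straight line to data, has a closed-form least-squares solution that can be written in plain Python:

```python
# Simple linear regression via the closed-form least-squares formulas
# for slope and intercept (no iterative training needed).

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return slope, intercept

xs = [1, 2, 3, 4]
ys = [3, 5, 7, 9]          # exactly y = 2x + 1
slope, intercept = fit_line(xs, ys)
print(slope, intercept)    # 2.0 1.0
```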

Robotics #

The branch of technology that deals with the design, construction, and operation of robots, often used in AI applications for automation and control.

Scikit-learn #

An open-source machine learning library for Python, widely used in AI projects for data preprocessing, model training, and evaluation.

Supervised learning #

A type of machine learning where an algorithm is trained on labeled data, with input-output pairs, and learns to predict outputs for new inputs.

TensorFlow #

An open-source machine learning library for Python, widely used in AI projects for building and training deep learning models.

Testing #

The process of evaluating the performance and functionality of an AI model or system, including unit testing, integration testing, and acceptance testing.

Transfer learning #

The process of using a pre-trained AI model as a starting point for a new model, allowing for faster training and better performance on smaller datasets.

Unsupervised learning #

A type of machine learning where an algorithm is trained on unlabeled data and learns to discover patterns and relationships in the data.
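
A minimal example of pattern discovery without labels is one-dimensional k-means clustering with k=2. This is a simplified sketch (crude initialization, fixed iteration count), assuming the data actually forms two groups:

```python
# Unsupervised learning sketch: 1-D k-means with k=2. No labels are
# given; the algorithm discovers two clusters on its own.

def kmeans_1d(points, iters=10):
    c1, c2 = min(points), max(points)        # crude initial centroids
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(g1) / len(g1)               # move each centroid to the
        c2 = sum(g2) / len(g2)               # mean of its assigned points
    return sorted([c1, c2])

data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
print(kmeans_1d(data))  # centroids near 1.0 and 9.0
```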

Validation #

The process of assessing the performance and accuracy of an AI model or system, using a separate validation dataset.
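
Creating that separate validation dataset is usually a simple holdout split. A sketch, with a fixed random seed so the split is reproducible:

```python
# Holdout validation sketch: shuffle, then split the data so the
# model can be scored on examples it never saw during fitting.
import random

def train_val_split(data, val_fraction=0.2, seed=42):
    shuffled = data[:]               # copy; leave the original intact
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * (1 - val_fraction))
    return shuffled[:cut], shuffled[cut:]

train_set, val_set = train_val_split(list(range(10)))
print(len(train_set), len(val_set))  # 8 2
```

More thorough schemes (k-fold cross-validation) repeat this split several times and average the scores, at the cost of extra training runs.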

Visualization #

The process of creating graphical representations of data or AI model results, to facilitate understanding and communication.

The glossary terms above provide an overview of the key concepts in AI project quality management. These terms can help learners better understand the field and apply best practices in their AI projects.
