AI Ethics and Military Applications
Expert-defined terms from the Professional Certificate in AI for Military Defense course at the London School of Business and Administration. Free to read, free to share.
Artificial Intelligence (AI) #
The simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using the rules to reach approximate or definite conclusions), and self-correction.
AI Ethics #
A branch of ethics that studies the ethical implications of artificial intelligence and its applications. It includes the development of ethical guidelines, principles, and regulations for the design, deployment, and use of AI systems.
Algorithmic Bias #
The phenomenon where algorithms produce results that are systematically prejudiced or discriminatory due to biased data or flawed assumptions in the algorithm design.
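One common way to surface this kind of systematic disparity is to compare a model's decision rates across groups (a "demographic parity" check). The sketch below uses invented decision records and group labels purely for illustration:

```python
# Hypothetical decision log from some automated system; the groups and
# outcomes are invented for illustration only.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(records, group):
    """Fraction of records belonging to `group` that were approved."""
    subset = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

rate_a = approval_rate(decisions, "A")
rate_b = approval_rate(decisions, "B")
disparity = abs(rate_a - rate_b)
print(f"Approval rates: A={rate_a:.2f}, B={rate_b:.2f}, gap={disparity:.2f}")
```

A large gap does not by itself prove the algorithm is unfair, but it is the kind of signal that prompts an audit of the training data and design assumptions.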
Autonomous Weapons #
Weapons that can select and engage targets without human intervention. Also known as "killer robots," they raise ethical concerns due to the potential for unintended harm and the lack of human accountability.
Biometric Data #
Information related to the unique physical or behavioral characteristics of a person, such as fingerprints, facial recognition, or gait analysis, which can be used for identification and authentication purposes.
Chinese Room Argument #
A thought experiment proposed by philosopher John Searle that challenges the concept of "strong AI" or artificial general intelligence. It argues that even if a machine can mimic human-like behavior, it does not necessarily possess true understanding or consciousness.
Computational Propaganda #
The use of algorithms, automation, and big data to manipulate public opinion and influence political outcomes. It can involve the spread of disinformation, fake news, and propaganda through social media and other online platforms.
Deepfake #
A digital manipulation technique that uses machine learning algorithms to create realistic-looking videos or audio recordings of people doing or saying things they never actually did or said.
Dual-Use Technology #
Technology that has both civilian and military applications. AI is a prominent example of dual-use technology, as it can be used for a wide range of purposes, including military defense, healthcare, finance, and entertainment.
Explainable AI (XAI) #
An approach to AI that emphasizes the importance of making AI systems transparent, understandable, and explainable to human users. This is particularly important in high-stakes domains such as military defense, where accountability and trust are critical.
Fairness #
A principle in AI ethics that requires AI systems to treat all individuals or groups fairly and without discrimination. Ensuring fairness in AI systems can involve addressing biases in data and algorithms, as well as considering the potential impacts of AI on different groups.
General Data Protection Regulation (GDPR) #
A regulation in EU law on data protection and privacy in the European Union and the European Economic Area. It also addresses the transfer of personal data outside the EU and EEA areas.
Human-in-the-Loop (HITL) #
An approach to AI system design that keeps a human in the decision-making loop, whether to provide oversight or guidance or to make the final decision. This approach is often used in military applications to ensure accountability and to prevent unintended harm.
Machine Learning (ML) #
A subset of AI that involves the use of algorithms to analyze data, learn patterns, and make predictions or decisions based on that data. There are different types of machine learning, including supervised, unsupervised, and reinforcement learning.
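Supervised learning, the most common of these, can be illustrated in a few lines: fit a line y = w*x + b to labeled examples by gradient descent on a squared-error loss. The data points below are invented for illustration:

```python
# Minimal supervised-learning sketch: learn w and b in y = w*x + b from
# labeled (x, y) pairs by gradient descent. Data is invented; it roughly
# follows y = 2x.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]

w, b = 0.0, 0.0
lr = 0.01  # learning rate
for _ in range(2000):
    # Gradients of mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # w converges near 2, b near 0
```

The "learning" here is nothing more than repeatedly adjusting parameters to reduce prediction error on the labeled data; unsupervised and reinforcement learning replace the labels with structure in the data or with reward signals, respectively.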
Military AI #
The application of artificial intelligence and machine learning to military defense, including areas such as intelligence, surveillance, reconnaissance, autonomous weapons, and command and control.
Privacy #
The state of being free from unauthorized intrusion or surveillance. In the context of AI, privacy concerns relate to the collection, storage, and use of personal data, as well as the potential for AI systems to invade individuals' privacy through facial recognition, location tracking, and other forms of surveillance.
Responsible AI #
An approach to AI development and deployment that emphasizes ethical considerations, transparency, accountability, and social responsibility. It involves developing AI systems that are aligned with human values, respect privacy and fairness, and are designed to benefit all of society.
Surveillance Capitalism #
A term coined by scholar Shoshana Zuboff to describe the economic system that arises from the monetization of personal data through surveillance. It raises ethical concerns about privacy, autonomy, and power imbalances between individuals and corporations.
Swarm Intelligence #
The collective behavior of decentralized, self-organized systems, such as insect swarms or bird flocks. Swarm intelligence can be used in AI systems to create groups of autonomous agents that can work together to solve complex problems.
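A classic computational example is particle swarm optimization, in which simple agents share only their best-known positions yet collectively home in on an optimum. The sketch below minimizes the invented objective f(x) = (x - 3)^2 with illustrative, untuned parameters:

```python
import random

random.seed(0)  # deterministic run for illustration

def f(x):
    """Invented objective: minimized at x = 3."""
    return (x - 3.0) ** 2

# Each particle tracks its position, velocity, and personal best.
particles = [{"x": random.uniform(-10, 10), "v": 0.0} for _ in range(20)]
for p in particles:
    p["best_x"] = p["x"]
global_best = min(particles, key=lambda p: f(p["x"]))["x"]

for _ in range(100):
    for p in particles:
        r1, r2 = random.random(), random.random()
        # Velocity blends inertia, a pull toward the particle's own best,
        # and a pull toward the swarm's best - no central controller.
        p["v"] = (0.5 * p["v"]
                  + 1.5 * r1 * (p["best_x"] - p["x"])
                  + 1.5 * r2 * (global_best - p["x"]))
        p["x"] += p["v"]
        if f(p["x"]) < f(p["best_x"]):
            p["best_x"] = p["x"]
        if f(p["x"]) < f(global_best):
            global_best = p["x"]

print(f"swarm's estimate of the minimum: {global_best:.3f}")  # near 3
```

No single agent solves the problem; the solution emerges from local updates plus shared information, which is the same principle exploited by swarms of autonomous vehicles or sensors.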
Transparency #
A principle in AI ethics that requires AI systems to be transparent and understandable to human users. Transparency can involve making the algorithms and data used in AI systems accessible and explainable, as well as providing clear and understandable explanations of AI system behavior and decision-making.
Trustworthiness #
A quality that refers to the degree to which AI systems are reliable, honest, and ethical in their behavior and decision-making. Trustworthiness is an important factor in building public trust and confidence in AI systems, particularly in high-stakes domains such as military defense.
Unintended Consequences #
The unforeseen or unintended outcomes of AI systems, which can include harm to individuals or groups, unintended biases, or negative impacts on society. Unintended consequences can arise from flaws in the design, deployment, or use of AI systems, and can be difficult to predict or mitigate.
Value Alignment #
The process of aligning AI systems with human values and ethical principles. Value alignment is an important aspect of responsible AI, as it ensures that AI systems are designed and deployed in ways that benefit humanity and are consistent with our ethical norms and standards.
Weaponized AI #
The use of AI and machine learning algorithms to create weapons or to enhance the effectiveness of existing weapons. Weaponized AI raises ethical concerns about the potential for unintended harm, the lack of human accountability, and the potential for AI systems to be used in offensive or aggressive ways.