We’ve compiled a glossary of AI terms to help you navigate this rapidly evolving and somewhat intimidating new technology. This is by no means a complete list, and no doubt new terms will emerge as the technology continues to develop. It can also be useful when preparing an acceptable AI usage policy for your organisation, as you take steps to ensure the ethical and secure use of these tools.
Artificial General Intelligence (AGI): AI that is considered to have a human level of intelligence and is capable of performing tasks across a wide array of areas.
Artificial Intelligence (AI): A computer system that can use a neural network to analyse data and produce reasoned responses to queries based on the data provided.
Bias: An inclination or prejudice for or against something, especially in a way considered to be unfair. There are several types of bias referred to within AI. Computational bias is when AI produces a systematic error or deviation from the true value of a prediction, which can be caused by the AI model making an assumption or by an issue with the data it has been trained on or fed. Cognitive bias refers to inaccurate individual judgment or distorted thinking, while societal bias leads to systemic prejudice, favouritism and/or discrimination in favour of or against an individual or group. These two types of bias can factor into AI if the training data hasn’t come from a wide range of diverse sources.
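To make computational bias concrete, here is a deliberately simple, hypothetical sketch (all groups, outcomes and counts are invented) showing how under-representing one group in the training data skews a model’s predictions.

```python
# A deliberately simple illustration of how skewed training data produces
# biased predictions. The "model" just memorises the most common outcome it
# has seen for each group - real models are more complex, but the
# data-driven bias mechanism is similar.
from collections import Counter, defaultdict

# Hypothetical training data: (group, outcome) pairs.
# Group A is heavily represented; group B barely appears.
training_data = [("A", "approve")] * 90 + [("A", "decline")] * 5 + \
                [("B", "decline")] * 4 + [("B", "approve")] * 1

# "Training": record how often each outcome was seen for each group.
counts = defaultdict(Counter)
for group, outcome in training_data:
    counts[group][outcome] += 1

def predict(group: str) -> str:
    """Predict the outcome the model saw most often for this group."""
    return counts[group].most_common(1)[0][0]

# The model always predicts "decline" for group B - not because of anything
# about those individuals, but because of the skewed examples it learned from.
print(predict("A"))  # approve
print(predict("B"))  # decline
```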
Chatbot: A form of AI designed to simulate human-like conversations and interactions, using Natural Language Processing (NLP) to understand and respond to questions. Often used in customer assistance settings.
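Production chatbots rely on NLP or large language models, but the toy sketch below shows the basic question-and-response loop in its simplest, keyword-matching form (the rules and replies are invented purely for illustration).

```python
# A minimal, rule-based chatbot. Real chatbots use NLP or LLMs to understand
# intent; this toy simply matches keywords to canned replies.
RULES = {
    "price": "Our pricing starts at $10 per month.",
    "hours": "Support is available 9am-5pm, Monday to Friday.",
    "human": "I'll connect you with a human agent shortly.",
}

def reply(message: str) -> str:
    """Return the first canned reply whose keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't understand that. Could you rephrase?"

if __name__ == "__main__":
    print(reply("What are your hours?"))    # Support is available 9am-5pm...
    print(reply("Can I talk to a human?"))  # I'll connect you with a human...
```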
Deepfakes: Images and videos that have been manipulated to depict realistic-looking but ultimately fake events. Often used to spread misinformation or for purposes such as blackmail.
Deep Learning: A subset of Machine Learning (ML) in which artificial neural networks that mimic the human brain are used to perform complex tasks, often without supervision.
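As a rough sketch of what “deep” means in practice, the snippet below stacks several layers of artificial neurons so each layer can learn progressively more abstract features. It assumes the open-source PyTorch library is installed, and the layer sizes are arbitrary.

```python
# A small "deep" neural network: several stacked layers, each feeding the
# next. Layer sizes here are arbitrary and purely illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32),  # input layer -> first hidden layer
    nn.ReLU(),          # nonlinearity lets the network learn complex patterns
    nn.Linear(32, 32),  # second hidden layer
    nn.ReLU(),
    nn.Linear(32, 8),   # third hidden layer
    nn.ReLU(),
    nn.Linear(8, 1),    # output layer: a single prediction
)

# Pass a batch of 4 random 16-feature examples through the network.
example_batch = torch.randn(4, 16)
predictions = model(example_batch)
print(predictions.shape)  # torch.Size([4, 1])
```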
Explainability: The ability to describe, or provide sufficient information about, how an AI system generates a specific output or arrives at a decision in response to a given question in a specific context.
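One very simple form of explainability is breaking a prediction down into per-feature contributions. The sketch below uses a hand-written linear scoring model with made-up weights and inputs, so every number in the “explanation” can be traced back to an input.

```python
# Explaining a single prediction from a simple linear model by showing how
# much each input feature contributed to the final score. The weights and
# feature values are invented for illustration.
weights = {"income": 0.5, "existing_debt": -0.8, "years_at_address": 0.2}
applicant = {"income": 6.0, "existing_debt": 3.0, "years_at_address": 4.0}

contributions = {name: weights[name] * applicant[name] for name in weights}
score = sum(contributions.values())

print(f"Predicted score: {score:.2f}")
for name, value in contributions.items():
    print(f"  {name}: contributed {value:+.2f}")
# The output shows which features pushed the score up or down - the kind of
# information an explainable AI system should be able to provide.
```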
Generative AI: A field of AI that uses machine learning models trained on large data sets to create new content, such as written text, code, images, music, simulations and videos. These models can generate novel outputs based on input data or user prompts.
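For a hands-on feel, the snippet below generates new text from a prompt using the open-source Hugging Face transformers library with the small GPT-2 model. This assumes transformers and a backend such as PyTorch are installed; GPT-2 is chosen only because it is small and freely available, not because it is state of the art.

```python
# Generate new text from a prompt using a small, freely available model.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "A short note to staff about using AI tools safely:",
    max_new_tokens=40,       # limit the length of the generated continuation
    num_return_sequences=1,  # ask for a single completion
)
print(result[0]["generated_text"])
```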
Hallucinations: Instances where a generative AI model produces content that contradicts its source material or is factually incorrect, yet presents it as fact.
Large Language Model (LLM): A form of AI that uses deep learning algorithms to create models trained on massive text data sets to analyse and learn patterns and relationships among characters, words and phrases. There are generally two types of LLM: generative models, which predict text based on the probabilities of word sequences learned from their training data, and discriminative models, which make classification predictions based on the probabilities of data features and weights learned from their training data. ChatGPT, for example, is a generative LLM.
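At their core, generative language models assign probabilities to what comes next in a sequence. The toy sketch below learns word-pair (bigram) probabilities from a tiny, made-up corpus; real LLMs learn far richer patterns from billions of examples, but the idea of predicting the next word from learned probabilities is the same.

```python
# A toy next-word predictor: count which word follows which in a tiny
# training corpus, then turn the counts into probabilities. LLMs are far
# more sophisticated, but the underlying idea - predicting the next token
# from probabilities learned from training text - is similar.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_word_probabilities(word: str) -> dict:
    """Return the learned probability of each word that can follow `word`."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probabilities("the"))
# {'cat': 0.5, 'mat': 0.5} - "the" was followed by "cat" and "mat" equally
# often in the training text.
```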
Machine Learning (ML): A subset of AI that concentrates on algorithms whose performance improves iteratively as they process more data.
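The sketch below shows that iterative improvement in miniature: a one-parameter model repeatedly adjusts itself to reduce its prediction error on some made-up data (simple gradient descent on a roughly linear relationship; the numbers and learning rate are invented).

```python
# Machine learning in miniature: a one-parameter model y = w * x learns the
# value of w by repeatedly nudging it in the direction that reduces its
# average prediction error (gradient descent).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2 * x, with a little noise

w = 0.0              # initial guess for the model parameter
learning_rate = 0.01

for step in range(200):
    # Gradient of the mean squared error with respect to w.
    gradient = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * gradient   # adjust the parameter to reduce the error
    if step % 50 == 0:
        error = sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
        print(f"step {step:3d}: w = {w:.3f}, error = {error:.3f}")

print(f"learned w = {w:.3f}")  # close to 2, the true relationship in the data
```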
Natural Language Processing (NLP): A subfield of AI that helps computers understand, interpret and manipulate human language. It enables machines to read text or spoken language, interpret its meaning, measure sentiment, and determine which parts are important for understanding.
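Modern NLP relies on learned models, but the deliberately simple sketch below illustrates one of the tasks mentioned above, measuring sentiment, using a small hand-written word list. The words and examples are invented for illustration; real systems learn these associations from large amounts of labelled text.

```python
# A toy sentiment scorer: count positive and negative words from a small,
# hand-written list. Real NLP systems learn sentiment from data rather than
# using fixed word lists.
POSITIVE = {"great", "excellent", "helpful", "fast", "love"}
NEGATIVE = {"poor", "slow", "unhelpful", "broken", "hate"}

def sentiment(text: str) -> str:
    """Classify text as positive, negative or neutral based on word counts."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The support team was fast and helpful"))        # positive
print(sentiment("The portal is slow and the search is broken"))  # negative
```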
Neural Network: A type of model used in machine learning that mimics the way neurons in the brain interact, using multiple processing layers that include at least one hidden layer. This layered approach enables neural networks to model complex, nonlinear relationships and patterns within data. Artificial neural networks have a range of applications, such as image recognition and medical diagnosis.
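The NumPy sketch below walks one batch of data through a tiny network with a single hidden layer, showing where the nonlinearity comes in. The weights are random and untrained; the point is purely to illustrate the structure.

```python
# One forward pass through a tiny neural network with a single hidden layer:
# inputs -> weighted sums -> nonlinearity -> weighted sums -> output.
import numpy as np

rng = np.random.default_rng(0)

# A batch of 3 examples, each with 4 input features (made-up numbers).
inputs = rng.normal(size=(3, 4))

# Layer parameters: 4 inputs -> 5 hidden neurons -> 1 output.
hidden_weights = rng.normal(size=(4, 5))
hidden_bias = np.zeros(5)
output_weights = rng.normal(size=(5, 1))
output_bias = np.zeros(1)

# Hidden layer: weighted sum followed by a nonlinear activation (ReLU).
hidden = np.maximum(0, inputs @ hidden_weights + hidden_bias)

# Output layer: another weighted sum produces the network's predictions.
outputs = hidden @ output_weights + output_bias

print(outputs.shape)  # (3, 1): one prediction per example
```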
Private AI: AI/ML tools that provide a private context in which users can add data, without that data being used in the public domain.
Privacy Impact Assessment (PIA): An independent assessment of the impact of any new system or application that handles Personally Identifiable Information (PII), carried out to ensure that all the necessary controls are in place so that no NZ or international laws are breached.
Public AI: AI/ML tools made available to the public as a service (either free or for a fee), where users cannot control the algorithms or how data provided to the tool is used. ChatGPT is one example.
Synthetic Data: Data generated by a system or model that mimics the structure and statistical properties of real data. It is often used for testing or for training machine learning models, particularly where real-world data is limited, unavailable or too sensitive to use.
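As a small illustration, the NumPy sketch below takes a “real” dataset (here just randomly generated stand-in numbers), estimates its mean and covariance, and then draws new synthetic records with the same statistical properties.

```python
# Generate synthetic data that shares the statistical properties (mean and
# covariance) of a "real" dataset. The real data here is itself randomly
# generated as a stand-in, since this is only an illustration.
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a sensitive real dataset: 500 records, 3 numeric fields.
real_data = rng.multivariate_normal(
    mean=[50_000, 35, 2.1],  # e.g. income, age, dependants (invented)
    cov=[[1e8, 2e4, 10], [2e4, 100, 0.5], [10, 0.5, 1.2]],
    size=500,
)

# Estimate the statistics of the real data...
estimated_mean = real_data.mean(axis=0)
estimated_cov = np.cov(real_data, rowvar=False)

# ...and sample new, synthetic records from those estimates.
synthetic_data = rng.multivariate_normal(estimated_mean, estimated_cov, size=500)

print("real mean:     ", np.round(estimated_mean, 1))
print("synthetic mean:", np.round(synthetic_data.mean(axis=0), 1))
```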
Kordia’s independent cyber security consultants, Aura Information Security, work closely with businesses to help them manage their cyber security risk, providing actionable insights and advice on the right approaches and tools to tackle it in the most effective way.
If you need support to improve your organisation’s cyber security posture, speak to one of our consultants or your Kordia representative for more information.