
AI Glossary of Terms

AI ethics:

AI ethics refers to the principles and guidelines that ensure artificial intelligence systems are designed and used responsibly. This includes making sure AI is fair and transparent, respects privacy, avoids bias, and is accountable for its actions, so that it benefits society without causing harm.

 

Algorithm:

An AI algorithm is a set of instructions designed to solve problems or perform tasks using artificial intelligence techniques. These algorithms analyse data, learn from patterns, and make decisions to achieve specific goals. Examples include algorithms for image recognition, language translation, and predicting future outcomes based on past data.
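
As a loose illustration only, the short Python sketch below (with invented house sizes and prices) "predicts" a price for a new house by finding the most similar past sale. It is a very simple version of making a decision based on past data; real AI algorithms are far more sophisticated, but follow the same basic idea.

```python
# A very simple "predict from past data" algorithm: nearest-neighbour lookup.
# The house sizes and prices below are invented purely for illustration.

past_houses = [
    (50, 150_000),    # (size in square metres, sale price)
    (80, 220_000),
    (120, 310_000),
]

def predict_price(size):
    """Predict a price by copying the past house closest in size."""
    closest = min(past_houses, key=lambda house: abs(house[0] - size))
    return closest[1]

print(predict_price(85))  # 220000 - the price of the most similar past sale
```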

 

Bias in AI:

Bias in AI occurs when an artificial intelligence system produces unfair or prejudiced results. This happens because the data used to train the AI might be unbalanced, incomplete, or reflect existing human prejudices, causing the AI to learn and replicate those biases in its decisions and predictions. Bias in AI can lead to unfair treatment or discrimination against certain groups, perpetuating existing inequalities.
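
The toy sketch below, using entirely invented data, shows the mechanism: a system that simply learns from historical records will reproduce whatever imbalance those records contain.

```python
# A toy illustration of how unbalanced training data produces biased results.
# The groups and outcomes here are invented for the example.

training_examples = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": False},   # group B is barely represented,
]                                        # and only by negative examples

def approval_rate(group):
    """What the 'model' learned: the historical approval rate per group."""
    rows = [ex for ex in training_examples if ex["group"] == group]
    return sum(ex["approved"] for ex in rows) / len(rows)

print(approval_rate("A"))  # 1.0 - group A is always approved
print(approval_rate("B"))  # 0.0 - group B is always rejected, purely because
                           # the training data was unbalanced
```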

 

Big Data:

Big data refers to datasets that are too large and complex for traditional data-processing applications to handle. For example, the vast amount of information collected from social media posts, online purchases, and website clicks every day.

 

Chatbot:

A chatbot is a computer program designed to have conversations with humans, usually through text or voice. It uses artificial intelligence to understand questions or commands and respond with appropriate answers or actions.
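
As a rough sketch only, the example below shows the basic loop a chatbot follows: read a message, work out what it is about, and send back a reply. Modern chatbots use AI language models rather than the fixed keyword rules used here.

```python
# A very simple rule-based chatbot, far cruder than modern AI chatbots,
# but it shows the basic loop: read a message, match it, reply.

RESPONSES = {
    "hello": "Hi there! How can I help you today?",
    "opening hours": "We are open from 9am to 5pm, Monday to Friday.",
    "bye": "Goodbye! Have a nice day.",
}

def reply(message):
    """Return a canned answer if any known keyword appears in the message."""
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "Sorry, I don't understand. Could you rephrase that?"

print(reply("Hello, what are your opening hours?"))  # matches "hello" first
```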

 

Deep Learning:

Deep learning is a subset of machine learning where computers learn to understand data through interconnected layers of nodes, called neural networks. Neural networks are a set of algorithms, modelled loosely after the human brain, that are designed to recognise complex patterns. Deep learning is used to make predictions and solve problems, such as identifying objects in images or understanding speech. Common examples include its use in facial recognition, virtual assistants, translation and personalised entertainment.
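
The sketch below, using the numpy library and random, untrained numbers, shows what "interconnected layers of nodes" looks like in code: each layer combines the numbers it receives with a set of weights and passes the result on to the next layer. A real network would adjust those weights during training until its predictions improve.

```python
import numpy as np

# A minimal sketch of data flowing through two layers of a neural network.
# The weights here are random, so the output is meaningless.

rng = np.random.default_rng(0)

inputs = rng.random(4)              # e.g. 4 pixel values from a tiny image
weights_1 = rng.random((4, 3))      # first layer: 4 inputs -> 3 hidden nodes
weights_2 = rng.random((3, 1))      # second layer: 3 hidden nodes -> 1 output

hidden = np.maximum(0, inputs @ weights_1)   # each node combines its inputs
output = hidden @ weights_2                  # and passes the result onwards

print(output)  # a single number, e.g. a score for "is this a cat?"
```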

 

Generative AI:

Generative AI is a type of artificial intelligence that can create new content, like images, music, or text, based on patterns it learns from existing data. For example, a generative AI model trained on a dataset of human faces can generate realistic-looking faces that have never been seen before.
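
As a drastically simplified stand-in, the sketch below learns which word tends to follow which in a single example sentence, then generates new text from those patterns. Real generative models learn far richer patterns from enormous amounts of data, but the principle of "learn the patterns, then produce something new" is the same.

```python
import random

# Learn which word follows which in some example text, then generate
# new text from those learned patterns.

training_text = "the cat sat on the mat and the dog sat on the rug"
words = training_text.split()

# Learn the pattern: for each word, which words have followed it?
follows = {}
for current, nxt in zip(words, words[1:]):
    follows.setdefault(current, []).append(nxt)

# Generate new text by repeatedly picking a plausible next word.
random.seed(1)
word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(follows.get(word, words))
    output.append(word)

print(" ".join(output))  # new text in the style of the training text
```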

 

Hallucination:

A hallucination happens when an AI model mistakenly generates something that isn’t actually there or real. In text generation, this can mean confidently presenting made-up information as fact. In image generation, a hallucination could be creating features or objects that don’t exist in the input data, like adding extra limbs to a person in a generated image.

 

Large language model (LLM):

A large language model is an artificial intelligence system designed to understand and generate human-like text. These models are trained on massive amounts of text data and can perform various language-related tasks, such as text generation, translation, summarisation, and answering questions. Examples of large language models include OpenAI’s GPT and Google’s BERT.
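
The sketch below assumes the Hugging Face transformers library is installed and uses GPT-2, a small and older model, to continue a piece of text. Today’s large language models work on the same principle at a far bigger scale.

```python
# A sketch of using a (small, older) language model to generate text,
# assuming the Hugging Face "transformers" library is installed.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator("Artificial intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])  # the prompt continued by the model
```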

 

Machine learning:

Machine learning is a type of artificial intelligence where computers learn to make predictions or decisions by analysing data patterns. Instead of being explicitly programmed with rules for every situation, machine learning algorithms use patterns in the data to improve their performance on a task. Examples of machine learning include spam email detection, recommendation systems, and image recognition.
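
Building on the spam example above, here is a minimal sketch using the scikit-learn library and a handful of made-up messages; a real spam filter would learn from millions of examples.

```python
# A small sketch of spam detection, assuming scikit-learn is installed.
# The messages below are invented for illustration.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = [
    "win a free prize now",           # spam
    "claim your free money today",    # spam
    "are we still meeting tomorrow",  # not spam
    "here are the notes from class",  # not spam
]
labels = ["spam", "spam", "not spam", "not spam"]

vectorizer = CountVectorizer()                 # turn words into counts
features = vectorizer.fit_transform(messages)

model = MultinomialNB()                        # learn word patterns per label
model.fit(features, labels)

new_message = vectorizer.transform(["free prize waiting for you"])
print(model.predict(new_message))  # ['spam']
```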

 

Prompt:

A prompt in AI is an instruction given to a language model or other AI system to produce a particular response or output. It guides the AI on what task to perform or what type of information to provide, for example asking it to write a piece of text, answer a question, translate a passage, or generate an image.
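
In practice a prompt is usually just a sentence or two of plain text. The invented examples below mirror the kinds of task mentioned above.

```python
# A few example prompts - plain-text instructions you might give an AI system.
prompts = [
    "Write a short poem about the sea.",
    "What is the capital of France?",
    "Translate 'good morning' into Irish.",
    "Draw a picture of a red bicycle on a beach.",
]

for prompt in prompts:
    print(prompt)
```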

 

Training data:

Training data is used to teach AI applications to recognise patterns and make predictions or decisions. The data can take various forms, such as images, audio, or text, and can come from existing databases, publicly available information, user interactions, or data generated specifically for the training purpose. Examples include social media platforms using likes, shares and comments to train their newsfeed algorithm, or shopping websites suggesting products to users based on their browsing history, purchase behaviour, and interactions with the website.
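
As a small, invented illustration, training data often ends up looking like a table of examples, such as records of how users reacted to posts, which a newsfeed algorithm could then learn from.

```python
# A sketch of what a tiny piece of training data might look like - invented
# records of how users reacted to posts on a social media platform.
training_data = [
    {"post_topic": "sport", "liked": True,  "shared": False},
    {"post_topic": "music", "liked": True,  "shared": True},
    {"post_topic": "news",  "liked": False, "shared": False},
]

for example in training_data:
    print(example)
```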
