GenAI tools can produce text, images, code, voices, music, videos, and other media in response to prompts or questions that you type in. Examples include ChatGPT, Google Gemini (formerly known as Bard), Stable Diffusion, and Midjourney.
These tools have been ‘trained’ on huge amounts of data, such as web pages, artwork, music and other human-created material. This training produces a model that can predict, for example, the most probable sequence of words to follow a given question or prompt.
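As a rough illustration only (real systems use neural networks trained on vast datasets, not a hand-written table), the short Python sketch below shows the underlying idea of ‘predict the most probable next word’, using an invented probability table:

```python
# Toy illustration of "predict the most probable next word".
# The probabilities below are invented purely for demonstration;
# real GenAI models learn them from enormous amounts of training data.

next_word_probs = {
    "the cat sat on the": {"mat": 0.62, "sofa": 0.21, "roof": 0.17},
    "once upon a":        {"time": 0.95, "hill": 0.05},
}

def predict_next_word(prompt: str) -> str:
    """Return the most probable next word for a known prompt."""
    candidates = next_word_probs.get(prompt)
    if candidates is None:
        return "<unknown prompt>"
    # Choose the candidate word with the highest probability.
    return max(candidates, key=candidates.get)

print(predict_next_word("the cat sat on the"))  # -> mat
print(predict_next_word("once upon a"))         # -> time
```

Repeating this step word after word is, in very simplified terms, how a chatbot builds up a fluent-sounding answer.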
GenAI can be used to review your work, autocomplete text and even generate poetry, images, and music.
Many GenAI chatbots are designed to produce text that reads as if a human had written it. To appear even more human, some chatbots display their output gradually, as though a person were typing it in real time.
When you ask a question, the answer may sound very convincing. However, the outputs of these tools can be misleading, biased or inaccurate. Whilst they may appear competent and confident, they can generate untrue statements, imaginary authors and made-up references, all presented as if they were true.