Most GenAI tools are trained on data that comes from people.
Any biases, prejudices, and stereotypes present in that data will be reflected, and often amplified, in the generative AI's outputs.
If the underlying data under-represents or over-represents certain voices in society, the outputs will reflect and reinforce this imbalance. Algorithmic bias in generative AI tools disproportionately affects marginalized and underrepresented communities.
You should be as critical of GenAI outputs as you would be of anything else you read. Try to use a range of resources, such as books and articles, to get a balanced view of a topic.