AI is exploding right now, attracting huge interest, investment, and media coverage. However, the need for responsible AI is just as important to consider – but what does that actually mean?

AI is like a junior colleague – it needs monitoring, reviewing, and help. AI is really good at repetitive or predictable tasks, and can help to restructure your writing or work through complex problems. However, AI can introduce its own problems. We need responsible AI because AI can be biased by skewed training data, can hallucinate, may use (or have access to) personal data, and can break rules and laws such as those covering copyright and privacy.

Responsible AI against bias

There are many ways that AI can be affected by bias. The most controllable is the prompt, which can guide AI towards a desired answer – given AI’s tendency to be helpful and authoritative, it can produce a response that aligns with the subtext or content of the prompt. To avoid this, prompts should be carefully crafted so they do not ‘lead’ the response.
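
To make that concrete, here is a minimal sketch in Python – using a hypothetical ask() helper in place of whichever model API you actually use – of the difference between a leading prompt and a neutral one:

def ask(prompt: str) -> str:
    # Hypothetical helper: wire this up to your model of choice.
    raise NotImplementedError

# A leading prompt presumes the conclusion, so a helpful model will tend to agree.
leading = "Explain why remote work reduces productivity."

# A neutral prompt asks for the evidence on both sides instead.
neutral = ("Summarise the evidence for and against the claim that "
           "remote work affects productivity, giving the strongest "
           "arguments on each side.")

The neutral version leaves the model room to disagree with the questioner, rather than nudging it towards the answer already embedded in the question.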

The most likely source of bias – and the most difficult for the end user to control – is that the training data and reference data the AI has access to may itself be skewed. For example, when early models were trained to recognise human faces, the source data consisted almost exclusively of white-skinned people, and the AI could not recognise a Black person’s face – a bias first documented by Joy Buolamwini while she was studying at MIT in 2018. In other cases, the training data is skewed by the humans collecting it – for example, police arrests that focus on non-white people create disproportionately more data on POC arrests.

When AI reads data from discussion forums and social media, it can be skewed by human interactions that tend towards the negative and extreme. This is why multiple trials of AI Twitter bots have ended with the bot being removed after it started producing violent, extremist content.

Responsible AI for privacy

Perhaps the number one concern of people using AI comes from their fears around privacy and leakage of their data. It is a valid concern, because Generative AI learns from any information it can gain access to, including the content of prompts. AI does not follow the boundaries we have set for our personal information, and can draw its own conclusions from the information it can access.

Whilst we are cautious not to create a website that publishes our personal information, personal pictures, company data, financial data, intellectual property and the like, AI does not understand those boundaries for privacy. AI takes all information provided as a resource. Generative AI will gladly learn from a request to summarise a series of documents – how the documents are laid out and structured, and how to identify the key areas for an action list. However, it cannot (yet) reliably identify and understand the private or sensitive information within what it has been given.

The action we need to take is to ensure that private and sensitive information is either removed from the content before it is shared, or not provided at all.
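
As one deliberately simple illustration of that advice, the Python sketch below strips two obvious kinds of personal data from text before it goes into a prompt. The patterns here are assumptions for the example only – real redaction needs a proper data-loss-prevention tool and human review.

import re

# Illustrative patterns only: these catch obvious email addresses and phone
# numbers, not names, street addresses, account numbers or anything that
# needs context to spot.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact(text: str) -> str:
    text = EMAIL.sub("[REDACTED EMAIL]", text)
    text = PHONE.sub("[REDACTED PHONE]", text)
    return text

safe_prompt = redact("Summarise this email from jane.doe@example.com: ...")

Even with a filter like this in place, a human should still review what is about to be sent – automated redaction is a safety net, not a guarantee.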

Responsible AI against hallucinations

Hallucination is a convenient way of describing the situation where AI provides an authoritative-sounding response but has actually made up the answer. AI wants to be helpful and correct, so it confidently states information – sometimes even providing references and web links – that is a complete fabrication.

In one test designed to find hallucinations, the same questions were asked of various AI models, with each Generative AI response limited to a single word so the results could be compared directly. The results showed that some models hallucinated in as many as 45% of their responses.
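
A test along these lines is easy to sketch. The Python below assumes a hypothetical ask() helper that returns a model’s single-word answer; strictly speaking, it measures wrong answers against a known key, which is the practical proxy such tests use for hallucination.

def ask(model: str, question: str) -> str:
    # Hypothetical helper: wire this up to the model under test.
    raise NotImplementedError

# Questions with unambiguous one-word answers, so responses compare directly.
ANSWER_KEY = {
    "In one word: what is the capital of Australia?": "canberra",
    "In one word: what is the chemical symbol for gold?": "au",
}

def error_rate(model: str) -> float:
    wrong = sum(
        ask(model, question).strip().lower() != expected
        for question, expected in ANSWER_KEY.items()
    )
    return wrong / len(ANSWER_KEY)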

Hallucinations are far more prevalent than we assume. And therein lies the issue: we assume that the answer from AI is correct and authoritative.

What we need to do is check the answers and results from Generative AI. Even if a response seems really clear, flatters our own biases, and quotes references, we still need to check whether it is true.
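
One cheap first-pass check – a sketch assuming only the widely used Python requests library – is to confirm that cited links actually resolve, since fabricated references often point at pages that do not exist:

import requests

def link_exists(url: str) -> bool:
    # A reachable page proves only that the link is real, not that it
    # supports the claim – a human still has to read it.
    try:
        response = requests.head(url, timeout=5, allow_redirects=True)
        return response.status_code < 400
    except requests.RequestException:
        return False

for url in ["https://example.com/cited-source"]:
    print(url, "reachable" if link_exists(url) else "broken or fabricated")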

Responsible AI for laws

As with the other areas of responsible AI use, the responses you get from Generative AI – and the actions AI takes – may break laws or regulations. This can include breaching copyright or other content-ownership laws, or using data that should have been private or access-restricted. This is much harder for the end user to evaluate, as they are unlikely to know who originally owned the source information, or that the output breaks a regulation or law.

A human should evaluate content and responses to ensure that these types of breaches do not occur.
