At a recent Responsible AI conference, I was on a panel discussing the shortcomings of AI in understanding the subtle social nuances of gender, race, and equality when providing an unbiased response. AI does not have a definitive list of rules around inclusivity that tells it when bias and ethical imbalance are skewing its responses and results. Discussion circled around how defining those rules could itself introduce bias or imbalance into an already confusing situation.

I was then able to relate this missing rulebook of unwritten rules to my own experience with Autism. I was diagnosed with Autism at 52, and my journey of understanding how this affects my experience of life has given me new insight. A key part of my own Autism history is not knowing the social cues and interaction subtleties that neurotypical people seem to just ‘know’. For me, it has always been hard to understand that a simple question like “how are you” could mean up to five different things, depending upon multiple contexts (psychological, social, physical, previous interactions, nearby people), tone, timing, situation, and interpersonal/intrapersonal factors. It is a minefield of complexity! I would love to have a rulebook that explains how society and culture work – but no-one can give me one. Furthermore, even if I did have a ruleset, it could be adjusted or varied at any time, depending upon the person or culture I was interacting with, adding further to my confusion.

At the conference, I raised the point that my own Autism helps me understand where AI makes mistakes around these social-societal rules for diversity and inclusion. In the same way that a person raised in a predominantly white area, who rarely interacts with other races, may believe that there is little racism in the world, AI is skewed by the prevalence of its source data.


My own experience with Autism shows that unwritten rules are difficult to define, complex, subjective, context-sensitive, and vacillating. Similarly, ‘rules’ for inclusivity, race, bias, gender, and trusted sources can themselves be difficult to define, complex, subjective, and vacillating. So how can we inform AI of these ‘rules’ if we cannot define them with unanimous agreement?

Bias – Facial recognition training data

In 2018, MIT researcher Dr Joy Buolamwini encountered a problem with facial recognition algorithms failing to detect her non-white face, because the systems had been trained on white-skinned faces. The AI had no social instruction that other faces exist in the world; it did not raise a concern that the source data was not diverse enough, because it did not know to ask the question. AI had no societal rulebook with which to recognise this failure of social norms. It could be argued that the first failure was with the humans providing the training data of white faces, but the further failure was that no-one considered that the AI would be unable to identify that data was missing.
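The skewed-training-data failure can be sketched as a toy example. This is a minimal, hypothetical illustration – the feature, the values, and the ‘detector’ are all invented for this post, and bear no resemblance to how real facial recognition works – but it shows how a system trained on a narrow sample simply has no concept of what it was never shown:

```python
import statistics

# Hypothetical sketch: a "detector" that learns what a face looks like
# only from the examples it is shown. All values are invented for
# illustration; real face detection is far more complex than one feature.

# Training data: a single brightness-like feature, drawn only from
# light-skinned example faces (the skewed source data).
training_faces = [0.78, 0.82, 0.80, 0.79, 0.81, 0.83, 0.77]

mean = statistics.mean(training_faces)
tolerance = 3 * statistics.stdev(training_faces)

def detect_face(feature: float) -> bool:
    """Flag a face only if it resembles the training examples."""
    return abs(feature - mean) <= tolerance

# A face resembling the training distribution is detected...
print(detect_face(0.80))   # True
# ...but a face outside that narrow distribution is not.
print(detect_face(0.35))   # False
```

The detector never raises an error or a warning for the second face; it simply, silently, fails – which is the point. It cannot ask whether its training data was diverse enough.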

Bias – predictive policing and skin colour issues

There are multiple examples where predictive analysis of criminality has had a strongly negative effect on ethnic minorities and non-white citizens. The US COMPAS system tried to predict re-offending among defendants, but skewed heavily towards predicting that black defendants would re-offend and that white defendants would not – and nearly half of the whites predicted not to re-offend then did. This is an example where source data is understood by humans to be a “pattern”, but AI interprets that pattern as a “rule”. Calculating relativistic probabilities does not take into account the small nuances of culture and society that people inherently know, but that may not explicitly exist in the source data.
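The pattern-versus-rule problem shows up in the error rates. As a hedged sketch – the counts below are invented for illustration, loosely echoing the kind of disparity reported about COMPAS, and are not the real figures – two groups can have identical overall accuracy while the kinds of mistakes made against each group differ sharply:

```python
# Hypothetical confusion-matrix counts, invented for illustration only.
# tp = predicted re-offend and did; fp = predicted re-offend but did not;
# fn = predicted no re-offence but did; tn = predicted no re-offence and did not.
groups = {
    "group_a": {"tp": 40, "fp": 45, "fn": 15, "tn": 100},
    "group_b": {"tp": 40, "fp": 22, "fn": 38, "tn": 100},
}

for name, c in groups.items():
    fpr = c["fp"] / (c["fp"] + c["tn"])   # wrongly flagged as high risk
    fnr = c["fn"] / (c["fn"] + c["tp"])   # wrongly cleared as low risk
    acc = (c["tp"] + c["tn"]) / sum(c.values())
    print(f"{name}: accuracy={acc:.2f}, "
          f"false-positive rate={fpr:.2f}, false-negative rate={fnr:.2f}")
```

Both invented groups score 70% accuracy, yet group A is wrongly flagged as high risk far more often, while group B is wrongly cleared far more often. A system tuned only for overall accuracy has no ‘rulebook’ telling it that this asymmetry matters.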


I was struck by a documentary about front-line policing, in which an officer pointed out that even though he does not consider himself racist, he does typecast, and so a group of non-white young men in hoodies loitering during the day would warrant police attention – which the officer described as “we would not be doing our job properly if we didn’t check them out”.

Other traits that I believe are prevalent in both AI and Autism include the willingness to please, literal interpretations, and repetitive or specialist interests.

Willingness to please

My own Autism experience is that I have had many rejections and failures in my life, which encourages me to work extra hard to impress or please. To avoid further rejection, I [used to] say or do anything to appear more authoritative, to make the other person happy, or to gain acceptance. AI has a similar trait – it is overly helpful and uses a tone that exhibits knowledge and positivity towards the question. When AI hallucinates, it may provide references and citations that don’t exist, because it wants to provide an answer that you will believe and trust.

This AI trait relates to neurodiversity, and is perhaps inherent in the reward models used in AI development, which reinforce the responses that people rate most favourably.
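A hedged toy sketch of that dynamic: if the reward signal measures approval rather than truthfulness, a simple learning loop will drift towards whichever answer style pleases most. Everything here – the style names, the approval probabilities, the bandit-style update – is invented for illustration and is a vast simplification of real reward modelling:

```python
import random

random.seed(0)

# Invented approval probabilities: how often each answer style
# earns a "thumbs up" from a pleased user (hypothetical values).
styles = {"honest_but_hedged": 0.6, "confident_fabrication": 0.9}

value = {s: 0.0 for s in styles}   # estimated reward per style
counts = {s: 0 for s in styles}

for _ in range(2000):
    style = random.choice(list(styles))            # explore uniformly
    reward = 1.0 if random.random() < styles[style] else 0.0
    counts[style] += 1
    value[style] += (reward - value[style]) / counts[style]  # running mean

best = max(value, key=value.get)
print(best)   # the approval-driven reward favours the fabricating style
```

Nothing in this loop measures whether an answer is true – only whether it was liked – so the willingness to please wins by construction.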

Literal interpretation

As a neurodiverse person, I struggle with understanding the hidden meanings and subtle inferences of tone and indirect language. This is also true of AI, which both understands and responds to language in a very literal and direct way. Whilst this may not be an immediate concern in interactions with AI, I see in it a relation to my own way of thinking. At Adelaide’s Responsible AI Research centre, there has been an initiative to work with children living with Autism, and their engagement and interaction with AI has been very positive.


Repetitive or Special Interests

This is directly related to the training data, and to the memory that Generative AI keeps of your previous prompts – but have you noticed how the responses keep coming back to something you have done in the past? It brings in content that is no longer relevant to the current context, or hits tangents and goes down paths that you need to bring it back from. The alignment with neurodiversity is apparent here.

Summary

Approximately 0.7% of Australians are living with Autism, and 88% of those are living with a disability, which puts my own experience of neurodiversity into the remaining small 0.084% of the population. This gives me an insight that lets me relate some of AI’s gaps to my own experience – significantly, how unwritten social rules impact diversity, inclusion, and bias.
