By elizabethmmorrow

Can artificial intelligence make us more HUMANE?

Updated: Mar 29, 2021



On 9 March 2021, the Artificial Intelligence Board of America (AIBA) released its top five trends in AI: https://www.artiba.org/blog/top-5-emerging-ai-trends-in-2021


These trends are:

#1 Conversational AI

#2 Ethical AI

#3 Quantum AI (e.g., using supercomputers to run simulations)

#4 AI in cybersecurity

#5 AIoT (AI combined with the Internet of Things)


After participating in two online AI health events this week (Precision Medicine Forum’s Public Week, the NHS AI Lab Ethics Initiative), I am wondering whether priority #2 (ethical AI) is so complex and multifaceted that it can only be negotiated with the help of #1 (conversational AI) and #3 (quantum AI).


Before I explore this idea in more depth, first, a note on the headings I am using in this post. The headings are hurtful, commonly stated assertions of power. I am using these phrases to draw attention to the different ways that AI could enable humans to become more humane. In this post I venture away from health to explore the potential for wellbeing and human happiness.


“I’ll show you what’s good for you”

We all want to be treated fairly, by other humans and by AI. Ethical AI deals with issues of impact (non-maleficence and beneficence), justice (procedural and distributive fairness), and autonomy (personal comprehension and control) in AI. These are ethical principles that AI can learn to use and apply, perhaps even more fairly than humans can, because AI could evaluate the potential impact of its decisions, i.e., play out different scenarios.


Research has shown that democratic and consensus approaches can marginalise or side-line minority groups and issues, simply because they privilege the majority view. Furthermore, can groups of people, no matter how diverse and representative they are, be trusted to handle such important and challenging evaluative judgements as:

- how ‘good’ is the source data for AI: is there ‘data poverty’, and is the data correct?

- how ‘fair’ are the algorithms that AI uses: are they discriminatory and/or addressing social inequalities, and what is the impact on real people?

- how ‘equal’ are the outcomes of AI, for different groups of people and for individuals?


Should we instead look to developing AI that can help humans work these ‘calculations’ (value judgements) out in any given socio-political-economic context of human inequality? Work is under way at Stanford University and elsewhere to build human-centred AI.


Another challenging question to ask, which might go against an ethos of inclusion, is whether the current concern for raising public awareness and deliberation on the ethics of AI is the best route to public benefit from AI. Should society instead work on building and using AI to think for us, the public, on our behalf and in our best interests? I currently think ‘no’, but I could be convinced ‘yes’ if the evidence showed AI is up to the job.


Either way leads to a future where we might take comfort in AI.


“I’m not being sexist/racist/ageist/etc... but”

Although we tend to think of technology as more objective than humans, AI cannot be objective or unbiased, because it is created from humans, data, and algorithms, for example. AI needs to learn what notions such as ‘discriminatory’ and ‘prejudice’ mean, and when it is being offensive or rude. So do humans, and AI can help us.


At present, societies are trying to understand how intersectionality and structural inequalities affect the individual person, as well as how their genetic make-up affects them (genomic predictive medicine) or how treatment and care can be individualised for them (precision medicine). AI might be the solution we are looking for to address social injustice and health inequalities by targeting societal resources more fairly than ever before.


“Just take my word for it”

To be trusted, AI needs to be able to explain its judgements and where it is drawing its evidence from to inform its thinking. This is something that policymakers, clinicians, and other decision makers find incredibly difficult to do, because they draw on so much information to make decisions.


People need to know how AI works things out in order to trust it – yet even if we do get an explanation, we might not be able to comprehend its complexity. ‘Just because’ is not going to cut it. Societies have not historically respected human beliefs; it is unlikely they will stand for AI having beliefs. AI must explain itself (explainable AI) so we know, and can check, whether it is behaving ethically – it might help humans explain our own decisions and beliefs too. But AI that cannot be explained or understood demands trust and deference.


“Life isn’t fair”

With supercomputers, AI has the potential to think before it acts or recommends an action. This is where #3 Quantum AI comes in – using the superpower of supercomputers to run simulations.


If it were possible, would it be ethical for an individual person to put all their personal, situational, demographic, and genomic data into a quantum AI machine and run simulations of their own life to determine ‘what they need’ from a fair society to live a ‘good life’ (recognising that these are personal, political, and philosophical questions)?


Would you do it? Press the button that tells you what your most probable best life could be. This is not as far from reality as it seems. For example, careers advice, personality tests, credit applications, health screening, online dating, and so on, all work based on accurate ‘guesses’ about our future lives as individuals, each with our own unique strengths and risks.


“Don’t come crying to me”

If you did use AI to predict your personal alternate life courses – such as what degree to study, what job to take, or who to have children with – would it feel empowering to know what direction to take in life? Would you see this as using wisdom to shape your life, or as living an artificial life?


Even if you were able to make the changes AI had calculated, with the support of an ideal, socially just society, what if things still did not work out for you? Does a child, or a teenager, or an adult really want to know what they should do in life? Or, indeed, to look back on what they could have done to live a probably better (from a utilitarian perspective) or happier life?


While these issues are not for the here and now, my overall point is that advances in AI bring with them the opportunity to understand the individual at a greater level of detail than ever before, and to use that insight to enrich humanity.


If you are interested in these issues, please visit the AIBA blog at the link above.

