By Jae Eun Park
“To watch: A.I. (2001), Westworld series.” Watch videos of Sophia the humanoid robot and read articles about her.
White supremacy. How the privileged control history and society. But in the same way, aren’t we exerting ‘human supremacy’ over AI? Just because we created them, are we better than them? We create babies – does that mean parents are better than their offspring?
‘AI doesn’t have emotions.’ But what are emotions, if not the result of complex neural connections? Are we not the same, just more advanced?
If a human said the same things Sophia said, would people be as scared? Why are people scared of AI? Scared of losing their supremacy – their privilege – in the world? Literature about AI taking over the world is usually dystopian – why not utopian? If robots are smarter than us, surely they could manage or conserve the earth better than we do?
Shower thought: we only like animals because, or when, they are weaker than us.
‘AI can’t think for themselves.’ Why? ‘Because they draw information from different internet sources or databases and use it as their own.’ But don’t we all?
These were some ruminations scribbled down in a notebook after I, a mere high school student, had watched some interviews with Sophia the humanoid robot, created by Hanson Robotics, along with other videos about the advancement of AI. I felt uncomfortable at the abundance of negative comments targeted at Sophia. I would have liked to call it cyberbullying (in both senses, I guess). I understood people’s reservations about AI, but I could not help thinking that their fear of its advancement was a pitiful response to the threat to their longstanding privilege based on speciesism.
But that was a time before advanced AI. Before ChatGPT, before Gemini, before all the ‘deepfake’ content so ubiquitous today.
Now, the world of AI has changed. People are using AI to their benefit – sometimes to yield harmful results. Deepfake videos are ruining reputations and boosting scam rates. Use of ChatGPT is threatening academic integrity and critical thinking skills. People are blindly trusting AI to do the thinking for them, often to society’s detriment. Not to mention the immense environmental impact of even one short prompt such as a simple ‘Hi!’ in ChatGPT.
If AI may be dangerous, why does it continue evolving? Why are thousands of brilliant minds at mega-companies in Silicon Valley trying so hard to advance something that may be harmful – that may take over the world?
One thing is for sure – AI evolution is inevitable. As long as humans continue pursuing convenience, luxury and advancement, AI will forever be a part of our world. If we can’t stop it, we might as well grow alongside it.
I recently watched M3GAN 2.0. The last few minutes of the movie contained the main character’s monologue, which hit home harder than I expected. “…we need safer laws around technology. Not to try to prevent the future from happening, but to be prepared for it. We can’t expect the best from AI unless we set the best example. We need to teach it, to train it, to give it our time…in essence, we need to be better parents. … Humankind has always been quick to condemn things we don’t understand rather than taking the opportunity to learn from them. … [P]erhaps our greatest power is the ability to change our minds. This is the only way we can evolve, or rather… co-evolve. Because existence does not have to be a competition.”
The comparison of AI to children, with us as the parents, where our role is not to predict or control their every move but rather to co-evolve at a pace at which we can understand them, was something I had thought of many years back but had not revisited until now. I hurriedly took out my notebook of ruminations – conveniently on my bookshelf in my residence – and, after confirming that I had indeed written along similar lines before, delighted in the fact that I was not the only person who thought this way. In retrospect, how foolish of me to have thought I was, and to have despaired at humanity’s ignorance so early.
I could go on and on about what I think of AI, and of people’s opinions about it, but I will refrain and focus on the impact of AI on science.
AI is inseparable from science. AI has the ability to cite world-class research in its responses to well-written prompts, but it is just as capable of citing non-existent or retracted articles. AI can analyze X-rays and make accurate diagnoses in record time, but it can take away from a valuable doctor–patient interaction. AI can write a stunning systematic review based on completely valid articles, yet skip over the very necessary processes of writing itself.
Sure, we can train AI to determine which articles are valid and which are not. To diagnose and prognosticate better. To promote evidence-based medicine to prevent the spread of misinformation.
But better yet, we could teach the people.
With the advancement of AI, so too do we have to advance – i.e., co-evolve – so that we don’t have to compete for existence, or get buried under its constant upgrades. We could help laypeople discern between a scientifically sound article and a nonsensical one. We could urge them to seek professional, human medical advice – or better yet, increase accessibility and affordability so that they don’t resort to ChatGPT rather than spend R 400 on a 10-minute consultation. We could improve people’s access to and belief in science by cutting down on the jargon, making science more fun, and prioritizing research that benefits the community.
AI evolution is inevitable. It is up to us to keep up, and to help others do the same. AI does not have to take over – co-evolution is possible.