Many worldwide are asking: has AI gone too far?

Cristina Arciniegas, who works in Digital Advisory at Kainos, helps “people to deliver the AI solutions,” ethically.

As she explains, AI “can mimic [the] human mind or human behaviour… trying to replace how humans work… to imitate humans’ minds.”

Thousands of samples are used to train a model.

“The more information that you provide to the model,” she summarises, “the better it is going to work.” And some LLMs – Large Language Models, such as ChatGPT – are trained upon the entire internet.
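To make that point concrete, here is a minimal sketch (not from the interview) of the “more data, better model” idea. It uses scikit-learn’s synthetic `make_classification` data and a logistic regression classifier as illustrative stand-ins; the dataset sizes and model choice are assumptions for demonstration only.

```python
# Illustrative sketch: a model's test accuracy generally improves as the
# amount of training data grows. Data and model are purely synthetic.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# One large pool of labelled examples; we train on progressively bigger slices.
X, y = make_classification(n_samples=20_000, n_features=20, n_informative=5,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=5_000,
                                                    random_state=0)

for n in (100, 1_000, 10_000):
    model = LogisticRegression(max_iter=1_000)
    model.fit(X_train[:n], y_train[:n])          # train on the first n samples
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {n:>6} samples -> test accuracy {acc:.3f}")
```

On this toy setup, accuracy climbs as the training slice grows, which is the pattern Arciniegas is summarising.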

In recent years, AI tools have exploded in popularity and public attention, especially since the release of OpenAI’s ChatGPT in 2022.

“AI went from being part of a scientific community, and research and development companies; to… everybody! This is why we think that AI is a new thing, but it’s not” – the field actually dates back to the 1950s.

However, AI raises a range of ethical issues.

Bias can “harm people... damage companies’ reputations… infringe GDPR. Using AI can cause harms… because if you have a model that is biased, then you are going to have a biased result.” Handling personal data, and the risk of users being identified from it, are huge challenges in the industry.
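A minimal sketch (with hypothetical, synthetic data, not anything from Kainos) of how a skewed training set produces skewed results: group “B” is barely represented, so the model learns group “A”’s pattern and misclassifies group “B” far more often.

```python
# Illustrative sketch of biased data -> biased model. Entirely synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, flip):
    """One feature drives the label; `flip` reverses the relationship."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] > 0).astype(int)
    return x, (1 - y if flip else y)

# 5,000 examples of group A, but only 50 of the under-represented group B.
xa, ya = make_group(5_000, flip=False)
xb, yb = make_group(50, flip=True)

model = LogisticRegression(max_iter=1_000).fit(np.vstack([xa, xb]),
                                               np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group.
xa_t, ya_t = make_group(1_000, flip=False)
xb_t, yb_t = make_group(1_000, flip=True)
print("accuracy on group A:", model.score(xa_t, ya_t))   # close to 1.0
print("accuracy on group B:", model.score(xb_t, yb_t))   # close to 0.0
```

The model is not “malicious”; it simply reproduces the imbalance in the data it was given, which is exactly the biased-input, biased-output problem described above.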

An interesting case study is the reverse bias introduced into Google’s AI model, Gemini. Efforts to increase diversity through “positive reinforcement” backfired, producing historically inaccurate depictions of race in generated images!

We don’t fully know the processes that take a neural network from input to output, and that opacity can lead to unpredictable mistakes.

“The electricity… it took… to train… [GPT-3]… [is] the equivalent… of one year of electricity of a city like San Francisco! AI is nothing sustainable… it’s a greedy, greedy system.”

“You cannot do AI if you don’t have data.” Collecting that data is “sometimes… expensive.”

Sensitive data can also be used to create deepfakes, through voice cloning and video generation, which can be dangerous tools for spreading misinformation.

That data can also harm ethnic minorities: groups already subject to prejudice in the justice system become more vulnerable if they are poorly represented or misidentified by AI-driven CCTV systems.

She describes AI development as “exponential”, something that “will replace humans in [a] certain sense.”

But this amplifies public fear – how will the world adapt to mass unemployment from economic automation?

She suggests that if governments have a clear plan for re-employing those whose jobs are replaced by AI, then mass unemployment could be avoided.

Artificial General Intelligence and Artificial Superintelligence pose further hypothetical challenges for the future, though Arciniegas doesn’t think either is feasible anytime soon.

AI is helpful in data analysis, medicine, and various other important areas of society, but it also has many flaws, including its unpredictability and its potential to create harmful bias or develop too far.

However, Arciniegas seemed optimistic about AI’s future.

Additionally, she says that further AI education for younger generations is needed, alongside regulation and government reemployment plans.

“Ethics should be by design… at the onset, not at the end.”