I am wondering what the solution is when an AI or machine learning algorithm goes awry, for example when it has developed a certain bias.
A biased AI or machine learning algorithm can interfere with our fundamental rights and freedoms, especially when it makes decisions on our behalf or controls the social and informational channels we are exposed to.
I understand it takes a long time to train an AI or machine learning algorithm, but what should be done when it develops a bias? Can the bias be easily corrected, or does the whole algorithm need to be scrapped and retrained from scratch? My lack of programming knowledge stops me here.
I believe the answer has implications for how companies prioritise this issue. If correcting the bias takes a long time and the algorithm, biased or not, is imperative to revenue and growth, I doubt fixing it will be prioritised soon (or at all).
Thanks for any input on this.