And if the algorithm is producing negative side effects, then, of course, it should be looked at and changed.
I'm no expert myself, but to my understanding: any algorithm is limited by its data set.
Based on its data set, an algorithm comes to conclusions. But one can then, of course, ask: what's the basis for these conclusions?
I recall reading that a certain AI had been fooled into thinking a picture of a banana showed a toaster or a helicopter, after a few parts of the image were changed to contain tiny bits of those items.
It turned out that the AI was using the apparent texture in parts of the image to decide what it was looking at, rather than comparing shapes.
Which sounds like a shortcut, though it may very well have been the method that most consistently produced correct results for the given dataset.
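For what it's worth, here's a rough sketch of how that kind of patch attack is usually set up, as I understand it. This is PyTorch with a made-up stand-in image and the usual ImageNet "toaster" class index as assumptions; the idea is just to optimize a small patch so the classifier's answer flips, without touching the rest of the picture (normalization and image loading skipped for brevity):

```python
# Minimal adversarial-patch sketch (assumptions: torchvision's pretrained
# ResNet-50, a random stand-in image, ImageNet class 859 = "toaster").
import torch
import torchvision.models as models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224)                     # stand-in for a banana photo
patch = torch.rand(1, 3, 32, 32, requires_grad=True)   # small trainable "sticker"
target = torch.tensor([859])                           # "toaster" in ImageNet-1k

optimizer = torch.optim.Adam([patch], lr=0.05)
for _ in range(200):
    attacked = image.clone()
    attacked[:, :, :32, :32] = patch                   # paste the patch into one corner
    loss = torch.nn.functional.cross_entropy(model(attacked), target)
    optimizer.zero_grad()
    loss.backward()                                    # gradient flows only into the patch
    optimizer.step()
    with torch.no_grad():
        patch.clamp_(0, 1)                             # keep pixel values valid
```

The point being: the patch only changes a tiny region, yet the loss keeps pushing the whole prediction toward "toaster", which fits the idea that the model leans on local texture rather than overall shape.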
Frankly, the attitude of "we don't know how it works and we don't care" cannot possibly end well.
Nor can the attitude of "oh well, just make a better dataset then".
I get that we're all excited about the amazing abilities we're seeing here, but that doesn't mean we shouldn't look where we're going.
I recall a story of an AI researcher who didn't want to define anything, because he was afraid of introducing bias. Upon hearing this, his colleague covered his own eyes. When asked why he did this, he replied: "The world no longer exists". And the other understood.
Because of course the world still exists. And in just the same way, it's impossible to get rid of bias.
Some human intervention is needed, as are constant checks and comparisons against human results.
The problem with the dataset is that you're not in control of who populates it or what their intentions are. There's no adversarial model, no threat handling.
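To make that concrete, here's a toy sketch (synthetic data, everything hypothetical) of what a malicious contributor can do just by relabeling the slice of the data they control; nothing sophisticated, just flipped labels:

```python
# Toy label-poisoning sketch: compare a model trained on clean labels with
# one trained after an "untrusted contributor" relabels part of the data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)            # clean ground truth
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

poisoned = y_tr.copy()
poisoned[X_tr[:, 0] > 1.0] = 0                      # targeted relabeling of one region

clean = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)
dirty = LogisticRegression().fit(X_tr, poisoned).score(X_te, y_te)
print(f"accuracy with clean labels:    {clean:.3f}")
print(f"accuracy with poisoned labels: {dirty:.3f}")
```

The model itself never complains; it quietly fits whatever it's given, which is exactly why it matters who supplied the labels and what they wanted.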