The inputs and assumptions made by the people selecting the math are the 'morally wrong' part.
Bias is real, like it or not. Your worldview, as a data scientist or programmer or whatever, impacts what you select as important factors in 'algorithm a'. Algorithm a then selects those factors for other people in the system, baking in your biases, but screening them behind math.
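To make that concrete, here's a minimal sketch of how it happens. Everything in it is invented for illustration (the factor names, the weights, the threshold), but it shows the mechanism: a scoring rule that looks like neutral math, where one of the chosen factors is a proxy that quietly imports historical bias into every future decision.

```python
# Hypothetical example: a "neutral" loan-scoring rule whose chosen
# factors encode the modeller's assumptions. All names, weights, and
# numbers here are invented for illustration.

def loan_score(applicant):
    """Score an applicant using factors someone *chose* to include.

    Including zip_code_default_rate looks like objective math, but if
    historical defaults correlate with race, this choice bakes that
    correlation into every decision the algorithm makes from now on.
    """
    score = 0.0
    score += 0.5 * (applicant["income"] / 100_000)        # chosen factor 1
    score += 0.3 * (applicant["years_employed"] / 10)     # chosen factor 2
    score -= 0.6 * applicant["zip_code_default_rate"]     # proxy variable
    return score

def approve(applicant, threshold=0.4):
    # The cutoff itself is also a human choice, not a fact of nature.
    return loan_score(applicant) >= threshold

# Two applicants identical in every personal respect, differing only in
# the default history of the neighbourhood they happen to live in:
a = {"income": 60_000, "years_employed": 5, "zip_code_default_rate": 0.05}
b = {"income": 60_000, "years_employed": 5, "zip_code_default_rate": 0.40}

print(approve(a))  # True  -- approved
print(approve(b))  # False -- denied, purely on neighbourhood history
```

Nobody in this sketch wrote "deny loans by race", yet the selection of `zip_code_default_rate` as a factor can produce exactly that outcome, screened behind arithmetic.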
Every choice we make is a moral choice. Once we're done modelling and actually put that model to use, we are making a moral choice.
For example, if you believe that lowering debt default rates is more important than fairness to an individual, then you've made a moral choice. If you believe it's OK to deny loans to Black applicants because a relatively large share of Black borrowers have defaulted on their loans, that's a moral choice too.
Furthermore, ascribing truth to models is an age-old human fallacy. The truth can fit plenty of models reasonably well; none of those models is the truth.