The Ethical Dilemma
The text and dilemma below were presented by Partner Martin Gronemann at Sibos 2020, the financial services event, as a 60-second ethical dilemma session.
WATCH THE SESSION HERE (MARTIN'S PART STARTS AT TIMESTAMP 42:00)
Most people end up making suboptimal investment decisions. But is automation the answer from an ethical point of view?
Proponents of automated investing point to its potential to remove the human biases that lead to poor decision-making. That people are more likely than machines to make bad investment calls, because investing requires processing power that machines have in spades and humans do not. Also, that removing the human factor could reduce wealth inequality, as more people get access to lower-cost but higher-quality services.
Those opposing automated investing argue that human intuition and experience trump technology. That human interpretation and sensitivity are needed to craft an investment strategy that fits people's lives and financial situations. Also, given that algorithmic bias is a subject of debate and concern: How do we know that the investment choices the machine is making are aligned with my values? Can we fully account for these variables in an algorithm? If not, do we care?
Question: So, is it more ethical to make investment decision-making more automated or less automated?
The audience voted on the dilemma for 30 seconds, and here are the results:
More automated - algorithms can make better investment decisions and reach more people, which makes automation the morally right thing to do: 35% of votes
Less automated - we lose something substantial by removing the human touch from investment decisions and run a greater risk of investing in morally suspect ways, which makes decreasing automation the morally right thing to do: 34% of votes
No difference - there is no big ethical difference between human and automated investment advice: 18% of votes
Irrelevant - ethics is not an important dimension to consider when comparing automated and human investment advice: 13% of votes
Predicting the response
Martin shared a prerecorded response guessing the outcome of the vote:
It’s hard to predict the answers when I am not in the room and don’t have first-hand experience of the people joining the session. But my guess is that - when forced to choose - most people have greater faith in automated, algorithmic advice than they have in humans. In times of crisis - like now - I could imagine people opting for more human ethical judgement calls.
But I still believe that, when forced to decide between the two, more people will choose the ‘more automated’ option. I am curious to see how big the difference between the two turns out to be. As for the two other categories, I would be surprised to see a high number of votes.
I think there are clear ethical pros and cons to human versus automated advice, so I would be surprised to see a lot of people choose ‘no difference’. I would be tempted to read that as an ethical cop-out.
Also, while I am not sure most executives consider this ethical dilemma on a daily basis, I would be surprised if a lot of people answered ‘irrelevant’ when explicitly confronted with it.
Overall, it has been interesting to reflect on human versus automated advice. It seems clear to me that:
The more we try to codify good advice, the more important ethics becomes
The more we try to use algorithms to decode human needs, the more important it becomes to ensure they are built on the right human assumptions
But also that we would benefit from looking at questions like these from a broader perspective. Our research during COVID shows that what people need more than anything in a time of uncertainty is for banks to show their human face and become a source of stability. That the biggest human problem is not suboptimal investment returns but the fact that 51% of Americans are financially anxious and that 90% of people in the UK believe their bank is a merely transactional provider.
And is a machine the answer to that?