Ultimate objectivity is touted as the new vanguard against a dystopia of human decision making. Algorithms abound these days; machine intelligence is used in nearly every sector, from farming to financial trading, blue collar to white collar, and across supply chains. 'Feeling' is the enemy. Or is it?
With this proliferation comes a problem: the apparent dichotomy between ethics and efficiency. Take machine bias in criminal sentencing and policing, or the amplification of inflammatory opinion through mass platforms. If machines decide whom to prosecute, whom to exclude, who gets a loan, and how public opinion is shaped, how do we know they are fair, reasonable and free from bias? This calls for Aristotelian deliberation!
What is important to note is that security and ethics are socio-technical concepts in today's world. Machine-led immigration control, for example, must improve security in an ethically acceptable fashion; if it does not, governments and organizations face heavy disapproval. GDPR's impositions are only one aspect of this, because the resulting public and social scorn can force even the most powerful companies to their knees #FacebookScandal #CambridgeAnalytica.
If you have computer intelligence doing your bidding, ensure that you are ethical and secure by design; embed these concepts into the tiniest decision and build upwards. Put consumer consent first. Don't train algorithms on biased or partial data sets; doing so only equates stereotype with 'efficiency'. Maintain data confidentiality by encrypting at rest and in transit, and integrity through verification and timely updates. Retrain on new data so as not to set the status quo in stone. Last, but not least, hold algorithms accountable. If a classification or prediction violates common ethical code, then go Arnie on the code and fire: 'You are terminated'.
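One way to act on the "don't train on biased data" advice is to audit a data set before training. The sketch below is a minimal illustration, not a complete fairness toolkit: the field names, the toy loan records, and the 0.8 threshold (a common rule of thumb known as the 'four-fifths rule') are all illustrative assumptions.

```python
from collections import Counter

def selection_rates(records, group_key, label_key, positive="approved"):
    """Positive-outcome rate per group in a labelled data set."""
    totals, positives = Counter(), Counter()
    for row in records:
        group = row[group_key]
        totals[group] += 1
        if row[label_key] == positive:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of lowest to highest selection rate across groups."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 1.0

# Hypothetical, hand-made loan records for illustration only.
data = [
    {"group": "A", "decision": "approved"},
    {"group": "A", "decision": "approved"},
    {"group": "A", "decision": "denied"},
    {"group": "B", "decision": "approved"},
    {"group": "B", "decision": "denied"},
    {"group": "B", "decision": "denied"},
]

rates = selection_rates(data, "group", "decision")
if disparate_impact(rates) < 0.8:  # four-fifths rule of thumb
    print("Warning: data may encode bias; review before training.")
```

A check like this is cheap to run on every retraining cycle, which also serves the earlier point: retrain on fresh data, and re-audit it each time.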
Bottom line: assess every stage of artificial intelligence through the lens of ethics, security and privacy.
So, are ethics and efficiency antithetical? Through proactive design, I think not.