This feels like an extension of a broader problem: ethics, in general, is not part of the curriculum in education.
Anaconda’s survey of data scientists from more than 100 countries found that the ethics gap extends from academia to industry. While organizations can mitigate the problem with fairness tools and explainability solutions, neither appears to be gaining mass adoption.
Only 15% of respondents said their organization has implemented a fairness system, and just 19% reported they have an explainability tool in place.
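For context, a basic fairness check of the kind the survey asks about need not be elaborate. The sketch below is a minimal, illustrative example only, assuming the open-source Fairlearn library, a scikit-learn classifier, and synthetic data with a made-up group attribute; it is not drawn from the survey itself.

```python
# Minimal sketch of a per-group fairness audit (illustrative assumptions:
# Fairlearn + scikit-learn, synthetic data, a fabricated "group" attribute).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from fairlearn.metrics import MetricFrame, selection_rate

# Synthetic data; "group" stands in for a protected characteristic.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
group = np.random.default_rng(0).integers(0, 2, size=len(y))

X_train, X_test, y_train, y_test, g_train, g_test = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)

# Compare accuracy and selection rate across groups; large gaps flag
# potential disparate impact worth investigating further.
frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_test,
    y_pred=y_pred,
    sensitive_features=g_test,
)
print(frame.by_group)
print("Max between-group difference:\n", frame.difference())
```

An explainability tool would typically sit alongside a check like this (for example, feature-attribution reports for individual predictions), but even a per-group metric table is the sort of "fairness system" most respondents said their organization lacks.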
The study authors warned that this could have far-reaching consequences:
Above and beyond the ethical concerns at play, a failure to proactively address these areas poses strategic risk to enterprises and institutions across competitive, financial, and even legal dimensions.
The survey also revealed concerns around open-source security, business training, and data drudgery. But it’s the disregard for ethics that most troubled the study authors:
Of all the trends identified in our study, we find the slow progress to address bias and fairness, and to make machine learning explainable the most concerning. While these two issues are distinct, they are interrelated, and both pose important questions for society, industry, and academia.
While businesses and academics are increasingly talking about AI ethics, their words mean little if they don’t turn into action.