Ethics are a critical part of many industries, including law enforcement, healthcare and hospitality. Ethics should also be a major part of any software developer’s job.
One major set of ethics guidelines has been issued by a joint task force of the Association for Computing Machinery (ACM) and the IEEE Computer Society (IEEE-CS). However, there is no all-encompassing formal ethical code of conduct that is accepted across the entire industry.
The ethics of developers have received particular scrutiny in the wake of the Cambridge Analytica incident, which involved the attempt to influence voters through the use of software. That incident came after it was revealed in 2015 that Volkswagen engineers had used software to manipulate emissions test results.
These are just a couple of examples of how software may be used for dubious ends, and there are countless other ways to develop and use software unethically. For example, AI and automation can help businesses screen for diverse applicants, but the same technologies can also be used to screen out individuals of particular ethnicities, genders or religions.
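One way teams sometimes guard against this kind of screening bias is to audit a tool's outcomes for disparate impact. The sketch below is illustrative only: the group names and numbers are invented, and it applies the widely cited "four-fifths" heuristic from US employment-selection practice, under which a group's selection rate below 80% of the highest group's rate is a signal worth investigating.

```python
def selection_rates(outcomes):
    """outcomes maps group name -> (selected, total); returns rate per group."""
    return {group: selected / total
            for group, (selected, total) in outcomes.items()}

def adverse_impact_groups(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below threshold * best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return sorted(g for g, r in rates.items() if r < threshold * best)

# Invented example data: group_B's rate (0.20) is half of group_A's (0.40),
# well under the 80% mark, so it gets flagged for human review.
outcomes = {"group_A": (40, 100), "group_B": (20, 100), "group_C": (35, 100)}
print(adverse_impact_groups(outcomes))  # -> ['group_B']
```

A flag from a check like this is not proof of wrongdoing, but it surfaces a question a developer should be asking before the product ships rather than after.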
Therefore, it’s the responsibility of those who design and build software to do ethical work.
This reality places a lot of pressure on developers, who may find it challenging to navigate right and wrong while racing to meet deadlines and make a living. Some experts now say a formal code of ethics could offer context and a framework to support developers' ethical decisions.
It can be challenging to know where the line is between right and wrong in the world of software development. While a standardized code of ethics might be a solution, it may be more useful to teach people how to ask the proper questions around ethical issues.
This could be done by offering more ethics instruction than is currently available, particularly in a professional context rather than just a class about theory, since ethics alone isn't effective outside a broader set of professional standards.
There’s also the fact that individuals typically don’t build software by themselves, nor do they make ‘wrong’ choices all at once. Software is typically built incrementally by teams of developers.
Teaching everyone to ask the proper ethical questions requires knowing what those questions are and recognizing that everyone's values differ. For instance, some people have no qualms about working on software for nuclear reactors or developing targeting systems for military craft, while others would flat-out refuse such jobs.
Artificial intelligence, machine learning, and automation could help remedy these moral complications by freeing humans to dedicate more energy to the effects of the technology they're building. At the moment, there is so much pressure to meet deadlines and release products that there is precious little time to ponder ethical issues. Automating more operations can reduce some of the human effort so the community can do more critical thinking and address issues before they get out of hand.