This LinkedIn article by Prof. Christian Guttmann (UNSW) provides a number of good resources related to AI ethics, classified as follows (taken from the article):
What is AI ethics, AI regulation, AI sustainability?
For the sake of simplicity, I have used the umbrella term “AI ethics and regulation”, and under this umbrella you will find many topics. Below are 7 key notions associated with AI ethics, regulation and sustainability.
Algorithmic Bias and Fairness. An AI may make decisions and take actions that reflect the implicit values of the humans involved in coding, collecting, selecting, or using the data that trains the algorithm.
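One common way to make bias concrete is a group fairness check such as demographic parity. The sketch below uses entirely hypothetical toy decisions for two groups, purely to illustrate the idea:

```python
# Minimal sketch of a demographic parity check (toy, hypothetical data).

def selection_rate(decisions):
    """Fraction of positive (e.g. 'approve') decisions."""
    return sum(decisions) / len(decisions)

# Hypothetical model decisions (1 = approve, 0 = deny) for two groups.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # selection rate 0.75
group_b = [1, 0, 0, 0, 1, 0, 0, 1]  # selection rate 0.375

# Demographic parity difference: the gap between the groups' rates.
dp_diff = abs(selection_rate(group_a) - selection_rate(group_b))
print(round(dp_diff, 3))  # 0.375 -- a large gap suggests possible bias
```

A gap near zero means both groups receive positive decisions at similar rates; a large gap is one signal (though not proof) of bias worth investigating.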
AI Safety. A key example here is adversarial attacks: neural networks can be fooled by carefully crafted inputs. How can we manage such vulnerabilities in AI?
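The core trick behind many adversarial attacks can be shown even on a toy linear classifier (not a real neural network): nudge each input feature slightly in the direction that most increases the model's score. All numbers below are hypothetical and chosen for illustration:

```python
# Minimal sketch of an adversarial perturbation on a toy linear model.

def predict(w, x, b=0.0):
    """Linear classifier: returns 1 if w.x + b > 0, else 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

w = [0.5, -0.3, 0.8]   # toy model weights
x = [1.0, 2.0, -0.2]   # an input the model classifies as 0

# score = 0.5 - 0.6 - 0.16 = -0.26  ->  class 0
print(predict(w, x))   # 0

# Adversarial step: move each feature a tiny amount (eps) in the
# direction of the sign of its weight, maximising the score increase.
eps = 0.2
x_adv = [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

# The perturbation is small per feature, yet the decision flips.
print(predict(w, x_adv))   # 1
```

For deep networks the same idea uses the gradient of the loss instead of the raw weights, which is why small, human-imperceptible changes to an image can flip a model's prediction.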
AI Security. Hacking a self-driving car or a fleet of delivery drones poses a serious risk. Entire electricity grids and transport systems benefit from autonomous decision-making and optimisation, but they need to be secured at the same time. How can we secure AI systems?
AI Accountability. Who is accountable when an entire process is automated? For example, when a self-driving car is involved in an accident, who is held accountable? Is it the manufacturer of the car, the government, the driver of the car, or the car itself?
AI Quality Standardisation. Can we ensure that AI behaves in the same way for all AI services and products?
AI Explainability. Can, or should, an AI be able to explain the exact reasons for its actions and decisions?
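One of the simplest forms of explanation is per-feature contribution in a linear model, where each feature's share of the decision score is just weight times value. The weights and applicant features below are hypothetical:

```python
# Minimal sketch of per-feature contributions as a simple explanation.

w = {"income": 0.6, "debt": -0.9, "age": 0.1}   # toy model weights
x = {"income": 2.0, "debt": 1.5, "age": 3.0}    # one applicant's features

# Each feature's contribution to the decision score is weight * value.
contributions = {name: w[name] * x[name] for name in w}
score = sum(contributions.values())

# Report features ordered by how strongly they influenced the decision.
for name, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")
print(f"total score: {score:+.2f}")
```

Real explainability methods (e.g. Shapley-value approaches) generalise this idea to non-linear models, where contributions cannot be read directly off the weights.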
AI Transparency. Do we understand why an AI has taken certain actions and decisions? Should there be a requirement for automated decisions to be publicly available?
There are other related topics such as responsible AI, sustainable AI, and AI product liabilities.