Researchers from the University of Massachusetts Amherst have described a framework that aims to prevent "undesirable behaviour" in AI systems.

A key issue in the development of machine learning algorithms is the risk that the resulting algorithm contains inherent bias. The new framework aims to combat this by making it easier for researchers to specify what counts as "undesirable behaviour" and to set limits on it. 'Seldonian' algorithms, which form the core of the framework, can be developed to constrain a wide range of behaviours across many different applications - from controlling an automated insulin pump to predicting student GPAs without gender bias.
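The core idea can be sketched in a few lines. In a Seldonian-style setup, the designer encodes an undesirable behaviour as a function g whose expected value must be at most zero, and a candidate solution is accepted only if a high-confidence statistical test on held-out data certifies this. The snippet below is a minimal illustration of that pattern, not the authors' implementation; the function names and the use of a Hoeffding bound are assumptions made for the sketch.

```python
import numpy as np

def hoeffding_upper_bound(samples, delta, sample_range=1.0):
    """One-sided (1 - delta)-confidence upper bound on the mean of
    samples bounded within an interval of width `sample_range`."""
    n = len(samples)
    return np.mean(samples) + sample_range * np.sqrt(np.log(1.0 / delta) / (2.0 * n))

def seldonian_safety_test(g_samples, delta=0.05):
    """Accept a candidate solution only if, with confidence 1 - delta,
    the expected undesirable behaviour g is at most zero.

    `g_samples` are per-example evaluations of the designer-chosen
    behaviour function g (hypothetical name), where g <= 0 means
    the behaviour is acceptable.
    """
    return hoeffding_upper_bound(g_samples, delta) <= 0.0

# Toy example: the observed behaviour measure is safely negative,
# so the safety test passes (samples lie in [-1, 0], width 1).
rng = np.random.default_rng(0)
safe = seldonian_safety_test(rng.uniform(-1.0, -0.5, size=500))
```

If the test fails, a Seldonian algorithm returns "No Solution Found" rather than deploying a model it cannot certify, which is what shifts the burden of avoiding bias from the end user to the algorithm designer.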

The work offers hope that genuine progress is being made toward making intelligent machines fair.