Ethical hacking can help reduce AI bias


Researchers have called upon the community of ethical hackers to work together to help prevent the looming ‘crisis of trust’ that is set to impact artificial intelligence.

Specifically, the researchers are putting forward the need for an outside, global community of hacker ‘red teams’, motivated by the incentive of rewards, to hunt for algorithmic biases. This type of activity is needed to help reduce the so-termed ‘tech-lash’ that artificial intelligence faces unless firm measures are taken to increase public trust.

The proposal comes as the technology sector faces growing concerns about the technologies that underpin advances in artificial intelligence, such as driverless cars and autonomous drones.

There is also concern about social media algorithms that spread misinformation and provoke political turmoil.

The researchers are also concerned that ferocious competition is leading to errors in the development of artificial intelligence, such as creeping bias, compounded by a lack of auditing and robust third-party analysis. Artificial intelligence bias is an anomaly in the output of machine learning algorithms, caused by prejudiced assumptions made during the algorithm development process.
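To illustrate what such a bias can look like in practice, the sketch below is a minimal example (the data, variable names and the loan-approval scenario are hypothetical, not drawn from the paper). It measures one simple form of bias, the gap in favourable-outcome rates between two groups of people:

```python
# Minimal sketch: measuring one simple form of algorithmic bias.
# Assumes a model's binary decisions and a group label are already
# available for each person; all names here are illustrative only.

def positive_rate(decisions):
    """Fraction of cases that received the favourable outcome (1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions, groups):
    """Difference in favourable-outcome rates between group A and group B."""
    rate_a = positive_rate([d for d, g in zip(decisions, groups) if g == "A"])
    rate_b = positive_rate([d for d, g in zip(decisions, groups) if g == "B"])
    return rate_a - rate_b

if __name__ == "__main__":
    # Toy data: 1 = loan approved, 0 = loan rejected.
    decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
    groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
    gap = demographic_parity_gap(decisions, groups)
    print(f"Approval-rate gap between groups: {gap:.0%}")  # 60% in this toy case
```

A large gap like this does not prove wrongdoing on its own, but it is exactly the kind of signal an independent auditor or red team would flag for further investigation.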

While regulation can assist with combating some of these errors, there is a need for greater activity within the technology sector.

This is where the ‘red team’ concept comes in. A red team is a group that plays the role of an enemy or competitor, and provides security feedback from that perspective.

Within the artificial intelligence sphere, the red teams would be formed of ethical hackers playing the role of malicious external agents. Ethical hacking (or penetration testing) is the exploitation of an information technology system, with the permission of its owner, to determine its vulnerabilities.

The idea is that they would be called in to attack any new artificial intelligence, or to strategize on how to use it for malicious purposes. The objective would be to reveal any weaknesses or potential for harm.
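To make the idea concrete, one simple red-team probe is a counterfactual test: feed the system pairs of inputs that differ only in a sensitive attribute and flag any case where the decision changes. The sketch below is a hypothetical illustration (the stand-in scoring rule, threshold and field names are invented for the example), not a method described in the paper:

```python
# Hypothetical red-team probe: does changing only a sensitive attribute
# flip the model's decision? The "model" here is a stand-in scoring rule
# invented purely for illustration.

def toy_model(applicant):
    """Stand-in for a deployed model; approves if the score exceeds a threshold."""
    score = applicant["income"] / 1000 + applicant["years_employed"]
    # A subtle (and unfair) dependence on a sensitive attribute:
    if applicant["group"] == "B":
        score -= 5
    return score > 40

def counterfactual_flip(applicant):
    """Return True if swapping the group label alone changes the outcome."""
    original = toy_model(applicant)
    swapped = dict(applicant, group="A" if applicant["group"] == "B" else "B")
    return toy_model(swapped) != original

if __name__ == "__main__":
    applicant = {"income": 38000, "years_employed": 4, "group": "B"}
    if counterfactual_flip(applicant):
        print("Red-team finding: decision depends on the sensitive attribute.")
    else:
        print("No flip detected for this input.")
```

Real red-team exercises would go far beyond a single probe like this, but even simple tests of this kind can surface weaknesses before a system is deployed.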

Since many companies would not have the resources to ‘red team’, the researchers are calling for a third-party community to independently interrogate new inventions and to share any findings for the benefit of all developers.

The research appears in the journal Science, in a paper titled “Filling gaps in trustworthy development of AI.”
