Shaping AI's Future: The Importance of Ethical Development
AI is going to play a fundamental role in solving humanity’s biggest challenges, but there are major ethical concerns about some of the outcomes from using this technology, from bias and discrimination to sensitive data loss.
Ethical AI is the concept of ensuring AI is used solely for the benefit of humanity, such as enhancing medical research and the efficiency of public services.
This approach also necessitates overcoming the potential negative outcomes from AI use, including data privacy threats and bias in decision making. This is something that must be achieved to ensure the potential of AI is unlocked in a truly ethical way.
Primary AI Risks
Experts have raised numerous concerns around AI’s impact on individuals and wider society as it is adopted more widely.
John Durcan, IDA Ireland’s Chief Technologist, told Infosecurity that a big issue is the fact that a lot of data AI models are currently trained on are centred on certain sections of society, in particular Western characteristics and beliefs.
This has the potential for bias and even discrimination when AI is applied in fields such as medicine and justice.
It can also result in ‘edge cases’, a problem or situation that occurs at the extreme of what's considered normal or expected, potentially impacting millions of people.
“We have a lot of data on particular segments of society, but not on all areas,” Durcan warned.
Another threat is the potential for government and law enforcement overreach in using AI for purposes such as surveillance using biometric technology.
While there is a strong case to be made around using this technology for policing and national security purposes, Durcan said there needs to be safeguards around such usage to avoid abuse.
Data security and privacy around AI models is already a major issue. This includes leakage of sensitive corporate data caused by prompt injection – users manipulating large language model (LLM) outputs through prompts accidently or maliciously.
Ronan Murphy, Member of the AI Advisory Council for the Government of Ireland, told Infosecurity that for AI technologies to deliver value to an organisation, they need access to vast quantities of data.
“That represents the biggest single risk from a cyber, governance and risk and compliance perspective, that any industry has ever faced,” he noted.
Register for Europe’s leading cybersecurity event
Join us at London ExCeL, 3-5 June, for three days of learning, networking, discovering and exploring all things Infosecurity.
How to Ensure Ethical AI Use
Overcoming ethical issues around AI represents a significant challenge, especially as the development and implementation of AI technology expands rapidly.
There are several steps governments and organisations can take to reduce the risk of these issues occurring.
Government Regulation and Safeguards
Countries around the world are introducing legislation to govern the use of AI, with the aim of developing rules around the safe and fair application of the technology
In October 2023, US President Joe Biden issued an Executive Order to establish new standards for AI safety and security.
The EU has led the way from a legislative perspective, passing the AI Act in August 2024.
This legislation sets out a risk-based framework for organisations’ use of AI tools, breaking down safeguarding requirements into different categories depending on the level of risk to citizens’ rights or safety.
The EU AI Act includes accountability and transparency requirements on law enforcement around their use of AI. For example, law enforcement agencies deploying biometric systems must provide clear explanations of how they procure and operate their biometric identification systems, including algorithms, data governance, and potential biases.
Durcan explained that this legislation is taking a similar approach to that seen with the General Data Protection Regulation (GDPR), whereby guidelines are set before refining and developing use cases over time.
“My guess is over the next 12-18 months we’re going to see a lot of iterations going through this and understanding where the boundaries are,” he commented.
However, Murphy cautioned against regulating the development of AI, which he believes will stifle innovation.
“I am in favour of the regulation on the application of AI in areas like critical infrastructure and health and how companies use the AI to deliver the services that they need, but if you try and regulate the development of AI, you kill it,” he warned.
Securing AI Training Data
The foundation of organisations’ approach to complying with regulations and embracing ethical AI use is the data their AI models are trained on, according to Murphy.
This involves having the tools and processes in place to “sanitize” any information going into AI applications. This is to ensure that any sensitive data, such as personal or proprietary information, are excluded or masked prior to being entered into the models.
Failure to do so will mean the organisation cannot use AI models and lose competitive advantage as a result.
“We’re seeing two types of companies – those who want to embrace AI and get their data into a good place and those who block it,” noted Murphy.
Focusing on AI models’ foundational data is also vital to overcoming issues like bias and discrimination from using this technology.
Durcan said that consumer tech companies are becoming increasingly aware of the need to ensure the data going into AI models reflects the demographics of their client base, for commercial and reputational needs.
This benefit has been demonstrated in the development of HR software by tech organisations with diverse workforces, Durcan observed.
“The diversity workforce has been a value add for them as they look at their models and go forward for internal testing. They are picking up stuff early on that they may not have done if it was a very monocultural culture,” he commented.
Building Multidisciplinary AI Governance Teams
Durcan also encouraged organisations to build specific AI teams that encompass a range of skills, far beyond just technical expertise. This will be essential to addressing the wide range of ethical challenges that AI can present.
These skills include:
- Cybersecurity: To review security and privacy measures around the AI models, including conducting regular red teaming exercises to test these
- Legal: To ensure compliance with relevant AI and data protection regulations
- Humanities. To consider issues such as bias and other sociological impacts
- Psychology. To understand how customers will interact with these tools most effectively, and addressing
- Public relations. To build trust in the AI tools by explaining why they are being used and providing transparency about how they are making decisions
“Companies that build these core multidisciplinary teams that are working together will be the companies that are going to have the future products that will successfully take off, because it will address the concerns of the consumers,” explained Durcan.
ADVERTISEMENT
Conclusion
AI, while promising to solve humanity's biggest challenges, must be developed and used ethically to avoid potential negative impacts like bias, discrimination, and data privacy breaches.
Governments and organisations must understand these threats, and ensure they are building in the necessary tools, processes and skills to reduce the risk of them occurring. With AI adoption growing fast, this task has become an urgent necessity.
Enjoyed this article? Make sure to share it!
Latest Articles
Keep up to date with the latest infosecurity news and trends in our latest articles.
Stay in the know
Receive updates about key events, news and recent insights from Infosecurity Europe.
Looking for something else?