What does an Artificial Intelligence model think a doctor looks like? The image may be computer-generated, but it may also reflect some very human biases, as Bloomberg found when they tested one image generator that produced mostly male doctors and mostly female nurses.
AI has the potential to transform the research, healthcare, and publishing sectors. However, as its use grows, so do concerns about bias and data privacy, particularly in areas that rely on sensitive, diverse datasets where AI decisions have a real-world impact.
AI bias isn’t just a technical flaw; it’s a cultural one. As technologists and data scientists, we have a responsibility to ensure that, as AI becomes embedded in business culture, it represents society and our diverse human population as a whole.
AI bias: concerns vs potential
AI bias refers to discriminatory patterns in algorithmic decision-making, often stemming from biased or unrepresentative training data. In hiring, this can result in biased recruitment, such as an AI model that favours male candidates. In healthcare, the consequences are even more critical, with biased models potentially causing misdiagnoses, unequal treatment, and the exclusion of vulnerable populations.
Elsevier’s Attitudes Towards AI report, a global study that looked at the current opinions of researchers and clinicians on AI, revealed that the most commonly cited disadvantage of the technology is the risk of biased or discriminatory outputs, with 24% of researchers ranking this among their top three concerns.
However, AI does have the potential to help remedy existing biases. The Pew Research Center reported that 51% of US adults who see a problem with racial and ethnic bias in health and medicine think AI could improve the issue, and 53% believe the same for bias in hiring.
Enshrining data privacy to build trust in AI
Balancing data use with privacy is challenging. AI systems depend on large, often opaque datasets that pose risks like surveillance and unauthorised access.
But preserving data privacy is the cornerstone of trust in AI systems. Failing to address privacy and data concerns not only has a commercial impact but also significantly erodes trust among customers and end users.
Personal data, such as browsing habits or purchase history, can be used to infer sensitive details about individuals. Privacy frameworks help prevent unauthorised access, which is especially critical in sectors like publishing and research, where data often includes personal, academic, or medical information.
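As a concrete illustration of one common privacy safeguard, here is a minimal sketch of pseudonymisation: replacing direct identifiers with salted hashes before records enter an AI pipeline, so data can still be linked without exposing who it belongs to. The field names and salt handling are illustrative assumptions, not a description of any specific framework, and pseudonymisation alone is not full anonymisation.

```python
import hashlib
import os

# Illustrative assumption: a secret salt held outside the dataset,
# e.g. in a secrets manager. An environment variable stands in here.
SALT = os.environ.get("PSEUDONYM_SALT", "replace-with-a-secret-salt")

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier (e.g. an email address) with a
    salted hash, so records can be linked without exposing the value."""
    return hashlib.sha256((SALT + identifier).encode("utf-8")).hexdigest()

record = {"email": "jane.doe@example.com", "purchase": "textbook"}

# Strip the direct identifier before the record is used downstream.
# Note: hashed identifiers are pseudonymous, not anonymous; re-identification
# risk remains if the salt leaks or quasi-identifiers are kept.
safe_record = {
    "user_id": pseudonymise(record["email"]),
    "purchase": record["purchase"],
}
print(safe_record)
```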
Bias mitigation in practice
Mitigating bias risk requires diverse, representative data, bias assessments of both inputs and outputs, and techniques like Retrieval-Augmented Generation (RAG) to ground responses in trusted sources. Accountability is reinforced through audits, transparent documentation, and collaboration between legal and technology teams.
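To make "bias assessments of outputs" concrete, the sketch below computes a simple demographic-parity check on a model's decisions: the rate of favourable outcomes per group, and the gap between them. The data, group labels, and threshold are invented for illustration; a real assessment would use representative evaluation sets and a broader battery of fairness metrics.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, favourable) pairs.
    Returns the favourable-outcome rate per group."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += int(outcome)
    return {g: favourable[g] / totals[g] for g in totals}

# Invented example outputs from a hiring model.
decisions = [("male", True), ("male", True), ("male", False),
             ("female", True), ("female", False), ("female", False)]

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity gap: {gap:.2f}")

# Illustrative threshold: flag the model for review if the gap is large.
if gap > 0.2:
    print("Warning: outputs show a large disparity between groups.")
```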
In my own team, we apply mitigation principles by rigorously evaluating datasets for bias, using RAG to anchor large language model outputs in peer-reviewed content, and monitoring for gender bias in reviewer recommendations. Strong governance, including an AI ethics board, compliance reviews, and privacy impact assessments, ensures our systems align with ethical and organisational standards and are backed by responsible AI principles.
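As a sketch of the RAG pattern described above (not our production pipeline): retrieve the most relevant passages from a trusted corpus, then pass only those passages to the language model as context. The word-overlap scoring here is a naive stand-in for a real embedding-based retriever, and the final prompt stands in for whatever LLM call is in use; the corpus snippets are invented.

```python
def score(query: str, passage: str) -> float:
    """Naive word-overlap relevance score; a real system would use
    embeddings and a vector index instead."""
    q, p = set(query.lower().split()), set(passage.lower().split())
    return len(q & p) / len(q) if q else 0.0

def retrieve(query, corpus, k=2):
    """Return the k passages from the trusted corpus most relevant
    to the query."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

# Illustrative stand-in corpus of peer-reviewed snippets.
corpus = [
    "Peer-reviewed study: drug A reduced symptoms in 60% of patients.",
    "Editorial: publishing workflows are changing rapidly.",
    "Peer-reviewed study: drug A showed no benefit for condition B.",
]

query = "Is drug A effective?"
context = retrieve(query, corpus)

# The grounded prompt an LLM would receive: the model is instructed
# to answer only from the retrieved, trusted passages.
prompt = ("Answer using only these sources:\n"
          + "\n".join(context)
          + f"\n\nQuestion: {query}")
print(prompt)
```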
Human-in-the-loop
Building responsible AI requires inclusive design, diverse perspectives, and ethical oversight. AI systems often reflect the values and assumptions of those who create them, which is why a responsible human touch, not just technical capability, must guide their development. This is the human-in-the-loop approach: keeping people positioned to oversee what is produced and ensure decisions are made fairly.
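One minimal form of human-in-the-loop oversight is a confidence gate: outputs the system is unsure about are routed to a person rather than released automatically. The confidence score and threshold below are illustrative assumptions, and in practice the "reviewer" branch would enqueue the item in a reviewer's worklist.

```python
from dataclasses import dataclass

@dataclass
class ModelOutput:
    text: str
    confidence: float  # assumed to come from the model or a separate scorer

REVIEW_THRESHOLD = 0.8  # illustrative; tuned per application and risk level

def route(output: ModelOutput) -> str:
    """Release confident outputs; queue uncertain ones for human review."""
    if output.confidence >= REVIEW_THRESHOLD:
        return "released"
    return "sent to human reviewer"

print(route(ModelOutput("Recommend reviewer X for this manuscript.", 0.93)))
print(route(ModelOutput("Diagnosis suggestion: condition Y.", 0.55)))
```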
Transparency plays a key role in building trust. That includes making it clear how AI-generated content is produced and where the underlying data is sourced. By ensuring traceability and openness, we can help users better understand and evaluate the outputs of these systems.
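Traceability can be as simple as attaching a provenance record to every generated answer: which sources were used, which model produced it, and when. The structure below is an illustrative sketch rather than a standard schema, and the source identifier and model name are invented.

```python
import json
from datetime import datetime, timezone

def with_provenance(answer: str, sources: list[str], model: str) -> dict:
    """Bundle a generated answer with the metadata a reader needs
    to trace how it was produced."""
    return {
        "answer": answer,
        "sources": sources,          # e.g. DOIs or document IDs
        "model": model,              # which system generated the text
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

record = with_provenance(
    answer="Drug A reduced symptoms in 60% of patients in one trial.",
    sources=["doi:10.1000/example.123"],  # invented identifier
    model="example-llm-v1",               # invented model name
)
print(json.dumps(record, indent=2))
```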
Ultimately, the path to trustworthy AI lies in continuous learning, open dialogue, and a commitment to fairness. With thoughtful design and responsible governance, AI can be shaped into a tool that supports human decision-making and drives advances that contribute positively to society.
