DOI: https://doi.org/10.37547/tajet/Volume07Issue05-19

Mitigating Algorithmic Bias in Predictive Models

Tamanno Maripova, Data Analyst, New York, USA

Abstract

This article examines systematic errors in predictive machine-learning models that produce disparate outcomes for different social groups and proposes a holistic approach to mitigating them. The relevance of the study is driven by these risks, growing legal requirements, and corporate commitments to ethical AI. The work develops a taxonomy of bias sources across the data-collection and annotation, proxy-feature selection, model-training, and deployment stages, and compares the effectiveness of pre-, in-, and post-processing methods on representative datasets, measured by demographic parity, equalized error rates, and disparate impact. The novelty of the article lies in its two-level approach: first, a systematic review of regulatory definitions (NIST, IBM) and case studies (COMPAS, healthcare-service prediction, face recognition) that identifies key bias factors, from sample imbalance to feedback loops; second, an empirical comparison of Reweighing, adversarial debiasing, and threshold post-processing, alongside flexible multi-objective strategies such as YODO (via the AI Fairness 360 and Fairlearn libraries), under acceptable accuracy losses. The root source of unfairness remains data bias; hence pre-processing (rebalancing, synthetic oversampling) is essential, while in- and post-processing can substantially harmonize group metrics at some cost in accuracy. Furthermore, without continuous online monitoring and documentation (datasheets, model cards), even a balanced model risks losing fairness through dynamic feedback effects. Combining technical mitigation with regulatory requirements and formalized audit procedures ensures reproducibility and transparency, which are key to long-term trust in AI systems. The article will help machine-learning practitioners, responsible-AI specialists, and auditors identify, measure, and mitigate algorithmic bias in production models.
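As an illustration of the group-fairness metrics and post-processing techniques discussed above, the sketch below uses the Fairlearn library to measure demographic parity for a baseline classifier and then apply group-specific decision thresholds. This is a minimal sketch under stated assumptions: the synthetic data, the logistic-regression model, and all variable names are illustrative and are not the datasets or models evaluated in the article.

```python
# Minimal sketch: demographic-parity measurement and threshold post-processing
# with Fairlearn. Data, features, and model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.metrics import demographic_parity_difference
from fairlearn.postprocessing import ThresholdOptimizer

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                     # binary sensitive attribute
X = np.column_stack([rng.normal(size=n), group])  # second column acts as a proxy feature
y = (X[:, 0] + 0.8 * group + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

# Baseline model trained without any fairness constraint.
base = LogisticRegression().fit(X, y)
pred = base.predict(X)
print("DP difference (baseline):",
      demographic_parity_difference(y, pred, sensitive_features=group))

# Post-processing: choose group-specific thresholds that equalize selection rates.
mitigator = ThresholdOptimizer(estimator=base,
                               constraints="demographic_parity",
                               predict_method="predict_proba",
                               prefit=True)
mitigator.fit(X, y, sensitive_features=group)
pred_fair = mitigator.predict(X, sensitive_features=group, random_state=0)
print("DP difference (post-processed):",
      demographic_parity_difference(y, pred_fair, sensitive_features=group))
```

A pre-processing baseline (e.g., Reweighing in AI Fairness 360) or an in-processing one (adversarial debiasing) could be compared against this kind of threshold post-processing in the same way, trading a small accuracy reduction for tighter group parity.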

Keywords

algorithmic bias, fairness, pre-processing, in-processing, post-processing, demographic parity, equalized odds, disparate impact, AI Act, NIST AI RMF, model cards

References

R. Schwartz, A. Vassilev, K. Greene, L. Perine, A. Burt, and P. Hall, “Towards a Standard for Identifying and Managing Bias in Artificial Intelligence,” NIST Special Publication 1270, Mar. 2022, doi: https://doi.org/10.6028/nist.sp.1270.

A. Jonker and J. Rogers, “What is algorithmic bias?” IBM, Sep. 20, 2024. https://www.ibm.com/think/topics/algorithmic-bias (accessed Apr. 18, 2025).

A. Davison, “AI ethics tools,” IBM, Sep. 03, 2024. https://www.ibm.com/think/insights/ai-ethics-tools (accessed Apr. 19, 2025).

J. Larson, S. Mattu, L. Kirchner, and J. Angwin, “How We Analyzed the COMPAS Recidivism Algorithm,” ProPublica, May 23, 2016. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm (accessed Apr. 20, 2025).

T. Devries, I. Misra, and C. Wang, “Does Object Recognition Work for Everyone?,” CVPR Workshops, 2019. Accessed: Apr. 21, 2025. [Online]. Available: https://openaccess.thecvf.com/content_CVPRW_2019/papers/cv4gc/de_Vries_Does_Object_Recognition_Work_for_Everyone_CVPRW_2019_paper.pdf

J. Buolamwini and T. Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” Proceedings of Machine Learning Research, vol. 81, no. 1, pp. 1–15, 2018, Accessed: Apr. 22, 2025. [Online]. Available: https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf

Z. Obermeyer, B. Powers, C. Vogeli, and S. Mullainathan, “Dissecting racial bias in an algorithm used to manage the health of populations,” Science, vol. 366, no. 6464, pp. 447–453, Oct. 2019, Accessed: Apr. 03, 2025. [Online]. Available: https://www.ftc.gov/system/files/documents/public_events/1548288/privacycon-2020-ziad_obermeyer.pdf

M. Hardt, E. Price, and N. Srebro, “Equality of Opportunity in Supervised Learning,” arXiv, Oct. 07, 2016. https://arxiv.org/abs/1610.02413v1 (accessed Apr. 23, 2025).

D. Ensign, S. Friedler, S. Neville, C. Scheidegger, S. Venkatasubramanian, and C. Wilson, “Runaway Feedback Loops in Predictive Policing,” Proceedings of Machine Learning Research, vol. 81, 2018, Accessed: Apr. 23, 2025. [Online]. Available: https://proceedings.mlr.press/v81/ensign18a/ensign18a.pdf

European Parliament, P9_TA(2024)0138 Artificial Intelligence Act. 2024. Accessed: Apr. 23, 2025. [Online]. Available: https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf

“Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile,” NIST, 2024, doi: https://doi.org/10.6028/nist.ai.600-1.

“Algorithmic Impact Assessment Tool,” The Government of Canada, May 30, 2024. https://www.canada.ca/en/government/system/digital-government/digital-government-innovations/responsible-use-ai/algorithmic-impact-assessment.html (accessed Apr. 24, 2025).

“Guidance on AI and data protection,” ICO, Jun. 13, 2023. https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ (accessed Apr. 24, 2025).

“Model AI Governance Framework for Generative AI,” AI Verify Foundation, May 30, 2024. https://aiverifyfoundation.sg/wp-content/uploads/2024/05/Model-AI-Governance-Framework-for-Generative-AI-May-2024-1-1.pdf (accessed Apr. 25, 2025).

J. Rusu, “AI Update,” FCA. Accessed: Apr. 24, 2025. [Online]. Available: https://www.fca.org.uk/publication/corporate/ai-update.pdf

R. P. Grubenmann, “ISO/IEC 42001: The latest AI management system standard,” KPMG, 2024. https://kpmg.com/ch/en/insights/artificial-intelligence/iso-iec-42001.html (accessed Apr. 24, 2025).

OECD, “AI Principles,” OECD, 2024. https://www.oecd.org/en/topics/ai-principles.html (accessed Apr. 25, 2025).

H. Mahmoudian, “Reweighing the Adult Dataset to Make it ‘Discrimination-Free,’” Medium, Apr. 14, 2020. https://medium.com/data-science/reweighing-the-adult-dataset-to-make-it-discrimination-free-44668c9379e8 (accessed Apr. 26, 2025).

Y. Park et al., “Comparison of Methods to Reduce Bias From Clinical Prediction Models of Postpartum Depression,” JAMA Network Open, vol. 4, no. 4, p. e213909, Apr. 2021, doi: https://doi.org/10.1001/jamanetworkopen.2021.3909.

P. Awasthi, M. Kleindessner, and J. Morgenstern, “Equalized odds postprocessing under imperfect group information,” in Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics, PMLR, 2020. Accessed: Apr. 28, 2025. [Online]. Available: https://proceedings.mlr.press/v108/awasthi20a/awasthi20a.pdf

“Understand and mitigate bias in ML models,” AI Fairness 360. https://ai-fairness-360.org/ (accessed Apr. 29, 2025).

B. Hsu, R. Mazumder, P. Nandy, and K. Basu, “Pushing the limits of fairness impossibility: Who’s the fairest of them all?” 36th Conference on Neural Information Processing Systems, 2022, Accessed: May 18, 2025. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2022/file/d3222559698f41247261b7a6c2bbaedc-Paper-Conference.pdf

M. Kusner, J. Loftus, C. Russell, and R. Silva, “Counterfactual Fairness,” Proceedings of the 31st Conference on Neural Information Processing Systems, 2017, Accessed: May 04, 2025. [Online]. Available: https://proceedings.neurips.cc/paper_files/paper/2017/file/a486cd07e4ac3d270571622f4f316ec5-Paper.pdf

X. Han, T. Chen, K. Zhou, Z. Jiang, Z. Wang, and X. Hu, “You Only Debias Once: Towards Flexible Accuracy-Fairness Trade-offs at Inference Time,” arXiv, Mar. 10, 2025. https://arxiv.org/pdf/2503.07066 (accessed May 06, 2025).

V. Prasad, “AI Algorithm Audits: Key Control Considerations,” ISACA, Aug. 02, 2024. https://www.isaca.org/resources/news-and-trends/industry-news/2024/ai-algorithm-audits-key-control-considerations (accessed May 08, 2025).

H. Dhaduk, “State of Generative AI in 2024,” Simform, Apr. 02, 2024. https://www.simform.com/blog/the-state-of-generative-ai/ (accessed May 09, 2025).

“AI Risk Management Framework,” NIST, Jan. 2023, doi: https://doi.org/10.6028/nist.ai.100-1.

“Datasheets for Datasets: Impact and Adoption Across Academic and Industry Sectors,” Hackernoon, Jun. 11, 2024. https://hackernoon.com/datasheets-for-datasets-impact-and-adoption-across-academic-and-industry-sectors (accessed May 12, 2025).

J. Le, “Datacast Episode 67: Model Observability, Ai Ethics, And Ml Infrastructure Ecosystem With Aparna Dhinakaran,” James Le, Jun. 28, 2021. https://jameskle.com/writes/aparna-dhinakaran (accessed May 18, 2025).

How to Cite

Tamanno Maripova. (2025). Mitigating Algorithmic Bias in Predictive Models. The American Journal of Engineering and Technology, 7(05), 192–201. https://doi.org/10.37547/tajet/Volume07Issue05-19