
Lex Genetica


Algorithmic Bias and Non-Discrimination in Argentina

https://doi.org/10.17803/lexgen-2022-1-1-63-74

Abstract

One of the major research problems related to artificial intelligence (AI) models at present is algorithmic bias. When an automated system “makes a decision” based on its training data, it can reproduce biases similar to those of the humans who supplied that data. Much of the data used to train such models comes from vector representations of words derived from text corpora, which can transmit stereotypes and social prejudices. AI system design focused on optimising processes and improving prediction accuracy ignores the need for new standards to compensate for the negative impact of AI on the most vulnerable categories of people. An improved understanding of the relationship between algorithms, bias, and non-discrimination not only precedes any eventual solution, but also helps us to recognize how discrimination is created, maintained, and disseminated in the AI era, as well as how it could be projected into the future through various neurotechnologies. The opacity of algorithmic decision-making should give way to transparency in AI processes and models. The present work aims to reconcile the use of AI with algorithmic decision processes that respect the basic human rights of the individual, especially the principles of non-discrimination and positive discrimination. Argentine legislation serves as the legal basis of this work.
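The abstract's point that word embeddings trained on text corpora can carry social stereotypes can be illustrated with a minimal sketch. The vectors and word choices below are hypothetical toy values invented for illustration, not data from any study cited in this article; a real analysis would use pretrained embeddings (e.g., word2vec or GloVe) and association tests such as WEAT.

```python
# Minimal sketch (hypothetical toy vectors): measuring stereotype association in
# word embeddings by comparing cosine similarities with gendered anchor words.
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v))
    return dot / norm

# Toy 3-dimensional "embeddings" (hypothetical values, for illustration only).
emb = {
    "he":       [0.9, 0.1, 0.0],
    "she":      [0.1, 0.9, 0.0],
    "engineer": [0.8, 0.2, 0.3],  # hypothetical: leans toward "he"
    "nurse":    [0.2, 0.8, 0.3],  # hypothetical: leans toward "she"
}

# A biased training corpus yields occupation vectors closer to one gender term.
for word in ("engineer", "nurse"):
    bias = cosine(emb[word], emb["he"]) - cosine(emb[word], emb["she"])
    print(f"{word}: association with 'he' minus 'she' = {bias:+.2f}")
```

A positive score for "engineer" and a negative one for "nurse" in this toy setup mirrors the kind of stereotype transmission the abstract describes; debiasing methods such as the corpus-level constraints of Zhao et al. (2017) aim to reduce such gaps.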

About the Author

F. Farinella
Mar del Plata National University
Argentina





For citation:


Farinella F. Algorithmic Bias and Non-Discrimination in Argentina. Lex Genetica. 2022;1(1):63-74. https://doi.org/10.17803/lexgen-2022-1-1-63-74



Creative Commons License
This content is available under the Creative Commons Attribution 4.0 License.


ISSN 3034-1639 (Print)
ISSN 3034-1647 (Online)