Prohibit Cannot Regulate: On the Question of Adjusting the Use of AI in Everyday Life of Society
DOI: https://doi.org/10.14515/monitoring.2025.3.2661

Keywords: artificial intelligence, problem of regulation, human—algorithm interdependence, formal and informal institutions, risks and taboo zones

Abstract
This article examines the framework governing artificial intelligence (AI) tools that are becoming an integral part of society's everyday life. The core question framing the authors' reflections is: should AI technologies be prohibited outright, or regulated from the outset?
The paper begins with a general overview of the central question surrounding the development of AI, setting the stage for a deeper exploration of critical issues. It examines specific challenges, including deceptive behavior that may emerge from machine learning algorithms, the functions and implications of autonomous agents across various sectors, and the concept of the "second transition," which refers to the developing state of AI capabilities and their integration into society. Throughout this discussion, the authors highlight the substantial regulatory challenges these advancements create. Building on this foundation, the authors analyze the existing regulatory context in detail, emphasizing the distinct roles that developers, consumers, and regulators play in the oversight of AI technologies. They discuss the responsibility of developers to ensure ethical practices in AI design, the need for informed consumers able to navigate complex AI systems, and the imperative for regulators to establish effective frameworks that keep pace with rapid technological change. The paper also explores the contributions social scientists can make to shaping AI regulation by bringing insights into social behavior, ethical considerations, and public policy. Such a collaborative approach is essential for developing regulatory mechanisms robust enough to address both the opportunities and the challenges of AI's integration into society.
In conclusion, the authors outline four regulatory principles:
1) The key is not to regulate AI's design and manufacture but the relationships among those who create, deploy, and utilize it.
2) The focus of AI regulation should not be on its making but on addressing the negative effects of using AI tools in everyday life.
3) In the first stage, regulation should focus on the development of AI tools in specific areas where the benefits of replacing humans with AI are most apparent.
4) There is a fundamental difference between regulating AI in areas that relate to existential issues (where prohibitions need to be introduced right now) and all other areas (where we should first regulate and only then prohibit).
License
Copyright (c) 2025 Monitoring of Public Opinion: Economic and Social Changes

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.