<?xml version="1.0" encoding="utf-8"?>
<article xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:mml="http://www.w3.org/1998/Math/MathML" dtd-version="1.4" article-type="research-article">
  <front>
    <journal-meta>
      <journal-id journal-id-type="issn">2304-3369</journal-id>
      <journal-id journal-id-type="eissn">2308-8842</journal-id>
      <journal-title-group>
        <journal-title xml:lang="ru">Вопросы управления</journal-title>
        <journal-title xml:lang="en">Management Issues</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="edn">EVBNHQ</article-id>
      <title-group>
        <article-title xml:lang="ru">РАЗВИТИЕ СОЦИАЛЬНОГО ЭСКАПИЗМА ПОД ВЛИЯНИЕМ ТЕХНОЛОГИЙ ИСКУССТВЕННОГО ИНТЕЛЛЕКТА: СИСТЕМАТИЧЕСКИЙ ОБЗОР ЛИТЕРАТУРЫ</article-title>
        <trans-title-group xml:lang="en">
          <trans-title>ARTIFICIAL INTELLIGENCE AND SOCIAL ESCAPISM: A SYSTEMATIC LITERATURE REVIEW</trans-title>
        </trans-title-group>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <name-alternatives>
            <name name-style="eastern" xml:lang="ru">
              <surname>Ваторопин</surname>
              <given-names>С. А.</given-names>
            </name>
            <name name-style="western" xml:lang="en">
              <surname>Vatoropin</surname>
              <given-names>S. A.</given-names>
            </name>
          </name-alternatives>
          <email>sergeyvatoropin@yandex.ru</email>
          <contrib-id contrib-id-type="orcid">0009-0007-6624-8104</contrib-id>
          <xref ref-type="aff" rid="aff1"/>
        </contrib>
        <aff-alternatives id="aff1">
          <aff>
            <institution xml:lang="ru">Российская академия народного хозяйства и государственной службы при Президенте Российской Федерации, Уральский институт управления – филиал (Екатеринбург, Россия)</institution>
          </aff>
          <aff>
            <institution xml:lang="en">Russian Presidential Academy of National Economy and Public Administration, Ural Institute of Management (Yekaterinburg, Russia)</institution>
          </aff>
        </aff-alternatives>
      </contrib-group>
      <pub-date pub-type="epub" iso-8601-date="2026-03-31">
        <day>31</day>
        <month>03</month>
        <year>2026</year>
      </pub-date>
      <pub-date date-type="collection">
        <year>2026</year>
      </pub-date>
      <volume>20</volume>
      <issue>1</issue>
      <fpage>97</fpage>
      <lpage>110</lpage>
      <history>
        <date date-type="received" iso-8601-date="2025-10-25">
          <day>25</day>
          <month>10</month>
          <year>2025</year>
        </date>
        <date date-type="rev-recd" iso-8601-date="2025-12-29">
          <day>29</day>
          <month>12</month>
          <year>2025</year>
        </date>
        <date date-type="accepted" iso-8601-date="2026-01-20">
          <day>20</day>
          <month>01</month>
          <year>2026</year>
        </date>
      </history>
      <permissions>
        <copyright-statement>© S. A. Vatoropin, 2026</copyright-statement>
        <copyright-year>2026</copyright-year>
        <license xlink:href="https://creativecommons.org/licenses/by-nc/4.0/">
          <license-p>CC BY-NC 4.0</license-p>
        </license>
      </permissions>
      <abstract xml:lang="ru">
        <p>Введение. Технологии искусственного интеллекта (ИИ), широко внедряемые в ключевые сферы общественной жизни, несут не только новые возможности, но и множество социальных рисков. От того, насколько данные риски учитываются в рамках управленческих стратегий на микро- и макроуровнях, зависит устойчивость развития организаций, отраслей, общества в целом. В статье рассматриваются феномен социального эскапизма (активно развивающийся тренд, связанный с формированием и практической реализацией установок на минимизацию социальных контактов и самоизоляцию от общества) и технологии ИИ в качестве фактора, обусловливающего развитие данного тренда.</p>
        <p>Материалы и методы. Систематический обзор литературы подготовлен на основе использования метода PRISMA. Из 1319 публикаций на русском и английском языках, обнаруженных в литературных базах ScienceDirect, Google Scholar, OpenAlex, CyberLeninka и Elibrary.ru (дата последнего поиска – 9 августа 2025 г.), для обзора отобрано 40 первоисточников, раскрывающих различные аспекты социального эскапизма под влиянием технологий ИИ.</p>
        <p>Результаты. Определены ключевые направления влияния ИИ на развитие социального эскапизма. Первое направление связано с субституированием социальных взаимодействий искусственными, в рамках которого ИИ-агенты становятся альтернативой реальному общению, формируя эмоциональную привязанность и снижая потребность в межличностных контактах. Второе направление заключается в усилении существующего цифрового эскапизма, в рамках которого алгоритмы персонализации и гиперреалистичное виртуальное пространство на основе использования технологий виртуального ИИ обусловливают цифровое «затворничество» и социальную самоизоляцию. Третье направление отражает тенденцию к уходу от «ИИ-зированной» социальной среды вследствие угрозы со стороны рассматриваемых технологий традиционным ценностям свободы и справедливости, а также ценности человека как такового.</p>
        <p>Обсуждение. Автором вводятся три модели развития социального эскапизма (эмоционально-коммуникативная, перцептивно-онтологическая и ценностно-экзистенциальная), раскрывающие различные поведенческие стратегии социального самоисключения в условиях субъективного принятия или непринятия релевантных технологий. Обозначаются перспективы дальнейшей концептуализации рассматриваемого феномена, связанные с проведением лонгитюдных, кросс-культурных и групповых исследований. Делается вывод о необходимости учёта социальных рисков, включая эскапистские тенденции, при внедрении технологий ИИ.</p>
      </abstract>
      <trans-abstract xml:lang="en">
        <p>Introduction. Artificial intelligence (AI) technologies, now widely implemented in key areas of public life, bring not only new opportunities but also numerous social risks. The sustainable development of organizations, industries, and society as a whole depends on how these risks are addressed in management strategies at the micro and macro levels. The paper examines social escapism, a growing trend toward minimizing social contact and self-isolating from society, and AI technologies as a factor driving this trend.</p>
        <p>Materials and Methods. The systematic literature review follows the PRISMA method. Of 1,319 publications in Russian and English identified in the ScienceDirect, Google Scholar, OpenAlex, CyberLeninka, and Elibrary.ru databases (last search: August 9, 2025), 40 primary sources covering various aspects of AI-driven social escapism were selected for the review.</p>
        <p>Results. The study identifies three key directions in which AI fosters the spread of social escapism: (a) real-life communication is substituted by interactions with AI agents, which foster emotional attachment and reduce the need for interpersonal contact; (b) existing digital escapism is reinforced by personalization algorithms and hyper-realistic AI-based virtual spaces, leading to digital "reclusion" and social self-isolation; and (c) people withdraw from the AI-saturated social environment because these technologies threaten the traditional values of freedom and justice, as well as the value of the human being as such.</p>
        <p>Discussion. The three directions are captured by three models of emerging social escapism: emotional-communicative, perceptual-ontological, and value-existential. These models explain various behavioral strategies of social self-exclusion resulting from subjective acceptance or rejection of the relevant technologies. The study outlines avenues for further conceptualization of the phenomenon, including longitudinal, cross-cultural, and group studies, and concludes that social risks, including escapist tendencies, must be taken into account when implementing AI technologies.</p>
      </trans-abstract>
      <kwd-group xml:lang="ru">
        <title>Ключевые слова</title>
        <kwd>Социальный эскапизм</kwd>
        <kwd>социальная изоляция</kwd>
        <kwd>искусственный интеллект</kwd>
        <kwd>ИИ-агенты</kwd>
        <kwd>субституция социальных взаимодействий</kwd>
        <kwd>цифровой эскапизм</kwd>
        <kwd>гиперреальность</kwd>
      </kwd-group>
      <kwd-group xml:lang="en">
        <title>Keywords</title>
        <kwd>Social escapism</kwd>
        <kwd>social isolation</kwd>
        <kwd>artificial intelligence</kwd>
        <kwd>AI agents</kwd>
        <kwd>substitution of social interactions</kwd>
        <kwd>digital escapism</kwd>
        <kwd>hyperreality</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body/>
  <back>
    <ref-list>
      <ref id="ref1">
        <label>1</label>
        <mixed-citation xml:lang="en">1. Ishanov, S. A., Osin, E. N., Kostenko, V. Yu. (2018). Personal development and the quality of solitude. Cultural-Historical Psychology, 14 (1), pp. 30–40. https://doi.org/10.17759/chp.2018140104. https://elibrary.ru/yuiwmt.</mixed-citation>
      </ref>
      <ref id="ref2">
        <label>2</label>
        <mixed-citation xml:lang="en">2. Gonoshilova, T. O., Perepelkina, K. V. (2018). The risk of social escapism among young people. Sociology in the modern world: science, education, creativity, (10), pp. 231–235. https://elibrary.ru/zdgepz.</mixed-citation>
      </ref>
      <ref id="ref3">
        <label>3</label>
        <mixed-citation xml:lang="en">3. Kibalnik, A. V., Fedosova, I. V. (2020). Social escapism among young people. Kazan pedagogical journal, 2 (139), pp. 222–230. https://elibrary.ru/adodqu.</mixed-citation>
      </ref>
      <ref id="ref4">
        <label>4</label>
        <mixed-citation xml:lang="en">4. Romanova, E. V. (2017). An alternative strategy of lifestyle of the individual in modern Russia: social activism and escapism. Proceedings of Voronezh State University. Series: Philosophy, 3 (25), pp. 199–205. https://elibrary.ru/zvyzxl.</mixed-citation>
      </ref>
      <ref id="ref5">
        <label>5</label>
        <mixed-citation xml:lang="en">5. Davydov, O. B. (2015). Philosophical aspect of social escapism in the age of virtuality. Vestnik of NEFU, 12 (2), pp. 77–81. https://elibrary.ru/tzuhmb.</mixed-citation>
      </ref>
      <ref id="ref6">
        <label>6</label>
        <mixed-citation xml:lang="en">6. Beloborodov, V. A., Vorobyov, V. A., Seminsky, I. Zh., Kalyagin, A. N. (2023). The procedure for performing a systematic review and meta-analysis according to the PRISMA protocol. Quality Management System: Experience and Prospects, 12, pp. 5–9. https://elibrary.ru/rsmjtg.</mixed-citation>
      </ref>
      <ref id="ref7">
        <label>7</label>
        <mixed-citation xml:lang="en">7. Skorodumova, O. B., Melikov, I. M. (2020). Social Risks and Cultural Transformations in the Era of Fourth Industrial Revolution. In D. K. Bataev (Ed.), Social and Cultural Transformations in the Context of Modern Globalism: Dedicated to the 80th Anniversary of Turkayev Hassan Vakhitovich (European Proceedings of Social and Behavioural Sciences, vol. 92). European Publisher, pp. 1008–1015. https://doi.org/10.15405/epsbs.2020.10.05.133.</mixed-citation>
      </ref>
      <ref id="ref8">
        <label>8</label>
        <mixed-citation xml:lang="en">8. Xie, T., Pentina, I., Hancock, T. (2023). Friend, mentor, lover: does chatbot engagement lead to psychological dependence? Journal of Service Management, 34 (4), pp. 806–828. https://doi.org/10.1108/JOSM-02-2022-0072.</mixed-citation>
      </ref>
      <ref id="ref9">
        <label>9</label>
        <mixed-citation xml:lang="en">9. Yao, R., Qi, G., Sheng, D., Sun, H., Zhang, J. (2025). Connecting self-esteem to problematic AI chatbot use: the multiple mediating roles of positive and negative psychological states. Frontiers in Psychology, 16, article 1453072. https://doi.org/10.3389/fpsyg.2025.1453072.</mixed-citation>
      </ref>
      <ref id="ref10">
        <label>10</label>
        <mixed-citation xml:lang="en">10. Gultekin, M. (2022). Human-Social Robot Interaction, Anthropomorphism and Ontological Boundary Problem in Education. Psycho-Educational Research Reviews, 11 (3), pp. 751–773. https://doi.org/10.52963/PERR_Biruni_V11.N3.11.</mixed-citation>
      </ref>
      <ref id="ref11">
        <label>11</label>
        <mixed-citation xml:lang="en">11. Malyshkin, A. V. (2019). Integration of artificial intelligence into public life: some ethical and legal problems. Vestnik of Saint Petersburg University. Law, 10 (3), pp. 444–460. https://doi.org/10.21638/spbu14.2019.303. https://elibrary.ru/hbioxr.</mixed-citation>
      </ref>
      <ref id="ref12">
        <label>12</label>
        <mixed-citation xml:lang="en">12. Chen, H., Liu, Z. (2024). Educational Applications of ChatGPT: Ethical Challenges and Countermeasures. English Language Teaching and Linguistics Studies, 6 (3), pp. 100–116. https://doi.org/10.22158/eltls.v6n3p100.</mixed-citation>
      </ref>
      <ref id="ref13">
        <label>13</label>
        <mixed-citation xml:lang="en">13. Ravselj, D., Kerzic, D., Tomazeviz, N., Umek, L., Brezovar, N., et al. (2025). Higher education students’ perceptions of ChatGPT: A global study of early reactions. PLoS ONE, 20 (2), article e0315011. https://doi.org/10.1371/journal.pone.0315011.</mixed-citation>
      </ref>
      <ref id="ref14">
        <label>14</label>
        <mixed-citation xml:lang="en">14. Duran, V., Ersanli, E., Çelik, H. (2025). Unveiling student sentiment dynamics toward AI-based education through statistical analysis and Monte Carlo simulation. British Educational Research Journal, pp. 1–28. https://doi.org/10.1002/berj.4188.</mixed-citation>
      </ref>
      <ref id="ref15">
        <label>15</label>
        <mixed-citation xml:lang="en">15. Tran, T. T., Le, T. V., Le, N. H., Dam, A. V. T., Nguyen, T. T., Nguyen, A. T. T., Nguyen, H. T. (2025). Emotional attachment to artificial intelligence and perceived social isolation among university students: An application of Sternberg’s triangular theory of love. Multidisciplinary Science Journal, 7 (12), article 2025662. https://doi.org/10.31893/multiscience.2025662.</mixed-citation>
      </ref>
      <ref id="ref16">
        <label>16</label>
        <mixed-citation xml:lang="en">16. Klimova, B., Pikhart, M. (2025). Exploring the effects of artificial intelligence on student and academic well-being in higher education: a mini-review. Frontiers in Psychology, 16, article 1498132. https://doi.org/10.3389/fpsyg.2025.1498132.</mixed-citation>
      </ref>
      <ref id="ref17">
        <label>17</label>
        <mixed-citation xml:lang="en">17. Wang, Q., et al. (2022). Understanding the Design Space of AI-Mediated Social Interaction in Online Learning: Challenges and Opportunities. Proceedings of the ACM on Human-Computer Interaction, 6 (CSCW1), article 130. https://doi.org/10.1145/3512977.</mixed-citation>
      </ref>
      <ref id="ref18">
        <label>18</label>
        <mixed-citation xml:lang="en">18. Crawford, J., Allen, K. A., Pani, B., Cowling, M. (2024). When artificial intelligence substitutes humans in higher education: the cost of loneliness, student success, and retention. Studies in Higher Education, 49 (5), pp. 883–897. https://doi.org/10.1080/03075079.2024.2326956.</mixed-citation>
      </ref>
      <ref id="ref19">
        <label>19</label>
        <mixed-citation xml:lang="en">19. Delello, J., Sung, W., Mokhtari, K., Hebert, J., Bronson, A., De Giuseppe, T. (2025). AI in the Classroom: Insights from Educators on Usage, Challenges, and Mental Health. Education Sciences, 15 (2), article 113. https://doi.org/10.3390/educsci15020113.</mixed-citation>
      </ref>
      <ref id="ref20">
        <label>20</label>
        <mixed-citation xml:lang="en">20. Liu, A. R., Pataranutaporn, P., Turkle, S., Maes, P. (2024). Chatbot companionship: A mixed-methods study of companion chatbot usage patterns and their relationship to loneliness in active users. ArXiv. https://doi.org/10.48550/arXiv.2410.21596.</mixed-citation>
      </ref>
      <ref id="ref21">
        <label>21</label>
        <mixed-citation xml:lang="en">21. Vredenburgh, K. (2022). Freedom at Work: Understanding, Alienation, and the AI-Driven Workplace. Canadian Journal of Philosophy, 52 (1), pp. 78–92. https://doi.org/10.1017/can.2021.39.</mixed-citation>
      </ref>
      <ref id="ref22">
        <label>22</label>
        <mixed-citation xml:lang="en">22. Hofeditz, L., Mirbabaie, M., Ortmann, M. (2023). Ethical Challenges for Human–Agent Interaction in Virtual Collaboration at Work. International Journal of Human-Computer Interaction, 40 (23), pp. 8229–8245. https://doi.org/10.1080/10447318.2023.2279400.</mixed-citation>
      </ref>
      <ref id="ref23">
        <label>23</label>
        <mixed-citation xml:lang="en">23. Sidorina, T. Yu., Glebov, O. A., Sidelnikov, I. A. (2022). Automation and artificial intelligence in labor practices. Journal of Social Policy Studies, 20 (3), pp. 433–444. https://doi.org/10.17323/727-0634-2022-20-3-433-444. https://elibrary.ru/pgwdjn.</mixed-citation>
      </ref>
      <ref id="ref24">
        <label>24</label>
        <mixed-citation xml:lang="en">24. Palmier, C., Rigaud, A., Ogawa, T., Wieching, R., Dacunha, S., Barbarossa, F., Stara, V., Bevilacqua, R., Pino, M. (2024). Identification of Ethical Issues and Practice Recommendations Regarding the Use of Robotic Coaching Solutions for Older Adults: Narrative Review. Journal of Medical Internet Research, 26, article e48126. https://doi.org/10.2196/48126.</mixed-citation>
      </ref>
      <ref id="ref25">
        <label>25</label>
        <mixed-citation xml:lang="en">25. Pareto Boada, J. (2021). The ethical issues of social assistive robotics: A critical literature review. Technology in Society, 67, article 101726. https://doi.org/10.1016/J.TECHSOC.2021.101726.</mixed-citation>
      </ref>
      <ref id="ref26">
        <label>26</label>
        <mixed-citation xml:lang="en">26. Savic, M. (2024). Artificial Companions, Real Connections? Examining AI’s Role in Social Connection. M/C Journal, 27 (6). https://doi.org/10.5204/mcj.3111.</mixed-citation>
      </ref>
      <ref id="ref27">
        <label>27</label>
        <mixed-citation xml:lang="en">27. Fostervold, K. I. (2022). A longitudinal study of human–chatbot relationships. International Journal of Human-Computer Studies, 168 (4), article 102903. https://doi.org/10.1016/J.IJHCS.2022.102903.</mixed-citation>
      </ref>
      <ref id="ref28">
        <label>28</label>
        <mixed-citation xml:lang="en">28. Andreallo, F. (2019). Prosthetic Soul Mates: Sex Robots as Media for Companionship. M/C Journal. https://doi.org/10.13140/RG.2.2.23455.38565.</mixed-citation>
      </ref>
      <ref id="ref29">
        <label>29</label>
        <mixed-citation xml:lang="en">29. Mailenova, F. G. (2019). Love and Robots. Will Humanity Become Digisexual? RUDN Journal of Philosophy, 23 (3), pp. 312–323. https://doi.org/10.22363/2313-2302-2019-23-3-312-323. https://elibrary.ru/bztidx.</mixed-citation>
      </ref>
      <ref id="ref30">
        <label>30</label>
        <mixed-citation xml:lang="en">30. Tsvetkova, O. (2021). The Disintegration of Intersubjectivity: Madness as the Impossibility of Love (according to L. Binswanger). Philosophical Anthropology, 2, pp. 159–170. https://doi.org/10.21146/2414-3715-2021-7-2-159-170.</mixed-citation>
      </ref>
      <ref id="ref31">
        <label>31</label>
        <mixed-citation xml:lang="en">31. Rodilosso, E. (2024). Filter Bubbles and the Unfeeling: How AI for Social Media Can Foster Extremism and Polarization. Philosophy &amp; Technology, 37 (2), article 71. https://doi.org/10.1007/s13347-024-00758-4.</mixed-citation>
      </ref>
      <ref id="ref32">
        <label>32</label>
        <mixed-citation xml:lang="en">32. Yakovleva, E. L. (2023). Monetization of emotions in the life of an electronic nomad. Russian Journal of Economics and Law, 17 (3), pp. 473–489. https://doi.org/10.21202/2782-2923.2023.3.473-489. https://elibrary.ru/txlelp.</mixed-citation>
      </ref>
      <ref id="ref33">
        <label>33</label>
        <mixed-citation xml:lang="en">33. Hu, M., et al. (2025). AI as your ally: The effects of AI-assisted venting on negative affect and perceived social support. Applied Psychology: Health and Well-Being, 17 (1), article e12621. https://doi.org/10.1111/aphw.12621.</mixed-citation>
      </ref>
      <ref id="ref34">
        <label>34</label>
        <mixed-citation xml:lang="en">34. Weinstein, N., Itzchakov, G., Maniaci, M. (2025). Exploring the connecting potential of AI: Integrating human interpersonal listening and parasocial support into human-computer interactions. Computers in Human Behavior: Artificial Humans, 4, article 100149. https://doi.org/10.1016/j.chbah.2025.100149.</mixed-citation>
      </ref>
      <ref id="ref35">
        <label>35</label>
        <mixed-citation xml:lang="en">35. Jin, S. (2023). “To comply or to react, that is the question:” the roles of humanness versus eeriness of AI-powered virtual influencers, loneliness, and threats to human identities in AI-driven digital transformation. Computers in Human Behavior: Artificial Humans, 1 (2), article 100011. https://doi.org/10.1016/j.chbah.2023.100011.</mixed-citation>
      </ref>
      <ref id="ref36">
        <label>36</label>
        <mixed-citation xml:lang="en">36. Freund, L. Beyond the physical self: understanding the perversion of reality and the desire for transcendence via digital avatars in the context of Baudrillard’s theory. AI &amp; Society, Jun 14. https://doi.org/10.32388/F3Y8IG.</mixed-citation>
      </ref>
      <ref id="ref37">
        <label>37</label>
        <mixed-citation xml:lang="en">37. Fedorchenko, S., Volodenkov, S. (2022). Digital human rights: risks, challenges, and threats of global socio-political transformations. Cuestiones Constitucionales. Revista Mexicana De Derecho Constitucionales, 1 (46), pp. 279–316. https://doi.org/10.22201/IIJ.24484881E.2022.46.17057.</mixed-citation>
      </ref>
      <ref id="ref38">
        <label>38</label>
        <mixed-citation xml:lang="en">38. Nah, K., Oh, S., Han, B., Kim, H., Lee, A. (2022). A Study on the User Experience to Improve Immersion as a Digital Human in Lifestyle Content. Applied Sciences, 12 (23), article 12467. https://doi.org/10.3390/app122312467.</mixed-citation>
      </ref>
      <ref id="ref39">
        <label>39</label>
        <mixed-citation xml:lang="en">39. Tatarov, V. Yu. (2025). Alienation multiplication in digital society: a social-philosophical analysis. Tver Medical Journal, 3, pp. 77–81. https://elibrary.ru/lumpgs.</mixed-citation>
      </ref>
      <ref id="ref40">
        <label>40</label>
        <mixed-citation xml:lang="en">40. Vatoropin, A. S., Vatoropin, S. A. (2025). Risks of Social Exclusion in the Context of the Implementation of Artificial Intelligence Technologies in Modern Organizations. Vestnik Surgutskogo gosudarstvennogo pedagogiceskogo universiteta, 2 (95), pp. 163–171. https://doi.org/10.69571/SSPU.2025.95.2.017. https://elibrary.ru/ppucpt.</mixed-citation>
      </ref>
      <ref id="ref41">
        <label>41</label>
        <mixed-citation xml:lang="en">41. Sedinin, Ya. A., Syrov, V. N. (2024). Biometrics of Capitalism: How Neoliberalism Splits the Body. Tomsk State University Journal of Philosophy, Sociology and Political Science, 80, pp. 136–145. https://doi.org/10.17223/1998863X/80/12. https://elibrary.ru/gakgqg.</mixed-citation>
      </ref>
      <ref id="ref42">
        <label>42</label>
        <mixed-citation xml:lang="en">42. Bogachenko, V. V. (2017). Transhumanism as a Form of Human Escapism in the Context of Cyberculture. Proceedings of BSTU, issue 6, History, Philosophy, 1 (197), pp. 98–102. https://elibrary.ru/vswgip.</mixed-citation>
      </ref>
      <ref id="ref43">
        <label>43</label>
        <mixed-citation xml:lang="en">43. Hongxia, H. (2021). On the three constraints of the development of artificial intelligence: Value, liberation, and responsibility. Cultures of Science, 4 (9), article 209660832110526. https://doi.org/10.1177/20966083211052637.</mixed-citation>
      </ref>
      <ref id="ref44">
        <label>44</label>
        <mixed-citation xml:lang="en">44. Gribanova, L. M. (2024). The virtual border of computer music. The Bulletin of Moscow State University of Culture and Arts (Vestnik MGUKI), 5 (121), pp. 53–60. https://doi.org/10.24412/1997-0803-2024-5121-53-60. https://elibrary.ru/jkqdon.</mixed-citation>
      </ref>
      <ref id="ref45">
        <label>45</label>
        <mixed-citation xml:lang="en">45. Shipley, G., Williams, D. (2023). Critical AI Theory: The Ontological Problem. Open Journal of Social Sciences, 11 (12), pp. 618–635. https://doi.org/10.4236/jss.2023.1112041.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>
