14 листопада 2024

Оцінювання ризиків штучного інтелекту: методи та виклики в Україні на основі звіту ООН «Управління ШІ на благо людства»

This article examines the methods of risk assessment related to the development and use of artificial intelligence (AI), with a particular focus on the challenges Ukraine faces. Using insights from the United Nations report “Governing AI for Humanity”, the article highlights the key areas of concern where AI poses significant risks, including security, societal inequality, economic development, and regulatory challenges. In the context of Ukraine, the article explores the unique risks associated with AI during wartime, such as the potential use of AI-driven autonomous weapons and surveillance systems, as well as the spread of disinformation and cyberattacks by hostile actors.

Economically, the article delves into Ukraine’s struggles with technological inequality and limited access to critical AI resources, such as high-performance computing and large datasets, which could hinder the country's ability to leverage AI for sustainable development and economic growth. Moreover, it discusses the risk that AI could exacerbate existing social inequalities, particularly in rural regions or among marginalized groups, further widening the digital divide.

The article also explores the regulatory gaps in Ukraine's governance of AI, emphasizing the need for national strategies that align with global standards, particularly those proposed by the UN. The role of the government in creating a balanced framework for AI deployment and risk assessment is underscored as critical for Ukraine's AI governance model.

Finally, the article touches upon the potential for AI to play a significant role in Ukraine's post-war recovery, particularly in rebuilding human capital. However, challenges remain in terms of ensuring equitable access to AI-driven education and training resources, as well as overcoming infrastructure limitations. Overall, the article offers a comprehensive evaluation of AI risks in Ukraine, proposing measures that could mitigate these challenges and align the country's AI governance with global best practices.

Ця стаття досліджує методи оцінювання ризиків, пов'язаних із розвитком і використанням штучного інтелекту (ШІ), із особливим фокусом на викликах, з якими стикається Україна. Використовуючи висновки звіту ООН «Управління ШІ на благо людства», стаття висвітлює ключові сфери занепокоєння, де ШІ створює значні ризики, зокрема питання безпеки, соціальної нерівності, економічного розвитку та регуляторні виклики.

У контексті України стаття досліджує унікальні ризики, пов'язані з використанням ШІ під час війни, такі як потенційне застосування автономної зброї та систем спостереження на основі ШІ, а також поширення дезінформації та кібератак з боку ворожих акторів. З економічної точки зору стаття розглядає проблеми технологічної нерівності та обмеженого доступу до ключових ресурсів для розвитку ШІ, таких як високопродуктивні обчислення та великі набори даних, що може ускладнити здатність України використовувати ШІ для сталого розвитку та економічного зростання.

Крім того, у статті аналізуються соціальні ризики, пов'язані з тим, що ШІ може посилювати існуючу соціальну нерівність, особливо в сільських регіонах або серед маргіналізованих груп, що ще більше розширює цифровий розрив. Також розглядаються регуляторні прогалини в управлінні ШІ в Україні, підкреслюючи потребу у впровадженні національних стратегій, які б відповідали міжнародним стандартам, зокрема тим, що запропоновані ООН.

У статті наголошується на важливій ролі уряду в створенні збалансованої моделі впровадження ШІ та оцінювання ризиків, яка б забезпечувала ефективне управління ШІ. Нарешті, розглядається потенціал ШІ у післявоєнному відновленні України, зокрема у відбудові людського капіталу. Водночас залишаються виклики у забезпеченні рівного доступу до освітніх і навчальних ресурсів на основі ШІ та подоланні інфраструктурних обмежень.

Загалом, стаття пропонує всебічну оцінку ризиків, пов'язаних із ШІ в Україні, та пропонує заходи, які можуть допомогти мінімізувати ці виклики й узгодити управління ШІ в Україні з найкращими світовими практиками.

Keywords: artificial intelligence, risk management, public administration, sustainable development, UN.

Ключові слова: штучний інтелект, управління ризиками, публічне управління, сталий розвиток, ООН.

General statement of the problem and its link with major scientific or practical challenges. Artificial intelligence (AI) is rapidly transforming societies and economies across the globe, offering immense potential for technological advancement and addressing complex challenges. From optimizing energy systems to revolutionizing healthcare and education, AI is seen as a key enabler of progress. However, with these opportunities come significant risks, particularly in the areas of security, societal inequality, and economic stability. These risks require careful evaluation and mitigation, especially in countries like Ukraine, which are facing unique challenges due to the ongoing war and the need for post-war recovery.

The United Nations, through its report “Governing AI for Humanity,” has emphasized the importance of a global governance framework to manage AI’s risks while ensuring that its benefits are shared equitably. The report highlights several categories of risks, including the potential misuse of AI in military applications, the exacerbation of economic and social inequalities, and the environmental impact of large-scale AI systems. These risks are not confined to any one region or country, making international cooperation essential for effective governance.

In Ukraine, the context for AI development is shaped by both immediate security concerns and long-term challenges related to rebuilding the country after the war. The integration of AI into military systems, such as autonomous weapons and surveillance technologies, presents critical security risks, particularly in the context of ongoing conflict. At the same time, AI offers opportunities for economic recovery and development, though the country faces barriers such as technological inequality and limited access to key resources like high-performance computing and large datasets.

Moreover, the societal risks posed by AI, including the potential to deepen existing inequalities, are particularly relevant for Ukraine. The country’s digital divide, exacerbated by regional disparities and the displacement of populations due to the war, could hinder equitable access to AI-driven solutions in healthcare, education, and public services. Addressing these challenges requires not only a robust national strategy for AI governance but also alignment with international standards and frameworks, such as those proposed by the UN.

This article explores the methods of AI risk assessment outlined in the UN report and analyzes their relevance to Ukraine’s specific context. It examines the security, economic, and societal risks posed by AI, as well as the regulatory and governance gaps that need to be addressed for Ukraine to harness AI’s potential while minimizing its risks. Additionally, it considers the role AI could play in Ukraine’s post-war recovery, particularly in rebuilding human capital and infrastructure, and outlines recommendations for developing a comprehensive AI governance framework that aligns with global best practices.

Analysis of recent research and publications. The assessment and governance of risks associated with artificial intelligence (AI) have gained significant attention in recent years, driven by the rapid advancement of AI technologies and their increasing integration into critical sectors such as security, healthcare, and economic development. A wide range of literature highlights both the transformative potential of AI and the urgent need for robust governance frameworks to mitigate its associated risks. This literature overview examines key sources on AI risk assessment and governance, focusing on global frameworks, regional initiatives, and specific challenges for Ukraine.

At the global level, the United Nations (UN) report “Governing AI for Humanity” (United Nations, 2024) serves as a foundational document in AI governance discourse. The report emphasizes the necessity of an inclusive, international approach to AI risk assessment, acknowledging the transboundary nature of AI’s risks and opportunities. It identifies several key risk areas, including the misuse of AI in warfare, exacerbation of social inequalities, and environmental impacts, such as energy consumption by AI systems. The report also proposes the establishment of a global scientific panel on AI and a standards exchange to promote consistent and transparent governance frameworks across borders. This focus on global cooperation is echoed in other international initiatives, such as the Organization for Economic Co-operation and Development (OECD) AI principles, which stress fairness, transparency, and accountability in AI deployment (OECD, 2023).

Regional efforts, particularly within the European Union (EU), have also contributed to the literature on AI governance. The EU’s Artificial Intelligence Act (European Commission, 2021) represents a significant step toward establishing comprehensive AI regulations. This legislative framework classifies AI applications into different risk categories, ranging from minimal to high-risk systems, and imposes stricter requirements for transparency and safety for high-risk applications, such as AI used in healthcare or law enforcement. The Act aligns with broader European regulatory efforts to promote human rights and data protection, reflecting concerns about AI’s impact on privacy and societal equity.
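The tiered logic of such a risk-based regulatory scheme can be sketched in code. The sketch below is purely illustrative: the tier names follow common summaries of the EU approach (unacceptable, high, limited, minimal risk), while the domain assignments and obligation descriptions are simplified assumptions of the author, not a legal mapping.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = 4  # prohibited practices
    HIGH = 3          # strict transparency and safety obligations
    LIMITED = 2       # lighter transparency duties
    MINIMAL = 1       # largely unregulated

# Hypothetical mapping of application domains to risk tiers.
DOMAIN_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_diagnosis": RiskTier.HIGH,
    "law_enforcement_biometrics": RiskTier.HIGH,
    "chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def obligations(domain: str) -> str:
    """Return a (simplified) description of regulatory duties for a domain."""
    tier = DOMAIN_TIERS.get(domain, RiskTier.MINIMAL)
    return {
        RiskTier.UNACCEPTABLE: "prohibited",
        RiskTier.HIGH: "conformity assessment, logging, human oversight",
        RiskTier.LIMITED: "transparency notice to users",
        RiskTier.MINIMAL: "no specific obligations",
    }[tier]
```

The design point is that obligations scale with assessed risk rather than applying uniformly, which is why classification into tiers precedes any compliance requirement.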

Several scholars have expanded on the UN and EU frameworks by exploring specific methodologies for AI risk assessment. Floridi (2020) discusses the ethical implications of AI, particularly focusing on issues of autonomy, justice, and beneficence. His work underscores the need for AI systems to operate in ways that align with human values and stresses the importance of creating governance structures that ensure accountability in AI decision-making processes. Similarly, Brynjolfsson and McAfee (2017) explore the economic implications of AI, emphasizing how AI can exacerbate economic inequalities by disproportionately benefiting those with access to advanced technologies and resources.

In the context of Ukraine, AI risk assessment and governance have been less extensively covered in the literature, but some important initiatives are emerging. The Ministry of Digital Transformation of Ukraine’s National Strategy for the Development of Artificial Intelligence (2021) outlines the country’s vision for harnessing AI in areas such as healthcare, education, and public administration. However, the strategy also acknowledges significant challenges, including limited access to AI resources, inadequate digital infrastructure, and the need for regulatory reforms to ensure the safe and equitable use of AI technologies. These challenges are compounded by the ongoing conflict in Ukraine, which introduces additional risks related to the use of AI in military applications and cybersecurity.

The vulnerability-based approach to AI risk assessment, as proposed in the UN report, is particularly relevant to Ukraine. This approach emphasizes the differential impacts of AI technologies on various sectors of society, particularly marginalized or conflict-affected communities. Scholars like Cath (2021) have argued that AI’s potential to exacerbate existing inequalities must be a central consideration in any governance framework. For Ukraine, this means that AI risk assessment must account for the unique vulnerabilities created by war, displacement, and economic instability. Moreover, the digital divide in Ukraine, exacerbated by regional disparities in infrastructure and access to technology, poses a significant challenge for equitable AI adoption.

In addition to the global and regional frameworks, case studies from other post-conflict regions offer valuable insights into how AI can be integrated into recovery efforts. For instance, AI has been used in rebuilding infrastructure and improving public services in countries like Rwanda and Bosnia and Herzegovina, providing potential models for Ukraine’s post-war reconstruction. These examples highlight the need for tailored AI governance strategies that address both the opportunities and risks associated with AI in fragile contexts.

Formulation of the article objectives (task statement). The main goal of the article is to analyze the methods of assessing AI-related risks, particularly in the context of Ukraine, using insights from the UN's “Governing AI for Humanity” report. The article aims to examine key AI risks, such as security, economic inequality, and societal impacts, and assess Ukraine’s regulatory and governance challenges. It also explores AI’s potential role in Ukraine’s post-war recovery and provides recommendations for aligning AI governance with global standards.

Methodology. The methodology of this article is based on a qualitative analysis of AI risk assessment frameworks, with a particular focus on the insights provided by the United Nations (UN) report “Governing AI for Humanity” (United Nations, 2024). The report offers a comprehensive global perspective on AI governance and risk assessment, serving as a primary source for examining AI risks in the Ukrainian context. The methodology includes several key steps: literature review, content analysis, and contextualization.

First, a detailed literature review was conducted to identify existing risk assessment frameworks and methods for artificial intelligence. This includes both global and regional frameworks, such as the Organisation for Economic Co-operation and Development (OECD) AI principles (OECD, 2023), and specific standards outlined by international bodies like the International Telecommunication Union (ITU) and UNESCO. These documents were analyzed to establish the foundation for understanding how risk assessments are structured at the global level and to identify best practices that can be applied in the Ukrainian context.

Second, content analysis of the UN report was performed, focusing on key areas such as security, societal, and economic risks. Particular attention was given to sections that discuss AI’s application in conflict zones, its potential to exacerbate inequalities, and the global digital divide. This analysis was essential to understanding the specific risks that Ukraine faces and the global recommendations for addressing these risks. Additionally, the report’s discussion of global governance models, such as the proposed AI scientific panel and the standards exchange, provided important insights into how Ukraine could align its regulatory framework with international norms (United Nations, 2024).

Third, the methodology contextualizes these global frameworks by examining Ukraine’s unique circumstances, particularly the ongoing conflict and the country’s post-war recovery needs. Ukraine’s technological and regulatory landscape was analyzed through existing national strategies, including the National Strategy for the Development of Artificial Intelligence (Ministry of Digital Transformation of Ukraine, 2021). This allowed for an assessment of how global risk assessment methods can be adapted to Ukraine’s specific challenges, such as limited access to AI resources and the urgent need to rebuild human capital and infrastructure.

Furthermore, the article applies a vulnerability-based approach to AI risk assessment, as recommended in the UN report, to address the differential impact of AI on various sectors of Ukrainian society, including marginalized communities and conflict-affected regions. This approach helps to identify specific risks related to inequality, security, and economic development, and provides a framework for prioritizing policy responses.
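The prioritization logic of a vulnerability-based assessment can be illustrated with a minimal sketch: each risk is scored by likelihood and impact, then weighted upward when it falls disproportionately on vulnerable groups. The risk entries, scores, and weights below are entirely hypothetical examples chosen by the author, not figures from the UN report.

```python
def priority_score(likelihood: float, impact: float, vulnerability_weight: float) -> float:
    """Weighted risk score: weights above 1.0 raise the priority of risks
    borne by more vulnerable groups (conflict-affected, rural, displaced)."""
    return likelihood * impact * vulnerability_weight

# Hypothetical risk register: (description, likelihood, impact, weight).
risks = [
    ("biased service allocation in rural areas", 0.6, 0.7, 1.5),
    ("AI-enabled disinformation", 0.8, 0.8, 1.2),
    ("job displacement in urban services", 0.5, 0.6, 1.0),
]

# Rank risks by weighted score, highest priority first.
ranked = sorted(risks, key=lambda r: priority_score(r[1], r[2], r[3]), reverse=True)
```

The vulnerability weight is what distinguishes this approach from a plain likelihood-times-impact matrix: two risks with identical raw scores are prioritized differently depending on who bears the harm.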

The methodology also integrates case studies from other post-conflict regions that have adopted AI technologies for reconstruction and development. These cases provide comparative insights into how Ukraine can harness AI’s potential while mitigating risks, especially in areas such as infrastructure rebuilding, education, and public administration.

Finally, recommendations are developed based on the findings from the literature review, content analysis, and contextual analysis. These recommendations are designed to offer actionable insights for Ukrainian policymakers, emphasizing the importance of adopting a comprehensive AI governance framework that aligns with global standards and addresses the country’s unique risks and challenges.

Presentation of the most important research material, with a full justification of the scientific results obtained.

Global Framework for AI Risk Assessment (Based on the UN Report)

The need for a comprehensive global framework for assessing and managing the risks associated with artificial intelligence (AI) has become increasingly evident as AI technologies permeate diverse sectors and affect global societies. The United Nations' report “Governing AI for Humanity” (United Nations, 2024) presents a pivotal blueprint for understanding and mitigating these risks, emphasizing the need for international cooperation and a holistic approach to governance. The report identifies several key risks posed by AI—ranging from security concerns to societal inequality—and proposes methodologies and institutional structures designed to foster global collaboration in addressing these challenges.

At the core of the UN's global framework for AI risk assessment is the principle that AI governance must be inclusive, transparent, and grounded in international law. This is essential to ensure that AI technologies are not only used ethically but also deployed in ways that benefit all countries and communities, especially those in vulnerable positions. The report underscores the importance of establishing a global scientific panel on AI, which would serve as a platform for sharing knowledge and expertise across nations. This panel would consist of multidisciplinary experts tasked with issuing annual reports on AI's capabilities, risks, and uncertainties, providing both policymakers and the public with reliable, science-based information (United Nations, 2024).

A critical element of the proposed framework is the categorization of AI-related risks into distinct domains, which allows for more targeted and effective governance responses. These domains include security risks, such as the use of AI in autonomous weapons systems, societal risks like the exacerbation of inequalities, and economic risks related to job displacement and technological monopolies. The UN report emphasizes that, without proper governance, AI’s unchecked development could lead to an “arms race” in AI military technologies, posing significant risks to global peace and security (United Nations, 2024). Thus, the framework advocates for clear international norms and regulations to prevent the misuse of AI in conflict scenarios, highlighting the need for a binding global treaty on autonomous weapons—a concept supported by a growing number of states and international organizations.

In terms of societal risks, the framework recognizes that AI has the potential to worsen existing social and economic disparities, especially in countries with limited access to AI technologies and resources. The UN report argues that AI governance must prioritize inclusivity, ensuring that AI benefits are equitably distributed and that marginalized communities are protected from the potential harms of AI, such as biased algorithms or AI-driven surveillance systems (United Nations, 2024). This approach is supported by other global initiatives, such as the OECD AI principles, which call for fairness, accountability, and transparency in AI systems (OECD, 2023).

The UN framework also addresses the issue of data governance, which is critical for the safe and effective deployment of AI technologies. Data is the foundation of AI systems, and improper data management can lead to severe consequences, such as breaches of privacy, intellectual property violations, and even threats to national security. The UN report advocates for the creation of a global AI data framework that establishes common standards for data collection, usage, and sharing, ensuring that data-driven AI systems are transparent, ethical, and secure (United Nations, 2024). This framework would not only promote interoperability across borders but also enable the development of AI technologies that respect human rights and data protection laws.

Another vital component of the global AI risk assessment framework is the creation of an AI standards exchange. This exchange would bring together representatives from standards development organizations, technology companies, civil society, and academic experts to establish a common language for AI-related terms and metrics. By standardizing definitions and criteria, the exchange would facilitate the evaluation and benchmarking of AI systems worldwide, ensuring that ethical and safety standards are consistently applied across different regions and industries (United Nations, 2024). This proposal aligns with existing efforts, such as the International Telecommunication Union’s (ITU) work on AI standardization, which aims to ensure that AI systems are safe, reliable, and aligned with human rights (ITU, 2022).

The UN report also highlights the need for capacity-building initiatives to ensure that all nations, particularly developing countries, have the knowledge and resources necessary to engage with AI technologies responsibly. The capacity development network proposed in the report aims to link regional AI research and training centers with international experts, creating a collaborative platform for knowledge sharing and skills development (United Nations, 2024). This network would provide countries with access to computational power, data, and AI training programs, enabling them to build local AI ecosystems that can contribute to global innovation while addressing national needs.

Security Risks and Challenges in Ukraine

In the context of ongoing conflict and geopolitical instability, Ukraine faces unique security risks related to the deployment of artificial intelligence (AI) technologies. The integration of AI into military systems and surveillance infrastructure presents both opportunities and significant challenges for the country, which is already grappling with the consequences of war. The United Nations' report “Governing AI for Humanity” (United Nations, 2024) highlights the growing concern over the use of AI in autonomous weapons and military applications, emphasizing the need for international governance to prevent misuse. For Ukraine, these concerns are especially pressing as AI-enhanced systems could exacerbate the already fragile security situation.

One of the most significant security risks for Ukraine is the potential use of AI in autonomous weapons systems. Autonomous weapons, capable of making decisions without human intervention, pose ethical and operational challenges, particularly in conflict zones. The deployment of such systems could lead to unintended escalations or civilian casualties, raising serious legal and humanitarian concerns. The UN report stresses the importance of establishing global norms to limit the development and use of autonomous weapons, and Ukraine’s involvement in such international dialogues is critical to ensuring that AI technologies do not further destabilize the region (United Nations, 2024).

Additionally, AI’s role in cybersecurity presents another layer of risk for Ukraine. As the country has already been a target of numerous cyberattacks, the integration of AI into defense systems increases both offensive and defensive capabilities in cyber warfare. AI can be used to automate cyberattacks, enhance the sophistication of disinformation campaigns, and exploit vulnerabilities in critical infrastructure. This is particularly dangerous for Ukraine, where the ongoing conflict with Russia has included widespread cyber operations aimed at disrupting government functions and critical services (Ministry of Digital Transformation of Ukraine, 2021). AI’s potential to amplify such cyber threats necessitates the development of robust cybersecurity protocols and international cooperation to prevent further destabilization.

AI-driven surveillance systems also present security risks, particularly regarding privacy and civil liberties. While AI can enhance national security through improved surveillance and intelligence gathering, it also raises concerns about misuse by state and non-state actors. In a conflict zone like Ukraine, where maintaining security is a priority, the potential for AI to be used in mass surveillance or to target specific populations could lead to violations of human rights. The UN report advocates for strict regulations on the use of AI in surveillance, ensuring that such technologies are used ethically and in line with international human rights laws (United Nations, 2024).

The ethical challenges surrounding AI in security contexts, particularly in autonomous weapons and surveillance, are further complicated by Ukraine’s limited regulatory infrastructure. The National Strategy for the Development of Artificial Intelligence (Ministry of Digital Transformation of Ukraine, 2021) acknowledges the importance of AI in modern security frameworks but also highlights the need for stronger governance mechanisms. Ukraine must prioritize the development of regulatory frameworks that align with global standards, as well as foster international cooperation to address the risks posed by AI in military and security applications.

Economic Challenges and Technological Inequality; Societal Risks and the Digital Divide

Artificial intelligence (AI) has the potential to drive significant economic transformation, offering opportunities for innovation, productivity growth, and improved public services. However, without equitable access and governance, AI could exacerbate existing economic challenges and deepen technological inequalities, particularly in countries like Ukraine. The United Nations’ report “Governing AI for Humanity” emphasizes that while AI can contribute to achieving the Sustainable Development Goals (SDGs), the benefits of AI are not evenly distributed globally (United Nations, 2024). This section examines the economic challenges and technological inequalities that Ukraine faces in the context of AI, as well as the societal risks and the widening digital divide that could result from unchecked AI development.

Economic Challenges and Technological Inequality

Ukraine, as a developing economy, faces significant barriers to fully harnessing AI’s potential, primarily due to technological inequality. According to the Ministry of Digital Transformation of Ukraine’s National Strategy for the Development of Artificial Intelligence, the country is making strides in AI development, particularly in sectors like healthcare, agriculture, and public administration (Ministry of Digital Transformation of Ukraine, 2021). However, the national strategy also highlights that Ukraine’s technological infrastructure is still underdeveloped compared to global standards, and this poses a major challenge to AI adoption.

The high cost of AI technologies, including access to data, computing power, and skilled labor, creates economic barriers for many Ukrainian institutions and businesses. Large-scale AI models require significant computational resources, which are often concentrated in developed countries with advanced technological infrastructures. The UN report indicates that the global divide in access to AI technologies is a major concern, as it leaves countries like Ukraine at risk of being left behind in the global AI race (United Nations, 2024). This technological inequality limits Ukraine’s ability to fully integrate AI into its economy, putting it at a disadvantage in terms of global competitiveness.

Moreover, Ukraine’s economic instability, exacerbated by the ongoing conflict, further hinders its ability to invest in AI. The country’s fiscal constraints mean that investments in AI infrastructure, education, and research are often deprioritized in favor of immediate economic and security needs. This situation could lead to a growing gap between Ukraine and countries with more stable economies and greater access to AI-related resources. The UN report highlights the risk of an “AI divide,” where only a handful of technologically advanced countries reap the economic benefits of AI, while developing countries fall further behind (United Nations, 2024).

Societal Risks and the Digital Divide

The societal risks associated with AI are deeply intertwined with the issue of technological inequality. In Ukraine, the digital divide—the gap between those who have access to digital technologies and those who do not—is already a significant challenge, particularly in rural areas and among marginalized populations. AI has the potential to exacerbate this divide, as those without access to the necessary digital infrastructure will be unable to benefit from AI-driven advancements in education, healthcare, and public services.

One of the key societal risks highlighted in the “Governing AI for Humanity” report is the potential for AI to deepen social inequalities, particularly through biased algorithms and unequal access to AI tools (United Nations, 2024). In Ukraine, the risk is particularly pronounced given the regional disparities in technological access. Rural areas, which already face challenges in terms of internet connectivity and access to digital services, are likely to be further disadvantaged as AI technologies become more integrated into urban economies and public services. Without targeted policies to address these disparities, AI could widen the gap between urban and rural populations, leading to increased social and economic fragmentation.

Additionally, AI-driven automation poses significant societal risks in terms of employment. As AI technologies become more advanced, they are likely to replace jobs in industries such as manufacturing, agriculture, and services, which are critical sectors of Ukraine’s economy. The UN report stresses the importance of ensuring that AI-driven job displacement is managed through appropriate policy measures, such as reskilling programs and social safety nets (United Nations, 2024). In Ukraine, the government will need to prioritize workforce development to ensure that workers displaced by AI technologies can transition to new roles in the AI-driven economy.

The National Strategy for the Development of Artificial Intelligence acknowledges the importance of addressing the digital divide and promoting inclusive access to AI technologies. However, the strategy also notes that Ukraine’s educational system and digital infrastructure are not yet fully equipped to meet the demands of the AI era (Ministry of Digital Transformation of Ukraine, 2021). This creates a risk that certain segments of the population, particularly those in rural areas or from disadvantaged backgrounds, will be excluded from the opportunities that AI offers, further entrenching social and economic inequalities.

Another societal risk is the potential misuse of AI in ways that infringe on privacy and civil liberties. AI technologies, particularly those used in surveillance and data collection, can be employed in ways that disproportionately target vulnerable populations. In Ukraine, where political and social instability has created an environment of heightened surveillance, the deployment of AI in security and law enforcement must be carefully regulated to prevent abuses. The UN report calls for strict governance frameworks to ensure that AI technologies are used ethically and in line with international human rights standards (United Nations, 2024).

Addressing the Challenges

To mitigate the economic and societal risks associated with AI, Ukraine must adopt comprehensive policies that promote equitable access to AI technologies. This includes investing in digital infrastructure, particularly in rural areas, to ensure that all citizens can benefit from AI-driven advancements. Additionally, the government must prioritize education and workforce development programs that equip citizens with the skills needed to participate in the AI economy.

International cooperation will also be crucial for addressing the technological inequality that limits Ukraine’s access to AI resources. The Governing AI for Humanity report emphasizes the importance of global partnerships in bridging the AI divide and ensuring that AI benefits are distributed equitably across nations (United Nations, 2024). By engaging in international dialogues on AI governance and leveraging global expertise, Ukraine can work toward building an AI ecosystem that fosters innovation while addressing the risks of inequality and exclusion.

In sum, while AI offers significant opportunities for economic growth and societal advancement, the risks of technological inequality and the digital divide are particularly acute in Ukraine. Addressing these challenges will require targeted policies, international cooperation, and a commitment to ensuring that AI benefits all segments of society.

AI in Post-War Reconstruction and Human Capital Development

Once the ongoing conflict ends, Ukraine will face an immense challenge in rebuilding its infrastructure, economy, and human capital. The role of artificial intelligence (AI) in this post-war reconstruction phase presents both opportunities and challenges. As highlighted in the United Nations report “Governing AI for Humanity,” AI technologies have the potential to be transformative tools in areas such as infrastructure development, education, public administration, and human capital restoration (United Nations, 2024). For Ukraine, leveraging AI effectively will be crucial in speeding up the reconstruction process, addressing gaps in human capital, and fostering long-term sustainable development.

Rebuilding Infrastructure with AI

Post-war reconstruction of physical infrastructure in Ukraine—roads, bridges, buildings, and utilities—will require substantial resources and careful planning. AI can play a vital role in optimizing this process by providing data-driven insights into infrastructure planning, resource allocation, and project management. AI-powered systems can analyze vast amounts of data, such as damage assessments, environmental factors, and resource availability, to prioritize projects based on urgency and impact. This could significantly reduce inefficiencies and ensure that rebuilding efforts are targeted at the most critical areas first.
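The kind of data-driven prioritization described above can be illustrated with a minimal sketch. The project names, criteria, and weights below are hypothetical assumptions for illustration, not data from any actual reconstruction program: each candidate project is scored on normalized damage severity and population served (which raise priority) and resource cost (which lowers it), then ranked.

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    damage_severity: float    # 0..1, from damage assessments (assumed normalized)
    population_served: float  # 0..1, share of affected residents (assumed normalized)
    resource_cost: float      # 0..1, relative cost (assumed normalized)

def priority_score(p: Project, weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted multi-criteria score: severe damage and broad impact
    raise priority; high resource cost lowers it."""
    w_damage, w_impact, w_cost = weights
    return (w_damage * p.damage_severity
            + w_impact * p.population_served
            - w_cost * p.resource_cost)

def rank_projects(projects: list[Project]) -> list[Project]:
    """Return projects ordered from highest to lowest priority."""
    return sorted(projects, key=priority_score, reverse=True)

projects = [
    Project("bridge repair", 0.9, 0.7, 0.6),
    Project("school rebuild", 0.6, 0.9, 0.3),
    Project("road resurfacing", 0.3, 0.4, 0.2),
]

for p in rank_projects(projects):
    print(p.name, round(priority_score(p), 2))
```

In practice, an AI-assisted system would estimate these inputs from satellite imagery, damage reports, and demographic data rather than take them as given, and the weights themselves would be a policy choice requiring public accountability.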

Moreover, AI-driven technologies like autonomous construction machinery and drones can expedite rebuilding efforts. These technologies are already being used in other post-conflict and disaster-stricken regions to assess damage, clear debris, and construct infrastructure with greater speed and precision than human workers alone. As Ukraine begins to rebuild, adopting AI-powered solutions in construction and logistics could streamline operations, reduce costs, and improve safety.

Revitalizing Human Capital

One of the greatest challenges for Ukraine in the post-war era will be revitalizing its human capital. The conflict has displaced millions of people, disrupted education, and severely weakened the workforce. AI can be a powerful tool in addressing these challenges by helping to rebuild the country’s education system, retrain the workforce, and close skill gaps created by years of war.

AI-driven educational platforms can be used to provide personalized learning experiences to students and workers, ensuring that they acquire the skills necessary for Ukraine’s economic recovery. For example, AI can help create tailored learning programs that adapt to each individual's learning pace and areas of difficulty, providing a flexible and accessible solution for displaced students or those who missed formal education due to the war. This can be particularly beneficial for adults who need to reskill or upskill in preparation for re-entering the workforce.
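The adaptive mechanism behind such platforms can be sketched in a few lines. This is a deliberately simplified model with hypothetical topics and an assumed learning-rate parameter: the system keeps a per-topic mastery estimate, always serves the learner's weakest topic next, and nudges the estimate after each answer.

```python
def next_topic(mastery: dict[str, float]) -> str:
    """Select the topic with the lowest current mastery estimate."""
    return min(mastery, key=mastery.get)

def update_mastery(mastery: dict[str, float], topic: str,
                   correct: bool, rate: float = 0.2) -> dict[str, float]:
    """Move the mastery estimate toward 1 on a correct answer,
    toward 0 on an incorrect one (exponential moving average)."""
    target = 1.0 if correct else 0.0
    mastery[topic] += rate * (target - mastery[topic])
    return mastery

mastery = {"algebra": 0.8, "reading": 0.4, "digital skills": 0.6}
topic = next_topic(mastery)          # weakest area is served first
update_mastery(mastery, topic, correct=True)
```

Production systems use far richer learner models (e.g., knowledge tracing over item-response data), but the core loop is the same: estimate, select, update.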

The National Strategy for the Development of Artificial Intelligence highlights the importance of developing digital and AI literacy across all levels of the population as a means to foster economic growth and innovation (Ministry of Digital Transformation of Ukraine, 2021). Post-war, Ukraine will need to focus on building a workforce that is capable of leveraging AI technologies, both in traditional industries and emerging sectors like technology, health, and finance. AI-powered vocational training programs and virtual classrooms can provide scalable solutions to address the shortage of skilled workers and help bridge the educational gaps left by the war.

AI for Public Administration and Governance

AI can also support the development of more efficient and transparent public administration systems, which will be crucial for the success of Ukraine’s reconstruction efforts. AI-based tools can streamline bureaucratic processes, such as the distribution of humanitarian aid, management of public services, and coordination of reconstruction projects. In countries that have successfully implemented AI in governance, these technologies have improved decision-making by providing real-time data and predictive analytics for public policy planning and implementation.

Ukraine’s reconstruction will require the coordination of local, national, and international actors. AI can help manage this complexity by providing a unified platform for sharing information, monitoring progress, and ensuring accountability in the use of funds and resources. In the long run, integrating AI into public administration could also strengthen Ukraine’s institutions by reducing corruption and promoting transparency, both of which are critical to building trust in the post-war government.

Challenges and Ethical Considerations

While the potential of AI in Ukraine’s post-war reconstruction is vast, there are also significant challenges that must be addressed. First, Ukraine must overcome the technological inequality that limits access to AI tools and resources, particularly in rural and conflict-affected areas. Investments in digital infrastructure and AI literacy programs will be essential to ensure that AI-driven solutions are accessible to all citizens, not just those in urban centers.

Moreover, there are ethical considerations surrounding the use of AI in post-war reconstruction. AI-driven surveillance technologies, for example, could be misused if not properly regulated, leading to violations of privacy and civil liberties. The Governing AI for Humanity report stresses the importance of developing governance frameworks that ensure the ethical use of AI, particularly in sensitive areas like security and public administration (United Nations, 2024).

Recommendations

To fully leverage the potential of AI in Ukraine’s reconstruction and development while mitigating its associated risks, a comprehensive and multifaceted approach to AI governance is essential. First and foremost, Ukraine must prioritize the development of a national AI governance framework that aligns with international standards. Drawing on the recommendations from the United Nations’ “Governing AI for Humanity” report, Ukraine should focus on integrating AI into its post-war recovery while ensuring that ethical principles, transparency, and accountability guide all AI-related initiatives (United Nations, 2024).

A key recommendation is to invest in digital infrastructure, particularly in rural and conflict-affected regions, to ensure equitable access to AI technologies. This will help close the digital divide and enable all citizens, regardless of their location, to benefit from AI-driven solutions in education, healthcare, and public services. Additionally, Ukraine must focus on building human capital by expanding AI literacy and providing reskilling opportunities. AI-driven educational platforms can play a crucial role in addressing the skill gaps created by years of conflict and preparing the workforce for a rapidly evolving economy (Ministry of Digital Transformation of Ukraine, 2021).

In terms of security and governance, Ukraine should adopt strict regulations on the use of AI in surveillance and military applications to prevent misuse and ensure compliance with international human rights standards. AI has the potential to strengthen public administration and enhance transparency, but it must be governed by clear ethical guidelines to avoid violations of civil liberties and misuse in security contexts.

International cooperation will also be critical for Ukraine’s success in AI governance. By engaging with global institutions, Ukraine can access technical expertise, funding, and best practices to support its AI development. Establishing partnerships with international organizations such as the UN and the OECD will enable Ukraine to foster innovation while addressing the technological and regulatory challenges that come with AI adoption.

Conclusions

The integration of AI into Ukraine’s post-war recovery and future development presents significant opportunities for economic growth, societal advancement, and the rebuilding of critical infrastructure. However, these benefits come with substantial risks that must be carefully managed through comprehensive governance frameworks. The analysis, based on insights from the United Nations’ “Governing AI for Humanity” report, highlights the need for Ukraine to adopt a balanced approach that fosters innovation while addressing ethical, societal, and security challenges.

One of the central conclusions of this article is that technological inequality remains a major barrier to AI’s equitable adoption in Ukraine. The ongoing conflict has exacerbated regional disparities in access to digital infrastructure, particularly in rural and war-torn areas. This digital divide risks leaving large segments of the population behind as AI technologies become more embedded in economic and social systems. Therefore, Ukraine must prioritize investments in digital infrastructure and AI literacy programs to ensure that the benefits of AI are shared by all citizens, regardless of their geographic location.

AI’s potential in Ukraine’s post-war reconstruction is undeniable, particularly in rebuilding infrastructure, enhancing public administration, and revitalizing human capital. AI-driven tools can optimize construction processes, provide personalized education and training, and improve the efficiency of public services. However, these technologies must be governed by clear ethical guidelines to prevent misuse, particularly in areas like surveillance and security, where the risk of infringing on civil liberties is high.

The development of a national AI governance framework aligned with international standards is critical for managing these risks. By engaging with global organizations and adopting best practices from other countries, Ukraine can ensure that its AI development is sustainable and inclusive. Furthermore, international cooperation will provide Ukraine with access to the necessary resources, expertise, and funding to address the unique challenges posed by its post-war context.

Ultimately, AI offers Ukraine a path toward a more resilient and innovative future. However, realizing this potential will require concerted efforts to close the digital divide, foster human capital development, and establish robust governance mechanisms that promote transparency, accountability, and respect for human rights. By taking a proactive and comprehensive approach to AI governance, Ukraine can harness the transformative power of AI to drive long-term recovery and development.

Література

1. Fedorov M. Ukraine’s AI road map seeks to balance innovation and security. Atlantic Council. 2023. URL: https://www.atlanticcouncil.org/blogs/ukrainealert/ukraines-ai-road-map-seeks-to-balance-innovation-and-security/ (дата звернення: 21.10.2024).

2. Brynjolfsson E., McAfee A. The Second Machine Age: work, progress, and prosperity in a time of brilliant technologies. New York: W. W. Norton & Company, 2014.

3. Cath C. Artificial Intelligence and Inequality: From Global Governance to Local Risks. AI & Society. 2021. vol. 36(2). P. 541–559.

4. European Commission. Artificial Intelligence Act. Brussels, 2021.

5. Floridi L. The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford University Press, 2023.

6. Floridi L., Cowls J. A Unified Framework of Five Principles for AI in Society. Harvard Data Science Review. 2019. vol. 1(1). URL: https://hdsr.mitpress.mit.edu/pub/l0jsh9d1/release/8 (дата звернення: 21.10.2024).

7. Malter N. Implementing AI Governance: from Framework to Practice. Futurium. European AI Alliance. Best Practices. 2023. URL: https://futurium.ec.europa.eu/en/european-ai-alliance/best-practices/implementing-ai-governance-framework-practice (дата звернення: 21.10.2024).

8. International Telecommunication Union. AI Standardization Framework. Geneva, 2022.

9. Jobin A., Ienca M., Vayena E. The global landscape of AI ethics guidelines. Nature Machine Intelligence. 2019. vol. 1. P. 389–399.

10. Krafft T. D., Zweig K. A., König P. D. How to regulate algorithmic decision-making: A framework of regulatory requirements for different applications. Regulation &amp; Governance. 2020. URL: https://onlinelibrary.wiley.com/doi/10.1111/rego.12369 (дата звернення: 21.10.2024).

11. Ministry of Digital Transformation of Ukraine. National Strategy for the Development of Artificial Intelligence. Kyiv, 2021.

12. OECD. AI Policy Observatory. AI Strategies and Policies in Ukraine. 2024. URL: https://oecd.ai/en/dashboards/countries/Ukraine (дата звернення: 21.10.2024).

13. OECD. AI Policy Observatory. OECD AI Principles. Paris: Organization for Economic Co-operation and Development, 2023. URL: https://oecd.ai/en/ai-principles (дата звернення: 21.10.2024). 

14. United Nations. Governing AI for Humanity. New York, 2024.

References

1. Fedorov, M. (2023), “Ukraine’s AI road map seeks to balance innovation and security”, Atlantic Council, [Online], available at: https://www.atlanticcouncil.org/blogs/ukrainealert/ukraines-ai-road-map-seeks-to-balance-innovation-and-security/ (Accessed 21 October 2024).

2. Brynjolfsson, E. and McAfee, A. (2014), The second machine age: work, progress, and prosperity in a time of brilliant technologies, W. W. Norton & Company, New York, USA.

3. Cath, C. (2021), “Artificial intelligence and inequality: from global governance to local risks”, AI & Society, vol. 36(2), pp. 541–559. 

4. European Commission (2021), Artificial intelligence act, Brussels, Belgium.

5. Floridi, L. (2023), The ethics of artificial intelligence: principles, challenges, and opportunities, Oxford University Press, New York, USA.

6. Floridi, L. and Cowls, J. (2019), “A unified framework of five principles for AI in society”, Harvard Data Science Review, vol. 1(1), available at: https://hdsr.mitpress.mit.edu/pub/l0jsh9d1/release/8  (Accessed 21 October 2024).

7. Malter, N. (2023), “Implementing AI governance: from framework to practice”, Futurium. European AI Alliance. Best Practices, [Online], available at: https://futurium.ec.europa.eu/en/european-ai-alliance/best-practices/implementing-ai-governance-framework-practice (Accessed 21 October 2024).

8. International Telecommunication Union (2022), AI Standardization Framework, Geneva, Switzerland. 

9. Jobin, A., Ienca, M. and Vayena, E. (2019), “The global landscape of AI ethics guidelines”, Nature Machine Intelligence, vol. 1, pp. 389–399.

10. Krafft, T. D., Zweig, K. A. and König, P. D. (2020), “How to regulate algorithmic decision-making: a framework of regulatory requirements for different applications”, Regulation and Governance, available at: https://onlinelibrary.wiley.com/doi/10.1111/rego.12369 (Accessed 21 October 2024).

11. Ministry of Digital Transformation of Ukraine (2021), National Strategy for the Development of Artificial Intelligence, Kyiv, Ukraine. 

12. OECD. AI Policy Observatory (2024), “AI Strategies and Policies in Ukraine”, [Online], available at: https://oecd.ai/en/dashboards/countries/Ukraine (Accessed 21 October 2024).

13. OECD. AI Policy Observatory (2023), “OECD AI Principles”, [Online], available at: https://oecd.ai/en/ai-principles (Accessed 21 October 2024).

14. United Nations (2024), Governing AI for Humanity, New York, USA.

Iu. Perga, PhD in Historical Sciences, Associate Professor, Associate Professor of the Department of Theory and Practice of Management, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute” ORCID ID: https://orcid.org/0000-0002-7636-2417

R. Pashov, PhD in Philosophy, Associate Professor, Senior Lecturer of the Department of Theory and Practice of Management, National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute” ORCID ID: https://orcid.org/0000-0001-5824-2641

Ю. М. Перга, к. і. н., доцент, доцент кафедри теорії та практики управління, Національний технічний університет України «Київський політехнічний інститут імені Ігоря Сікорського»

Р. І. Пашов, к. філос. н., доцент, старший викладач кафедри теорії та практики управління, Національний технічний університет України «Київський політехнічний інститут імені Ігоря Сікорського»

Bibliographic citation:

Перга Ю. М., Пашов Р. І. Risk Assessment of Artificial Intelligence: Methods and Challenges in Ukraine Based on the UN's «Governing AI for Humanity» Report. Інвестиції: практика та досвід. 2024. № 22 (листопад). Рубрика: Державне управління. С. 256–264. DOI: https://doi.org/10.32702/2306-6814.2024.22.256. URL: https://nayka.com.ua/index.php/investplan/article/view/5026
