Artificial Intelligence and National Security

TAKEHIRO MASUI

Freshman, School of International Liberal Studies, Waseda University

2024.07

Contents

Introduction

1. Growth and background of artificial intelligence technology

2. US National Security Strategy and Artificial Intelligence Technology 

3. Background to the focus on AI technology

4. AI legal and regulatory initiatives

5. Evolution of cyber-attacks

6. Military use of artificial intelligence technology; Israel and ‘Lavender’ 

7. Signs of military use in various countries

8. Artificial intelligence and national security; what laws and regulations should be implemented?

Conclusion 

Introduction

Artificial intelligence technology has developed so significantly in recent years that it now exerts a strong influence on the security sector. While improved computer performance has encouraged many research institutes and companies to develop and analyze artificial intelligence, legislation in the areas surrounding the technology is still in its infancy, and AI is becoming a threat that, if misused, could seriously infringe on the sovereignty of states. Agreement on an international framework and regulations in the security sector is now a matter of urgency. However, it is very difficult to predict the future impact of artificial intelligence technology, much less to understand how it works: analysis requires not only knowledge of computer science and advanced cybersecurity, but also of geopolitics, the history of conflict, military forces, political systems, national strategy, space, social conditions, religion, and ideology. The field is extremely complex and chaotic, and demands the construction of a new, unprecedented 'super-integration' of knowledge. This has delayed the development of legislation covering both security and artificial intelligence, not only in Japan but worldwide, leaving countries out of step with one another and creating a shortage of human resources in this new area.

This paper summarizes the efforts that countries have made in the field of artificial intelligence technology in the security sector and their plans for the future, and provides some personal observations.

 

1. Growth and background of artificial intelligence technology

Artificial intelligence technology has been developing rapidly in recent years, and with the release of ChatGPT in November 2022 it moved beyond the realm of a research subject and established itself as a practical technology that can be widely applied in everyday life. At the same time, image-generation AI has advanced rapidly, making it possible to produce highly polished images and illustrations in a very short time.

This rapid growth in artificial intelligence technology is the result of several factors in addition to the aforementioned improvements in hardware performance. The first is the development of large language models, including multimodal capabilities[1]; the second is rapid progress in research on methods for training artificial intelligence. Finally, the amount of data uploaded online has increased.

The large language models underlying text-generating AI were completed one after another, starting with GPT-3, published by OpenAI in 2020; these models became the basis for recent generative AI services such as ChatGPT, Claude, and Bing AI. The emergence of multimodal models such as GPT-4 and PaLM 2, which handle not only language but also images and speech, has significantly broadened the range of applications of artificial intelligence technology.

In addition, improvements to the Transformer architecture[2] and the development of more efficient training methods have contributed to better performance, and techniques such as instruction tuning[3] and in-context learning have made it easier to adapt models to specific tasks.
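To make the Transformer's core mechanism concrete, here is a minimal sketch of scaled dot-product attention, the operation at the heart of the architecture. It is illustrative only: real models add learned query/key/value projections, multiple attention heads, masking, and dozens of stacked layers.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each position attends to every position, weighted by query-key similarity."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (seq, seq) similarity matrix
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted sum of value vectors

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                  # 4 tokens, 8-dimensional embeddings
out = scaled_dot_product_attention(x, x, x)  # self-attention: Q = K = V = x
print(out.shape)                             # (4, 8)
```

Because every position attends to every other position in a single step, the computation parallelizes well, which is one reason training could scale to today's large models.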

Most importantly, the absolute volume of data uploaded online (images, videos, articles, academic papers, and so on) has increased significantly due to the global surge in internet users. The lockdowns implemented from 2020 to contain the spread of COVID-19, particularly in developed countries, also increased online activity and encouraged the active exchange of information. Other factors, such as the spread of digital transformation, the proliferation of IoT devices, and the penetration of social media, ultimately led to the creation of enormous stores of data.

2. US National Security Strategy and Artificial Intelligence Technology

While it is difficult to pinpoint when artificial intelligence technology began to affect the security sector, it was probably in 2017 that its importance to the field became clear. The National Security Strategy published in December of that year under the Trump administration was the first US strategy document of its kind to mention 'artificial intelligence'. The National Security Strategy published under the Biden administration in October 2022 adds a reference to the threat of disinformation alongside artificial intelligence technologies.

This survey was based on the National Security Strategy archives for 2015 and 2017; the archive has not been updated since 2018, so later documents were checked as far as they appear in official National Security Strategy releases. If the full US archives could be searched, there are likely other references.

On that basis, the original passages on artificial intelligence in the 2017 and 2022 documents are as follows.

2017

p.30 Lead in Research, Technology, Invention, and Innovation

The United States will build on the ingenuity that has launched industries, created jobs, and improved the quality of life at home and abroad. To maintain our competitive advantage, the United States will prioritize emerging technologies critical to economic growth and security, such as data science, encryption, autonomous technologies, gene editing, new materials, nanotechnology, advanced computing technologies, and artificial intelligence. From self-driving cars to autonomous weapons, the field of artificial intelligence, in particular, is progressing rapidly.

p.44 Information Statecraft

Risks to U.S. national security will grow as competitors integrate information derived from personal and commercial sources with intelligence collection and data analytic capabilities based on Artificial Intelligence (AI) and machine learning.

2022

p.21 As emerging technologies transform warfare and pose novel threats to the United States and our allies and partners, we are investing in a range of advanced technologies including applications in the cyber and space domains, missile defeat capabilities, trusted artificial intelligence, and quantum systems, while deploying new capabilities to the battlefield in a timely manner.

p.33 These investments will enable the United States to anchor an allied techno-industrial base that will safeguard our shared security, prosperity and values. This means working with allies and partners to harness and scale new technologies, and promote the foundational technologies of the 21st century, especially microelectronics, advanced computing and quantum technologies, artificial intelligence, biotechnology and biomanufacturing, advanced telecommunications, and clean energy technologies. We also will partner with like-minded nations to co-develop and deploy technologies in a way that benefits all, not only the powerful, and build robust and durable supply chains so that countries cannot use economic warfare to coerce others.

We are already rallying like-minded actors to advance an international technology ecosystem that protects the integrity of international standards development and promotes the free flow of data and ideas with trust, while protecting our security, privacy, and human rights, and enhancing our competitiveness. That includes work through the U.S.-EU Trade and Technology Council to foster transatlantic coordination on semiconductor and critical mineral supply chains, trustworthy artificial intelligence, disinformation, the misuse of technology threatening security and human rights, export controls, and investment screening, as well as through the Indo-Pacific Quad on critical and emerging technologies, open, next-generation digital infrastructure, and people-to-people exchanges.

The following chapters provide some background on why 'artificial intelligence' was mentioned only twice in 2017, without any specific countermeasures, and why by 2022 it was described as a key security challenge to be addressed jointly with allies.

3. Background to the focus on AI technology

In the cyber domain, artificial intelligence technology has already been used to develop methods that enable more precise attacks by exploiting advanced learning capabilities.

One of the notable developments in 2017 was an AI system called PassGAN, a GAN (generative adversarial network) applied to the generation and analysis of passwords. The US company Home Security Heroes[4], essentially a research and development firm covering a wide range of security fields beyond artificial intelligence, has publicized the approach on its website as a password strength diagnostic tool that anyone can use.
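As a rough illustration of how such a generative adversarial network is structured, the skeleton below pairs a generator with a discriminator over fixed-length character sequences. All layer sizes and names here are my own assumptions for the sketch; the published PassGAN research used a more sophisticated architecture (an improved Wasserstein GAN with residual blocks), and no training loop or data is included.

```python
import torch
import torch.nn as nn

SEQ_LEN, VOCAB, NOISE_DIM = 10, 96, 128  # password length, charset size, latent size

# Generator: maps a random noise vector to a per-position distribution
# over characters (a candidate password in "soft one-hot" form).
G = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, SEQ_LEN * VOCAB),
)

# Discriminator: scores how closely a candidate resembles real
# (human-chosen) passwords from a training corpus.
D = nn.Sequential(
    nn.Linear(SEQ_LEN * VOCAB, 256), nn.ReLU(),
    nn.Linear(256, 1),
)

z = torch.randn(32, NOISE_DIM)                       # a batch of noise
fake = G(z).view(32, SEQ_LEN, VOCAB).softmax(dim=-1)
score = D(fake.view(32, -1))                         # discriminator judgement
print(score.shape)                                   # torch.Size([32, 1])
# Training alternates the two: D learns to separate real samples from
# G's output, while G learns to fool D, so G gradually produces
# statistically plausible, human-like candidate strings.
```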

This technology can identify a complex eight-character password containing numbers, upper- and lower-case letters, and symbols in about seven hours, and it can be used to check the strength of randomly generated passwords.
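The scale involved can be sanity-checked with back-of-the-envelope arithmetic. The figures below are illustrative assumptions (94 printable ASCII characters, a hypothetical aggregate guess rate), and a generative model narrows the search further by proposing human-like candidates before resorting to exhaustive enumeration:

```python
# Why an eight-character password can fall within hours: raw keyspace
# versus a hypothetical guess rate. Both figures are assumptions for
# illustration, not measurements.
ALPHABET = 94                    # printable ASCII characters
LENGTH = 8
keyspace = ALPHABET ** LENGTH    # ~6.1e15 candidate passwords
guesses_per_second = 2.5e11      # assumed GPU-accelerated guess rate
hours = keyspace / guesses_per_second / 3600
print(f"keyspace: {keyspace:.2e} candidates")
print(f"exhaustive search at assumed rate: {hours:.1f} hours")  # ~6.8
```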

While this technology is advanced and contributes significantly to data security, the fact that some models are publicly available[5] raises concerns that they could equally be used for criminal purposes.

Thus, the advanced AI password analysis technology represented by PassGAN has two aspects: enhanced security and the potential for criminal use. Therefore, when developing and using this technology, it is essential that multifaceted studies are conducted by security experts, legal experts and ethicists. It will also be important to educate the general public on the importance of stronger password management and to promote the introduction of additional security measures such as multi-factor authentication.

Part of the background to the 2022 strategy is the trend toward regulation and cooperative arrangements, particularly in Europe, as discussed in Chapter 4.

4. AI legal and regulatory initiatives

With the significant technological development of AI, countries are moving to establish the necessary ethical guidelines. In terms of legal frameworks, legislation regulating the development and use of AI is being enacted, most notably the EU's AI Act.

The AI Act is a comprehensive regulatory proposal put forward by the EU in 2021, notable for its specificity: it classifies AI systems by risk level and imposes strict regulations on high-risk AI. For example, it includes a ban on social credit scoring systems and an obligation to register high-risk AI systems.

EU AI Act (formally, the Proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence):

1. Proposed timing and purpose:

Proposed by the European Commission in April 2021, aiming to make the EU a world leader in AI technology while protecting its citizens from the potential risks of AI.

2. Risk-based approach:

AI systems are classified into four risk categories:

- Unacceptable risk: total ban

- High risk: strict regulation

- Limited risk: transparency obligations

- Minimal risk: free use

3. Prohibited AI practices:

- Social credit scoring systems

- AIs that manipulate children or vulnerable groups

- AI that manipulates human behavior through subliminal techniques, and real-time remote biometric identification in public spaces (with some exceptions)

4. Requirements for high-risk AI:

- Implement risk assessment and management systems

- Use of high-quality data sets

- Detailed documentation

- Human monitoring

- High level of accuracy, robustness, and cybersecurity

5. Transparency obligations:

AI systems that interact with humans, such as chatbots, must be clearly identified as AI.

6. Sandbox system:

Provides controlled environments in which new technologies can be tested under regulatory supervision, to promote AI innovation.

7. Penalties:

Fines of up to 30 million euros or 6% of annual global sales (whichever is higher) for violations.

The bill aims to strengthen the EU's digital sovereignty while promoting the ethical and responsible use of AI. It is also notable for contemplating very severe penalties.

In addition to the AI Act, international coordination on AI regulation has been pursued, especially from 2023, after the COVID-19 pandemic, in forums for international cooperation such as the G7 and the OECD.

5. Evolution of cyber-attacks

Researchers from IBM Research, the research and development arm of IBM Corporation, presented DeepLocker[6], an innovative method for targeted malware attacks, at Black Hat USA, a major information security conference held in August 2018. IBM Research explains that the main purpose of the development was "to deepen our knowledge of new types of malware that are more advanced and difficult to detect by merging existing artificial intelligence models with traditional malware."

A notable feature of DeepLocker is its ability to evade conventional antivirus software and malware detection systems by disguising existing malware as a harmless (benign) application. This allows DeepLocker to infiltrate target systems quietly and reliably, identify its target precisely, and carry out an effective attack.

IBM Research emphasizes that the core mechanism of this attack tool is its application of deep neural networks (DNNs). Whereas conventional malware takes an indiscriminate, probabilistic "spray and pray" approach, DeepLocker adopts the novel strategy of embedding the attack in the AI mechanism itself (an "AI-embedded attack").

Briefly, deep learning is a field of artificial intelligence in which a multi-layered neural network, modeled loosely on the neural circuits of the human brain, automatically extracts features from large amounts of data and performs advanced pattern recognition and prediction. It has driven progress in a wide range of fields, including image recognition, natural language processing, and speech recognition.

The multi-layer neural network, or deep neural network (DNN) as IBM Research refers to it, is the architecture underlying deep learning. It is a machine learning model consisting of multiple layers that mimic the neural circuits of the human brain: an input layer, multiple hidden layers, and an output layer, each made up of many nodes (neurons). The main features of DNNs are as follows:

・Automatic feature extraction

・High expressive power

・End-to-end learning

・Ease of transfer learning

 

These characteristics make DNNs capable of capturing complex patterns and relationships, and they have produced advanced results in a variety of fields.
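The layered structure described above can be sketched in a few lines. The toy network below (all sizes and weights are arbitrary illustrations, and untrained) shows data flowing from an input layer through two hidden layers to an output layer:

```python
import numpy as np

rng = np.random.default_rng(42)

# A toy deep neural network: input layer (4 features), two hidden
# layers of 8 nodes each, and an output layer (3 classes). Weights
# are random here; training would adjust them from data.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
W3, b3 = rng.normal(size=(8, 3)), np.zeros(3)

def relu(x):
    return np.maximum(0.0, x)

def forward(x):
    h1 = relu(x @ W1 + b1)          # hidden layer 1
    h2 = relu(h1 @ W2 + b2)         # hidden layer 2
    logits = h2 @ W3 + b3           # output layer
    e = np.exp(logits - logits.max())
    return e / e.sum()              # softmax: class probabilities

print(forward(rng.normal(size=4)))  # three probabilities summing to 1
```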

DeepLocker exploits the "complexity" and "opacity" of DNNs: a DNN's decision-making process is a "black box", and it is extremely difficult to fully understand what happens inside it[7]. This property makes the malware's behavior harder to predict and allows it to bypass traditional detection methods.
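The following is a purely conceptual sketch, under my own simplifying assumptions, of why such concealment frustrates static analysis: the trigger condition is never written as inspectable code; instead, a decryption key is derived from a model's output on whatever input it observes. Here a hash function stands in for the neural network, and the "payload" is a harmless string.

```python
import hashlib

def fake_model(observation: bytes) -> bytes:
    # Stand-in for a DNN mapping an input to a stable feature vector.
    return hashlib.sha256(b"features:" + observation).digest()

def derive_key(features: bytes) -> bytes:
    return hashlib.sha256(b"key:" + features).digest()

SECRET = b"hello, benign payload".ljust(32)
LOCKED = bytes(a ^ b for a, b in
               zip(SECRET, derive_key(fake_model(b"intended-target"))))

def try_unlock(observation: bytes) -> bytes:
    # Inspecting this code reveals neither the target nor the secret:
    # only the matching observation reproduces the right key.
    key = derive_key(fake_model(observation))
    return bytes(a ^ b for a, b in zip(LOCKED, key))

print(try_unlock(b"someone-else"))     # unreadable bytes: wrong "target"
print(try_unlock(b"intended-target"))  # recovers the benign string
```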

On the other hand, this complexity of DNNs also opens up new possibilities for cybersecurity defenders. By applying DNNs to areas such as anomaly detection and threat intelligence, more advanced and adaptive defense systems can be built.
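One common defensive pattern is reconstruction-error anomaly detection: a model fitted only to normal activity reconstructs out-of-distribution inputs poorly, so a large error can flag a potential threat. In the sketch below, a linear autoencoder built with SVD stands in for a trained DNN, and the data and threshold are synthetic illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)
normal = rng.normal(size=(500, 16))    # stand-in for benign telemetry
mean = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mean, full_matrices=False)
components = Vt[:4]                    # 4-dimensional "bottleneck"

def reconstruction_error(x):
    code = (x - mean) @ components.T   # encode
    recon = code @ components + mean   # decode
    return float(np.square(x - recon).sum())

# Threshold at the 99th percentile of errors on normal data.
threshold = np.percentile([reconstruction_error(s) for s in normal], 99)
anomaly = rng.normal(loc=5.0, size=16)            # out-of-distribution sample
print(reconstruction_error(anomaly) > threshold)  # flags as anomalous: True
```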

DNNs have played a central role in the development of AI, and their applications have spanned both good and bad. As the technology advances, striking a balance between ethical use and appropriate regulation will become increasingly important.

6. Military use of artificial intelligence technology; Israel and ‘Lavender’

+972 Magazine[8] is an online magazine founded in 2010. It is known as a left-leaning outlet covering Middle East affairs, with a focus on Israeli-Palestinian issues; it often takes a critical stance toward Israel's occupation policy and human rights record, and targets an international readership.

Its articles are written mainly by Israeli and Palestinian reporters and bloggers, and it operates a sister site, Local Call, in Hebrew. It is unusual in operating as a non-profit organization funded by donations and grants.

According to +972 Magazine, Lavender is an AI-based target selection system developed by the Israeli military. The system analyzes information collected through mass surveillance of some 2.3 million residents of the Gaza Strip and rates, on a scale of 1 to 100, the likelihood that each individual is affiliated with Hamas or the military wing of Palestinian Islamic Jihad (PIJ).

+972 Magazine's article cited an article by the Israeli Defense Ministry acknowledging the existence of Lavender, but that article has since been removed.

The Israeli military has not officially acknowledged the existence of this AI, claiming that it is "just a database", and has made only vague statements about whether it is used for targeting. It has also said that final decisions are made by humans, without stating clearly whether the system constitutes artificial intelligence. (Washington Post[9])

According to the original +972 Magazine report, Lavender learns the characteristics of known Hamas and PIJ members and then searches for individuals with those characteristics in the general population. Individuals with multiple "suspicious" characteristics are ranked highly and automatically become potential targets for assassination.

Following the October 7, 2023 Hamas attack, the Israeli military designated all members of Hamas' military wing as "human targets", regardless of rank or military importance. This change in policy led to the full-scale use of Lavender, which marked approximately 37,000 Palestinians as "Hamas members" slated for assassination.

The use of Lavender was approved approximately two weeks after the war began. If the system determined that an individual was a Hamas member, that determination was treated as an "order" and required no independent human verification, allowing targets to be generated en masse.

However, Lavender's accuracy was known to be only about 90%, meaning roughly 10% false positives; of the approximately 37,000 people flagged, on the order of 3,700 may therefore have been misidentified. Examples of false targets included police and civil defense workers, relatives of militants, and residents who shared a name with an operative.

The use of this system contributed to the killing of approximately 15,000 Palestinians in the first six weeks of the war, almost half of the total number of deaths to date (as of July 2024).

While the use of Lavender demonstrates the efficiency of AI-based target selection, it also raises serious ethical issues, including inaccuracy, the risk of false targeting, and the potential for mass civilian casualties.

It provokes an important debate about the dangers of delegating human judgment to machines and the appropriateness of using AI in a wartime context.

In my opinion, the Israeli military chain of command strongly desired a short, AI-assisted war that could incapacitate large numbers of Hamas commanders at the outset. I suspect that this reliance on artificial intelligence has since escalated, and that a battlefield unlike any seen before is now unfolding.

7. Signs of military use in various countries

One military technology identified early as an issue in international discussions is lethal autonomous weapon systems (LAWS).

From 2014 to 2016, informal meetings on LAWS were held within the framework of the Convention on Certain Conventional Weapons (CCW).

In particular, since 2017, a Group of Governmental Experts (GGE) on LAWS has met within the CCW framework to promote high-level consultations among experts. The GGE met again in 2020 and 2021, and a set of 11 guiding principles for LAWS was agreed. It was decided that these discussions, including the guiding principles, would serve as the basis for recommendations on clarifying, reviewing, and developing the relevant normative and operational framework.

In February 2023, the Responsible AI in the Military Domain (REAIM) summit was held in the Netherlands. This was followed in November of the same year by the first meeting, in New York, on the "Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy". The background to the Political Declaration was progress in the development of LAWS and growing recognition of AI's usefulness in the military field. The measures the Declaration calls on participating states to take are excerpted below.

"Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy"[10] (excerpts)

A. States should ensure that military organizations adopt and implement these principles in the responsible development, deployment, and use of AI capabilities.

B. States should take appropriate measures, including legal review, to ensure compliance with their obligations under international law, particularly international humanitarian law, and should verify the use of military AI capabilities. They should also consider how to use military AI capabilities to strengthen the implementation of international humanitarian law and improve the protection of civilians and civilian objects in armed conflicts.

C. States should ensure that senior officials effectively and appropriately oversee the development and deployment of high-impact military-use AI capabilities.

D. States should take proactive steps to minimize unintended bias in military AI capabilities.

E. States should ensure that stakeholders exercise appropriate caution in the development, deployment, and use of military AI capabilities. This includes weapon systems that incorporate such capabilities.

F. States should ensure that military AI capabilities are developed with methodologies, data sources, design procedures, and documentation that are transparent and auditable.

G. States should ensure that personnel using or approving military AI capabilities are adequately trained to fully understand the capabilities and limitations of those systems, to make appropriate contextual judgments, and to mitigate the risk of automation bias.

H. States should ensure that military AI capabilities have clear, well-defined uses and address design and engineering to ensure they perform their intended functions.

I. States should ensure that the safety, security, and effectiveness of military AI capabilities are subject to appropriate and rigorous testing and assurance during their use and throughout their entire life cycle. For military AI capabilities that are self-learning or continually updating themselves, they should ensure that critical safety functions are not degraded through monitoring or other processes.

J. States should implement appropriate safeguards to mitigate the risk of failure of military AI capabilities. This includes the ability to detect and avoid unintended consequences, and responses such as deactivation or disabling when operational systems exhibit unintended behavior.

Looking at the declaration above, what concerns me most is that it does not call for a blanket ban on applying AI to military weapons; rather, it asks states to strive to act responsibly and to use such capabilities to protect human life to the extent possible.

This is partly due to the dual-use nature of the technologies involved in LAWS. Since the definition of LAWS remains unclear, a total ban would likely also affect the development of AI-equipped drones and autonomous robots.

Nevertheless, there is no doubt that immediate international discussion of the clearly military uses would help prevent harm to civilians.

8. Artificial intelligence and national security; what laws and regulations should be implemented?

Unlike nuclear weapons, which have been set apart from conventional weapons by their immense power since the end of World War II, artificial intelligence is a field that demands an even greater sense of urgency from the international community: the actual human cost is unclear, and the boundary between civilian and military use cannot be clearly drawn.

What modern society will require is the ability to respond flexibly to rapidly growing new areas of technology. Rather than dwelling on definitions and the scope of regulation, strong initiative is needed on the humanitarian problems and rights violations that are occurring now and that can be expected in the very near future. Technology is developing rapidly, and it is difficult for the security field, the AI field, and other domains such as space and the electromagnetic spectrum, each with its own specialists, to establish broad regulations independently while keeping pace with that development. An international arrangement for AI that anticipates future developments to some degree and focuses on universal values is essential.

Conclusion

As indicated above, modern society, which is becoming increasingly high-tech in all areas, requires international cooperation for the appropriate regulation and management of artificial intelligence technology. Efforts to minimize potential risks while maximizing the benefits of AI, balancing technological advancement with social safety, will become increasingly important.

However, as has been pointed out in many fields, a large gap in knowledge exists between technologists and security policy experts in regulatory and framework-making arenas, and even high-level discussions struggle to produce concrete measures. Addressing this situation requires new, more cross-disciplinary professionals: individuals with deep technical knowledge who can play an important role in discussions of the security environment are strongly needed.


[1] Multimodal models are ML (machine learning) models that can process information from different modalities, such as images, video, and text. (Source: Google Cloud)

[2] A transformer is a type of neural network architecture that transforms or modifies an input sequence into an output sequence. (Source: Amazon Web Service)

[3] Instruction tuning is a technique for fine-tuning large language models (LLMs) on a labeled dataset of instructional prompts and corresponding outputs. It improves model performance not only on specific tasks, but on following instructions in general, thus helping adapt pre-trained models for practical use. (Source: IBM)

[4] https://www.homesecurityheroes.com/ai-password-cracking/

[5] https://github.com/beta6/PassGAN

[6] https://securityintelligence.com/deeplocker-how-ai-can-power-a-stealthy-new-breed-of-malware/

[7] https://securityintelligence.com/deeplocker-how-ai-can-power-a-stealthy-new-breed-of-malware/

https://www.mbsd.jp/research/20190311/deeplocker/

[8] https://www.972mag.com/lavender-ai-israeli-army-gaza/

[9] https://www.washingtonpost.com/world/2024/04/05/israel-idf-lavender-ai-militarytarget/

[10] https://www.mofa.go.jp/mofaj/files/100580933.pdf
