Toward ensuring trust in AI: the problems of data bias and data privacy need to be solved.

How Ethical Hackers Could Help Us Build Trust in AI

By Edd Gent / Dec 13, 2021 / SingularityHub

Ensuring that AI can be trusted will only become more important, but concrete measures for achieving this have not yet caught up.
AI derives its capabilities by learning from vast amounts of data; to make it trustworthy, the problems of bias and privacy in the data it learns from must be solved.

Ethical Hackers 

Ethical hacking: hacker activity, such as breaking into systems, carried out under contract for purposes such as testing system security.

Role of an ethical hacker

Ethical hackers can be independent freelance consultants, employed by a firm that specializes in simulated offensive cybersecurity services, or they can be in-house employees protecting a company’s website or apps. Knowledge of current attack methods and tools is a requirement across these employment options; however, the in-house ethical hacker may be required to have intimate knowledge of only a single software or digital asset type.

While relatively new to the security industry, [one advantage] that an in-house red team (an independent group that challenges an organization to improve its effectiveness by taking an adversarial role or point of view toward it) may provide is [that the team will necessarily have a more intimate understanding of how their own systems and applications are constructed than an independent consultant would]. This insider knowledge provides the red team an advantage, as long as they can avoid becoming myopic (short-sighted, seeing only what is immediately in front of them / maiɑ́pik) in their view. It would take real attackers years to replicate this advantage. In-house teams are largely thought to be less expensive than the continuous use of a consulting firm as well.

Conversely (in contrast / kənvə́ːrsli), a benefit that an external ethical hacker may provide is a fresh set of eyes to identify vulnerabilities that may be overlooked by the internal team. Even organizations that employ an internal red team may occasionally contract an external ethical hacker to provide this fresh look at their defenses.

The main mission of my note is to take English articles on the latest issues in science, the economy, society, and so on, edit the English so that it is easier to read, and offer it as a tool for studying English and gathering up-to-date information. Of course, I also often add my own views.

How Ethical Hackers Could Help Us Build Trust in AI

By Edd Gent / Dec 13, 2021 / SingularityHub

AI is exerting an ever greater influence on our lives, which is leading to growing concern over whether we can trust it to act fairly and reliably. Ethical hackers, AI audits, and “bias bounties” (bias bounties would reward users for identifying bias in AI systems) could help us keep a lid on (prevent, hold back) the potential harms, say researchers.

There’s increasing awareness of the dangers posed by our reliance on AI. These systems have a worrying (troubling, cause for concern) knack (a talent, a facility / nǽk) for picking up and replicating the biases already present in our society, which can entrench (to establish firmly, to fix in place / entréntʃ) the marginalization (the act and effect of pushing a person, group, issue, or topic out of the mainstream / mɑ̀ːrdʒnələzéiʃən) of certain groups.

The data-heavy nature of current deep learning systems also raises privacy concerns, both due to their encouragement of widespread surveillance and the possibility of data breaches. And the black box nature of many AI systems also makes it hard to assess whether they’re working correctly, which can have serious implications in certain domains.

Now, a new paper in Science proposes some tangible (concrete, real / tǽndʒəbl) steps that the industry could take to increase trust in AI technology. A failure to do so could lead to a “tech-lash (a backlash against the activities of big technology companies)” that severely hampers progress in the field, say the researchers.

“Governments and the public need to be able to easily tell apart between the trustworthy, the snake-oil salesmen (someone who deceives people in order to get money from them), and the clueless (having no idea, out of one’s depth / klúːləs),” lead author Shahar Avin, from Cambridge University, said in a press release. “Once you can do that, there is a real incentive to be trustworthy. But while you can’t tell them apart, there is a lot of pressure to cut corners (to skimp, to skip steps).”

The researchers borrow some tried and tested ideas from cybersecurity, which has grappled with (wrestled with, worked hard to deal with / grǽpəl) the issue of getting people to trust software for decades. One popular approach is to use “red teams (independent groups that challenge an organization to improve its effectiveness by taking an adversarial role or point of view toward it)” of ethical hackers who attempt to find vulnerabilities in systems so that the designer can patch them before they’re released.
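As a rough illustration (my own sketch, not something described in the article), the kind of probe an AI red team might run is a small adversarial perturbation test: nudge an input slightly in the direction that increases the model’s loss and check whether the prediction flips. The toy logistic-regression weights and input below are invented for the example.

```python
# A minimal sketch of an adversarial "red team" probe against a toy model.
# The model weights, bias, and input are hypothetical stand-ins.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.5, -2.0, 0.5])    # hypothetical trained weights
b = 0.1                            # hypothetical bias

x = np.array([0.5, -0.2, 0.3])     # an input the model classifies as class 1
y = 1                              # its true label

p = sigmoid(w @ x + b)             # model confidence for class 1

# Gradient of the cross-entropy loss with respect to the input is (p - y) * w.
grad_x = (p - y) * w

# FGSM-style step: move each feature slightly in the direction that increases loss.
eps = 0.4
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)
print(f"original confidence: {p:.3f}, after perturbation: {p_adv:.3f}")
# If the confidence falls below 0.5, the small perturbation has flipped the
# prediction, i.e. the red team has found a fragile decision to report.
```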

AI red teams already exist within large industry and government labs, the authors note, but they suggest that sharing experiences across organizations and domains could make this approach far more powerful and accessible to more AI developers.

Software companies also frequently offer “bug bounties <a vulnerability reward program in which a company that provides a product or service receives reports on vulnerabilities in that product (especially security flaws) from outside experts and researchers and pays a reward in return>,” which provide a financial reward if a hacker finds flaws in their systems and tells them about it privately so they can fix it. The authors suggest that AI developers should adopt similar practices, offering people rewards for finding out if their algorithms are biased or making incorrect decisions.
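To make the “bias bounty” idea concrete, here is a minimal sketch (an assumption on my part, not from the article) of the kind of evidence a bounty hunter might submit: a comparison of a model’s positive-decision rate across two groups. The decisions and group labels below are simulated.

```python
# A toy demographic-parity check of the sort a bias bounty might reward.
# All data here is invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

group = rng.integers(0, 2, size=1000)             # 0 or 1, a protected attribute
# Hypothetical model decisions that happen to favor group 1.
decision = rng.random(1000) < np.where(group == 1, 0.65, 0.45)

rate_0 = decision[group == 0].mean()
rate_1 = decision[group == 1].mean()
print(f"approval rate, group 0: {rate_0:.2%}")
print(f"approval rate, group 1: {rate_1:.2%}")
print(f"demographic parity gap: {abs(rate_1 - rate_0):.2%}")
# A large, reproducible gap is the sort of finding a bounty program would pay
# the reporter to disclose privately so the developer can investigate.
```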

They point to [a recent competition Twitter held] that offered rewards to anyone who could find bias in their image-cropping algorithm as an early example of how this approach could work.

As cybersecurity attacks become more common, governments are increasingly mandating (requiring, ordering / mǽndèit) the reporting of data breaches and hacks. The authors suggest similar ideas could be applied to incidents where AI systems cause harm. While voluntary, anonymous sharing—such as that enabled by the AI Incident Database—is a useful starting point, they say this could become a regulatory requirement.

The world of finance also has some powerful tools for ensuring trust, most notably the idea of third-party audits. This involves granting an auditor access to restricted information so they can assess whether the owner’s public claims match their private records. Such an approach could be useful for AI developers who generally want to keep their data and algorithms secret.

Audits only work if the auditors can be trusted and there are meaningful consequences for a failure to pass them, though, say the authors. They are also only possible if developers follow common practices for documenting their development process and their system’s makeup and activities.

At present, guidelines for how to do this in AI are lacking, but early work on ethical frameworks, model documentation, and continuous monitoring of AI systems is a useful starting place.

The AI industry is also already working on approaches that could boost trust in the technology. Efforts to improve the explainability and interpretability of AI models are already underway, but [common standards and tests] that measure compliance with those standards would be [useful additions] to this field.
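One widely used interpretability check that such standards might cover is permutation importance: shuffle one feature at a time and measure how much the model’s accuracy drops. The sketch below is my own illustration (synthetic data, a fixed rule standing in for a trained model), not a standard proposed in the paper.

```python
# A minimal permutation-importance sketch on synthetic data.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: only the first feature actually determines the label.
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

# Hypothetical "model": a fixed linear rule standing in for a trained classifier.
def predict(X):
    return (X @ np.array([2.0, 0.1, 0.0]) > 0).astype(int)

baseline = (predict(X) == y).mean()
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])   # break this feature's link to y
    drop = baseline - (predict(X_perm) == y).mean()
    print(f"feature {j}: accuracy drop {drop:.3f}")
# A common standard could specify how such importance scores are computed and reported.
```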

Similarly, privacy-preserving machine learning, which aims to better protect the data used to train models, is a booming area of research. But these techniques are still rarely put into practice by industry, so the authors recommend more support for these efforts to boost adoption.
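For readers unfamiliar with what “privacy-preserving” training looks like in practice, the following is a minimal sketch (my own, not the paper’s proposal) of the core step used in differentially private training: clip each example’s gradient, then add calibrated Gaussian noise before averaging. The gradients here are simulated.

```python
# Core clip-and-noise step of differentially private gradient averaging.
# The per-example gradients are random stand-ins for real ones.
import numpy as np

rng = np.random.default_rng(2)

per_example_grads = rng.normal(size=(64, 10))   # one gradient row per example
clip_norm = 1.0            # bound on any single person's influence
noise_multiplier = 1.1     # controls the privacy/utility trade-off

# Clip each per-example gradient to at most clip_norm in L2 norm.
norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
clipped = per_example_grads * np.minimum(1.0, clip_norm / norms)

# Add noise scaled to the clipping bound, then average over the batch.
noise = rng.normal(scale=noise_multiplier * clip_norm, size=clipped.shape[1])
private_grad = (clipped.sum(axis=0) + noise) / clipped.shape[0]
print(private_grad.round(3))
```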

Whether companies can really be prodded (urged, pushed) into taking concerted action on this problem is unclear. Without regulators breathing down their necks (to follow or supervise someone too closely, causing discomfort for that person), many will be unwilling to take on (to undertake, to accept) the onerous (burdensome, troublesome / ɑ́nərəs) level of attention and investment that these approaches are likely to require. But the authors warn that the industry needs to recognize the importance of public trust and give it due weight (give it the importance it deserves).

“Lives and livelihoods are ever more reliant on AI that is closed to (not open to, shielded from) scrutiny (close examination, careful inspection / skrúːtəni), and that is a recipe (a formula, a sure cause / résəpi) for a crisis of trust,” co-author Haydn Belfield, from Cambridge University, said in the press release. “It’s time for the industry to move beyond well-meaning ethical principles and implement real-world mechanisms to address this.”

