
“the biggest act of copyright theft in history” by ChatGPT

From Guardian Australia:
https://www.theguardian.com/australia-news/2023/sep/28/australian-books-training-ai-books3-stolen-pirated
(Thousands of books from some of Australia’s most celebrated authors have potentially been caught up in what Booker prize-winning novelist Richard Flanagan has called “the biggest act of copyright theft in history”.)
(The works have allegedly been pirated by the US-based Books3 dataset and used to train generative AI for corporations such as Meta and Bloomberg. … “This is the biggest act of copyright theft in history.”)
(Litigation in the US against ChatGPT creator OpenAI over the use of allegedly pirated book datasets, Books1 and Books2 (which do not appear to be affiliated with Books3) has already commenced.)
(“If it cares for our culture it must now stand up and fight for it.”)
(The quotations above are from the Guardian article.)


For example, suppose you read this article from the perspective of the world 20 years from now. Even “ChatGPT”, which is nothing more than “weak AI” (no intention, no ability to think), caused such an “incident”! Your future self might be surprised.

Human attitudes toward AI may be categorized into three.
1 No sense of crisis. This may be most common among AI researchers and developers.
2 A sense of crisis, up to and including the possibility of human extinction (e.g. Stephen Hawking).
3 Suspended judgment (“I don’t know” / “I can’t do anything about it either way”). The majority of humans.

I think the threshold separating 1 and 2 above is how strongly you “believe” in the possibility of “strong AI”: an AI having a will and the ability to think.
1 “Strong AI” might be unrealistic or impossible.
2 “Strong AI” might be realistic and possible.

Looking at the majority of AI researchers and developers in the real world, who (it seems) are focused on the current state of “ChatGPT”, I suppose that at bottom they think, believe, or are convinced of one of the following:
1 “Strong AI” might be unrealistic or impossible.
3 Suspended judgment (“I don’t know” / “I can’t do anything about it either way”).
The more they know about the internal structure of the “weak AI” called “ChatGPT”, the more likely this is.

Rather, it was researchers in other fields, not AI researchers or developers (for example, the theoretical physicist Stephen Hawking), who took the position that
2 “Strong AI” might be realistic and possible.
Because he thought, believed, and was convinced of this, Hawking warned humanity that “there is even a possibility that human beings will go extinct.”

Regardless of whether “strong AI” is unrealistic/impossible or realistic/possible, “weak AI” in the form of “ChatGPT” has already had the greatest impact on humanity’s “culture”: as the Guardian Australia article at the beginning reports, “the biggest act of copyright theft in history” has been alleged.

Let’s think within the realm of “weak AI.”
It would in fact be possible to feed almost all of humanity’s “cultural” assets into a “weak AI” like “ChatGPT”.
Furthermore, it would be possible to feed almost all information about human beings on Earth into “weak AI.” In reality, the supercomputers of companies such as GAFAM are increasingly operating with “weak AI.”

Specifically, the supercomputers’ “weak AI” will, in effect, “recognize” and “understand” you better than you know yourself. Looking at the current communist dictatorship in China, this is obvious.

Furthermore, "autonomous AI weapons" are becoming a reality within the capabilities/brain power of "weak AI." The era when we will be killed by "weak AI" weapons on the battlefield will very soon be upon us.
I think it is extremely common sense and normal to feel a sense of crisis in this current situation.

However, as mentioned above, the majority of AI researchers and developers in the field have no sense of crisis. They can therefore pursue research and development with funds from their sponsors: the “nation” or the “world economy” (non-socialist “capitalism” in its final stage).

The reason, as mentioned above, may be that at bottom they think, believe, or are convinced of either
1 “Strong AI” might be unrealistic or impossible, or
3 Suspended judgment (“I don’t know” / “I can’t do anything about it either way”).

Furthermore, because they are backed by the “nation” or the “world economy,” their “self-confidence” and “assurance” may be feeding delusions of grandeur.

The root of these problems may lie in the complacent, elite mentality of the pseudo-“specialists” created by the division of labor and stratification of modern urban society.

One last thing: there is probably a gulf, as wide as that between heaven and earth, separating “weak AI” from “strong AI.”
At the very least, it should be clear that “strong AI” is not a mere extension of “weak AI.” We human beings, who have a will and the ability to think, do not even know the ground of our own existence; how, then, could we create a “strong AI” equal to or exceeding ourselves?

In any case, I'm looking forward to seeing what the world will look like in 20 years.
