
What is Google's new Project E? What about the problems of clone AI and personal data? Using Project E to realize the new evolution of the brain that GD157 brings. The new humanity's…

Reading this, I was reminded of 1980s military research into reviving the dead. But looking at this project, the SMS hypothesis may become feasible, or at least testable. Moment by moment, every day, we choose among all things; through the accumulated results of those choices, the brain determines a person's character and life, and this exerts a large influence on quantum effects and entanglement.
In other words, we could confirm another parallel world. First I will explain Project E(llmann), and then explain how to apply it and implement it with GD157.


Key points from the CNBC report:
- “Project Ellmann” is an internal Google proposal to use artificial intelligence to help users get a “bird’s-eye view” of their life stories.
- The idea would be to use LLMs like Gemini to ingest search results, spot patterns in a user’s photos, create a chatbot and “answer previously impossible questions” about a person’s life.
- The team also demonstrated “Ellmann Chat,” described as: “Imagine opening ChatGPT but it already knows everything about your life.”


Dubbed “Project Ellmann,” after biographer and literary critic Richard David Ellmann, the idea would be to use LLMs like Gemini to ingest search results, spot patterns in a user’s photos, create a chatbot and “answer previously impossible questions,” according to a copy of a presentation viewed by CNBC. Ellmann’s aim, it states, is to be “Your Life Story Teller.”

It’s unclear if the company has plans to produce these capabilities within Google Photos, or any other product. Google Photos has more than 1 billion users and 4 trillion photos and videos, according to a company blog post.

Project Ellmann is just one of many ways Google is proposing to create or improve its products with AI technology. On Wednesday, Google launched its latest and “most capable” AI model yet, Gemini, which in some cases outperformed OpenAI’s GPT-4. The company is planning to license Gemini to a wide range of customers through Google Cloud for them to use in their own applications. One of Gemini’s standout features is that it’s multimodal, meaning it can process and understand information beyond text, including images, video and audio.

A product manager for Google Photos presented Project Ellmann alongside Gemini teams at a recent internal summit, according to documents viewed by CNBC. They wrote that the teams spent the past few months determining that large language models are the ideal tech to make this bird’s-eye approach to one’s life story a reality.

Ellmann could pull in context using biographies, previous moments and subsequent photos to describe a user’s photos more deeply than “just pixels with labels and metadata,” the presentation states. It proposes to be able to identify a series of moments like university years, Bay Area years and years as a parent.

“We can’t answer tough questions or tell good stories without a bird’s-eye view of your life,” one description reads alongside a photo of a small boy playing with a dog in the dirt.

“We trawl through your photos, looking at their tags and locations to identify a meaningful moment,” a presentation slide reads. “When we step back and understand your life in its entirety, your overarching story becomes clear.”
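As a thought experiment, the “trawl through your photos” step could start as simply as clustering a time-sorted photo stream by gaps. The following is a minimal sketch under my own assumptions (the Photo fields and the 48-hour gap are invented for illustration); nothing here is Google's actual pipeline:

```python
# Hypothetical sketch: split a photo stream into candidate "moments".
# The Photo schema and the 48-hour gap threshold are assumptions,
# not anything confirmed about Ellmann's implementation.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Photo:
    taken_at: datetime
    lat: float
    lon: float
    tags: list[str]          # e.g. ["dog", "garden", "child"]

def group_into_moments(photos: list[Photo],
                       max_gap: timedelta = timedelta(hours=48)) -> list[list[Photo]]:
    """Group a time-sorted photo stream into runs separated by
    no more than `max_gap`; each run is a candidate 'moment'."""
    photos = sorted(photos, key=lambda p: p.taken_at)
    moments: list[list[Photo]] = []
    for photo in photos:
        if moments and photo.taken_at - moments[-1][-1].taken_at <= max_gap:
            moments[-1].append(photo)
        else:
            moments.append([photo])
    return moments
```

A real system would presumably also weigh tags and locations when splitting, but even this toy version shows how a lifetime of photos becomes a sequence of discrete moments an LLM can reason over.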

The presentation said large language models could infer moments like a user’s child’s birth. “This LLM can use knowledge from higher in the tree to infer that this is Jack’s birth, and that he’s James and Gemma’s first and only child.”

“One of the reasons that an LLM is so powerful for this bird’s-eye approach is that it’s able to take unstructured context from all different elevations across this tree, and use it to improve how it understands other regions of the tree,” a slide reads, alongside an illustration of a user’s various life “moments” and “chapters.”
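The moments-and-chapters tree suggests a simple hierarchy in which chapter-level context is folded into the prompt for each individual moment. A hypothetical sketch: the Moment and Chapter classes and the describe_moment helper are my own invention, not Google's schema or API:

```python
# Hypothetical sketch of the tree the slide illustrates, and of how
# context "from higher in the tree" could reach an LLM prompt.
from dataclasses import dataclass, field

@dataclass
class Moment:
    caption: str                                         # e.g. "hospital visit"
    evidence: list[str] = field(default_factory=list)    # tags, locations, dates

@dataclass
class Chapter:
    title: str                                           # e.g. "Bay Area years"
    context: str                                         # unstructured notes inferred so far
    moments: list[Moment] = field(default_factory=list)

def describe_moment(chapter: Chapter, moment: Moment) -> str:
    """Build a prompt that lets the model use chapter-level context
    (higher in the tree) when interpreting a single moment."""
    return (
        f"Life chapter: {chapter.title}. Known context: {chapter.context}\n"
        f"Moment evidence: {', '.join(moment.evidence)}\n"
        "Describe what this moment most likely is."
    )

birth = Moment("hospital, newborn", ["hospital geotag", "2021-03-14", "baby"])
chapter = Chapter("Years as a parent",
                  "James and Gemma have one child, Jack", [birth])
print(describe_moment(chapter, birth))  # a prompt an LLM could answer with "Jack's birth"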

Presenters gave another example of determining one user had recently been to a class reunion. “It’s exactly 10 years since he graduated and is full of faces not seen in 10 years so it’s probably a reunion,” the team inferred in its presentation.
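The presenters' reasoning can be written down as a toy heuristic. The 10-year window and the face-ratio threshold below are invented for illustration; the slide only states the conclusion, not the rule:

```python
# Toy version of the reunion inference: ~10 years since graduation,
# plus many faces not seen in roughly that long. Thresholds assumed.
from datetime import date

def looks_like_reunion(event_date: date, graduation: date,
                       unseen_face_ratio: float) -> bool:
    years_since = (event_date - graduation).days / 365.25
    return abs(years_since - 10) < 0.5 and unseen_face_ratio > 0.5

print(looks_like_reunion(date(2023, 6, 10), date(2013, 6, 8), 0.8))  # True
```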

The team also demonstrated “Ellmann Chat,” with the description: “Imagine opening ChatGPT but it already knows everything about your life. What would you ask it?”

It displayed a sample chat in which a user asks, “Do I have a pet?” It answers that yes, the user has a dog that wore a red raincoat, then offers the dog’s name and the names of the two family members it’s most often seen with.

In another example, a user asked the chat when their siblings last visited. In yet another, a user asked it to list towns similar to where they live because they were thinking of moving. Ellmann offered answers to both.
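Functionally, such a chat would have to retrieve the relevant facts from the life index before prompting the model. A minimal retrieval-then-answer sketch, with an invented in-memory life_index standing in for whatever Ellmann actually builds:

```python
# Hypothetical sketch of "Ellmann Chat": keyword retrieval over a tiny
# in-memory life index, then a prompt a model like Gemini could answer.
# The index contents and retrieval rule are assumptions for illustration.
life_index = [
    "Pet: dog named Rex, often photographed in a red raincoat",
    "Rex appears most often with family members Gemma and Jack",
    "Siblings last visited on 2023-08-12",
]

def answer(question: str) -> str:
    """Pull facts that share a keyword with the question, then build
    the prompt that would be sent to the LLM."""
    words = [w for w in question.lower().strip("?").split() if len(w) >= 3]
    hits = [fact for fact in life_index
            if any(w in fact.lower() for w in words)]
    return f"Context: {' | '.join(hits)}\nQuestion: {question}"

print(answer("Do I have a pet?"))  # retrieves the Rex fact as context
```

The real product would replace the keyword match with something far richer (embeddings over the moments tree, for instance), but the retrieve-then-prompt shape is the same.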

Ellmann also presented a summary of the user's eating…

The rest of this article is for paid readers (2,107 characters).

If you subscribe to this magazine, you can read almost all articles every day. I often use this material for ZOOM seminar presentations, so please view it together with the YouTube videos. The same article is also available in more detail at 【 https://yoor.jp/door/Syncreatecollective 】.

On September 22, 2144, until we realize the grand plan that our organization aims for, spanning humanity, the Earth, the universe, and on to the 11th dimension (see the Handred Club future plan)…

