Ollama: An AI chat program that makes it easy to run Llama locally

Features (as researched with ChatGPT)

  • An AI chat program that lets you interact with large language models (LLMs) through a command-line interface

  • Supports a variety of open-source models, including Llama and Code Llama

  • Models come in different parameter counts and sizes, so you can flexibly run an AI model that fits your available compute resources

  • Can run on systems with Docker installed, and supports GPU acceleration on Nvidia GPUs (it can also run on CPU alone)

  • Performance depends on your hardware; for example, running the Llama 2 7B model requires at least 15 GB of RAM and 4 CPU cores

  • Desktop applications are available for macOS and Linux, with a Windows version under development

  • User-friendly and proven effective in test environments, with easy model downloads

  • Thanks to its command-line nature, you can set up and use a chatbot quickly, directly from the terminal

  • For more advanced features such as model customization and the REST API, see the Ollama website, GitHub, or the Discord community
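As an example of the model customization mentioned above, Ollama lets you derive a new model from a base model via a Modelfile. A minimal sketch (the parameter value and system prompt here are illustrative, not defaults):

```
FROM llama2:7b-chat
PARAMETER temperature 0.7
SYSTEM You are a concise, helpful assistant.
```

A file like this can be registered with `ollama create <name> -f Modelfile` and then run with `ollama run <name>`.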


Implementation (in combination with LangChain)

Downloading and installing Ollama

curl https://ollama.ai/install.sh | sh

Starting Ollama

# Start the server
ollama serve &
# Run Ollama with a specific model
ollama run llama2:7b-chat

Passing messages from LangChain

from langchain.chat_models import ChatOllama
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import HumanMessage

# Stream generated tokens to stdout as they arrive
chat_model = ChatOllama(
    model="llama2:7b-chat",
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
)

messages = [HumanMessage(content="Tell me about the history of AI")]
chat_model(messages)

Output

The history of artificial intelligence (AI) dates back to ancient times, with early myths and legends featuring anthropomorphic gods and creatures. However, the modern field of AI began in the mid-20th century, driven by advances in computer science and the availability of large amounts of data. Here are some key milestones in the history of AI:

1. Ancient Greece (circa 500 BCE): The ancient Greeks developed early concepts of AI through their mythology and literature, such as the story of Hephaestus, the god of fire and blacksmiths, who created robots to assist him in his workshop.
2. 19th century: The term "automaton" was coined during this time to describe a machine that could perform tasks autonomously. The first automaton was created by Jacques de Vaucanson in France in the late 18th century, capable of playing music and performing simple tasks.
3. Early 20th century: The term "Artificial Intelligence" was coined in the early 20th century by John McCarthy, an American mathematician and computer scientist. McCarthy organized the first AI conference at Dartmouth College in 1956, which marked the beginning of the modern field of AI research.
4. 1950s-1960s: The development of the first AI programs, such as Logical Theorist and the ELIZA chatbot, demonstrated the possibility of creating machines that could simulate human thought processes. These early AI systems were primarily focused on problem-solving and decision-making.
5. 1970s: The development of expert systems, which were designed to mimic the decision-making abilities of a human expert in a particular domain, marked a significant milestone in the history of AI. These systems were able to solve complex problems by leveraging knowledge represented in a knowledge base.
6. 1980s: The rise of neural networks and machine learning algorithms enabled AI systems to learn from data and improve their performance over time. This led to the development of applications such as speech recognition, image recognition, and natural language processing.
7. 1990s-present: The field of AI has continued to expand and evolve, with advances in areas such as deep learning, robotics, and computer vision. The rise of big data and the availability of large datasets have enabled AI systems to perform tasks such as machine translation, sentiment analysis, and predictive modeling.
8. 2010s: The development of convolutional neural networks (CNNs) and recurrent neural networks (RNNs) led to significant improvements in image recognition and natural language processing tasks. These advances have enabled AI systems to perform tasks such as self-driving cars, facial recognition, and chatbots.
9. Present day: The field of AI is rapidly evolving, with new breakthroughs and innovations being made regularly. Areas such as generative models, transformers, and Explainable AI (XAI) are pushing the boundaries of what is possible with AI.

Throughout its history, AI has been driven by advances in computer science, mathematics, and the availability of data. As the field continues to evolve, it is likely that AI will become increasingly integrated into various aspects of our lives, from healthcare and education to transportation and entertainment.

messages = [HumanMessage(content="Could you discuss a major breakthrough in AI from the last decade?")]
chat_model(messages)

There have been many significant breakthroughs in artificial intelligence (AI) over the past decade, but one of the most notable ones is the development and widespread adoption of deep learning algorithms. Deep learning is a subset of machine learning that involves the use of artificial neural networks to analyze and learn from large amounts of data. It has been instrumental in achieving state-of-the-art performance in various AI applications, including image and speech recognition, natural language processing, and game playing.

Deep learning algorithms have been able to achieve these breakthroughs by exploiting the representation power of neural networks, which can learn complex patterns in data. This has led to significant improvements in a wide range of areas, such as:

1. Image Recognition: Deep learning algorithms have enabled computers to recognize objects in images with unprecedented accuracy. For example, in 2015, a team of researchers from the University of Michigan and Google developed a deep learning algorithm that could identify objects in images with an error rate of less than 1%.
2. Natural Language Processing: Deep learning has revolutionized natural language processing by enabling computers to understand and generate human-like language. For example, in 2016, a team of researchers from the University of Toronto developed a deep learning algorithm that could generate text as well as a human writer.
3. Speech Recognition: Deep learning has significantly improved speech recognition systems, allowing computers to transcribe spoken language with high accuracy. For example, in 2017, a team of researchers from Google developed a deep learning algorithm that could transcribe audio recordings with an error rate of less than 5%.
4. Game Playing: Deep learning has enabled computers to play games like Go and poker at a world-class level. In 2016, a team of researchers from Google developed a deep learning algorithm that could defeat a professional Go player in a tournament.

These breakthroughs have been achieved through the development of new algorithms, such as convolutional neural networks (CNNs) and recurrent neural networks (RNNs), which are designed to handle complex data structures like images and time-series data. Additionally, there has been significant advancements in the field of transfer learning, which allows deep learning models to learn from pre-trained models and adapt to new tasks with high accuracy.

Overall, the last decade has seen a remarkable increase in the capabilities of AI systems, thanks to the development of deep learning algorithms. These breakthroughs have opened up new possibilities for AI applications in various fields, including healthcare, finance, and transportation, among others.
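The examples above go through LangChain's ChatOllama, but the running `ollama serve` process also exposes a plain HTTP API that can be called with the Python standard library alone. A minimal sketch, assuming the default port 11434 and the `/api/generate` endpoint with `model`/`prompt`/`stream` fields; the network call is kept inside a function and not executed here, since it needs a live server:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama port

def build_payload(model: str, prompt: str) -> bytes:
    # "stream": False requests a single JSON response
    # instead of a stream of token chunks.
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def generate(model: str, prompt: str) -> str:
    # Requires `ollama serve` to be running locally.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

payload = json.loads(build_payload("llama2:7b-chat", "Hello"))
```

Calling `generate("llama2:7b-chat", "Hello")` with the server running returns the model's completion as a string.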

In closing

When handling private data within an organization, you may face requirements that make it difficult to use a closed-model API such as OpenAI's. In such situations, combining Ollama with LangChain should make it straightforward to build private applications on top of Llama-based open-source models.

https://github.com/langchain-ai/langchain/tree/master/templates/rag-chroma-private#rag-chroma-private
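The linked rag-chroma-private template combines Ollama with Chroma for retrieval. The retrieve-then-generate pattern itself can be sketched with the standard library alone; the word-overlap scoring below is a toy stand-in for the embedding search a real app would use, and the resulting prompt would be sent to the model as in the earlier examples:

```python
def retrieve(query, docs, k=1):
    # Toy relevance score: number of shared lowercase words.
    # A real private RAG app would use vector embeddings (e.g. via Chroma).
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query, context):
    # Instruct the model to answer only from the retrieved context.
    joined = "\n".join(context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "Ollama serves local LLMs over an HTTP API on port 11434.",
    "Chroma is an open-source embedding database.",
]
query = "How does Ollama serve models?"
top = retrieve(query, docs)
prompt = build_prompt(query, top)
```

Swapping `retrieve` for an embedding lookup and sending `prompt` to a local model keeps the whole pipeline on your own hardware.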
