
Introducing LangChain4j to simplify LLM integration into Java applications

Author: David Pilato, from Elastic

The LangChain4j framework was created in 2023 with the following goal:

The goal of LangChain4j is to simplify integrating LLMs into Java applications.

LangChain4j offers a standard way to:

  • create embeddings (vectors) from a given content, e.g. text
  • store embeddings in an embedding store
  • search for similar vectors in the embedding store
  • discuss with LLMs
  • use a chat memory to remember the context of a discussion with an LLM

This list is not exhaustive, and the LangChain4j community keeps implementing new features.
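To give a taste of the first three bullets, here is a minimal sketch (not from this article) using LangChain4j's in-memory embedding store. It assumes the langchain4j and langchain4j-embeddings-all-minilm-l6-v2 artifacts are on the classpath; class and method names are taken from the LangChain4j API and may evolve between versions:

```java
// Sketch only: AllMiniLmL6V2EmbeddingModel runs a small local embedding model
EmbeddingModel embeddingModel = new AllMiniLmL6V2EmbeddingModel();
EmbeddingStore<TextSegment> store = new InMemoryEmbeddingStore<>();

// 1. Create an embedding (vector) from a given content (text)
TextSegment segment = TextSegment.from("Thomas Pesquet is a French astronaut.");
Embedding embedding = embeddingModel.embed(segment).content();

// 2. Store the embedding in the embedding store
store.add(embedding, segment);

// 3. Search the embedding store for similar vectors
Embedding query = embeddingModel.embed("Who flew to the ISS?").content();
EmbeddingSearchResult<TextSegment> result = store.search(
    EmbeddingSearchRequest.builder().queryEmbedding(query).maxResults(1).build());
```

The rest of this article focuses on the last two bullets: talking to LLMs and chat memory.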

This article covers the first main parts of the framework.

Adding LangChain4j OpenAI to our project

As with any Java project, it is just a matter of dependencies. Here we will use Maven, but the same could be achieved with any other dependency manager.

As a first step for the project we want to build here, we will use OpenAI, so we just need to add the langchain4j-open-ai artifact:

<properties>
  <langchain4j.version>0.34.0</langchain4j.version>
</properties>

<dependencies>
  <dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-open-ai</artifactId>
    <version>${langchain4j.version}</version>
  </dependency>
</dependencies>

For the rest of the code, we will use our own API key (which you can get by signing up for an OpenAI account), or the key provided by the LangChain4j project for demo purposes only:

static String getOpenAiApiKey() {
  // API_KEY_ENV_NAME holds the name of the environment variable to read,
  // defined as a constant elsewhere in the class
  String apiKey = System.getenv(API_KEY_ENV_NAME);
  if (apiKey == null || apiKey.isEmpty()) {
    Logger.warn("Please provide your own key instead using [{}] env variable", API_KEY_ENV_NAME);
    return "demo";
  }
  return apiKey;
}

We can now create an instance of a ChatLanguageModel:

ChatLanguageModel model = OpenAiChatModel.withApiKey(getOpenAiApiKey());

And finally we can ask a simple question and get the answer:

String answer = model.generate("Who is Thomas Pesquet?");
Logger.info("Answer is: {}", answer);

The answer could be something like:

Thomas Pesquet is a French aerospace engineer, pilot, and European Space Agency astronaut.
He was selected as a member of the European Astronaut Corps in 2009 and has since completed 
two space missions to the International Space Station, including serving as a flight engineer 
for Expedition 50/51 in 2016-2017. Pesquet is known for his contributions to scientific 
research and outreach activities during his time in space.

If you want to run this code, have a look at the Step1AiChatTest.java class.

Providing more context

Let's add the langchain4j artifact:

<dependency>
  <groupId>dev.langchain4j</groupId>
  <artifactId>langchain4j</artifactId>
  <version>${langchain4j.version}</version>
</dependency>

It provides a toolset that helps us build more advanced LLM integrations for our assistant. Here we will create an Assistant interface exposing a chat method, which will call behind the scenes the ChatLanguageModel we defined earlier:

interface Assistant {
  String chat(String userMessage);
}

We just have to ask the LangChain4j AiServices class to build an instance for us:

Assistant assistant = AiServices.create(Assistant.class, model);

And then call the chat(String) method:

String answer = assistant.chat("Who is Thomas Pesquet?");
Logger.info("Answer is: {}", answer);

This behaves the same as before. So why did we change the code? First, it is more elegant, but more importantly, you can now give some instructions to the LLM with a simple annotation:

interface Assistant {
  @SystemMessage("Please answer in a funny way.")
  String chat(String userMessage);
}

This now gives:

Ah, Thomas Pesquet is actually a super secret spy disguised as an astronaut! 
He's out there in space fighting aliens and saving the world one spacewalk at a time. 
Or maybe he's just a really cool French astronaut who has been to the International 
Space Station. But my spy theory is much more exciting, don't you think?

If you want to run this code, have a look at the Step2AssistantTest.java class.
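Beyond a fixed @SystemMessage, AiServices can also fill prompt templates from method parameters with the @UserMessage and @V annotations ({{it}} being the implicit variable name when the method has a single parameter). A hedged sketch, with a hypothetical whoIs method name:

```java
interface Assistant {
  @SystemMessage("Please answer in a funny way.")
  // {{it}} is replaced with the value of the single method parameter
  @UserMessage("Who is {{it}}? Answer in one sentence.")
  String whoIs(String name);
}
```

It would then be called as assistant.whoIs("Thomas Pesquet").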

Switching to another LLM

We can use the fantastic Ollama project, which helps run an LLM locally on your machine.

Let's add the langchain4j-ollama artifact:

<dependency>
  <groupId>dev.langchain4j</groupId>
  <artifactId>langchain4j-ollama</artifactId>
  <version>${langchain4j.version}</version>
</dependency>

Since we are running the sample code with tests, let's add Testcontainers to our project:

<dependency>
  <groupId>org.testcontainers</groupId>
  <artifactId>ollama</artifactId>
  <version>1.20.1</version>
  <scope>test</scope>
</dependency>

We can now start/stop the Docker container:

static String MODEL_NAME = "mistral";
static String DOCKER_IMAGE_NAME = "langchain4j/ollama-" + MODEL_NAME + ":latest";

static OllamaContainer ollama = new OllamaContainer(
  DockerImageName.parse(DOCKER_IMAGE_NAME).asCompatibleSubstituteFor("ollama/ollama"));

@BeforeAll
public static void setup() {
  ollama.start();
}

@AfterAll
public static void teardown() {
  ollama.stop();
}

We "just" have to change the model object to an OllamaChatModel instead of the OpenAiChatModel we used before:

OllamaChatModel model = OllamaChatModel.builder()
  .baseUrl(ollama.getEndpoint())
  .modelName(MODEL_NAME)
  .build();
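The builder also accepts optional tuning settings. For example (a sketch; the parameter names come from the OllamaChatModel builder and the values shown are arbitrary):

```java
OllamaChatModel model = OllamaChatModel.builder()
  .baseUrl(ollama.getEndpoint())
  .modelName(MODEL_NAME)
  .temperature(0.0)               // lower temperature: more deterministic answers
  .timeout(Duration.ofMinutes(5)) // local inference can be slow on first calls
  .build();
```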

Note that pulling the image and its model might take some time, but after a while you get the answer:

Oh, Thomas Pesquet, the man who single-handedly keeps the French space program running 
while sipping on his crisp rosé and munching on a baguette! He's our beloved astronaut 
with an irresistible accent that makes us all want to learn French just so we can 
understand him better. When he's not floating in space, he's probably practicing his 
best "je ne sais quoi" face for the next family photo. Vive le Thomas Pesquet! 
🚀🌍🇫🇷 #FrenchSpaceHero

A better memory

If we ask multiple questions, by default the system does not remember the previous questions and answers. So if we ask "When was he born?" after the first question, our application answers:

Oh, you're asking about this legendary figure from history, huh? Well, let me tell 
you a hilarious tale! He was actually born on Leap Year's Day, but only every 400 
years! So, do the math... if we count backwards from 2020 (which is also a leap year), 
then he was born in... *drumroll please* ...1600! Isn't that a hoot? But remember 
folks, this is just a joke, and historical records may vary.

That is nonsense. Instead, we should use a Chat Memory:

ChatMemory chatMemory = MessageWindowChatMemory.withMaxMessages(10);
Assistant assistant = AiServices.builder(Assistant.class)
  .chatLanguageModel(model)
  .chatMemory(chatMemory)
  .build();
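With the memory wired in, a follow-up question can refer back to the previous exchange, for example:

```java
String first = assistant.chat("Who is Thomas Pesquet?");
// "he" now resolves against the previous messages kept in the chat memory
String followUp = assistant.chat("When was he born?");
Logger.info("Answer is: {}", followUp);
```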

Running the same question now gives a meaningful answer:

Oh, Thomas Pesquet, the man who was probably born before sliced bread but after dinosaurs! 
You know, around the time when people started putting wheels on suitcases and calling it 
a revolution. So, roughly speaking, he came into this world somewhere in the late 70s or 
early 80s, give or take a year or two - just enough time for him to grow up, become an 
astronaut, and make us all laugh with his space-aged antics! Isn't that a hoot? 
*laughs maniacally*

Conclusion

In the next post, we will see how we can ask questions about our private dataset using Elasticsearch as an embedding store. This will give us a way to bring our application search to a whole new level.

Ready to try this out on your own? Start a free trial.
Elasticsearch has integrations for tools like LangChain, Cohere and more. Join our advanced semantic search webinar to build your next GenAI app!

Original article: LangChain4j: Simple LLM integration into Java apps — Search Labs


Source: https://blog.csdn.net/UbuntuTouch/article/details/142639007
