一、Model invocation (generation)

1、ChatLanguageModel generation

The most commonly used interface for talking to an LLM (non-streaming). ChatLanguageModel is an abstract interface / base abstraction in LangChain4j that represents a conversational language model. It is a unified abstraction over all chat models (Chat Models), so a Java project can call a chat API such as OpenAI's conveniently without hand-writing HTTP requests. All chat-style models (OpenAI, Azure, Ollama, Moonshot, Qwen, etc.) implement this interface.

Common implementations:

(1) OpenAiChatModel: strictly speaking, the official LangChain4j OpenAiChatModel only supports OpenAI models by default, such as the GPT-3.5 and GPT-4 series. However, if a third-party service exposes an endpoint that is fully compatible with the OpenAI REST API, OpenAiChatModel can be used with it as well, for example siliconflow.
(2) AnthropicChatModel
(3) DisabledChatLanguageModel
(4) OllamaChatModel

```java
ChatLanguageModel model = OpenAiChatModel.builder()
        .apiKey("sk-xxx")
        .modelName("gpt-4o-mini")
        .build();

String reply = model.generate("Write a short poem about autumn");
System.out.println(reply);
```

2、StreamingChatLanguageModel streaming generation

The streaming output interface:

```java
StreamingChatLanguageModel streamModel = OpenAiStreamingChatModel.builder()
        .apiKey("sk-xxx")
        .modelName("gpt-4o-mini")
        .build();

streamModel.generate("Introduce LangChain4j", new StreamingResponseHandler<AiMessage>() {
    @Override
    public void onNext(String token) {
        System.out.print(token); // streaming output: print each token as it arrives
    }

    @Override
    public void onComplete(Response<AiMessage> response) {
        System.out.println("\nDone: " + response.content().text());
    }

    @Override
    public void onError(Throwable error) {
        error.printStackTrace();
    }
});
```

How do you stream the result to the frontend? One option is WebSocket:

```java
@ServerEndpoint("/chat")
public class ChatWebSocket {

    // 1. Build a streaming chat model (shared by all sessions)
    private static final StreamingChatLanguageModel MODEL = OpenAiStreamingChatModel.builder()
            .apiKey("YOUR_OPENAI_API_KEY")
            .modelName("gpt-3.5-turbo")
            .build();

    // 2. WebSocket endpoint: push each generated token to the frontend
    @OnMessage
    public void onMessage(Session session, String prompt) {
        MODEL.generate(prompt, new StreamingResponseHandler<AiMessage>() {
            @Override
            public void onNext(String token) {
                try {
                    // send every token to the frontend as soon as it is generated
                    session.getBasicRemote().sendText(token);
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }

            @Override
            public void onComplete(Response<AiMessage> response) {
                // generation finished; optionally notify the frontend
            }

            @Override
            public void onError(Throwable error) {
                error.printStackTrace();
            }
        });
    }
}
```

Version compatibility: newer versions of langchain4j (e.g. 1.0.0) contain breaking API changes. ChatLanguageModel was renamed to ChatModel, StreamingChatLanguageModel to StreamingChatModel, the generate method was renamed to chat, and the response structure changed as well.
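For reference, here is a minimal sketch of what the same non-streaming call looks like against the 1.0.x-style API. It assumes the package layout keeps ChatModel under dev.langchain4j.model.chat and that the OpenAI module's builder keeps the same apiKey/modelName options; check the release notes of the exact version you use.

```java
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

public class NewApiSketch {
    public static void main(String[] args) {
        // 1.0.x style: ChatLanguageModel -> ChatModel, generate(...) -> chat(...)
        ChatModel model = OpenAiChatModel.builder()
                .apiKey("sk-xxx")
                .modelName("gpt-4o-mini")
                .build();

        String reply = model.chat("Write a short poem about autumn");
        System.out.println(reply);
    }
}
```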
二、Demo

The demo below uses the free siliconflow models as an example; most of the code was generated by Cursor.

```
ai-demo/
├── pom.xml
├── src/
│   └── main/
│       ├── java/
│       │   └── com/
│       │       └── demo/
│       │           └── ai/
│       │               ├── AiDemoApplication.java
│       │               └── gateway/
│       │                   ├── controller/
│       │                   │   └── LlmGatewayController.java
│       │                   ├── model/
│       │                   │   ├── ChatRequest.java
│       │                   │   └── ChatResponse.java
│       │                   └── service/
│       │                       ├── impl/
│       │                       │   ├── DeepSeekProvider.java
│       │                       │   ├── OpenAIProvider.java
│       │                       │   └── SiliconFlowProvider.java
│       │                       ├── LlmGatewayService.java
│       │                       └── LlmProvider.java
│       └── resources/
│           └── application.properties
```

1、pom

```xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.demo</groupId>
    <artifactId>ai-demo</artifactId>
    <version>1.0.0</version>
    <packaging>jar</packaging>

    <name>AI Demo - LLM Gateway</name>
    <description>LLM Gateway service providing unified access to multiple LLM providers</description>

    <properties>
        <java.version>17</java.version>
        <maven.compiler.source>17</maven.compiler.source>
        <maven.compiler.target>17</maven.compiler.target>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <spring-boot.version>3.2.0</spring-boot.version>
        <langchain4j.version>0.29.1</langchain4j.version>
    </properties>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-dependencies</artifactId>
                <version>${spring-boot.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>

    <dependencies>
        <!-- Spring Boot Starter Web -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>

        <!-- Spring Boot Starter Validation -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-validation</artifactId>
        </dependency>

        <!-- LangChain4j Core -->
        <dependency>
            <groupId>dev.langchain4j</groupId>
            <artifactId>langchain4j</artifactId>
            <version>${langchain4j.version}</version>
        </dependency>

        <!-- LangChain4j OpenAI -->
        <dependency>
            <groupId>dev.langchain4j</groupId>
            <artifactId>langchain4j-open-ai</artifactId>
            <version>${langchain4j.version}</version>
        </dependency>

        <!-- LangChain4j Anthropic / Ollama (optional, not used in this demo) -->
        <!--
        <dependency>
            <groupId>dev.langchain4j</groupId>
            <artifactId>langchain4j-anthropic</artifactId>
            <version>${langchain4j.version}</version>
        </dependency>
        <dependency>
            <groupId>dev.langchain4j</groupId>
            <artifactId>langchain4j-ollama</artifactId>
            <version>${langchain4j.version}</version>
        </dependency>
        -->

        <!-- Jackson for JSON -->
        <dependency>
            <groupId>com.fasterxml.jackson.core</groupId>
            <artifactId>jackson-databind</artifactId>
        </dependency>

        <!-- Lombok -->
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>

        <!-- Spring Boot Configuration Processor -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-configuration-processor</artifactId>
            <optional>true</optional>
        </dependency>

        <!-- Spring Boot Test -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
                <version>${spring-boot.version}</version>
                <configuration>
                    <excludes>
                        <exclude>
                            <groupId>org.projectlombok</groupId>
                            <artifactId>lombok</artifactId>
                        </exclude>
                    </excludes>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.11.0</version>
                <configuration>
                    <source>17</source>
                    <target>17</target>
                    <encoding>UTF-8</encoding>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
```

2、Configuration file and startup class

```properties
server.port=8888
server.servlet.context-path=/aiDemo

# SiliconFlow
llm.siliconflow.api-key=sk-******
llm.siliconflow.base-url=https://api.siliconflow.cn/v1

#llm.openai.api-key=xxx
#llm.openai.base-url=https://api.openai.com/v1

#llm.anthropic.api-key=xxxx
#llm.anthropic.base-url=https://api.anthropic.com

#llm.ollama.base-url=http://localhost:11434
```

```java
package com.demo.ai;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class AiDemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(AiDemoApplication.class, args);
    }
}
```
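Before wiring everything into the gateway, it can be useful to verify the SiliconFlow credentials with a standalone call. This is a minimal sketch, not part of the generated project; it assumes the same key and base-url as in application.properties and the model name used later in the controller, and it works because SiliconFlow exposes an OpenAI-compatible endpoint.

```java
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

import java.time.Duration;

public class SiliconFlowSmokeTest {
    public static void main(String[] args) {
        ChatLanguageModel model = OpenAiChatModel.builder()
                .apiKey("sk-******")                      // same value as llm.siliconflow.api-key
                .baseUrl("https://api.siliconflow.cn/v1") // same value as llm.siliconflow.base-url
                .modelName("Qwen/Qwen2.5-72B-Instruct")
                .timeout(Duration.ofSeconds(60))
                .build();

        System.out.println(model.generate("Say hello in one sentence"));
    }
}
```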
3、Model routing

```java
package com.demo.ai.gateway.service;

import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

@Slf4j
@Service
public class LlmGatewayService {

    private final Map<String, LlmProvider> providers;

    @Autowired
    public LlmGatewayService(List<LlmProvider> providerList) {
        this.providers = providerList.stream()
                .collect(Collectors.toMap(
                        LlmProvider::getProviderName,
                        provider -> provider));
        log.info("Initialized LLM Gateway with {} providers: {}",
                providers.size(), providers.keySet());
    }

    /**
     * Route to the corresponding LLM provider.
     */
    public LlmProvider route(String providerName, String modelName) throws Exception {
        // If no provider is specified, try to detect it automatically
        if (providerName == null || providerName.isEmpty()) {
            providerName = detectProvider(modelName);
        }
        LlmProvider provider = providers.get(providerName);
        if (provider == null) {
            throw new IllegalArgumentException("Unknown provider: " + providerName);
        }
        return provider;
    }

    /**
     * Detect the provider automatically, based on the model name.
     */
    private String detectProvider(String model) {
        if (model == null) {
            throw new IllegalArgumentException("Model cannot be null");
        }
        for (LlmProvider provider : providers.values()) {
            if (provider.supports(model)) {
                return provider.getProviderName();
            }
        }
        throw new IllegalArgumentException(
                "Cannot detect provider for model: " + model + ". Please specify provider explicitly.");
    }

    /**
     * List all available providers.
     */
    public List<String> getAvailableProviders() {
        return providers.values().stream()
                .map(LlmProvider::getProviderName)
                .collect(Collectors.toList());
    }
}
```
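Spring injects every bean that implements LlmProvider into the constructor's List&lt;LlmProvider&gt;, so adding a new provider only requires another @Service class. The snippet below is purely illustrative usage of the routing; gatewayService stands for an injected LlmGatewayService, and the model name is the one used later in the controller.

```java
// Illustrative only: how a caller resolves a provider through the gateway
void routeExamples(LlmGatewayService gatewayService) throws Exception {
    // explicit provider name, no auto-detection needed
    LlmProvider byName = gatewayService.route("siliconflow", null);

    // no provider given: detectProvider() asks each registered provider's supports(model)
    LlmProvider byModel = gatewayService.route(null, "Qwen/Qwen2.5-72B-Instruct");

    // names of all registered providers
    System.out.println(gatewayService.getAvailableProviders());
}
```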
4、Model capability interface

```java
package com.demo.ai.gateway.service;

import com.demo.ai.gateway.model.ChatRequest;
import com.demo.ai.gateway.model.ChatResponse;

public interface LlmProvider {

    /**
     * Provider name.
     */
    String getProviderName();

    /**
     * Whether the given model is supported.
     */
    boolean supports(String model);

    /**
     * Send a chat request.
     */
    ChatResponse chat(ChatRequest request) throws Exception;

    /**
     * Streaming output.
     */
    ChatResponse streamChat(ChatRequest request) throws Exception;
}
```

Implementation:

```java
package com.demo.ai.gateway.service.impl;

import com.demo.ai.gateway.model.ChatRequest;
import com.demo.ai.gateway.model.ChatResponse;
import com.demo.ai.gateway.service.LlmProvider;
import dev.langchain4j.data.message.AiMessage;
import dev.langchain4j.model.StreamingResponseHandler;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.chat.StreamingChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import dev.langchain4j.model.openai.OpenAiStreamingChatModel;
import dev.langchain4j.model.output.Response;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

import java.time.Duration;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicReference;

@Slf4j
@Service
public class SiliconFlowProvider implements LlmProvider {

    @Value("${llm.siliconflow.api-key}")
    private String apiKey;

    @Value("${llm.siliconflow.base-url}")
    private String baseUrl;

    @Override
    public String getProviderName() {
        return "siliconflow";
    }

    @Override
    public boolean supports(String model) {
        // SiliconFlow hosts many models; adjust this check as needed
        return model != null && !model.isEmpty();
    }

    @Override
    public ChatResponse chat(ChatRequest request) throws Exception {
        ChatLanguageModel chatModel = OpenAiChatModel.builder()
                .apiKey(apiKey)
                .baseUrl(baseUrl)
                .timeout(Duration.ofSeconds(60))
                .modelName(request.getModelName())
                .build();

        String responseText = chatModel.generate(request.getMessage());
        return ChatResponse.builder().content(responseText).build();
    }

    @Override
    public ChatResponse streamChat(ChatRequest request) throws Exception {
        // Build the streaming chat model
        StreamingChatLanguageModel streamingModel = OpenAiStreamingChatModel.builder()
                .apiKey(apiKey)
                .baseUrl(baseUrl)
                .modelName(request.getModelName())
                .temperature(0.7)
                .timeout(Duration.ofSeconds(60))
                .build();

        // Collect the streamed response
        AtomicReference<StringBuilder> contentBuilder = new AtomicReference<>(new StringBuilder());
        CompletableFuture<Void> future = new CompletableFuture<>();

        // Generate the response as a stream
        streamingModel.generate(request.getMessage(), new StreamingResponseHandler<AiMessage>() {
            @Override
            public void onNext(String token) {
                // Append each token as it arrives
                contentBuilder.get().append(token);
                log.info("Received token: {}", token);
            }

            @Override
            public void onComplete(Response<AiMessage> response) {
                // Streaming finished
                log.info("Done: {}", response.content().text());
                future.complete(null);
            }

            @Override
            public void onError(Throwable error) {
                // An error occurred
                log.error("Streaming error", error);
                future.completeExceptionally(error);
            }
        });

        // Wait for the stream to complete
        future.get();

        // Build and return the full response
        String fullContent = contentBuilder.get().toString();
        return ChatResponse.builder()
                .content(fullContent)
                .build();
    }
}
```
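The OpenAIProvider and DeepSeekProvider classes from the project tree follow the same pattern and are not shown in this post. As an illustration only, here is a minimal sketch of what an OpenAI-backed provider could look like; the property keys mirror the commented-out entries in application.properties, and the supports() rule is an assumption, not the generated code.

```java
package com.demo.ai.gateway.service.impl;

import com.demo.ai.gateway.model.ChatRequest;
import com.demo.ai.gateway.model.ChatResponse;
import com.demo.ai.gateway.service.LlmProvider;
import dev.langchain4j.model.chat.ChatLanguageModel;
import dev.langchain4j.model.openai.OpenAiChatModel;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;

import java.time.Duration;

/**
 * Illustrative sketch only; not the actual OpenAIProvider generated for the demo.
 */
@Service
public class OpenAiProviderSketch implements LlmProvider {

    @Value("${llm.openai.api-key:}")
    private String apiKey;

    @Value("${llm.openai.base-url:https://api.openai.com/v1}")
    private String baseUrl;

    @Override
    public String getProviderName() {
        return "openai";
    }

    @Override
    public boolean supports(String model) {
        // Assumption: treat "gpt-*" model names as OpenAI models
        return model != null && model.startsWith("gpt-");
    }

    @Override
    public ChatResponse chat(ChatRequest request) throws Exception {
        ChatLanguageModel chatModel = OpenAiChatModel.builder()
                .apiKey(apiKey)
                .baseUrl(baseUrl)
                .timeout(Duration.ofSeconds(60))
                .modelName(request.getModelName())
                .build();
        return ChatResponse.builder().content(chatModel.generate(request.getMessage())).build();
    }

    @Override
    public ChatResponse streamChat(ChatRequest request) throws Exception {
        // Streaming would mirror SiliconFlowProvider.streamChat(); omitted in this sketch
        return chat(request);
    }
}
```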
5、Request/response DTOs

```java
package com.demo.ai.gateway.model;

import lombok.Data;

@Data
public class ChatRequest {

    // openai, anthropic, ollama, deepseek, siliconflow; this project defaults to siliconflow
    //@NotBlank(message = "Provider is required")
    private String provider;

    // the default is set in the controller code
    //@NotBlank(message = "Model is required")
    private String modelName;

    private String message;

    /*@NotNull(message = "Messages are required")
    private List<Message> messages;*/

    /*
    private Double temperature;
    private Integer maxTokens;
    private Map<String, Object> extraParams;
    */
}
```

```java
package com.demo.ai.gateway.model;

import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Data;
import lombok.NoArgsConstructor;

@Data
@Builder
@NoArgsConstructor
@AllArgsConstructor
public class ChatResponse {
    private String content;
}
```

6、Controller

```java
package com.demo.ai.gateway.controller;

import com.demo.ai.gateway.model.ChatRequest;
import com.demo.ai.gateway.model.ChatResponse;
import com.demo.ai.gateway.service.LlmGatewayService;
import jakarta.validation.Valid;
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@Slf4j
@RestController
@RequestMapping("/api/v1/llm")
public class LlmGatewayController {

    @Autowired
    private LlmGatewayService gatewayService;

    /**
     * Chat endpoint.
     */
    @PostMapping("/chat")
    public ResponseEntity<?> chat(@Valid @RequestBody ChatRequest request) {
        try {
            request.setProvider("siliconflow");
            request.setModelName("Qwen/Qwen2.5-72B-Instruct");
            ChatResponse response = gatewayService
                    .route(request.getProvider(), request.getModelName())
                    .chat(request);
            return ResponseEntity.ok(response);
        } catch (Exception e) {
            log.error("Invalid request: {}", e.getMessage());
        }
        return ResponseEntity.internalServerError().build();
    }

    @PostMapping("/streamChat")
    public ResponseEntity<?> streamChat(@Valid @RequestBody ChatRequest request) {
        try {
            request.setProvider("siliconflow");
            request.setModelName("Qwen/Qwen2.5-72B-Instruct");
            ChatResponse response = gatewayService
                    .route(request.getProvider(), request.getModelName())
                    .streamChat(request);
            return ResponseEntity.ok(response);
        } catch (Exception e) {
            log.error("Invalid request: {}", e.getMessage());
        }
        return ResponseEntity.internalServerError().build();
    }

    // Optional endpoints, commented out in the demo: provider list and health check
    /*
    @GetMapping("/providers")
    public ResponseEntity<Map<String, Object>> getProviders() {
        List<String> providers = gatewayService.getAvailableProviders();
        Map<String, Object> response = new HashMap<>();
        response.put("providers", providers);
        response.put("count", providers.size());
        return ResponseEntity.ok(response);
    }

    @GetMapping("/health")
    public ResponseEntity<Map<String, String>> health() {
        Map<String, String> response = new HashMap<>();
        response.put("status", "UP");
        response.put("service", "LLM Gateway");
        return ResponseEntity.ok(response);
    }
    */
}
```

7、Testing

ChatLanguageModel simple output: calling /chat returns the complete answer in a single response.

StreamingChatLanguageModel streaming output: for /streamChat I did not use WebSocket here; the streamed tokens are simply concatenated and returned as one response. The "streaming" effect is still visible in the backend logs, where each token is printed as it arrives.
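If the tokens should reach the browser incrementally rather than as one concatenated response, Server-Sent Events are a lighter-weight alternative to the WebSocket approach shown earlier. The sketch below is illustrative only: it assumes a controller method with apiKey and baseUrl injected the same way as in SiliconFlowProvider, and the endpoint name is made up.

```java
@PostMapping("/streamChatSse")
public SseEmitter streamChatSse(@RequestBody ChatRequest request) {
    SseEmitter emitter = new SseEmitter(0L); // 0 = no timeout

    StreamingChatLanguageModel streamingModel = OpenAiStreamingChatModel.builder()
            .apiKey(apiKey)       // assumed to be injected via @Value, as in SiliconFlowProvider
            .baseUrl(baseUrl)
            .modelName("Qwen/Qwen2.5-72B-Instruct")
            .build();

    streamingModel.generate(request.getMessage(), new StreamingResponseHandler<AiMessage>() {
        @Override
        public void onNext(String token) {
            try {
                emitter.send(token); // push each token to the client as it is generated
            } catch (IOException e) {
                emitter.completeWithError(e);
            }
        }

        @Override
        public void onComplete(Response<AiMessage> response) {
            emitter.complete();
        }

        @Override
        public void onError(Throwable error) {
            emitter.completeWithError(error);
        }
    });

    return emitter;
}
```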