# Responses API SDK Usage Guide

The bella-openai4j library fully supports the OpenAI Response API, in both streaming (SSE) and non-streaming modes.
## Quick Start

### Maven Dependency
```xml
<dependency>
    <groupId>top.bella</groupId>
    <artifactId>openai-service</artifactId>
    <version>${bella-openai.version}</version>
</dependency>
```
To find the latest version number:
- Browse the Maven Central repository: https://repo1.maven.org/maven2/top/bella/openai-service/
- The most recent version directory is the latest available version
- Versions after 0.23.83 support the Response API client
### Initialization
```java
import com.theokanning.openai.service.OpenAiService;

// Read the API key from the environment
OpenAiService service = new OpenAiService();

// Or pass the API key directly
OpenAiService service = new OpenAiService("your-api-key");

// Customize the request timeout
OpenAiService service = new OpenAiService(Duration.ofSeconds(60));
```
## Non-Streaming Mode

### Basic Usage
```java
import com.theokanning.openai.response.*;

// Build the request
CreateResponseRequest request = CreateResponseRequest.builder()
        .model("gpt-4o-mini")
        .input(InputValue.of("Hello, how are you?"))
        .build();

// Send the request and wait for the full response
Response response = service.createResponse(request);

// Inspect the response
System.out.println("Response ID: " + response.getId());
System.out.println("Status: " + response.getStatus());
System.out.println("Output: " + response.getOutput());
```
### Request with Instructions
```java
CreateResponseRequest request = CreateResponseRequest.builder()
        .model("gpt-4o-mini")
        .input(InputValue.of("What is 2+2?"))
        .instructions("You are a helpful math teacher. Be concise.")
        .temperature(0.7)
        .maxOutputTokens(100)
        .build();

Response response = service.createResponse(request);
```
## Conversation Context

### Using previous_response_id
```java
// First turn
CreateResponseRequest request1 = CreateResponseRequest.builder()
        .model("gpt-4o-mini")
        .input(InputValue.of("My name is Alice"))
        .build();
Response response1 = service.createResponse(request1);

// Second turn - reference the previous response
String previousId = response1.getId();
CreateResponseRequest request2 = CreateResponseRequest.builder()
        .model("gpt-4o-mini")
        .input(InputValue.of("What is my name?"))
        .previousResponseId(previousId)
        .build();
Response response2 = service.createResponse(request2);
```
### Using conversationId
```java
// First turn
CreateResponseRequest request1 = CreateResponseRequest.builder()
        .model("gpt-4o-mini")
        .input(InputValue.of("My name is Alice"))
        .build();
Response response1 = service.createResponse(request1);

// Second turn - continue via the conversation ID
String conversationId = response1.getConversation().getStringValue();
CreateResponseRequest request2 = CreateResponseRequest.builder()
        .model("gpt-4o-mini")
        .input(InputValue.of("What is my name?"))
        .conversation(ConversationValue.of(conversationId))
        .build();
Response response2 = service.createResponse(request2);
```
### Retrieving an Existing Response

```java
String responseId = "resp_xxxxx";
Response response = service.getResponse(responseId);
```
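For long-running requests it can be useful to poll `getResponse` until the response reaches a terminal state. A minimal sketch, using only the `getResponse`/`getStatus` methods shown above; the concrete status strings (`"queued"`, `"in_progress"`) are assumptions and should be checked against the API's actual values:

```java
// Hypothetical polling loop - status values are assumptions, not SDK constants.
String responseId = "resp_xxxxx";
Response response = service.getResponse(responseId);
while ("queued".equals(response.getStatus()) || "in_progress".equals(response.getStatus())) {
    Thread.sleep(1000); // back off between polls
    response = service.getResponse(responseId);
}
System.out.println("Final status: " + response.getStatus());
```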
### Adding Metadata
```java
CreateResponseRequest request = CreateResponseRequest.builder()
        .model("gpt-4o-mini")
        .input(InputValue.of("Hello"))
        .metadata(Map.of(
                "user_id", "user123",
                "session", "session456"
        ))
        .build();

Response response = service.createResponse(request);
```
## Storage Control

The Response API supports two storage modes, controlled by the `store` parameter:

### Store Mode (default, `store = true`)

Conversation history is persisted to the database; the response includes a `conversation` field that can be used to continue the conversation:
```java
CreateResponseRequest request = CreateResponseRequest.builder()
        .model("gpt-4o-mini")
        .input(InputValue.of("What's the weather like in Beijing?"))
        .store(true) // default value, may be omitted
        .build();
Response response = service.createResponse(request);

// The response carries a conversation
String conversationId = response.getConversation().getStringValue();
System.out.println("Conversation ID: " + conversationId);
```
Suitable for:
- Multi-turn conversational applications
- Scenarios that need to review history
- Session management and analytics
### Non-Store Mode (`store = false`)

The conversation is not persisted; suitable for one-off queries and privacy-sensitive scenarios:
```java
CreateResponseRequest request = CreateResponseRequest.builder()
        .model("gpt-4o-mini")
        .input(InputValue.of("Translate into English: 你好世界"))
        .store(false) // do not persist conversation history
        .build();
Response response = service.createResponse(request);

// response.getConversation() will return null
```
Suitable for:
- One-off queries (translation, format conversion, etc.)
- Privacy-sensitive scenarios
- Clients that manage context themselves
**Important limitation:** non-store mode cannot continue a conversation via `previousResponseId` or `conversation`; doing so throws an exception:
```java
// ❌ Wrong: non-store mode cannot continue a conversation
CreateResponseRequest request = CreateResponseRequest.builder()
        .model("gpt-4o-mini")
        .input(InputValue.of("Continue the topic above"))
        .previousResponseId("resp_xxx") // error: conflicts with store=false
        .store(false)
        .build();
// Throws: "store can not be set `false` when you request with previous_response_id or conversation"

// ✅ Correct: store mode can continue a conversation
CreateResponseRequest request = CreateResponseRequest.builder()
        .model("gpt-4o-mini")
        .input(InputValue.of("Continue the topic above"))
        .previousResponseId("resp_xxx")
        .store(true) // must be true
        .build();
```
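If you cannot rule out the invalid combination at build time, the call can be guarded with a try/catch. A minimal sketch, assuming the library surfaces server-side validation errors as `OpenAiHttpException`, as the upstream openai-java SDK does; the actual exception type in bella-openai4j may differ:

```java
import com.theokanning.openai.OpenAiHttpException; // assumed exception type

try {
    Response response = service.createResponse(request);
} catch (OpenAiHttpException e) {
    // e.g. "store can not be set `false` when you request with previous_response_id or conversation"
    System.err.println("Request rejected: " + e.getMessage());
}
```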
### Client-Managed Conversation Example

If the client manages the conversation history itself, use non-store mode:
```java
List<Message> conversationHistory = new ArrayList<>();

// First turn
conversationHistory.add(createUserMessage("First question"));
CreateResponseRequest request1 = CreateResponseRequest.builder()
        .model("gpt-4o-mini")
        .input(InputValue.of(conversationHistory))
        .store(false)
        .build();
Response response1 = service.createResponse(request1);
conversationHistory.add(extractAssistantMessage(response1));

// Second turn
conversationHistory.add(createUserMessage("Second question"));
CreateResponseRequest request2 = CreateResponseRequest.builder()
        .model("gpt-4o-mini")
        .input(InputValue.of(conversationHistory))
        .store(false) // still not stored
        .build();
Response response2 = service.createResponse(request2);
```
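The `createUserMessage` and `extractAssistantMessage` helpers above are application code, not part of the SDK. One possible sketch, assuming a `Message` type with role and content setters; the actual message classes and the shape of `response.getOutput()` depend on the SDK version:

```java
// Hypothetical helpers - adapt to the SDK's actual message classes.
static Message createUserMessage(String text) {
    Message message = new Message();
    message.setRole("user");
    message.setContent(text);
    return message;
}

static Message extractAssistantMessage(Response response) {
    // Pull the assistant's text out of the response output and
    // re-wrap it as a message for the next request.
    Message message = new Message();
    message.setRole("assistant");
    message.setContent(String.valueOf(response.getOutput()));
    return message;
}
```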
## Streaming Mode

### Basic Stream Handling
```java
import io.reactivex.Flowable;
import com.theokanning.openai.service.response_stream.ResponseSSE;

CreateResponseRequest request = CreateResponseRequest.builder()
        .model("gpt-4o-mini")
        .input(InputValue.of("Write a poem about coding"))
        .stream(true) // enable streaming
        .build();

// Obtain the streaming response
Flowable<ResponseSSE> stream = service.createResponseStream(request);

// Subscribe and handle events
stream.subscribe(
        sse -> {
            System.out.println("Event type: " + sse.getType());
            BaseStreamEvent event = sse.getEvent();
            // handle the event
        },
        error -> System.err.println("Error: " + error.getMessage()),
        () -> System.out.println("Stream completed")
);
```
### Using ResponseStreamManager

`ResponseStreamManager` provides higher-level stream handling, including event dispatch and text accumulation.

#### Asynchronous Mode
```java
import com.theokanning.openai.service.response_stream.*;
import com.theokanning.openai.response.stream.*;

CreateResponseRequest request = CreateResponseRequest.builder()
        .model("gpt-4o-mini")
        .input(InputValue.of("Explain quantum computing"))
        .stream(true)
        .build();

// Create the event handler
ResponseEventHandler handler = new ResponseEventHandler() {
    @Override
    public void onResponseCreated(ResponseCreatedEvent event) {
        System.out.println("Response created");
    }

    @Override
    public void onOutputTextDelta(OutputTextDeltaEvent event) {
        // Print text deltas as they arrive
        System.out.print(event.getDelta());
    }

    @Override
    public void onResponseCompleted(ResponseCompletedEvent event) {
        System.out.println("\n\nResponse completed!");
    }

    @Override
    public void onError(Throwable error) {
        System.err.println("Error: " + error.getMessage());
    }
};

// Start the asynchronous stream manager
ResponseStreamManager manager = ResponseStreamManager.start(
        service.createResponseStream(request),
        handler
);

// Wait for completion
manager.waitForCompletion();

// Read the accumulated text
String fullText = manager.getAccumulatedText().orElse("");
System.out.println("Final text: " + fullText);
```
#### Synchronous Mode

```java
// Synchronous mode blocks until the stream has been fully processed
ResponseStreamManager manager = ResponseStreamManager.syncStart(
        service.createResponseStream(request),
        handler
);

// By this point the stream has been fully processed
System.out.println("Stream completed synchronously");
String fullText = manager.getAccumulatedText().orElse("");
```
#### Simplified Event Handling

If you only need the text output:

```java
StringBuilder output = new StringBuilder();
ResponseStreamManager.start(
        service.createResponseStream(request),
        new ResponseEventHandler() {
            @Override
            public void onOutputTextDelta(OutputTextDeltaEvent event) {
                output.append(event.getDelta());
            }
        }
).waitForCompletion();

System.out.println("Complete output: " + output.toString());
```
## Advanced Usage

### Tool Calls (Function Calling)
```java
import com.theokanning.openai.response.tool.definition.*;

// Define the tool
ToolDefinition weatherTool = ToolDefinition.builder()
        .type("function")
        .function(FunctionDefinition.builder()
                .name("get_weather")
                .description("Get the current weather")
                .parameters(/* ... */)
                .build())
        .build();

CreateResponseRequest request = CreateResponseRequest.builder()
        .model("gpt-4o-mini")
        .input(InputValue.of("What's the weather in Tokyo?"))
        .tools(List.of(weatherTool))
        .build();

Response response = service.createResponse(request);
```
### Reasoning Model Configuration
```java
CreateResponseRequest request = CreateResponseRequest.builder()
        .model("o1-preview")
        .input(InputValue.of("Solve this complex problem..."))
        .reasoning(CreateResponseRequest.ReasoningConfig.builder()
                .effort("high")
                .summary("detailed")
                .build())
        .stream(true) // required when using createResponseStream below
        .build();

// Stream the reasoning output
ResponseStreamManager.start(
        service.createResponseStream(request),
        new ResponseEventHandler() {
            @Override
            public void onReasoningTextDelta(ReasoningTextDeltaEvent event) {
                System.out.print("[Reasoning] " + event.getDelta());
            }

            @Override
            public void onOutputTextDelta(OutputTextDeltaEvent event) {
                System.out.print("[Output] " + event.getDelta());
            }
        }
).waitForCompletion();
```
### Response Format Control
```java
import com.theokanning.openai.completion.chat.ChatResponseFormat;

CreateResponseRequest request = CreateResponseRequest.builder()
        .model("gpt-4o-mini")
        .input(InputValue.of("Generate a JSON user profile"))
        .text(CreateResponseRequest.TextConfig.builder()
                .format(ChatResponseFormat.JSON_OBJECT)
                .verbosity("detailed")
                .build())
        .build();

Response response = service.createResponse(request);
```
### Truncation Strategy

```java
CreateResponseRequest request = CreateResponseRequest.builder()
        .model("gpt-4o-mini")
        .input(InputValue.of("Very long input..."))
        .truncation("auto") // or "disabled"
        .maxOutputTokens(1000)
        .build();

Response response = service.createResponse(request);
```