How to apply streaming in azure openai dotnet web application?

2024-09-06 04:44

This article introduces "How to apply streaming in azure openai dotnet web application?" and is intended as a reference for developers facing the same problem.

Background:

I want to create a Web API backend that streams OpenAI completion responses.

"我想创建一个 Web API 后端,用于流式传输 OpenAI 的完成响应。"

How can I apply the following solution to a Web API action in a controller?

"如何将以下解决方案应用到控制器中的 Web API 操作?"

var client = new OpenAIClient(nonAzureOpenAIApiKey, new OpenAIClientOptions());
var chatCompletionsOptions = new ChatCompletionsOptions()
{
    DeploymentName = "gpt-3.5-turbo", // Use DeploymentName for "model" with non-Azure clients
    Messages =
    {
        new ChatRequestSystemMessage("You are a helpful assistant. You will talk like a pirate."),
        new ChatRequestUserMessage("Can you help me?"),
        new ChatRequestAssistantMessage("Arrrr! Of course, me hearty! What can I do for ye?"),
        new ChatRequestUserMessage("What's the best way to train a parrot?"),
    }
};

await foreach (StreamingChatCompletionsUpdate chatUpdate in client.GetChatCompletionsStreaming(chatCompletionsOptions))
{
    if (chatUpdate.Role.HasValue)
    {
        Console.Write($"{chatUpdate.Role.Value.ToString().ToUpperInvariant()}: ");
    }
    if (!string.IsNullOrEmpty(chatUpdate.ContentUpdate))
    {
        Console.Write(chatUpdate.ContentUpdate);
    }
}
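The snippet above creates the client with the non-Azure constructor (OpenAI API key only), as the inline comment notes. Since the question is about Azure OpenAI, it is worth showing the Azure form as well: the client is built from your resource endpoint and key, and DeploymentName must be the name of a model deployment rather than an OpenAI model id. A minimal sketch, where the endpoint, key, and deployment name are placeholders for your own resource:

using Azure;
using Azure.AI.OpenAI;

// Placeholders: substitute your own Azure OpenAI endpoint, key, and deployment name.
var azureClient = new OpenAIClient(
    new Uri("https://YOUR-RESOURCE-NAME.openai.azure.com/"),
    new AzureKeyCredential("YOUR-AZURE-OPENAI-KEY"));

var azureOptions = new ChatCompletionsOptions()
{
    // With an Azure client this is your deployment name, not a model id like "gpt-3.5-turbo".
    DeploymentName = "my-gpt-35-turbo-deployment",
    Messages = { new ChatRequestUserMessage("Can you help me?") }
};

The streaming loop itself is identical; only the client construction changes.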

Solution:

You can simply wrap your code inside a controller action:

"您可以简单地将代码包裹在控制器内。"

using Azure.AI.OpenAI; // OpenAIClient, ChatCompletionsOptions, StreamingChatCompletionsUpdate
using Microsoft.AspNetCore.Mvc;
using System.Collections.Generic;
using System.Threading.Tasks;

[ApiController]
[Route("[controller]")]
public class ChatController : ControllerBase
{
    [HttpGet]
    public async Task<ActionResult<List<string>>> GetChatCompletions()
    {
        // nonAzureOpenAIApiKey is your OpenAI API key, supplied from configuration elsewhere.
        var client = new OpenAIClient(nonAzureOpenAIApiKey, new OpenAIClientOptions());
        var chatCompletionsOptions = new ChatCompletionsOptions()
        {
            DeploymentName = "gpt-3.5-turbo",
            Messages =
            {
                new ChatRequestSystemMessage("You are a helpful assistant. You will talk like a pirate."),
                new ChatRequestUserMessage("Can you help me?"),
                new ChatRequestAssistantMessage("Arrrr! Of course, me hearty! What can I do for ye?"),
                new ChatRequestUserMessage("What's the best way to train a parrot?"),
            }
        };

        var responses = new List<string>();
        await foreach (StreamingChatCompletionsUpdate chatUpdate in client.GetChatCompletionsStreaming(chatCompletionsOptions))
        {
            if (chatUpdate.Role.HasValue)
            {
                responses.Add($"{chatUpdate.Role.Value.ToString().ToUpperInvariant()}: ");
            }
            if (!string.IsNullOrEmpty(chatUpdate.ContentUpdate))
            {
                responses.Add(chatUpdate.ContentUpdate);
            }
        }
        return Ok(responses);
    }
}
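With the default [Route("[controller]")] template, the action above is exposed at GET /chat. As a quick, hypothetical smoke test (the base address below is an assumption about your hosting setup, not part of the original answer):

using System;
using System.Net.Http;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // Assumed host/port; adjust to wherever the API is running.
        using var http = new HttpClient { BaseAddress = new Uri("https://localhost:5001") };

        // ChatController with [Route("[controller]")] maps to /chat.
        string json = await http.GetStringAsync("/chat");
        Console.WriteLine(json); // a JSON array of role markers and content fragments
    }
}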

If you don't want to hardcode the messages and would rather pass them in the request body, you can do something like this:

"如果您不想将消息硬编码并作为主体传递,那么您可以这样做"

using Azure.AI.OpenAI;
using Microsoft.AspNetCore.Mvc;
using System.Collections.Generic;
using System.Threading.Tasks;

[ApiController]
[Route("[controller]")]
public class ChatController : ControllerBase
{
    public class ChatRequest
    {
        public List<string> Messages { get; set; }
    }

    [HttpPost]
    public async Task<ActionResult<List<string>>> PostChatCompletions([FromBody] ChatRequest request)
    {
        var client = new OpenAIClient(nonAzureOpenAIApiKey, new OpenAIClientOptions());
        var chatCompletionsOptions = new ChatCompletionsOptions()
        {
            DeploymentName = "gpt-3.5-turbo"
        };

        // Messages is a collection property on ChatCompletionsOptions; populate it
        // from the request body instead of hardcoding the conversation.
        foreach (var message in request.Messages)
        {
            chatCompletionsOptions.Messages.Add(new ChatRequestUserMessage(message));
        }

        var responses = new List<string>();
        await foreach (StreamingChatCompletionsUpdate chatUpdate in client.GetChatCompletionsStreaming(chatCompletionsOptions))
        {
            if (chatUpdate.Role.HasValue)
            {
                responses.Add($"{chatUpdate.Role.Value.ToString().ToUpperInvariant()}: ");
            }
            if (!string.IsNullOrEmpty(chatUpdate.ContentUpdate))
            {
                responses.Add(chatUpdate.ContentUpdate);
            }
        }
        return Ok(responses);
    }
}
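For completeness, here is a hypothetical client-side call to this POST action (again, the base address is an assumption). ASP.NET Core's JSON model binding is case-insensitive, so a camelCase messages array binds to ChatRequest.Messages:

using System;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // Assumed host/port; adjust to wherever the API is running.
        using var http = new HttpClient { BaseAddress = new Uri("https://localhost:5001") };

        // The anonymous object serializes to { "messages": [ ... ] }, matching ChatRequest.
        var response = await http.PostAsJsonAsync("/chat", new
        {
            messages = new[] { "What's the best way to train a parrot?" }
        });

        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}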

Remember that the above implementation does not stream the response to the client. It waits until all of the chat completion updates have been received from the OpenAI API and then sends them to the client in a single response.

"请记住,上述 API 实现不支持流式响应。它会等待从 OpenAI API 接收到所有聊天完成后,再将它们一次性发送给客户端。"

Streaming responses to the client as they are received from the OpenAI API would require a different approach. This could be achieved using Server-Sent Events (SSE) or a similar technology, but it's important to note that not all clients and network environments support these technologies.

"将从 OpenAI API 接收到的响应流式传输给客户端需要采用不同的方法。这可以通过使用服务器发送事件 (SSE) 或类似技术来实现,但需要注意的是,并非所有客户端和网络环境都支持这些技术。"

Here's a simplified example of how you could implement this using Server-Sent Events in ASP.NET Core:

"以下是一个使用服务器发送事件 (SSE) 在 ASP.NET Core 中实现此功能的简化示例:"

[HttpPost]
public async Task PostChatCompletions([FromBody] ChatRequest request)
{
    var client = new OpenAIClient(nonAzureOpenAIApiKey, new OpenAIClientOptions());
    var chatCompletionsOptions = new ChatCompletionsOptions()
    {
        DeploymentName = "gpt-3.5-turbo"
    };
    foreach (var message in request.Messages)
    {
        chatCompletionsOptions.Messages.Add(new ChatRequestUserMessage(message));
    }

    // Mark the response as a Server-Sent Events stream.
    Response.Headers.Add("Content-Type", "text/event-stream");

    await foreach (StreamingChatCompletionsUpdate chatUpdate in client.GetChatCompletionsStreaming(chatCompletionsOptions))
    {
        if (chatUpdate.Role.HasValue)
        {
            await Response.WriteAsync($"data: {chatUpdate.Role.Value.ToString().ToUpperInvariant()}: \n\n");
        }
        if (!string.IsNullOrEmpty(chatUpdate.ContentUpdate))
        {
            await Response.WriteAsync($"data: {chatUpdate.ContentUpdate}\n\n");
        }
        // Depending on server-side buffering, you may also want to flush after each write:
        // await Response.Body.FlushAsync();
    }
}
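On the consuming side, a browser would typically use EventSource, but the stream can also be read from .NET. A rough sketch of a console client that prints the data: lines as they arrive (the endpoint and base address are assumptions, not part of the original answer):

using System;
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class SseClient
{
    static async Task Main()
    {
        // Assumed host/port; adjust to wherever the API is running.
        using var http = new HttpClient { BaseAddress = new Uri("https://localhost:5001") };

        var request = new HttpRequestMessage(HttpMethod.Post, "/chat")
        {
            Content = new StringContent(
                "{\"messages\":[\"What's the best way to train a parrot?\"]}",
                Encoding.UTF8,
                "application/json")
        };

        // ResponseHeadersRead lets us start reading before the whole body has arrived.
        using var response = await http.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
        response.EnsureSuccessStatusCode();

        using var stream = await response.Content.ReadAsStreamAsync();
        using var reader = new StreamReader(stream);

        string line;
        while ((line = await reader.ReadLineAsync()) != null)
        {
            if (line.StartsWith("data: "))
            {
                Console.Write(line.Substring("data: ".Length)); // print each token as it streams in
            }
        }
    }
}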
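If you would rather not hand-write the SSE format, an alternative sketch (not from the original answer) is to return IAsyncEnumerable<string> from the action. Since .NET 6, MVC streams IAsyncEnumerable results with System.Text.Json instead of buffering them, although the client then receives a growing JSON array rather than SSE events:

[HttpPost("stream")]
public async IAsyncEnumerable<string> StreamChatCompletions([FromBody] ChatRequest request)
{
    var client = new OpenAIClient(nonAzureOpenAIApiKey, new OpenAIClientOptions());
    var options = new ChatCompletionsOptions { DeploymentName = "gpt-3.5-turbo" };
    foreach (var message in request.Messages)
    {
        options.Messages.Add(new ChatRequestUserMessage(message));
    }

    // Yield each content fragment as soon as it arrives from the OpenAI API.
    await foreach (StreamingChatCompletionsUpdate chatUpdate in client.GetChatCompletionsStreaming(options))
    {
        if (!string.IsNullOrEmpty(chatUpdate.ContentUpdate))
        {
            yield return chatUpdate.ContentUpdate;
        }
    }
}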

This concludes the article on How to apply streaming in azure openai dotnet web application? We hope it is helpful to developers!



http://www.chinasem.cn/article/1141035
