
Azure OpenAI and token limit


Background:

I want to use a GPT model to analyze my data. The data is a set of records (e.g. 1000 records), each with 10 or even more properties. I want to tell GPT (or another model):


"Please analyze this data and find exceptions, extrema, etc. Anything that is different from the common pattern."

I use the `Azure.AI.OpenAI` NuGet package (version 1.0.0-beta.12; see azure-sdk-for-net/sdk/openai/Azure.AI.OpenAI/README.md in the Azure/azure-sdk-for-net repository on GitHub).

When I try the model "gpt-35-turbo" with the following code:


var chatCompletionsOptions = new ChatCompletionsOptions()
{
    DeploymentName = "gpt-35-turbo", // Use DeploymentName for "model" with non-Azure clients
    Messages =
    {
        new ChatRequestSystemMessage("You are data specialist"),
        new ChatRequestUserMessage(@"Analyze this data and find exceptions"),
        new ChatRequestUserMessage(stringBuilder.ToString())
    }
};

Response<ChatCompletions> aiChatResponse = await _openAIClient.GetChatCompletionsAsync(chatCompletionsOptions);
ChatResponseMessage responseChatMessage = aiChatResponse.Value.Choices[0].Message;

where stringBuilder contains a JSONL payload with 1000 records and only 2 columns,


I get:

{
  "error": {
    "message": "This model's maximum context length is 8192 tokens. However, your messages resulted in 17901 tokens. Please reduce the length of the messages.",
    "type": "invalid_request_error",
    "param": "messages",
    "code": "context_length_exceeded"
  }
}

So, as we can see, the limit is too small to analyze data via chat.

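One way to anticipate this error before paying for a failed request is to estimate the token count of the payload up front and decide whether it needs to be split. Below is a minimal sketch, assuming the common rule of thumb of roughly 4 characters per token for English text; this ratio is an approximation, and an exact count would require a real tokenizer (e.g. the SharpToken package):

```csharp
using System;

class TokenEstimate
{
    // Rough heuristic: ~4 characters per token for English text.
    // This is an approximation, not an exact tokenizer.
    static int EstimateTokens(string text) => (int)Math.Ceiling(text.Length / 4.0);

    const int MaxContextTokens = 8192; // gpt-35-turbo context window

    static void Main()
    {
        string payload = new string('x', 70000); // stand-in for the JSONL data
        int estimated = EstimateTokens(payload);
        Console.WriteLine($"Estimated tokens: {estimated}");
        Console.WriteLine(estimated > MaxContextTokens
            ? "Too large for one request; chunk the data first."
            : "Fits in the context window.");
    }
}
```

With a 70,000-character payload the estimate is 17,500 tokens, which is in the same ballpark as the 17,901 tokens reported in the error above, so the heuristic is good enough for a pre-flight check.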

When I try to use the model text-embedding-ada-002:


EmbeddingsOptions embeddingsOptions = new("text-embedding-ada-002", strings);
Response<Embeddings> responseEmbeddings = await _openAIClient.GetEmbeddingsAsync(embeddingsOptions);

EmbeddingItem eItem = responseEmbeddings.Value.Data[0];
ReadOnlyMemory<float> embedding = eItem.Embedding;

it runs for a long time, and I cancelled it to avoid increasing costs :) With 10 records it returns only a list of numbers...

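For context, that "list of numbers" is the embedding itself: a vector (1536 floats for text-embedding-ada-002) that places the text in a semantic space. Embeddings do not answer questions; they are useful for comparing records with each other. The following is a hedged sketch of how such vectors could feed simple outlier detection, using toy 3-dimensional vectors in place of real embeddings (the method names are illustrative, not part of any SDK):

```csharp
using System;

class EmbeddingOutliers
{
    // Cosine similarity between two equal-length vectors.
    static double Cosine(float[] a, float[] b)
    {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.Sqrt(na) * Math.Sqrt(nb));
    }

    static void Main()
    {
        // Toy 3-dimensional "embeddings"; real ada-002 vectors have 1536 dimensions.
        var records = new[]
        {
            new float[] { 1.0f, 0.1f, 0.0f },
            new float[] { 0.9f, 0.2f, 0.1f },
            new float[] { 0.0f, 0.1f, 1.0f }, // deliberately different record
        };

        // Centroid (average) of all vectors.
        var centroid = new float[3];
        foreach (var r in records)
            for (int i = 0; i < 3; i++) centroid[i] += r[i] / records.Length;

        // Records with low similarity to the centroid are candidate anomalies.
        for (int i = 0; i < records.Length; i++)
            Console.WriteLine($"record {i}: similarity to centroid = {Cosine(records[i], centroid):F3}");
    }
}
```

In this toy run the third record scores noticeably lower than the first two, which is exactly the kind of signal you would look for; with real data you would embed each record, store the vectors, and flag the ones far from the rest.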

ADDED #1

E.g. I have a list of people, and all of them are from Chicago except 2, who are from other cities. Or most of them have a salary of approximately $100,000 per year, but some earn $10,000 (much less) or much more than that approximate figure. Or any other exceptions and deviations; I don't know which ones, because otherwise I could develop a check for them directly. I want the ability to analyze all the data as a whole and find anything unusual (probably not only in one parameter; possibly in linked parameters), and even find relations inside the data between one parameter and another (e.g. salaries in New York are much higher than in city X). These are only examples. The main goal is that I DON'T KNOW which concrete relations and exceptions exist; the AI should point them out to me.


How can I solve my task?

Solution:

I'll try to address the issues you are facing. Hopefully this will give you some ideas about how to proceed.


First, regarding the text-embedding-ada-002 model: you cannot use it for chat completions. It is used when you want to create a vector representation of your data and feed it into a vector database.


Regarding the token-length error you are getting with the GPT-3.5 model: it is expected, as the maximum token size (also known as the context size, i.e. the amount of data that can be sent to Azure OpenAI) is fixed per model and cannot be exceeded. A few things you can do to deal with this error:


  • Use a model that supports a larger context length. For example, you can use gpt-35-turbo-16k, which supports a 16k context size (double the context size of the model you are currently using), or gpt-4-32k (four times the context size of the model you are currently using). This will allow you to pass larger text to the model for comprehension. However, please note that the costs for GPT-4 models are significantly higher than for GPT-3.5 models, so please take that into consideration as well.


  • Use prompt patterns. One possible solution would be to use a Map-Reduce-style pattern: chunk your data into smaller pieces and send each chunk to the LLM along with your question. Once you get a response for each chunk, send the responses back to the LLM to comprehend and come up with a final answer. (Without knowing the details of the business problem you are trying to solve, I am not 100% sure this would solve your problem, so you may want to try it out.)

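The chunking (map) half of that pattern can be sketched as follows, assuming the records are JSONL lines as in the question. The chunk size of 100 is an arbitrary placeholder to be tuned against the model's context window, and the actual per-chunk GetChatCompletionsAsync calls are elided:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class MapReduceSketch
{
    // Split the records into chunks small enough for the model's context window.
    static IEnumerable<List<string>> Chunk(List<string> records, int chunkSize)
    {
        for (int i = 0; i < records.Count; i += chunkSize)
            yield return records.GetRange(i, Math.Min(chunkSize, records.Count - i));
    }

    static void Main()
    {
        // Stand-in for the 1000 JSONL records from the question.
        var records = Enumerable.Range(1, 1000).Select(i => $"{{\"id\":{i}}}").ToList();

        var chunks = Chunk(records, 100).ToList();
        Console.WriteLine($"{chunks.Count} chunks of up to 100 records each");

        // Map step (per chunk): send the chunk plus the question to the model and
        // collect each partial answer. Reduce step: send all partial answers back
        // to the model and ask for one combined final answer. (The API calls are
        // omitted here; each chunk would go through GetChatCompletionsAsync.)
    }
}
```

One design note: the reduce step only sees the partial answers, not the raw data, so cross-chunk relations (e.g. "salaries in New York vs. city X") survive only if each map prompt asks the model to report per-chunk statistics that the reduce prompt can compare.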


Original article: https://blog.csdn.net/suiusoar/article/details/142358949
