This article walks through customizing an LLM in LangChain, using Together as the example. We hope it offers a useful reference for developers tackling similar problems; interested readers, follow along!
To use TogetherAI in LangChain, we must extend the base LLM abstract class.
The official docs provide sample code for creating a custom LLM wrapper (https://python.langchain.com/docs/modules/model_io/llms/custom_llm), but we will improve on it with type validation, exception handling, and logging.
First, import the required packages:
import os
import time
import json
import logging
from datetime import datetime
from typing import Any

import together
from langchain.llms.base import LLM
from langchain import PromptTemplate, LLMChain
from dotenv import load_dotenv  # load_dotenv reads a .env file and loads its entries into the process environment -- a common way to handle configuration secrets securely

# Load env variables
load_dotenv()

# Set up logging
logging.basicConfig(level=logging.INFO)
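There is nothing magical about load_dotenv: conceptually it just parses KEY=VALUE lines from a .env file and puts them into os.environ. Here is a simplified sketch of that behavior (load_dotenv_sketch is a hypothetical name; the real python-dotenv library additionally handles quoting, comments, and variable interpolation):

```python
import os
import tempfile

def load_dotenv_sketch(path: str) -> None:
    """Rough sketch of dotenv's load_dotenv: parse KEY=VALUE lines into os.environ."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blanks, comments, and malformed lines.
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            # Like load_dotenv's default, do not override existing variables.
            os.environ.setdefault(key.strip(), value.strip())

# Example: write a throwaway .env file and load it.
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("# demo secrets\nTOGETHER_API_KEY=demo-key-123\n")
    env_path = f.name

load_dotenv_sketch(env_path)
print("TOGETHER_API_KEY" in os.environ)
```

This is why the class below can read the key with os.environ["TOGETHER_API_KEY"] after load_dotenv() has run.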
The langchain.llms.base module simplifies interaction with LLMs by providing a friendlier interface than implementing the _generate method directly. The class langchain.llms.base.LLM is an abstract base class: it serves as a template for other classes and is not meant to be instantiated itself. By handling the complexity of LLMs internally, it offers a simpler interface that lets users interact with these models more easily.
The __call__ method lets the class be invoked like a function; it checks the cache and then runs the LLM on the given prompt.
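To make that concrete, here is a minimal illustration (not LangChain's actual code) of Python's __call__ protocol with a simple prompt cache in front of the generation step:

```python
class CachingModel:
    """Toy model: __call__ checks a cache before running generation."""

    def __init__(self):
        self._cache = {}  # prompt -> previously generated text

    def _generate(self, prompt: str) -> str:
        # Stand-in for the expensive model call.
        return f"echo: {prompt}"

    def __call__(self, prompt: str) -> str:
        # Cache check first, then generation -- the same shape as
        # LLM.__call__ wrapping the underlying _call/_generate logic.
        if prompt not in self._cache:
            self._cache[prompt] = self._generate(prompt)
        return self._cache[prompt]

model = CachingModel()
print(model("hello"))  # the instance is invoked like a function
```

Defining __call__ is what allows the chat("...") style of invocation shown at the end of this article.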
class TogetherLLM(LLM):
    """Together LLM integration.

    Attributes:
        model (str): Model endpoint to use.
        together_api_key (str): Together API key.
        temperature (float): Sampling temperature to use.
        max_tokens (int): Maximum number of tokens to generate.
    """

    model: str = "togethercomputer/llama-2-7b-chat"
    together_api_key: str = os.environ["TOGETHER_API_KEY"]
    temperature: float = 0.7
    max_tokens: int = 512

    @property
    def _llm_type(self) -> str:
        """Return type of LLM."""
        return "together"

    def _call(self, prompt: str, **kwargs: Any) -> str:
        """Call to Together endpoint."""
        try:
            logging.info("Making API call to Together endpoint.")
            return self._make_api_call(prompt)
        except Exception as e:
            logging.error(f"Error in TogetherLLM _call: {e}", exc_info=True)
            raise

    def _make_api_call(self, prompt: str) -> str:
        """Make the API call to the Together endpoint."""
        together.api_key = self.together_api_key
        output = together.Complete.create(
            prompt,
            model=self.model,
            max_tokens=self.max_tokens,
            temperature=self.temperature,
        )
        logging.info("API call successful.")
        return output['output']['choices'][0]['text']
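The _call method above logs and re-raises failures; a common next step (and likely why time was imported) is to retry transient API errors with exponential backoff. The helper below is a hypothetical sketch, not part of LangChain or the together SDK; retry_with_backoff and its parameters are illustrative names:

```python
import logging
import time

def retry_with_backoff(fn, max_retries: int = 3, base_delay: float = 1.0):
    """Call fn(); on failure, retry with exponentially growing delays."""
    for attempt in range(max_retries):
        try:
            return fn()
        except Exception as e:
            if attempt == max_retries - 1:
                raise  # out of retries: propagate, as _call does
            delay = base_delay * (2 ** attempt)
            logging.warning(f"Attempt {attempt + 1} failed ({e}); retrying in {delay}s")
            time.sleep(delay)

# Usage sketch with a fake endpoint that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient error")
    return "ok"

result = retry_with_backoff(flaky, base_delay=0.01)
print(result)  # "ok" after two retries
```

Inside TogetherLLM, one could wrap the self._make_api_call(prompt) line with such a helper so that transient network errors do not surface to the chain.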
Use the LLM we just created:
llm = TogetherLLM(
    model = "togethercomputer/llama-2-7b-chat",
    max_tokens = 256,
    temperature = 0.8
)

# Create the chain
prompt_template = "You are a friendly bot, answer the following question: {question}"
prompt = PromptTemplate(
    input_variables=["question"], template=prompt_template
)
chat = LLMChain(llm=llm, prompt=prompt)

# Chat
chat("Can AI take over developer jobs?")
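What the chain does under the hood is straightforward: PromptTemplate substitutes the question into the template string, and LLMChain hands the formatted prompt to the LLM. A rough sketch (not LangChain internals), using a dummy LLM that just echoes its prompt:

```python
prompt_template = "You are a friendly bot, answer the following question: {question}"

def run_chain(llm, question: str) -> str:
    formatted = prompt_template.format(question=question)  # PromptTemplate step
    return llm(formatted)                                  # LLMChain step: call the LLM

# Dummy LLM standing in for TogetherLLM; it echoes the prompt it received.
dummy_llm = lambda p: f"[model saw] {p}"
print(run_chain(dummy_llm, "Can AI take over developer jobs?"))
```

Because TogetherLLM inherits __call__ from the LLM base class, it can be dropped into this role exactly like the dummy above.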
That concludes this article on customizing an LLM in LangChain with Together as the example. We hope it proves helpful to fellow developers!