Straightforward Query with llm.invoke
System-Level Instruction with ChatPromptTemplate
Using a Custom Chain with Preprocessing
Using a Retriever to Enhance Context
Multi-Turn Chat with Memory
Straightforward Query with llm.invoke
This is the simplest and most direct approach.
from langchain_openai import OpenAI

# Initialize a completion-style LLM
# (text-davinci-003 has been retired; gpt-3.5-turbo-instruct is its replacement)
llm = OpenAI(model="gpt-3.5-turbo-instruct")
# Straightforward query
response = llm.invoke("What is generative AI?")
print(response)
Output
Generative AI refers to artificial intelligence systems that can generate new content, such as text, images, or audio, based on the patterns they have learned from data.
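Because every LangChain model implements the standard Runnable interface, the same llm object also supports batched and streaming calls with no extra setup. A minimal sketch using the Runnable API's batch and stream methods:

# Batch several prompts in a single call; returns a list of completions
responses = llm.batch(["What is generative AI?", "What is an LLM?"])

# Stream the completion token by token as it is generated
for chunk in llm.stream("What is generative AI?"):
    print(chunk, end="", flush=True)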
System-Level Instruction with ChatPromptTemplate
This approach involves specifying system-level roles and behaviors.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Initialize the LLM (gpt-4 is a chat model, so it requires ChatOpenAI)
llm = ChatOpenAI(model="gpt-4")
# Create a chat prompt template
prompt = ChatPromptTemplate.from_messages([
("system", "You are an expert AI Engineer. Provide precise and detailed answers."),
("user", "{input}")
])
# Compose the prompt and model into a chain, then inject input dynamically
chain = prompt | llm
result = chain.invoke({"input": "Explain the concept of Langsmith."})
print(result.content)
Output
Langsmith is a tool designed to help developers test, evaluate, and optimize AI models by creating structured workflows and managing prompts effectively. It enables streamlined deployment of AI-powered applications.
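Note that ChatOpenAI returns an AIMessage object rather than a plain string, which is why the example prints result.content. If you want the chain itself to emit strings, you can append a StrOutputParser; a small sketch:

from langchain_core.output_parsers import StrOutputParser

# The parser converts the AIMessage into a plain string
chain = prompt | llm | StrOutputParser()
print(chain.invoke({"input": "Explain the concept of Langsmith."}))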
Using a Custom Chain with Preprocessing
Here, you preprocess the input before sending it to the LLM, which is useful for normalizing or cleaning up user queries.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Preprocessing function: trim whitespace and ensure the query ends with a question mark
def preprocess_input(user_input):
    query = user_input.strip()
    return query if query.endswith("?") else query + "?"

# Initialize the LLM
llm = ChatOpenAI(model="gpt-4")
# Create a chain with a prompt
prompt = ChatPromptTemplate.from_messages([
("system", "You are an AI assistant. Provide concise and accurate answers."),
("user", "{input}")
])
chain = prompt | llm
# Preprocess input and invoke the chain
input_query = preprocess_input("  what is LangChain  ")
response = chain.invoke({"input": input_query})
print(response.content)
Output
LangChain is a framework that allows developers to build robust AI applications by combining language models with external data sources, APIs, and tools.
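The preprocessing step can also be folded into the chain itself with a RunnableLambda, so callers pass the raw string and the chain handles normalization. A sketch reusing the preprocess_input function, prompt, and llm defined above:

from langchain_core.runnables import RunnableLambda

# Wrap preprocessing as a runnable so it composes with the rest of the chain
full_chain = RunnableLambda(lambda raw: {"input": preprocess_input(raw)}) | prompt | llm
response = full_chain.invoke("  what is LangChain  ")
print(response.content)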
Using a Retriever to Enhance Context
This approach uses a retriever (e.g., a vector-database similarity search) to supply additional context before querying the LLM.
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
# Load or create a vector database (simulated here)
documents = ["LangChain is a framework for building applications powered by language models.",
"Langsmith is used for testing and deploying AI workflows."]
embeddings = OpenAIEmbeddings()
db = FAISS.from_texts(documents, embeddings)
# Initialize the LLM
llm = ChatOpenAI(model="gpt-4")
# Retrieve context
query = "What is Langsmith?"
retrieved_docs = db.similarity_search(query)
# Combine retrieved context with user query
context = "\n".join([doc.page_content for doc in retrieved_docs])
full_query = f"Context: {context}\nQuestion: {query}"
# Invoke the LLM (a chat model accepts a plain string and returns an AIMessage)
response = llm.invoke(full_query)
print(response.content)
Output
Langsmith is a tool for testing and deploying AI workflows. It works seamlessly with frameworks like LangChain to ensure robust and reliable applications.
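The manual search-then-format steps above can be expressed as a single chain by turning the vector store into a retriever. A sketch of the standard LCEL retrieval pattern, reusing the db and llm objects from above:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

retriever = db.as_retriever()
rag_prompt = ChatPromptTemplate.from_template("Context: {context}\nQuestion: {question}")

def format_docs(docs):
    return "\n".join(doc.page_content for doc in docs)

# Feed retrieved documents into the prompt alongside the original question
rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | rag_prompt
    | llm
    | StrOutputParser()
)
print(rag_chain.invoke("What is Langsmith?"))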
Multi-Turn Chat with Memory
This approach enables dynamic, multi-turn conversations using memory to retain context across queries.
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

# Initialize the LLM
llm = ChatOpenAI(model="gpt-4")

# Create a chat prompt template with a placeholder for prior turns
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a knowledgeable AI assistant. Keep track of the conversation context."),
    MessagesPlaceholder(variable_name="history"),
    ("user", "{input}")
])

# Wrap the chain so past messages are injected into the prompt on every call
history = InMemoryChatMessageHistory()
chain = RunnableWithMessageHistory(
    prompt | llm,
    lambda session_id: history,
    input_messages_key="input",
    history_messages_key="history",
)
config = {"configurable": {"session_id": "demo"}}

# First question
response_1 = chain.invoke({"input": "What is generative AI?"}, config=config)
print(f"First Response: {response_1.content}")

# Second question, answered with the first turn already in memory
response_2 = chain.invoke({"input": "How does it differ from traditional AI?"}, config=config)
print(f"Second Response: {response_2.content}")
Output
First Response: Generative AI refers to systems capable of generating new content, such as text, images, or music, based on learned patterns from data.
Second Response: Generative AI differs from traditional AI in its ability to create new data, whereas traditional AI systems typically classify, predict, or act on existing data.
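After these two calls, the history object holds the full transcript, which is how the second answer can resolve "it" to generative AI. A quick way to verify, using the history object defined above:

# Inspect the stored conversation turns
for message in history.messages:
    print(f"{message.type}: {message.content}")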