Debug School

rakesh kumar

Different kinds of custom chains for obtaining accurate information using LangChain

Retrieval → Summarization Chain
Retrieval → Contextual Question Answering Chain
Summarization → Sentiment Analysis Chain
Multi-Document Retrieval → Aggregation Chain
Contextual FAQ Generation Chain
Translation → Grammar Explanation Chain
Context → Argument Analysis Chain
Document Comparison Chain
Retrieval → Contextual Role-Based QA Chain
Workflow Automation Chain

Retrieval → Summarization Chain
Use Case: Retrieve relevant documents and summarize their content.
Code:

from langchain_core.prompts import ChatPromptTemplate
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4")
# `retriever` is assumed to be an already-configured vector store retriever
retrieval_prompt = ChatPromptTemplate.from_template("Summarize the following context:\n{context}")
combine_docs_chain = create_stuff_documents_chain(llm, retrieval_prompt)
retrieval_chain = create_retrieval_chain(retriever, combine_docs_chain)

query = "What are the benefits of AI in education?"
response = retrieval_chain.invoke({"input": query})
print(response["answer"])

Output:

AI in education enables personalized learning, automates administrative tasks, and improves accessibility for students with disabilities.

Retrieval → Contextual Question Answering Chain
Use Case: Fetch relevant documents and answer user-specific questions.
Code:

qa_prompt = ChatPromptTemplate.from_template(
    "Based on the context, answer the user's question:\nContext: {context}\nQuestion: {input}"
)
qa_chain = create_retrieval_chain(retriever, create_stuff_documents_chain(llm, qa_prompt))

query = "How does AI assist in healthcare?"
response = qa_chain.invoke({"input": query})
print(response["answer"])

Output:

AI assists in healthcare by improving diagnostics, enabling personalized treatment plans, and streamlining administrative workflows.

Summarization → Sentiment Analysis Chain
Use Case: Summarize the content and analyze the sentiment of the summary.
Code:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

summarize_prompt = ChatPromptTemplate.from_template("Summarize the following text:\n{input}")
sentiment_prompt = ChatPromptTemplate.from_template(
    "Analyze the sentiment of this text:\n{text}"
)

# Compose the two steps with LCEL: summarize first,
# then feed the summary into the sentiment prompt
summary_chain = summarize_prompt | llm | StrOutputParser()
sentiment_chain = sentiment_prompt | llm | StrOutputParser()
multichain = RunnablePassthrough.assign(
    summary_output=summary_chain
).assign(
    sentiment_analysis=(lambda x: {"text": x["summary_output"]}) | sentiment_chain
)

text = "AI technology is revolutionizing industries with its incredible efficiency."
result = multichain.invoke({"input": text})
print("Summary Output:\n", result["summary_output"])
print("\nSentiment Analysis Output:\n", result["sentiment_analysis"])

Output:

Summary Output:
 AI technology is transforming industries with exceptional efficiency.

Sentiment Analysis Output:
 The sentiment of the text is highly positive, reflecting excitement and admiration for the advancements in AI technology.

Multi-Document Retrieval → Aggregation Chain
Use Case: Retrieve information from multiple documents and aggregate insights.
Code:

aggregation_prompt = ChatPromptTemplate.from_template(
    "Combine and summarize the insights from these contexts:\n{context}"
)
aggregation_chain = create_retrieval_chain(retriever, create_stuff_documents_chain(llm, aggregation_prompt))

query = "Explain quantum computing trends in 2022."
response = aggregation_chain.invoke({"input": query})
print(response["answer"])

Output:

Quantum computing in 2022 saw advancements in hardware scalability, increased investment, and breakthroughs in error correction.

Contextual FAQ Generation Chain
Use Case: Generate FAQs based on provided documents or context.
Code:

faq_prompt = ChatPromptTemplate.from_template(
    "Generate 5 FAQs based on the following context:\n{context}"
)
faq_chain = create_retrieval_chain(retriever, create_stuff_documents_chain(llm, faq_prompt))

query = "AI in agriculture"
response = faq_chain.invoke({"input": query})
print(response["answer"])

Output:

1. What is AI's role in agriculture?
2. How does AI optimize crop yields?
3. What are the benefits of AI-powered drones in farming?
4. How can farmers use AI for pest control?
5. What challenges does AI face in agriculture?

Translation → Grammar Explanation Chain

Use Case: Translate text into another language and explain its grammar.
Code:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

translation_prompt = ChatPromptTemplate.from_template("Translate to French:\n{input}")
grammar_prompt = ChatPromptTemplate.from_template(
    "Explain the grammar of this French sentence:\n{text}"
)

# Compose with LCEL: translate first, then feed the
# translation into the grammar-explanation prompt
translation_chain = translation_prompt | llm | StrOutputParser()
grammar_chain = grammar_prompt | llm | StrOutputParser()
multichain = RunnablePassthrough.assign(
    translation_output=translation_chain
).assign(
    grammar_explanation=(lambda x: {"text": x["translation_output"]}) | grammar_chain
)

text = "The quick brown fox jumps over the lazy dog."
result = multichain.invoke({"input": text})
print("Translation Output:\n", result["translation_output"])
print("\nGrammar Explanation:\n", result["grammar_explanation"])

Output:

Translation Output:
 Le rapide renard brun saute par-dessus le chien paresseux.

Grammar Explanation:
 The sentence is in the present tense. "Le rapide renard brun" is the subject, with "rapide" and "brun" acting as adjectives modifying "renard." "Saute" is the third-person singular form of the verb "sauter," and "par-dessus" is a prepositional phrase indicating direction. "Le chien paresseux" is the object, with "paresseux" modifying "chien."

Context → Argument Analysis Chain
Use Case: Analyze arguments presented in a given context.
Code:

argument_prompt = ChatPromptTemplate.from_template(
    "Analyze the arguments presented in this context:\n{context}"
)
argument_chain = create_retrieval_chain(retriever, create_stuff_documents_chain(llm, argument_prompt))

query = "Analyze arguments about AI ethics."
response = argument_chain.invoke({"input": query})
print(response["answer"])

Output:

The context presents arguments about AI's potential for bias, lack of transparency, and its transformative societal benefits.

Document Comparison Chain
Use Case: Compare two documents and highlight differences or similarities.
Code:

compare_prompt = ChatPromptTemplate.from_template(
    "Compare the following two contexts:\nContext 1: {context1}\nContext 2: {context2}"
)
# The inputs are plain strings, not retrieved documents,
# so a simple prompt → LLM pipeline suffices
compare_chain = compare_prompt | llm

result = compare_chain.invoke({
    "context1": "AI enables automation in factories.",
    "context2": "AI facilitates automation in healthcare."
})
print(result.content)

Output:

Both contexts discuss AI enabling automation, but the focus differs: one is on factories and the other on healthcare.

Retrieval → Contextual Role-Based QA Chain
Use Case: Retrieve documents and generate answers tailored to a specific role (e.g., teacher, student).
Code:

role_prompt = ChatPromptTemplate.from_template(
    "Answer the question as a teacher, based on the context:\nContext: {context}\nQuestion: {input}"
)
role_chain = create_retrieval_chain(retriever, create_stuff_documents_chain(llm, role_prompt))

query = "Explain photosynthesis for a high school biology class."
response = role_chain.invoke({"input": query})
print(response["answer"])

Output:

Photosynthesis is the process by which plants convert sunlight into energy using chlorophyll in their leaves. This energy is stored as glucose.

Workflow Automation Chain
Use Case: Automate a multi-step process (e.g., fetch, summarize, and send).
Code:

from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough

summary_prompt = ChatPromptTemplate.from_template("Summarize the context:\n{context}")
email_prompt = ChatPromptTemplate.from_template(
    "Compose a professional email based on this summary:\n{summary}"
)

# Compose with LCEL: summarize first, then feed the summary into the email prompt
summary_chain = summary_prompt | llm | StrOutputParser()
email_chain = email_prompt | llm | StrOutputParser()
multichain = RunnablePassthrough.assign(
    summary_output=summary_chain
).assign(
    email=(lambda x: {"summary": x["summary_output"]}) | email_chain
)

context = "Our quarterly sales have increased by 20%, and customer satisfaction scores are at an all-time high."
result = multichain.invoke({"context": context})
print("Summary Output:\n", result["summary_output"])
print("\nEmail Output:\n", result["email"])

Output:

Summary Output:
 Quarterly sales have increased by 20%, and customer satisfaction scores have reached an all-time high.

Email Output:
 Subject: Celebrating Our Achievements This Quarter

Dear Team,

I am thrilled to share some outstanding news with all of you. Our quarterly sales have seen an impressive 20% increase, and our customer satisfaction scores are at an all-time high. These achievements are a testament to your hard work, dedication, and commitment to excellence.

Thank you for your unwavering efforts. Let’s continue striving for even greater success in the upcoming quarter.

Best regards,
[Your Name]
