Single Chain
MultiChain
Difference Between Single Chain and MultiChain
Coding Example of MultiChain
FAQ Generation from User Query
Recipe Recommendation with Instructions
Sentiment Analysis and Summary
Job Recommendation and Resume Tailoring
Language Translation and Polishing
Research Workflow: Retrieval, Summarization, and FAQ Generation
Personal Assistant: Task Suggestion, Prioritization, and Scheduling
Language Learning: Translation, Grammar Explanation, and Sentence Practice
Movie Recommendation: Suggestion, Sentiment Analysis, and Review Summary
Business Pitch: Idea Generation, Problem-Solution Match, and Presentation Draft
Single Chain
What is a Single Chain?
A single chain is a simple workflow where a single LLM or prompt template is used to process input and produce output. There’s no interconnection between multiple processing steps.
Advantages of Single Chain
Simplicity:
Easy to set up and execute for straightforward tasks.
Requires minimal configuration.
Low Resource Usage:
Only one model invocation is required, reducing computation time and cost.
Direct Output:
No intermediate steps or dependencies, so results are produced faster.
Best for Basic Use Cases:
Ideal for one-off tasks like answering a single query or generating content from a prompt.
Disadvantages of Single Chain
Limited Functionality:
Cannot handle workflows requiring multiple logical steps.
Lacks flexibility for complex tasks.
No Reusability of Intermediate Results:
Cannot reuse intermediate outputs for additional processing.
Use Case
Answering a simple query like:
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4")
result = llm.invoke("What is generative AI?")
print(result.content)
MultiChain
What is a MultiChain?
A MultiChain combines multiple chains (or steps) into a sequential or interconnected workflow. Each step processes the input or the output from the previous step.
Advantages of MultiChain
Modular Workflow:
Complex tasks are broken into smaller, manageable steps.
Each step can be customized independently.
Reusability of Intermediate Outputs:
Outputs of one chain (step) can be reused by subsequent chains, ensuring efficient workflows.
Scalability:
Useful for handling multi-step problems like information retrieval, summarization, and text transformation.
Flexibility:
Allows integration with external systems like retrievers, memory, or databases for richer functionality.
Can easily adapt to workflows with branching or conditional logic.
Clarity:
By separating tasks into multiple steps, the overall process becomes easier to debug and maintain.
Disadvantages of MultiChain
Complexity:
Requires additional setup for chaining, mapping inputs, and managing dependencies.
May be overkill for simple tasks.
Higher Resource Usage:
Each step in the chain invokes the LLM, increasing computational cost and response time.
Dependency Management:
Outputs of one chain must be correctly mapped as inputs to the next, which can introduce errors.
Use Case
For example, combining retrieval, summarization, and user interaction:
result = multichain.invoke({"input": "Tell me about Alan Turing."})
print(result["summarized_output"])
FAQ Generation from User Query
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
# Initialize the LLM
llm = ChatOpenAI(model="gpt-4")
# Step 1: Retrieve information
retrieval_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an expert AI. Provide detailed information about the topic."),
    ("user", "{input}")
])
# Step 2: Generate FAQs
faq_prompt = ChatPromptTemplate.from_messages([
    ("system", "Based on the given information, generate a set of 5 frequently asked questions."),
    ("user", "{retrieved_info}")
])
# Create chains (StrOutputParser turns each model reply into plain text)
retrieval_chain = retrieval_prompt | llm | StrOutputParser()
faq_chain = faq_prompt | llm | StrOutputParser()
# Combine chains: each assign() adds a key to the running dict,
# so later steps can read earlier outputs by name
multichain = (
    RunnablePassthrough.assign(retrieved_info=retrieval_chain)
    | RunnablePassthrough.assign(faqs=faq_chain)
)
# Execute the MultiChain
query = "Tell me about quantum computing."
result = multichain.invoke({"input": query})
# Output
print("Retrieved Information:\n", result["retrieved_info"])
print("\nFAQs:\n", result["faqs"])
Output:
Retrieved Information:
Quantum computing is a type of computing that uses quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data...
FAQs:
1. What is quantum computing?
2. How does quantum computing differ from classical computing?
3. What are qubits, and how do they work?
4. What are the practical applications of quantum computing?
5. What are the challenges in building quantum computers?
Recipe Recommendation with Instructions
Goal: Recommend a recipe based on a user’s preferred cuisine and then provide step-by-step cooking instructions.
# Prompts for recommending a recipe and instructions
recommendation_prompt = ChatPromptTemplate.from_messages([
("system", "You are a culinary expert. Recommend a dish based on the cuisine preference."),
("user", "{input}")
])
instruction_prompt = ChatPromptTemplate.from_messages([
("system", "Provide detailed cooking instructions for the given recipe."),
("user", "{recipe}")
])
# Create chains
recommendation_chain = recommendation_prompt | llm | StrOutputParser()
instruction_chain = instruction_prompt | llm | StrOutputParser()
# MultiChain setup: the recommended recipe feeds the instruction step
multichain = (
    RunnablePassthrough.assign(recipe=recommendation_chain)
    | RunnablePassthrough.assign(cooking_instructions=instruction_chain)
)
# Execute the MultiChain
query = "Italian cuisine"
result = multichain.invoke({"input": query})
# Output
print("Recipe and Instructions:\n", result["cooking_instructions"])
Output:
Recipe and Instructions:
Recommended Recipe: Spaghetti Carbonara
1. Cook spaghetti in salted boiling water until al dente.
2. Sauté pancetta until crispy.
3. Whisk eggs and Parmesan cheese in a bowl.
4. Combine spaghetti with pancetta and egg mixture, stirring quickly.
5. Serve with additional Parmesan and freshly ground black pepper.
Sentiment Analysis and Summary
Goal: Analyze the sentiment of a user’s text and provide a summary based on the sentiment.
# Prompts for sentiment analysis and summary
sentiment_prompt = ChatPromptTemplate.from_messages([
("system", "Determine the sentiment (positive, negative, or neutral) of the given text."),
("user", "{input}")
])
summary_prompt = ChatPromptTemplate.from_messages([
("system", "Summarize the content based on its sentiment."),
("user", "Sentiment: {sentiment}\nText: {input}")
])
# Create chains
sentiment_chain = sentiment_prompt | llm | StrOutputParser()
summary_chain = summary_prompt | llm | StrOutputParser()
# MultiChain setup: the summary step sees both the sentiment and the original input
multichain = (
    RunnablePassthrough.assign(sentiment=sentiment_chain)
    | RunnablePassthrough.assign(sentiment_summary=summary_chain)
)
# Execute the MultiChain
text = "The movie had stunning visuals, but the storyline was boring and predictable."
result = multichain.invoke({"input": text})
# Output
print("Sentiment and Summary:\n", result["sentiment_summary"])
Output:
Sentiment and Summary:
Sentiment: Negative
Summary: Although the visuals were impressive, the predictable storyline disappointed viewers.
Job Recommendation and Resume Tailoring
Goal: Recommend a job based on user input and tailor their resume to match the job description.
# Prompts for job recommendation and resume tailoring
job_prompt = ChatPromptTemplate.from_messages([
("system", "Recommend a job based on the user's input."),
("user", "{input}")
])
resume_prompt = ChatPromptTemplate.from_messages([
("system", "Tailor the user's resume to match the given job description."),
("user", "Job: {job}\nResume: {resume}")
])
# Create chains
job_chain = job_prompt | llm | StrOutputParser()
resume_chain = resume_prompt | llm | StrOutputParser()
# MultiChain setup: the recommended job plus the user's resume feed the tailoring step
multichain = (
    RunnablePassthrough.assign(job=job_chain)
    | RunnablePassthrough.assign(tailored_resume=resume_chain)
)
# Execute the MultiChain
user_input = "Software engineering roles in AI"
resume_text = "Experienced software engineer with expertise in backend development and databases."
result = multichain.invoke({"input": user_input, "resume": resume_text})
# Output
print("Tailored Resume:\n", result["tailored_resume"])
Output:
Tailored Resume:
Recommended Job: AI Software Engineer at OpenAI
Resume: Experienced software engineer with expertise in backend development, databases, and AI tools such as TensorFlow and PyTorch.
Language Translation and Polishing
Goal: Translate text to another language and then polish the translation for fluency.
# Prompts for translation and polishing
translation_prompt = ChatPromptTemplate.from_messages([
("system", "Translate the text to French."),
("user", "{input}")
])
polishing_prompt = ChatPromptTemplate.from_messages([
("system", "Polish the translated text for fluency."),
("user", "{translated_text}")
])
# Create chains
translation_chain = translation_prompt | llm | StrOutputParser()
polishing_chain = polishing_prompt | llm | StrOutputParser()
# MultiChain setup: the raw translation feeds the polishing step
multichain = (
    RunnablePassthrough.assign(translated_text=translation_chain)
    | RunnablePassthrough.assign(final_translation=polishing_chain)
)
# Execute the MultiChain
text = "The future of artificial intelligence is exciting and full of potential."
result = multichain.invoke({"input": text})
# Output
print("Polished Translation:\n", result["final_translation"])
Output:
Polished Translation:
L'avenir de l'intelligence artificielle est passionnant et plein de potentiel.
Research Workflow: Retrieval, Summarization, and FAQ Generation
Task:
Retrieve detailed information about a topic.
Summarize the retrieved content.
Generate a list of frequently asked questions (FAQs) from the summary.
Code:
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain_openai import ChatOpenAI
# Initialize the LLM
llm = ChatOpenAI(model="gpt-4")
# Step 1: Retrieve detailed information
retrieval_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a knowledgeable assistant. Provide detailed information on the topic."),
    ("user", "{input}")
])
# Step 2: Summarize the content
summary_prompt = ChatPromptTemplate.from_messages([
    ("system", "Summarize the given content in a concise manner."),
    ("user", "{retrieved_info}")
])
# Step 3: Generate FAQs
faq_prompt = ChatPromptTemplate.from_messages([
    ("system", "Generate 5 FAQs based on the summarized information."),
    ("user", "{summary}")
])
# Create individual chains
retrieval_chain = retrieval_prompt | llm | StrOutputParser()
summary_chain = summary_prompt | llm | StrOutputParser()
faq_chain = faq_prompt | llm | StrOutputParser()
# Combine chains: retrieval feeds summarization, which feeds FAQ generation
multichain = (
    RunnablePassthrough.assign(retrieved_info=retrieval_chain)
    | RunnablePassthrough.assign(summary=summary_chain)
    | RunnablePassthrough.assign(faqs=faq_chain)
)
# Execute MultiChain
query = "Quantum computing"
result = multichain.invoke({"input": query})
# Output
print("FAQs:\n", result["faqs"])
Output:
FAQs:
1. What is quantum computing?
2. How does it differ from classical computing?
3. What are the main components of a quantum computer?
4. What industries are benefiting from quantum computing?
5. What challenges do we face in building quantum computers?
Personal Assistant: Task Suggestion, Prioritization, and Scheduling
Task:
Suggest tasks based on a user's input.
Prioritize the suggested tasks.
Generate a daily schedule based on priorities.
Code:
# Prompts for task suggestion, prioritization, and scheduling
suggest_task_prompt = ChatPromptTemplate.from_messages([
("system", "You are a productivity assistant. Suggest tasks based on the user's input."),
("user", "{input}")
])
prioritize_task_prompt = ChatPromptTemplate.from_messages([
("system", "Prioritize the following tasks."),
("user", "{tasks}")
])
schedule_prompt = ChatPromptTemplate.from_messages([
("system", "Generate a daily schedule based on the prioritized tasks."),
("user", "{prioritized_tasks}")
])
# Create chains
task_chain = suggest_task_prompt | llm | StrOutputParser()
priority_chain = prioritize_task_prompt | llm | StrOutputParser()
schedule_chain = schedule_prompt | llm | StrOutputParser()
# MultiChain setup: suggestion feeds prioritization, which feeds scheduling
multichain = (
    RunnablePassthrough.assign(tasks=task_chain)
    | RunnablePassthrough.assign(prioritized_tasks=priority_chain)
    | RunnablePassthrough.assign(daily_schedule=schedule_chain)
)
# Execute MultiChain
query = "I need to prepare a presentation, finish a report, and exercise."
result = multichain.invoke({"input": query})
# Output
print("Daily Schedule:\n", result["daily_schedule"])
Output:
Daily Schedule:
9:00 AM - Prepare presentation slides
11:00 AM - Draft the report
2:00 PM - Finalize the presentation
4:00 PM - Review and edit the report
6:00 PM - Exercise
Language Learning: Translation, Grammar Explanation, and Sentence Practice
Task:
Translate text into a foreign language.
Explain the grammar structure of the translated sentence.
Generate practice sentences for the learner.
Code:
# Prompts for translation, grammar explanation, and sentence practice
translation_prompt = ChatPromptTemplate.from_messages([
("system", "Translate the following text into Spanish."),
("user", "{input}")
])
grammar_prompt = ChatPromptTemplate.from_messages([
("system", "Explain the grammar structure of the translated text."),
("user", "{translated_text}")
])
practice_prompt = ChatPromptTemplate.from_messages([
("system", "Generate 3 practice sentences based on the grammar explained."),
("user", "{grammar}")
])
# Create chains
translation_chain = translation_prompt | llm | StrOutputParser()
grammar_chain = grammar_prompt | llm | StrOutputParser()
practice_chain = practice_prompt | llm | StrOutputParser()
# MultiChain setup: translation feeds the grammar explanation, which feeds practice
multichain = (
    RunnablePassthrough.assign(translated_text=translation_chain)
    | RunnablePassthrough.assign(grammar=grammar_chain)
    | RunnablePassthrough.assign(practice_sentences=practice_chain)
)
# Execute MultiChain
text = "The cat is sitting on the mat."
result = multichain.invoke({"input": text})
# Output
print("Practice Sentences:\n", result["practice_sentences"])
Output:
Practice Sentences:
1. El perro está sentado en la alfombra.
2. El niño está jugando en el patio.
3. La mujer está leyendo en la silla.
Movie Recommendation: Suggestion, Sentiment Analysis, and Review Summary
Task:
Recommend a movie based on user preferences.
Perform sentiment analysis on recent reviews for the movie.
Provide a brief summary of the reviews.
Code:
# Prompts for movie suggestion, sentiment analysis, and review summary
movie_prompt = ChatPromptTemplate.from_messages([
("system", "Suggest a movie based on the user's preferences."),
("user", "{input}")
])
sentiment_prompt = ChatPromptTemplate.from_messages([
("system", "Analyze the sentiment of the following reviews."),
("user", "{reviews}")
])
review_summary_prompt = ChatPromptTemplate.from_messages([
("system", "Summarize the reviews based on their sentiment."),
("user", "Sentiment: {sentiment}\nReviews: {reviews}")
])
# Create chains
movie_chain = movie_prompt | llm | StrOutputParser()
sentiment_chain = sentiment_prompt | llm | StrOutputParser()
review_summary_chain = review_summary_prompt | llm | StrOutputParser()
# MultiChain setup: the summary step sees both the sentiment and the raw reviews
multichain = (
    RunnablePassthrough.assign(movie=movie_chain)
    | RunnablePassthrough.assign(sentiment=sentiment_chain)
    | RunnablePassthrough.assign(review_summary=review_summary_chain)
)
# Execute MultiChain
user_input = "I enjoy sci-fi and action movies."
reviews = "The movie was visually stunning but lacked a strong storyline."
result = multichain.invoke({"input": user_input, "reviews": reviews})
# Output
print("Review Summary:\n", result["review_summary"])
Output:
Review Summary:
The movie received mixed reviews. While the visuals were praised, the storyline was criticized for being weak.
Business Pitch: Idea Generation, Problem-Solution Match, and Presentation Draft
Task:
Generate a business idea based on user input.
Match the idea to a specific problem and solution.
Draft a presentation outline for the idea.
Code:
# Prompts for idea generation, problem-solution, and presentation draft
idea_prompt = ChatPromptTemplate.from_messages([
("system", "Generate a business idea based on the user's input."),
("user", "{input}")
])
problem_solution_prompt = ChatPromptTemplate.from_messages([
("system", "Match the business idea to a specific problem and its solution."),
("user", "{idea}")
])
presentation_prompt = ChatPromptTemplate.from_messages([
("system", "Draft a presentation outline for the business idea."),
("user", "{problem_solution}")
])
# Create chains
idea_chain = idea_prompt | llm | StrOutputParser()
problem_solution_chain = problem_solution_prompt | llm | StrOutputParser()
presentation_chain = presentation_prompt | llm | StrOutputParser()
# MultiChain setup: idea feeds problem-solution matching, which feeds the draft
multichain = (
    RunnablePassthrough.assign(idea=idea_chain)
    | RunnablePassthrough.assign(problem_solution=problem_solution_chain)
    | RunnablePassthrough.assign(presentation=presentation_chain)
)
# Execute MultiChain
query = "Eco-friendly products for urban living."
result = multichain.invoke({"input": query})
# Output
print("Presentation Outline:\n", result["presentation"])
Output:
Presentation Outline:
1. Introduction to eco-friendly urban products.
2. The problem: Unsustainable living in cities.
3. The solution: Affordable, eco-friendly daily-use products.
4. Target market and potential growth.
5. Implementation plan and marketing strategy.