Managing complex workflows by chaining multiple components together using LCEL

Advantages of using LCEL
Single-Step Workflow (Prompt → LLM → OutputParser)
Multi-Step Workflow with Conditional Logic
Workflow with Intermediate Outputs
Tool Integration Workflow (Prompt → LLM → Tool Usage)

Advantages of using LCEL

LCEL (LangChain Expression Language) allows you to compose and manage complex workflows by chaining multiple components together, such as prompts, models, parsers, and tools. While you can directly query a model using a ChatPromptTemplate and an LLM, LCEL provides the following benefits:

Pipeline Creation:

LCEL enables chaining different components (like prompts, models, and parsers) into a single pipeline, allowing modular and reusable workflows.
For example:

# Chain: Prompt -> LLM -> OutputParser
pipeline = prompt | llm | output_parser

Flexible Composition:

You can combine components with various configurations without hardcoding dependencies, making the system easier to maintain.
Advanced Control:

LCEL supports complex scenarios where you need to conditionally pass data between components or integrate additional logic in workflows.
Modular and Reusable:

LCEL makes it easy to reuse and extend components. For instance, if you later need a new output parser, you can plug it into the existing workflow without altering the main logic, as the short sketch below shows.
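A minimal sketch of swapping in a JSON parser (reusing the prompt and llm objects from the single-step example in the next section; JsonOutputParser assumes the model emits JSON):

from langchain_core.output_parsers import JsonOutputParser

# Same prompt and llm as before; only the final stage changes.
json_pipeline = prompt | llm | JsonOutputParser()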

Single-Step Workflow (Prompt → LLM → OutputParser)

This example demonstrates a simple workflow where user input is processed through a prompt, passed to an LLM, and parsed with an OutputParser.

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Initialize the chat model (gpt-4 is a chat model, so use ChatOpenAI)
llm = ChatOpenAI(model="gpt-4")

# Create a chat prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an expert AI Engineer. Provide precise and detailed answers."),
    ("user", "{input}")
])

# Output Parser
output_parser = StrOutputParser()

# LCEL Chain
pipeline = prompt | llm | output_parser

# Input text
input_text = {"input": "What is LangChain used for?"}

# Execute the chain
result = pipeline.invoke(input_text)

# Print the parsed output
print("Output:")
print(result)

Expected Output:

Output:
LangChain is a framework designed for developing applications powered by large language models (LLMs), offering tools for prompt engineering, chaining workflows, and managing integrations.
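Because every LCEL chain implements the standard Runnable interface, the same pipeline also supports streaming and batching with no extra code:

# Stream tokens as they are generated
for chunk in pipeline.stream({"input": "What is LangChain used for?"}):
    print(chunk, end="", flush=True)

# Run several inputs in parallel
results = pipeline.batch([
    {"input": "What is LangChain?"},
    {"input": "What is LangSmith?"},
])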

Example: Multi-Step Workflow with Conditional Logic

This example introduces conditional logic. If the user asks for a definition, the system explains the term. If the user requests a list, the system generates relevant points.


from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Initialize the chat model
llm = ChatOpenAI(model="gpt-4")

# Create prompt templates
definition_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an AI specializing in definitions."),
    ("user", "Define the term: {input}")
])

list_prompt = ChatPromptTemplate.from_messages([
    ("system", "You are an AI that generates lists."),
    ("user", "List the applications of {input}")
])

# Output Parser
output_parser = StrOutputParser()

# LCEL Chains
definition_chain = definition_prompt | llm | output_parser
list_chain = list_prompt | llm | output_parser

# Input text (contains no "define", so the list chain will be selected)
user_input = {"input": "LangChain"}

# Conditional logic: route on whether the user asked for a definition
if "define" in user_input["input"].lower():
    result = definition_chain.invoke(user_input)
else:
    result = list_chain.invoke(user_input)

# Print the output
print("Output:")
print(result)

Expected Output:
For an input containing "define" (e.g. "define LangChain"), the definition chain runs:

Output:
LangChain is a framework for building applications powered by large language models, with tools for prompt engineering, chaining workflows, and integrations.

For the input "LangChain" (no "define", so the list chain runs):

Output:

1. Chatbots
2. Content generation
3. Summarization
4. Complex reasoning
5. Personalized learning systems
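The if/else above lives outside the chain. LCEL can express the same routing inside a single runnable with RunnableBranch; a minimal sketch, reusing definition_chain and list_chain from the example above:

from langchain_core.runnables import RunnableBranch

# Each branch is a (condition, runnable) pair; the last argument is the default.
router = RunnableBranch(
    (lambda x: "define" in x["input"].lower(), definition_chain),
    list_chain,
)

result = router.invoke({"input": "define LangChain"})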

Another Example: Multi-Step Workflow Using Classic LLMChain Components

from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate
from langchain.llms import OpenAI

# Define the LLM to be used
llm = OpenAI(temperature=0.7)

# Define the individual prompts
step1_prompt = PromptTemplate(
    input_variables=["input_text"],
    template="Summarize the following text: {input_text}",
)

step2_prompt = PromptTemplate(
    input_variables=["summary"],
    template="Provide a detailed explanation of the following summary: {summary}",
)

step3_prompt = PromptTemplate(
    input_variables=["explanation"],
    template="Based on the explanation: {explanation}, what would be the next step to solve the issue?",
)

conditional_prompt_a = PromptTemplate(
    input_variables=["context"],
    template="Provide a technical solution for the context: {context}",
)

conditional_prompt_b = PromptTemplate(
    input_variables=["context"],
    template="Provide a managerial solution for the context: {context}",
)

# Create individual chains (one LLMChain per step)
step1_chain = LLMChain(llm=llm, prompt=step1_prompt)
step2_chain = LLMChain(llm=llm, prompt=step2_prompt)
step3_chain = LLMChain(llm=llm, prompt=step3_prompt)

# Conditional logic based on the input to route to appropriate chain
def conditional_logic(context):
    if "technical" in context.lower():
        return conditional_prompt_a
    else:
        return conditional_prompt_b

# Wrap conditional logic into a function-based chain
class ConditionalChain:
    def __init__(self, llm, prompt_a, prompt_b):
        self.llm = llm
        self.prompt_a = prompt_a
        self.prompt_b = prompt_b

    def run(self, input_context):
        chosen_prompt = conditional_logic(input_context)
        # Legacy LLM objects are callable with a plain string prompt
        return self.llm(chosen_prompt.format(context=input_context))

conditional_chain = ConditionalChain(
    llm=llm,
    prompt_a=conditional_prompt_a,
    prompt_b=conditional_prompt_b,
)

# Combine chains into a multi-step workflow
class MultiStepWorkflow:
    def __init__(self, step1, step2, step3, conditional):
        self.step1 = step1
        self.step2 = step2
        self.step3 = step3
        self.conditional = conditional

    def run(self, input_text):
        # Step 1
        summary = self.step1.run({"input_text": input_text})
        print("Step 1 Summary:", summary)

        # Step 2
        explanation = self.step2.run({"summary": summary})
        print("Step 2 Explanation:", explanation)

        # Step 3
        next_step_context = self.step3.run({"explanation": explanation})
        print("Step 3 Context for Decision:", next_step_context)

        # Conditional Logic
        solution = self.conditional.run(next_step_context)
        print("Conditional Solution:", solution)

# Initialize the workflow
workflow = MultiStepWorkflow(
    step1=step1_chain,
    step2=step2_chain,
    step3=step3_chain,
    conditional=conditional_chain,
)

# Run the workflow with sample input
sample_input = "LangChain is a framework for building applications with LLMs."
workflow.run(sample_input)

Output
Step 1 Summary: LangChain simplifies building applications with language models.
Step 2 Explanation: LangChain provides tools and integrations for developers to efficiently create workflows and applications that leverage language models for various tasks, such as summarization, decision-making, and automation.
Step 3 Context for Decision: Develop a technical plan to integrate LangChain into existing workflows.
Conditional Solution: To integrate LangChain, create a Python script to automate API calls and implement a CI/CD pipeline for deployment.
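The same three steps can also be expressed as a single LCEL pipeline, replacing the hand-rolled MultiStepWorkflow class; a sketch, assuming the prompts and llm defined above (recent LangChain versions make prompts and LLMs runnables, so | composition works on them directly):

from langchain_core.runnables import RunnableLambda

# Each lambda repackages the previous step's string output
# into the input dict that the next prompt expects.
lcel_workflow = (
    step1_prompt | llm
    | RunnableLambda(lambda summary: {"summary": summary})
    | step2_prompt | llm
    | RunnableLambda(lambda explanation: {"explanation": explanation})
    | step3_prompt | llm
)

final_context = lcel_workflow.invoke({"input_text": sample_input})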

Example: Workflow with Intermediate Outputs

This example demonstrates extracting and printing intermediate outputs from each step in the workflow.

from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Initialize the chat model
llm = ChatOpenAI(model="gpt-4")

# Create prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}")
])

# Output Parser
output_parser = StrOutputParser()

# Chain components
pipeline = prompt | llm

# Input text
input_text = {"input": "Explain the concept of Langsmith."}

# Step 1: Run through the pipeline (prompt + LLM)
intermediate_output = pipeline.invoke(input_text)

# Step 2: Parse the intermediate output (an AIMessage) into a string
parsed_output = output_parser.invoke(intermediate_output)

# Print intermediate and final outputs
print("Intermediate Output (Raw):")
print(intermediate_output)

print("\nFinal Output (Parsed):")
print(parsed_output)
Enter fullscreen mode Exit fullscreen mode

Expected Output:


Intermediate Output (Raw):

AIMessage(content="Langsmith is a developer-centric tool within LangChain for tracking, debugging, and managing LLM workflows.", response_metadata={...})

Final Output (Parsed):

Langsmith is a developer-centric tool within LangChain for tracking, debugging, and managing LLM workflows.
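Calling the parser separately works, but both values can also be kept inside one chain with RunnablePassthrough.assign; a minimal sketch, assuming the prompt, llm, and output_parser defined above:

from langchain_core.runnables import RunnablePassthrough

# Each assign adds a key to the running dict without discarding earlier keys.
chain_with_intermediates = RunnablePassthrough.assign(
    raw=prompt | llm
) | RunnablePassthrough.assign(
    parsed=lambda d: output_parser.invoke(d["raw"])
)

out = chain_with_intermediates.invoke({"input": "Explain the concept of Langsmith."})
print(out["raw"])     # the raw AIMessage
print(out["parsed"])  # the parsed string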

Example: Tool Integration Workflow (Prompt → LLM → Tool Usage)

This example showcases integrating a custom tool to extend functionality (e.g., extracting keywords).


from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_openai import ChatOpenAI

# Tool for keyword extraction
class KeywordExtractor:
    def extract(self, text):
        # Mock keyword extraction logic: keep long words, stripping punctuation
        return [word.strip(".,") for word in text.split() if len(word.strip(".,")) > 6]

# Initialize the chat model
llm = ChatOpenAI(model="gpt-4")

# Create a chat prompt template
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    ("user", "{input}")
])

# Output Parser
output_parser = StrOutputParser()

# Tool instance
keyword_extractor = KeywordExtractor()

# LCEL Chain
pipeline = prompt | llm | output_parser

# Input text
input_text = {"input": "Explain the importance of LangChain in AI workflows."}

# Execute the chain
result = pipeline.invoke(input_text)

# Use the extracted result with the tool
keywords = keyword_extractor.extract(result)

# Print the output and extracted keywords
print("Generated Output:")
print(result)

print("\nExtracted Keywords:")
print(keywords)

Expected Output:

Generated Output:
LangChain is a crucial framework in AI workflows, offering tools for prompt engineering, chaining tasks, and integrating APIs efficiently.

Extracted Keywords:

['LangChain', 'crucial', 'framework', 'workflows', 'offering', 'engineering', 'chaining', 'integrating', 'efficiently']
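Since LCEL coerces plain callables into runnables, the tool can also be piped directly into the chain instead of being called afterwards; a sketch reusing the objects above:

from langchain_core.runnables import RunnableLambda

# Prompt -> LLM -> parser -> tool, composed as one runnable
full_pipeline = prompt | llm | output_parser | RunnableLambda(keyword_extractor.extract)

keywords = full_pipeline.invoke({"input": "Explain the importance of LangChain in AI workflows."})
print(keywords)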
