Direct LLM call

```python
result = llm.invoke("What is generative AI?")
```

Chain call

```python
chain = prompt | llm
response = chain.invoke({"input": "Can you tell me about Langsmith?"})
```

Chain with output parser

```python
chain = prompt | llm | output_parser
response = chain.invoke({"input": "Can you tell me about Langsmith?"})
```
Why use a chain?
We use a chain to connect multiple steps into one flow. For example:
- first step: format the prompt
- second step: send it to the LLM
- third step: parse the output

So instead of wiring everything up separately, a chain keeps the code clean, reusable, and easy to manage.
Simple sentence:
A chain joins prompt + model + parser into one pipeline.
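The pipeline idea can be sketched in plain Python. The `Pipe` class and the fake prompt/model/parser below are hypothetical stand-ins for LangChain's real runnables, shown only to illustrate how `|` passes each step's output to the next:

```python
class Pipe:
    """Minimal stand-in for a LangChain runnable: wraps a function
    and overloads | so steps compose left to right."""
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        # (self | other) first runs self, then feeds its output to other
        return Pipe(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

# Three fake steps: format the prompt, "call" a model, parse the output.
format_prompt = Pipe(lambda d: f"Question: {d['input']}")
fake_llm = Pipe(lambda p: {"content": f"Model answer to '{p}'"})
parse_text = Pipe(lambda msg: msg["content"])

chain = format_prompt | fake_llm | parse_text
print(chain.invoke({"input": "Can you tell me about Langsmith?"}))
```

The real LangChain classes work the same way conceptually: each step's output becomes the next step's input.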
Why use StrOutputParser?
When you call:
response = chain.invoke(...)
without a parser, the output is usually an AIMessage object.
That means response contains:
- content
- metadata
- extra model information

But often we only need the final text.
So StrOutputParser() converts the output into a plain string.
Simple sentence:
StrOutputParser is used to extract only the text from the model response.
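What the parser does can be approximated in a few lines. `FakeAIMessage` below is a hypothetical stand-in for LangChain's `AIMessage`, used only to show the conversion:

```python
class FakeAIMessage:
    """Hypothetical stand-in for LangChain's AIMessage: a text payload
    plus extra model metadata we usually don't need."""
    def __init__(self, content, metadata=None):
        self.content = content
        self.response_metadata = metadata or {}

def str_output_parser(message):
    # Mimics what StrOutputParser does: keep only the text content.
    return message.content

raw = FakeAIMessage("LangSmith is a tracing platform.", {"model": "gpt-4o"})
print(type(raw).__name__)      # FakeAIMessage
print(str_output_parser(raw))  # LangSmith is a tracing platform.
```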
Difference between both chains

Chain 1

```python
chain = prompt | llm
```

Output type: AIMessage

Chain 2

```python
chain = prompt | llm | output_parser
```

Output type: str
Why use the second chain?
Because it gives clean text output, which is easier to:
- print
- store in a database
- show in a UI
- pass to the next function
Important Questions for Revision
Subjective Questions
- What is the purpose of load_dotenv() in this code?
Answer:
load_dotenv() loads environment variables from the .env file into Python, so secrets like API keys can be used safely.
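Roughly what `load_dotenv()` does can be sketched with the standard library alone. `load_env_sketch` and `DEMO_API_KEY` are hypothetical names for illustration; the real python-dotenv package handles quoting, comments, and many more edge cases:

```python
import os
import tempfile

def load_env_sketch(path):
    # Read KEY=VALUE lines from a .env-style file into os.environ.
    # setdefault means an already-exported variable is not overwritten.
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())

# Demo with a throwaway file standing in for the project's .env
with tempfile.NamedTemporaryFile("w", suffix=".env", delete=False) as f:
    f.write("DEMO_API_KEY=sk-demo-key\n")
    env_path = f.name

load_env_sketch(env_path)
print(os.getenv("DEMO_API_KEY"))  # sk-demo-key
```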
- Why do we store OPENAI_API_KEY in environment variables?
Answer:
Because API keys are sensitive data. Keeping them in environment variables is safer than writing them directly in code.
- What is the use of ChatOpenAI(model="gpt-4o")?
Answer:
It creates an LLM object that allows us to communicate with the OpenAI chat model.
- What does llm.invoke("What is generative AI?") do?
Answer:
It sends the input question to the model and gets a response back.
- What is ChatPromptTemplate?
Answer:
ChatPromptTemplate helps us create structured prompts using system messages and user messages.
- Why do we use a system message in a prompt template?
Answer:
A system message sets the behavior of the model, like telling it to act as an AI engineer or teacher.
- Why do we use {input} inside the prompt template?
Answer:
{input} is a variable placeholder. It lets us send different user questions dynamically.
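The placeholder behaves much like Python string formatting. A rough sketch using plain `str.format` (not the real `ChatPromptTemplate` rendering):

```python
# {input} is filled in at invoke time, so one template serves many questions.
template = "You are a helpful AI engineer. Question: {input}"

for question in ["What is generative AI?", "Can you tell me about Langsmith?"]:
    print(template.format(input=question))
```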
- Why do we use prompt | llm?
Answer:
It creates a chain where the prompt is first prepared and then sent to the language model automatically.
- Why do we use StrOutputParser()?
Answer:
Because the LLM normally returns an AIMessage object, and StrOutputParser() converts that into plain text.
- What is the main benefit of using chains in LangChain?
Answer:
Chains reduce manual coding and make the workflow modular, readable, and reusable.
Objective Questions
- load_dotenv() is used to:
Answer: Load variables from .env file.
- os.getenv("OPENAI_API_KEY") is used to:
Answer: Read the API key from environment variables.
- The output type of prompt | llm is generally:
Answer: AIMessage
- The output type of prompt | llm | StrOutputParser() is generally:
Answer: string
- ChatPromptTemplate.from_messages() is used to:
Answer: Create prompt structure using system and human messages.
- LANGCHAIN_TRACING_V2="true" is used for:
Answer: Enable LangSmith tracing so LangChain runs can be tracked and monitored.
- The symbol | in LangChain is used to:
Answer: Combine components into a chain.
MCQ Questions
- Which function loads environment variables from a .env file?
A. getenv()
B. load_dotenv()
C. setenv()
D. read_env()
Answer: B. load_dotenv()
- What does StrOutputParser() return?
A. JSON object
B. Dictionary
C. Plain text string
D. AIMessage object
Answer: C. Plain text string
- Why is prompt | llm | output_parser better than only prompt | llm in many cases?
A. It increases API speed
B. It converts response into plain text
C. It removes API key
D. It changes model version
Answer: B. It converts response into plain text
- What is the role of PromptTemplate in this code?
A. To create database models
B. To define structured prompt text for the LLM
C. To validate JSON request
D. To start Flask server
Answer: B. To define structured prompt text for the LLM
- Which of the following is one of the prompt keys in the prompts dictionary?
A. student_details
B. recommendation
C. save_recipe
D. database_query
Answer: B. recommendation
- What is checked in this block?

```python
if llm is None:
    return jsonify({"error": "LLM is not available"}), 500
```
A. Whether query is empty
B. Whether prompt is missing
C. Whether language model instance is available
D. Whether JSON is valid
Answer: C. Whether language model instance is available
- What does status code 500 mean here?
A. Data created successfully
B. Redirect response
C. Internal server error
D. Unauthorized user
Answer: C. Internal server error
- What is created by this line?

```python
chains = {key: prompts[key] | llm for key in selected_chains}
```
A. A database connection
B. A dictionary of runnable chains for selected prompts
C. A list of user queries
D. A JSON response
Answer: B. A dictionary of runnable chains for selected prompts
- What does the | operator do in this line?

```python
prompts[key] | llm
```
A. Performs bitwise OR
B. Joins prompt template with LLM into a pipeline/chain
C. Compares two values
D. Converts string to JSON
Answer: B. Joins prompt template with LLM into a pipeline/chain
- Why is RunnableParallel(chains) used?
A. To execute all selected chains one after another very slowly
B. To run multiple selected chains in parallel
C. To save results in database
D. To validate prompt variables
Answer: B. To run multiple selected chains in parallel
- What does this line do?

```python
results = parallel_chain.invoke({"query": query})
```
A. Deletes the query
B. Executes the chains using the given query input
C. Stops the LLM
D. Creates a new route
Answer: B. Executes the chains using the given query input
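The chain-building comprehension and the parallel invoke from these questions can be sketched with plain callables. Every name below (`prompts`, `run_parallel`, the lambdas) is a hypothetical stand-in, not the real LangChain classes:

```python
# Fake "prompt | llm" chains: each is just a function from query text to answer text.
prompts = {
    "recommendation": lambda q: f"Recommendation for: {q}",
    "summary": lambda q: f"Summary of: {q}",
}
selected_chains = ["recommendation", "summary"]

# Same shape as {key: prompts[key] | llm for key in selected_chains}
chains = {key: prompts[key] for key in selected_chains}

def run_parallel(chains, inputs):
    # Mimics RunnableParallel semantics: every chain receives the same
    # input, and results come back keyed by chain name. (The real class
    # also runs the branches concurrently.)
    return {key: chain(inputs["query"]) for key, chain in chains.items()}

results = run_parallel(chains, {"query": "healthy breakfast ideas"})
for key, value in results.items():
    print(key, "->", value)
```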
- What is the main purpose of this loop?

```python
for key, value in results.items():
```
A. To create database tables
B. To process each chain output one by one
C. To remove invalid prompts
D. To sort the JSON request
Answer: B. To process each chain output one by one
- Why is this line used?

```python
content = value.content if hasattr(value, "content") else str(value)
```
A. To check whether output has a content attribute and extract text safely
B. To delete content from result
C. To convert list into dictionary
D. To validate selected chains
Answer: A. To check whether output has a content attribute and extract text safely
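That line is a defensive-extraction pattern worth remembering. A small illustration, where `Msg` is a hypothetical message class:

```python
class Msg:
    """Hypothetical message-like result with a .content attribute."""
    def __init__(self, content):
        self.content = content

def extract_text(value):
    # Use .content when the result is a message-like object,
    # otherwise fall back to a plain string conversion.
    return value.content if hasattr(value, "content") else str(value)

print(extract_text(Msg("parsed answer")))  # message object -> its text
print(extract_text("already a string"))    # plain value -> unchanged text
```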
Extra Important Interview Questions
Here are a few more commonly asked short questions to revise:
- Why not call the model directly every time?
Because direct model calls are fine for simple tasks, but chains are better for structured and reusable workflows.
- Why use prompt templates instead of raw strings?
Because prompt templates are dynamic, cleaner, and easier to reuse.
- What happens if we do not use output parser?
We get a model response object like AIMessage, not just plain text.
- When is StrOutputParser very useful?
It is useful when you want only final text for display, saving, or passing to another function.
- Why is a chain more scalable?
Because later you can add memory, retrievers, tools, parsers, and multiple steps easily.
One-line easy revision
- LLM gives the response
- Prompt template formats the input
- Chain connects the steps
- StrOutputParser extracts plain text