Debug School

rakesh kumar

How to Build a Simple AI Question Answer App with LangChain

Direct LLM call

```python
result = llm.invoke("What is generative AI?")
```

Chain call

```python
chain = prompt | llm
response = chain.invoke({"input": "Can you tell me about Langsmith?"})
```

Chain with output parser

```python
chain = prompt | llm | output_parser
response = chain.invoke({"input": "Can you tell me about Langsmith?"})
```

Why use a chain?

We use a chain to connect multiple steps in one flow.

For example:

first step = format the prompt

second step = send it to the LLM

third step = parse the output

So instead of writing every step separately, a chain keeps the code clean, reusable, and easy to manage.

Simple sentence:

A chain joins prompt + model + parser into one pipeline.
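The three steps above can be sketched with plain-Python stand-ins. The `Runnable` class, the fake LLM, and the prompt text below are simplified illustrations of how the pipeline composes, not LangChain's real implementation:

```python
# Plain-Python sketch of the prompt -> llm -> parser flow.
# Runnable, the fake llm, and the prompt text are illustrative
# stand-ins, not LangChain's real classes.

class Runnable:
    """Tiny stand-in that supports the | operator like LCEL runnables."""
    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # a | b -> a new Runnable that runs a first, then b
        return Runnable(lambda v: other.invoke(self.invoke(v)))

# first step = format the prompt
prompt = Runnable(lambda d: f"You are an AI engineer. Question: {d['input']}")
# second step = send it to a (fake) LLM
llm = Runnable(lambda text: {"content": f"Answer to: {text}"})
# third step = parse the output down to plain text
output_parser = Runnable(lambda msg: msg["content"])

chain = prompt | llm | output_parser
print(chain.invoke({"input": "Can you tell me about Langsmith?"}))
```

Calling `chain.invoke(...)` runs all three steps in order, which is exactly the convenience the chain provides.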

Why use StrOutputParser?

When you call:

```python
response = chain.invoke(...)
```

without a parser, the output is usually an AIMessage object.

That means response contains:

content

metadata

extra model information

But many times we only need the final text.

So StrOutputParser() converts the output into a plain string.

Simple sentence:

StrOutputParser extracts only the text from the model response.
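A toy illustration of what the parser does. The `AIMessage` dataclass and `str_output_parser` function below are simplified stand-ins, not the real LangChain classes:

```python
from dataclasses import dataclass, field

# Simplified stand-in for the AIMessage object a chat model returns.
@dataclass
class AIMessage:
    content: str
    metadata: dict = field(default_factory=dict)

response = AIMessage(
    content="LangSmith is a platform for tracing and evaluating LLM apps.",
    metadata={"model": "gpt-4o"},
)

# Without a parser you must reach into the object yourself:
text = response.content

# StrOutputParser does exactly this extraction, so the chain
# returns a plain str instead of the whole message object.
def str_output_parser(message) -> str:
    return message.content

print(str_output_parser(response))
```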

Difference between both chains

Chain 1

```python
chain = prompt | llm
```

Output type: AIMessage

Chain 2

```python
chain = prompt | llm | output_parser
```

Output type: str

Why use the second chain?

Because it gives clean text output, which is easier to:

print

store in database

show in UI

pass to next function

Important Questions Based on the Code Above

Subjective Questions

  1. What is the purpose of load_dotenv() in this code?

Answer:
load_dotenv() loads environment variables from the .env file into Python, so secrets like API keys can be used safely.

  2. Why do we store OPENAI_API_KEY in environment variables?

Answer:
Because API keys are sensitive data. Keeping them in environment variables is safer than writing them directly in code.

  3. What is the use of ChatOpenAI(model="gpt-4o")?

Answer:
It creates an LLM object that allows us to communicate with the OpenAI chat model.

  4. What does llm.invoke("What is generative AI?") do?

Answer:
It sends the input question to the model and gets a response back.

  5. What is ChatPromptTemplate?

Answer:
ChatPromptTemplate helps us create structured prompts using system messages and user messages.

  6. Why do we use a system message in a prompt template?

Answer:
A system message sets the behavior of the model, like telling it to act as an AI engineer or teacher.

  7. Why do we use {input} inside the prompt template?

Answer:
{input} is a variable placeholder. It lets us send different user questions dynamically.

  8. Why do we use prompt | llm?

Answer:
It creates a chain where the prompt is first prepared and then sent to the language model automatically.

  9. Why do we use StrOutputParser()?

Answer:
Because the LLM normally returns an AIMessage object, and StrOutputParser() converts that into plain text.

  10. What is the main benefit of using chains in LangChain?

Answer:
Chains reduce manual coding and make the workflow modular, readable, and reusable.

Objective Questions

  1. load_dotenv() is used to:

Answer: Load variables from .env file.

  2. os.getenv("OPENAI_API_KEY") is used to:

Answer: Read the API key from environment variables.

  3. The output type of prompt | llm is generally:

Answer: AIMessage

  4. The output type of prompt | llm | StrOutputParser() is generally:

Answer: string

  5. ChatPromptTemplate.from_messages() is used to:

Answer: Create prompt structure using system and human messages.

  6. LANGCHAIN_TRACING_V2="true" is used for:

Answer: Tracking and monitoring LangChain execution.

  7. The symbol | in LangChain is used to:

Answer: Combine components into a chain.

MCQ Questions

  1. Which function loads environment variables from a .env file?

A. getenv()
B. load_dotenv()
C. setenv()
D. read_env()

Answer: B. load_dotenv()

  2. What does StrOutputParser() return?

A. JSON object
B. Dictionary
C. Plain text string
D. AIMessage object

Answer: C. Plain text string

  3. Why is prompt | llm | output_parser better than only prompt | llm in many cases?

A. It increases API speed
B. It converts response into plain text
C. It removes API key
D. It changes model version

Answer: B. It converts response into plain text

  4. What is the role of PromptTemplate in this code?

A. To create database models
B. To define structured prompt text for the LLM
C. To validate JSON request
D. To start Flask server

Answer: B. To define structured prompt text for the LLM

  5. Which of the following is one of the prompt keys in the prompts dictionary?

A. student_details
B. recommendation
C. save_recipe
D. database_query

Answer: B. recommendation

  6. What is checked in this block?

```python
if llm is None:
    return jsonify({"error": "LLM is not available"}), 500
```

A. Whether query is empty
B. Whether prompt is missing
C. Whether language model instance is available
D. Whether JSON is valid

Answer: C. Whether language model instance is available

  7. What does status code 500 mean here?

A. Data created successfully
B. Redirect response
C. Internal server error
D. Unauthorized user

Answer: C. Internal server error

  8. What is created by this line?

```python
chains = {key: prompts[key] | llm for key in selected_chains}
```

A. A database connection
B. A dictionary of runnable chains for selected prompts
C. A list of user queries
D. A JSON response

Answer: B. A dictionary of runnable chains for selected prompts

  9. What does the | operator do in this line?

```python
prompts[key] | llm
```

A. Performs bitwise OR
B. Joins prompt template with LLM into a pipeline/chain
C. Compares two values
D. Converts string to JSON

Answer: B. Joins prompt template with LLM into a pipeline/chain
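Under the hood, Python evaluates `a | b` by calling `a.__or__(b)`, and LangChain's runnables override `__or__` to build a pipeline instead of doing a bitwise OR. A toy version of that idea (the `Step` class is invented for illustration):

```python
# Toy illustration of how | can build a pipeline: Python turns
# `a | b` into a.__or__(b), and Step overrides __or__ to compose.

class Step:
    def __init__(self, fn):
        self.fn = fn

    def __or__(self, other):
        return Step(lambda x: other.fn(self.fn(x)))

    def invoke(self, x):
        return self.fn(x)

double = Step(lambda x: x * 2)
add_one = Step(lambda x: x + 1)

pipeline = double | add_one   # runs double first, then add_one
print(pipeline.invoke(10))    # → 21
```

Order matters: `double | add_one` and `add_one | double` are different pipelines.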

  10. Why is RunnableParallel(chains) used?

A. To execute all selected chains one after another very slowly
B. To run multiple selected chains in parallel
C. To save results in database
D. To validate prompt variables

Answer: B. To run multiple selected chains in parallel

  11. What does this line do?

```python
results = parallel_chain.invoke({"query": query})
```

A. Deletes the query
B. Executes the chains using the given query input
C. Stops the LLM
D. Creates a new route

Answer: B. Executes the chains using the given query input

  12. What is the main purpose of this loop?

```python
for key, value in results.items():
```

A. To create database tables
B. To process each chain output one by one
C. To remove invalid prompts
D. To sort the JSON request

Answer: B. To process each chain output one by one

  13. Why is this line used?

```python
content = value.content if hasattr(value, "content") else str(value)
```

A. To check whether output has a content attribute and extract text safely
B. To delete content from result
C. To convert list into dictionary
D. To validate selected chains

Answer: A. To check whether output has a content attribute and extract text safely
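Questions 8–13 describe one pattern: build a dict of chains, run them all on the same query, then normalize every result to text. A stdlib-only sketch of that pattern (the prompt texts, `FakeMessage` class, and `fake_llm` function are invented stand-ins, not LangChain's API):

```python
# Stdlib-only sketch of the RunnableParallel pattern described above.
# FakeMessage, fake_llm, and the prompt texts are invented stand-ins.

class FakeMessage:
    def __init__(self, content):
        self.content = content

def fake_llm(prompt_text):
    return FakeMessage(f"[answer to: {prompt_text}]")

prompts = {
    "recommendation": "Recommend resources about {query}",
    "summary": "Summarize {query} in one line",
}
selected_chains = ["recommendation", "summary"]

# like: chains = {key: prompts[key] | llm for key in selected_chains}
chains = {
    key: (lambda q, template=prompts[key]: fake_llm(template.format(query=q)))
    for key in selected_chains
}

# like: results = parallel_chain.invoke({"query": query})
query = "LangChain"
results = {key: chain(query) for key, chain in chains.items()}

# extract text safely, falling back to str() when there is no .content
for key, value in results.items():
    content = value.content if hasattr(value, "content") else str(value)
    print(key, "->", content)
```

The real `RunnableParallel` may run the chains concurrently; this sketch only shows the input/output shape: one query in, one result per selected chain out.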

Extra Important Interview Questions

Here are some more commonly asked short questions you can revise:

  1. Why not call the model directly every time?

Because direct model calls are fine for simple tasks, but chains are better for structured and reusable workflows.

  2. Why use prompt templates instead of raw strings?

Because prompt templates are dynamic, cleaner, and easier to reuse.

  3. What happens if we do not use an output parser?

We get a model response object like AIMessage, not just plain text.

  4. When is StrOutputParser very useful?

It is useful when you want only final text for display, saving, or passing to another function.

  5. Why is a chain more scalable?

Because later you can add memory, retrievers, tools, parsers, and multiple steps easily.

One-line easy revision

LLM gives response

Prompt template formats input

Chain connects steps

StrOutputParser extracts plain text
