Integrating Laravel with Python to Leverage OpenAI's LangChain for Dynamic Content Search

To implement a flow where a sidebar search form in Laravel sends text to a Python script, which calls the LangChain OpenAI LLM with that search text and returns the result to Laravel for display on the dashboard, follow these steps:

Step 1: Create a Laravel Form with a Search Field
First, add a search form in a Laravel Blade file that sends the input to a specific route:

{{-- resources/views/manager/sidebar.blade.php --}}

<form action="{{ route('search') }}" method="POST">
    @csrf
    <input type="text" name="searchText" placeholder="Search...">
    <button type="submit">Search</button>
</form>

Step 2: Setup a Route and Controller
Define a route in routes/web.php that handles the form submission:

use App\Http\Controllers\SearchController;

Route::post('/search', [SearchController::class, 'search'])->name('search');

Create the SearchController and its search method:

php artisan make:controller SearchController

Implement the search method:

namespace App\Http\Controllers;

use Illuminate\Http\Request;

class SearchController extends Controller
{
    public function search(Request $request)
    {
        $searchText = $request->input('searchText');

        // Call the Python script and pass the search text
        // (escapeshellarg() quotes the user input; avoid wrapping the whole
        // command in escapeshellcmd(), which can mangle that quoting)
        $command = "python /path/to/script.py " . escapeshellarg($searchText);
        $output = shell_exec($command);

        // Convert JSON response from Python to an array
        $results = json_decode($output, true);

        // Return to a view with the results
        return view('manager.results', compact('results'));
    }
}
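Note that json_decode() expects the script's stdout to be valid JSON. With the Step 3 script below printing a one-element list, $output would look something like this (the answer text is illustrative):

["Laravel is a PHP framework for building web applications."]

json_decode($output, true) then yields a PHP array that the Blade loop in Step 5 can iterate.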

Step 3: Create the Python Script
Create a Python script that calls the LangChain OpenAI LLM using the search text:

import sys
import json
from langchain.llms import OpenAI

def search(query):
    # The constructor argument is openai_api_key (not api_key)
    llm = OpenAI(openai_api_key="your_openai_api_key")
    # LangChain LLMs are called with .invoke(); there is no .query() method
    response = llm.invoke(query)
    return response

if __name__ == "__main__":
    query = sys.argv[1]
    result = search(query)
    # Wrap the answer in a list so Laravel's json_decode() yields an array
    print(json.dumps([result]))  # Print the result as JSON for PHP to read
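You can test the script on its own before wiring it into Laravel (the question is just an example):

python /path/to/script.py "What is Laravel?"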

Step 4: Update Your Laravel Environment
Ensure Python is installed on the server where Laravel is hosted, and configure the required environment variables and dependencies, including the LangChain and OpenAI libraries (pip install langchain openai).
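Rather than hardcoding the API key in the script, a safer pattern is to read it from an environment variable — a minimal sketch, assuming OPENAI_API_KEY is exported on the server:

import os
from langchain.llms import OpenAI

# Assumes OPENAI_API_KEY is set in the environment (e.g., via a .env file or the shell)
llm = OpenAI(openai_api_key=os.environ["OPENAI_API_KEY"])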

Step 5: Display Results in Laravel View
Create a view resources/views/manager/results.blade.php to display the results:

@extends('layouts.app')

@section('content')
<div class="container">
    <h1>Search Results</h1>
    @if(!empty($results))
        <ul>
            @foreach ($results as $result)
                <li>{{ $result }}</li>
            @endforeach
        </ul>
    @else
        <p>No results found.</p>
    @endif
</div>
@endsection

How to Improve Search Accuracy
One approach is to add context to the prompt, tune the generation parameters, and adjust them based on user feedback:


import sys
import openai

# Set the API key for the legacy (pre-1.0) openai SDK used below
openai.api_key = 'your_openai_api_key'

def enhanced_query(query, model="text-davinci-003", temperature=0.5, max_tokens=100):
    """
    Perform an enhanced query to OpenAI's API using specified parameters.
    """
    try:
        # Advanced prompt engineering: adding context or instructions
        prompt = f"Please provide a detailed, accurate answer to the following question: {query}"

        # Query the model with customized parameters
        response = openai.Completion.create(
            model=model,
            prompt=prompt,
            temperature=temperature,
            max_tokens=max_tokens,
            top_p=1.0,
            frequency_penalty=0.0,
            presence_penalty=0.0
        )

        return response['choices'][0]['text'].strip()
    except Exception as e:
        return str(e)

def get_feedback():
    """
    Simple function to collect feedback, simulate user rating.
    """
    print("Please rate the response from 1 (poor) to 5 (excellent):")
    rating = input()
    return int(rating)

def main():
    query = sys.argv[1] if len(sys.argv) > 1 else input("Enter your query: ")
    result = enhanced_query(query)
    print("AI Response:", result)

    # Collect feedback
    rating = get_feedback()

    # Simulate adjusting parameters based on feedback
    if rating < 3:
        print("Adjusting parameters for better accuracy...")
        result = enhanced_query(query, temperature=0.3, max_tokens=150)
        print("Adjusted AI Response:", result)
        get_feedback()

if __name__ == "__main__":
    main()

Or, using LangChain's OpenAI wrapper:

import sys
import json
from langchain.llms import OpenAI

def search(query, model="text-davinci-003", temperature=0.65, max_tokens=150):
    # Initialize the LLM; LangChain's OpenAI wrapper takes the generation
    # parameters at construction time rather than per call
    llm = OpenAI(
        openai_api_key="your_openai_api_key",
        model_name=model,
        temperature=temperature,
        max_tokens=max_tokens,
        n=1,  # Number of completions to generate
        # Extra options such as logprobs can be passed via model_kwargs
    )

    # Advanced prompt engineering: depending on the type of query, you might modify the prompt
    enhanced_prompt = f"Please answer the following question with a detailed explanation: {query}"

    # Perform the query and capture the response
    try:
        # .invoke() returns the completion text directly; stop sequences are
        # passed per call (here, stop at the first full stop or new line)
        answer = llm.invoke(enhanced_prompt, stop=["\n", "."]).strip()
        return answer
    except Exception as e:
        # More detailed error handling
        return {"error": str(e), "message": "Failed to process the query"}

if __name__ == "__main__":
    query = sys.argv[1] if len(sys.argv) > 1 else input("Please enter your query: ")
    result = search(query)
    print(json.dumps(result, indent=2))  # Output the result as formatted JSON

Another Example
The same prompt-chain pattern also works with a local model served by Ollama, here exposed through a simple Streamlit UI:

from langchain_community.llms import Ollama
import streamlit as st
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
import os
from dotenv import load_dotenv

load_dotenv()

## Langsmith Tracking
os.environ["LANGCHAIN_API_KEY"]=os.getenv("LANGCHAIN_API_KEY")
os.environ["LANGCHAIN_TRACING_V2"]="true"
os.environ["LANGCHAIN_PROJECT"]=os.getenv("LANGCHAIN_PROJECT")

## Prompt Template
prompt=ChatPromptTemplate.from_messages(
    [
        ("system","You are a helpful assistant. Please respond to the question asked"),
        ("user","Question:{question}")
    ]
)

## streamlit framework
st.title("Langchain Demo With Gemma Model")
input_text=st.text_input("What question do you have in mind?")


## Ollama Gemma model
llm=Ollama(model="gemma:2b")
output_parser=StrOutputParser()
chain=prompt|llm|output_parser

if input_text:
    st.write(chain.invoke({"question":input_text}))

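Run the app with the Streamlit CLI rather than plain python:

streamlit run app.py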

Implement in Flask
The same chain can be served from a Flask app instead of Streamlit:

from flask import Flask, request, jsonify, render_template
from langchain_community.llms import Ollama
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Langsmith Tracking
os.environ["LANGCHAIN_API_KEY"] = os.getenv("LANGCHAIN_API_KEY")
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = os.getenv("LANGCHAIN_PROJECT")

# Initialize Flask app
app = Flask(__name__)

# Prompt Template
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant. Please respond to the question asked"),
        ("user", "Question:{question}")
    ]
)

# Ollama Gemma model
llm = Ollama(model="gemma:2b")
output_parser = StrOutputParser()
chain = prompt | llm | output_parser

@app.route("/", methods=["GET", "POST"])
def index():
    response = None
    if request.method == "POST":
        # Get the input question from the form
        input_text = request.form.get("question")
        if input_text:
            # Process the input through the chain
            response = chain.invoke({"question": input_text})

    # Render the HTML template and pass the response
    return render_template("index.html", response=response)

if __name__ == "__main__":
    app.run(debug=True)

HTML Template (templates/index.html)
Create an index.html file inside a templates folder with the following content:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Langchain Demo With Gemma Model</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            margin: 40px;
            text-align: center;
        }
        form {
            margin-bottom: 20px;
        }
        input[type="text"] {
            padding: 10px;
            width: 80%;
            margin-bottom: 10px;
        }
        button {
            padding: 10px 20px;
            background-color: #4CAF50;
            color: white;
            border: none;
            cursor: pointer;
        }
        button:hover {
            background-color: #45a049;
        }
        .response {
            margin-top: 20px;
            font-weight: bold;
        }
    </style>
</head>
<body>
    <h1>Langchain Demo With Gemma Model</h1>
    <form method="POST">
        <input type="text" name="question" placeholder="What question do you have in mind?" required>
        <br>
        <button type="submit">Submit</button>
    </form>
    {% if response %}
    <div class="response">
        <h2>Response:</h2>
        <p>{{ response }}</p>
    </div>
    {% endif %}
</body>
</html>

Run the Application
Save the Python file (e.g., app.py) and the HTML template in the appropriate folder structure.
Install necessary dependencies:

pip install flask langchain langchain-community python-dotenv

Run the Flask application:

python app.py
Open your browser and navigate to http://127.0.0.1:5000.

Integrate with Laravel

Set Up Python API for LangChain
We'll create a Python Flask API to interact with LangChain.

Python Code (langchain_api.py):

from flask import Flask, request, jsonify
from langchain_community.llms import Ollama
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
import os
from dotenv import load_dotenv

# Load environment variables
load_dotenv()

# Langsmith Tracking
os.environ["LANGCHAIN_API_KEY"] = os.getenv("LANGCHAIN_API_KEY")
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_PROJECT"] = os.getenv("LANGCHAIN_PROJECT")

# Initialize Flask app
app = Flask(__name__)

# Initialize LangChain components
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are a helpful assistant. Please respond to the question asked."),
        ("user", "Question:{question}")
    ]
)

llm = Ollama(model="gemma:2b")
output_parser = StrOutputParser()
chain = prompt | llm | output_parser

@app.route("/api/langchain", methods=["POST"])
def langchain():
    data = request.json
    question = data.get("question")

    if not question:
        return jsonify({"error": "Question is required"}), 400

    # Process the question through LangChain
    response = chain.invoke({"question": question})
    return jsonify({"response": response})

if __name__ == "__main__":
    app.run(debug=True, port=5000)

Explanation:

The Flask app listens on /api/langchain.
It accepts a POST request with a question payload.
It processes the question through LangChain and returns the response as JSON.
Run the API:

Install dependencies:

pip install flask langchain langchain-community python-dotenv
python langchain_api.py
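Once the server is up, you can verify the endpoint with a quick curl request (the question is just an example):

curl -X POST http://127.0.0.1:5000/api/langchain \
     -H "Content-Type: application/json" \
     -d '{"question": "What is LangChain?"}'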
Set Up the Laravel Frontend
In Laravel, you'll create a form to interact with the Python backend.

Laravel Controller (LangChainController.php):

<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Http;

class LangChainController extends Controller
{
    public function index()
    {
        return view('langchain');
    }

    public function processQuestion(Request $request)
    {
        $request->validate([
            'question' => 'required|string|max:255',
        ]);

        $question = $request->input('question');

        // Send the question to the Python API
        $response = Http::post('http://127.0.0.1:5000/api/langchain', [
            'question' => $question,
        ]);

        if ($response->failed()) {
            return back()->with('error', 'Failed to get a response from the API.');
        }

        return back()->with('response', $response->json()['response']);
    }
}

Laravel Route (web.php):

use App\Http\Controllers\LangChainController;

Route::get('/langchain', [LangChainController::class, 'index']);
Route::post('/langchain', [LangChainController::class, 'processQuestion']);

Laravel Blade View (resources/views/langchain.blade.php):

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>LangChain Integration</title>
    <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/bootstrap@5.3.0/dist/css/bootstrap.min.css">
</head>
<body class="p-5">
    <h1 class="mb-4">LangChain Integration with Laravel</h1>

    @if (session('error'))
        <div class="alert alert-danger">{{ session('error') }}</div>
    @endif

    @if (session('response'))
        <div class="alert alert-success">
            <h4>Response:</h4>
            <p>{{ session('response') }}</p>
        </div>
    @endif

    <form action="/langchain" method="POST">
        @csrf
        <div class="mb-3">
            <label for="question" class="form-label">Enter Your Question</label>
            <input type="text" id="question" name="question" class="form-control" required>
        </div>
        <button type="submit" class="btn btn-primary">Submit</button>
    </form>
</body>
</html>

Explanation:
A form accepts user input (question) and submits it to the Laravel backend.
The Laravel controller sends the question to the Python API and fetches the response.
The Blade view displays the API response.
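For reference, the JSON the controller reads via $response->json()['response'] has this shape (the answer text is illustrative):

{
    "response": "LangChain is a framework for developing applications powered by language models."
}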

Integration Steps
Start the Python API:

Run the Flask server with:

python langchain_api.py
Ensure it’s accessible at http://127.0.0.1:5000.

Start the Laravel Server:

Run the Laravel server with:

php artisan serve
Access Laravel at http://127.0.0.1:8000/langchain.

Test Integration:

Open the Laravel URL.
Submit a question through the form.
View the response from the Python API displayed on the page.
