Using OpenAI GPT-3 and Unsplash API
Using Hugging Face Transformers and Image Search
Using Rasa for Conversational AI
Using Text Summarization and Image Generation
Flask API for Dynamic Content Generation
Code to Embed in Laravel
Using OpenAI GPT-3 and Unsplash API
Prerequisites:
Install the required libraries:
pip install openai requests pillow
Code:
import openai
import requests
from PIL import Image
from io import BytesIO

# Set your OpenAI API key
openai.api_key = 'YOUR_OPENAI_API_KEY'

def get_question_answer_image(topic):
    # Generate a question and answer using OpenAI GPT
    prompt = f"Generate a question and answer about {topic}."
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}]
    )
    # Split the reply into question and answer, skipping any blank lines
    qa = [line for line in response.choices[0].message.content.split('\n') if line.strip()]
    question = qa[0]
    answer = qa[1] if len(qa) > 1 else ""

    # Get a relevant image via the Unsplash Source redirect
    image_url = f"https://source.unsplash.com/featured/?{topic.replace(' ', '+')}"
    image_response = requests.get(image_url)
    if image_response.status_code == 200:
        img = Image.open(BytesIO(image_response.content))
        img.show()

    return question, answer, image_url
# Example usage
topic = "Artificial Intelligence"
question, answer, image_url = get_question_answer_image(topic)
print(f"Question: {question}\nAnswer: {answer}\nImage URL: {image_url}")
Using Hugging Face Transformers and Image Search
Prerequisites:
Install the required libraries:
pip install transformers requests pillow
Code:
import requests
from transformers import pipeline
from PIL import Image
from io import BytesIO

# Load the QA pipeline
nlp = pipeline("question-answering")

def get_question_answer_image(topic):
    # Generate a question and answer
    question = f"What can you tell me about {topic}?"
    context = f"{topic} is a branch of computer science that deals with the simulation of intelligent behavior in computers."
    result = nlp(question=question, context=context)

    # Get a relevant image via the Unsplash Source redirect
    image_url = f"https://source.unsplash.com/featured/?{topic.replace(' ', '+')}"
    image_response = requests.get(image_url)
    if image_response.status_code == 200:
        img = Image.open(BytesIO(image_response.content))
        img.show()

    # The QA pipeline result only contains the answer (plus score and
    # offsets), so return the question we constructed ourselves
    return question, result['answer'], image_url
# Example usage
topic = "Machine Learning"
question, answer, image_url = get_question_answer_image(topic)
print(f"Question: {question}\nAnswer: {answer}\nImage URL: {image_url}")
Using Rasa for Conversational AI
Prerequisites:
Install Rasa:
pip install rasa
Set up Rasa:
Initialize Rasa project:
rasa init
Modify the nlu.yml to include intents for your topic questions; a minimal example follows.
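For instance, an intent carrying a topic entity might look like this in nlu.yml (the intent name ask_topic and the entity topic are placeholders; adapt them to your domain):

version: "3.1"
nlu:
- intent: ask_topic
  examples: |
    - tell me about [machine learning](topic)
    - what is [deep learning](topic)?
    - explain [artificial intelligence](topic)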
Code: In actions.py, create a custom action that fetches questions, answers, and images, similar to the previous methods. The helper below does the fetching; a Rasa action wrapper is sketched after it.
import requests
from PIL import Image
from io import BytesIO

def get_question_answer_image(topic):
    question = f"Tell me something about {topic}."
    answer = f"{topic} is a fascinating area of study."

    # Get a relevant image via the Unsplash Source redirect
    image_url = f"https://source.unsplash.com/featured/?{topic.replace(' ', '+')}"
    image_response = requests.get(image_url)
    if image_response.status_code == 200:
        img = Image.open(BytesIO(image_response.content))
        img.show()

    return question, answer, image_url
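To expose this helper to Rasa, wrap it in a custom action class. Below is a minimal sketch assuming the rasa-sdk package is installed; the action name action_topic_info and the slot topic are hypothetical and must match your domain.yml and NLU data:

from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher

class ActionTopicInfo(Action):
    def name(self) -> Text:
        # Must match the action name registered in domain.yml
        return "action_topic_info"

    def run(self, dispatcher: CollectingDispatcher,
            tracker: Tracker,
            domain: Dict[Text, Any]) -> List[Dict[Text, Any]]:
        # "topic" is a hypothetical slot filled from the NLU entity
        topic = tracker.get_slot("topic") or "Artificial Intelligence"
        question, answer, image_url = get_question_answer_image(topic)
        dispatcher.utter_message(text=f"{question}\n{answer}")
        dispatcher.utter_message(image=image_url)
        return []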
Using Text Summarization and Image Generation
Prerequisites:
Install the required libraries:
pip install transformers requests pillow
Code:
import requests
from transformers import pipeline
from PIL import Image
from io import BytesIO

# Load summarization model
summarizer = pipeline("summarization")

def get_question_answer_image(topic):
    # Summarize information about the topic
    text = f"{topic} is a field of study that focuses on creating systems capable of performing tasks that typically require human intelligence."
    summary = summarizer(text, max_length=50, min_length=25, do_sample=False)
    question = f"What is {topic}?"
    answer = summary[0]['summary_text']

    # Get a relevant image
    image_url = f"https://source.unsplash.com/featured/?{topic.replace(' ', '+')}"
    image_response = requests.get(image_url)
    if image_response.status_code == 200:
        img = Image.open(BytesIO(image_response.content))
        img.show()

    return question, answer, image_url
# Example usage
topic = "Deep Learning"
question, answer, image_url = get_question_answer_image(topic)
print(f"Question: {question}\nAnswer: {answer}\nImage URL: {image_url}")
Flask API for Dynamic Content Generation
Prerequisites:
Install the required libraries:
pip install Flask requests pillow
Code: Create a simple Flask app to serve your application.
from flask import Flask, request, jsonify
import requests
from PIL import Image
from io import BytesIO

app = Flask(__name__)

@app.route('/api/get_info', methods=['GET'])
def get_info():
    topic = request.args.get('topic')
    if not topic:
        return jsonify({"error": "Missing 'topic' query parameter."}), 400

    question = f"How does {topic} work?"
    answer = f"{topic} is crucial in various applications today."

    # Get a relevant image (img.show() opens a preview window on the
    # machine running the server, so it is only useful for local debugging)
    image_url = f"https://source.unsplash.com/featured/?{topic.replace(' ', '+')}"
    image_response = requests.get(image_url)
    if image_response.status_code == 200:
        img = Image.open(BytesIO(image_response.content))
        img.show()

    return jsonify({
        "question": question,
        "answer": answer,
        "image_url": image_url
    })

if __name__ == '__main__':
    app.run(debug=True)
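Once the server is running, you can query the endpoint from another process. A minimal client sketch, assuming the default Flask development address 127.0.0.1:5000:

import requests

# Query the local Flask API for a topic (assumes the server above is running)
resp = requests.get(
    "http://127.0.0.1:5000/api/get_info",
    params={"topic": "Neural Networks"},
)
print(resp.json())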
Code to Embed in Laravel
Step 1: Install Required Libraries
Make sure you have the necessary Python libraries installed: transformers and torch for the models, requests for calling an image generation API, and Pillow for any image processing you may add.
You can install these using pip:
pip install transformers torch Pillow requests
Step 2: Update the Python Script
Here’s an updated version of your transformer_script.py, which now includes text generation, question answering, and image generation via an external API (such as DALL-E).
File: transformer_script.py
import sys

import requests
from transformers import pipeline

def generate_text(input_text):
    """Generate text based on the input using Hugging Face Transformers."""
    generator = pipeline('text-generation', model='gpt2')
    response = generator(input_text, max_length=50, num_return_sequences=1)
    return response[0]['generated_text']

def answer_question(question):
    """Answer a question using a question-answering model."""
    qa_pipeline = pipeline('question-answering', model='distilbert-base-uncased-distilled-squad')
    context = "This is a sample context for answering questions."  # Provide a relevant context here
    result = qa_pipeline(question=question, context=context)
    return result['answer']

def generate_image(prompt):
    """Generate an image based on the prompt using an image generation API."""
    # Here we assume you have access to an API like DALL-E or similar
    url = "https://api.openai.com/v1/images/generations"  # Replace with actual API endpoint
    headers = {
        "Authorization": "Bearer YOUR_API_KEY",  # Replace with your actual API key
        "Content-Type": "application/json"
    }
    data = {
        "prompt": prompt,
        "n": 1,
        "size": "1024x1024"
    }
    response = requests.post(url, headers=headers, json=data)
    if response.status_code == 200:
        image_url = response.json().get('data')[0].get('url')
        return image_url
    else:
        return "Image generation failed."

def main():
    if len(sys.argv) < 3:
        print("Usage: python transformer_script.py '<mode>' '<input>'")
        print("Mode: 'text', 'question', 'image'")
        sys.exit(1)

    mode = sys.argv[1]
    input_value = sys.argv[2]

    if mode == 'text':
        result = generate_text(input_value)
    elif mode == 'question':
        result = answer_question(input_value)
    elif mode == 'image':
        result = generate_image(input_value)
    else:
        print("Invalid mode. Use 'text', 'question', or 'image'.")
        sys.exit(1)

    print(result)

if __name__ == "__main__":
    main()
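You can test each mode from the command line before wiring the script into Laravel; the prompts below are purely illustrative:
python3 transformer_script.py text "Once upon a time"
python3 transformer_script.py question "What is this context about?"
python3 transformer_script.py image "A watercolor robot teaching a class"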
Explanation of the Script
Function Definitions:
generate_text: Generates text based on input using the GPT-2 model.
answer_question: Uses a question-answering model to answer questions based on a provided context. You should replace the context with relevant information based on the topic.
generate_image: Sends a request to an image generation API to generate an image based on the provided prompt.
Main Function: The script takes an additional command-line argument to specify the mode (text, question, or image). Based on the mode, it calls the appropriate function.
Step 3: Update Laravel Controller
You will need to adjust your Laravel controller to handle the different modes of interaction. Here's how you can modify the controller to work with the updated Python script.
File: app/Http/Controllers/TransformerController.php
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Log;

class TransformerController extends Controller
{
    public function showForm()
    {
        return view('text_input');
    }

    public function transform(Request $request)
    {
        $request->validate([
            'input_text' => 'required|string|max:255',
            'mode' => 'required|string|in:text,question,image'
        ]);

        $inputText = $request->input('input_text');
        $mode = $request->input('mode');

        // Prepare the command to execute the Python script; escape the
        // command and each argument separately so escapeshellcmd() does
        // not mangle the quoting added by escapeshellarg()
        $command = escapeshellcmd("python3 /path/to/your/transformer_script.py")
            . " " . escapeshellarg($mode)
            . " " . escapeshellarg($inputText);

        // Execute the command
        $output = shell_exec($command);

        // Check for errors
        if ($output === null) {
            Log::error('Python script execution failed.');
            return redirect()->back()->with('output', 'Error executing script.');
        }

        return redirect()->back()->with('output', trim($output));
    }
}
Step 4: Update Your Blade View
You need to add a way for the user to select which mode they want to use (text generation, question answering, or image generation).
File: resources/views/text_input.blade.php
<!DOCTYPE html>
<html>
<head>
    <title>Input Text</title>
</head>
<body>
    <h1>Input Text</h1>
    <form action="{{ route('transform') }}" method="POST">
        @csrf
        <input type="text" name="input_text" required>
        <select name="mode" required>
            <option value="text">Text Generation</option>
            <option value="question">Answer Question</option>
            <option value="image">Image Generation</option>
        </select>
        <button type="submit">Submit</button>
    </form>

    @if(session('output'))
        <h2>Output:</h2>
        <p>{{ session('output') }}</p>
    @endif
</body>
</html>
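The form posts to route('transform'), so the corresponding routes must be registered in routes/web.php. A minimal sketch matching the controller above (the /text-input path mirrors the URL used in the next step; the route name text.input is an assumption):

use App\Http\Controllers\TransformerController;

Route::get('/text-input', [TransformerController::class, 'showForm'])->name('text.input');
Route::post('/transform', [TransformerController::class, 'transform'])->name('transform');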
Step 5: Test the Application
Start your Laravel server if it’s not already running:
php artisan serve
Open your web browser and go to http://localhost:8000/text-input.
Another Way
To create a Laravel application that uses a Python script for question answering with the Hugging Face Transformers library, follow the steps below. This guide walks through everything from the Python script to the Laravel side.
Step 1: Set Up Your Python Environment
Ensure you have Python installed along with the necessary libraries. You will need the transformers and torch libraries for question answering.
You can install these libraries using pip:
pip install transformers torch
Step 2: Create the Python Script
Create a Python script named question_answering.py that will handle question answering.
File: question_answering.py
import sys

from transformers import pipeline

def answer_question(question, context):
    """Answer the question based on the given context using Hugging Face Transformers."""
    qa_pipeline = pipeline('question-answering', model='distilbert-base-uncased-distilled-squad')
    result = qa_pipeline(question=question, context=context)
    return result['answer']

def main():
    if len(sys.argv) != 3:
        print("Usage: python question_answering.py '<question>' '<context>'")
        sys.exit(1)

    question = sys.argv[1]
    context = sys.argv[2]
    answer = answer_question(question, context)
    print(answer)

if __name__ == "__main__":
    main()
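You can check the script in isolation before calling it from Laravel, for example:
python3 question_answering.py "Who created Laravel?" "Laravel is a PHP web framework created by Taylor Otwell."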
Explanation of the Python Script
Imports: The script imports the necessary library from Hugging Face for question answering.
Function Definitions:
answer_question: This function takes a question and context, then uses the Hugging Face model to generate an answer.
Main Function: Handles command-line arguments for the question and context, calls the answer_question function, and prints the answer.
Step 3: Update the Laravel Controller
Now, create a controller in Laravel that will call this Python script and return the results.
Run the following command to create the controller:
php artisan make:controller QuestionAnsweringController
File: app/Http/Controllers/QuestionAnsweringController.php
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Log;

class QuestionAnsweringController extends Controller
{
    public function showForm()
    {
        return view('question_input');
    }

    public function answerQuestion(Request $request)
    {
        $request->validate([
            'question' => 'required|string|max:255',
            'context' => 'required|string|max:1000',
        ]);

        $question = $request->input('question');
        $context = $request->input('context');

        // Prepare the command to execute the Python script; escape the
        // command and each argument separately so escapeshellcmd() does
        // not mangle the quoting added by escapeshellarg()
        $command = escapeshellcmd("python3 /path/to/your/question_answering.py")
            . " " . escapeshellarg($question)
            . " " . escapeshellarg($context);

        // Execute the command
        $output = shell_exec($command);

        // Check for errors
        if ($output === null) {
            Log::error('Python script execution failed.');
            return redirect()->back()->with('output', 'Error executing script.');
        }

        return redirect()->back()->with('output', trim($output));
    }
}
Explanation of the Controller
showForm: Displays the form for user input.
answerQuestion: Validates the input for the question and context, prepares a command to execute the Python script, retrieves the output, and displays the answer.
Step 4: Create the Blade View
You need to create a Blade view where users can input their question and context.
File: resources/views/question_input.blade.php
<!DOCTYPE html>
<html>
<head>
    <title>Question Answering</title>
</head>
<body>
    <h1>Question Answering</h1>
    <form action="{{ route('answer.question') }}" method="POST">
        @csrf
        <label for="context">Context:</label><br>
        <textarea name="context" required rows="4" cols="50" placeholder="Enter context here..."></textarea><br><br>
        <label for="question">Question:</label><br>
        <input type="text" name="question" required placeholder="Enter your question here..."><br><br>
        <button type="submit">Submit</button>
    </form>

    @if(session('output'))
        <h2>Answer:</h2>
        <p>{{ session('output') }}</p>
    @endif
</body>
</html>
Explanation of the Blade View
A form is created with a textarea for context and an input field for the question.
The output answer is displayed below the form after submission.
Step 5: Define Routes
Add the necessary routes in your routes/web.php file.
File: routes/web.php
use App\Http\Controllers\QuestionAnsweringController;
Route::get('/question-input', [QuestionAnsweringController::class, 'showForm'])->name('question.input');
Route::post('/answer-question', [QuestionAnsweringController::class, 'answerQuestion'])->name('answer.question');
Step 6: Run Your Laravel Application
Start your Laravel server:
php artisan serve
Then open http://localhost:8000/question-input in your browser to try it out.