Building an AI-Powered Product Recommendation System with Laravel, Vue, and Hugging Face
A production-ready recommendation engine using semantic embeddings, vector similarity search, and microservice architecture. This system demonstrates how to integrate real AI inference into modern web applications without compromising maintainability or performance.
Overview
Modern marketplaces rely heavily on personalized discovery. Traditional keyword search and category filters are no longer enough—users expect systems that understand intent, context, and similarity.
In this project, I built an AI-powered product recommendation system for a digital asset marketplace called Signal, using a microservice architecture that separates AI inference from application logic.
The system combines:
- Laravel as the API orchestrator
- Vue 3 for the frontend experience
- Python (Flask) for AI inference
- Hugging Face sentence-transformers for semantic embeddings
- FAISS for high-performance vector similarity search
System Architecture
The recommendation system is intentionally split into three layers:

- A Vue 3 frontend (presentation)
- A Laravel API (orchestration, validation, and business rules)
- A Python/Flask AI service (inference and vector search)

This design keeps the AI layer isolated, scalable, and replaceable without impacting the core application.
Why a Microservice for AI?
Running AI inference inside a Laravel application introduces unnecessary complexity:
- Python ML libraries are heavy
- GPU acceleration is impractical inside PHP
- Model upgrades become risky
Instead, AI inference runs in a dedicated Python service, while Laravel handles:
- Authentication
- Data validation
- Business rules
- API normalization
This mirrors how production systems at scale are built.
Data Flow: From Product Page to AI Recommendations
1. User Visits a Product Page
A user navigates to a product page in the Signal marketplace. Vue mounts the page and triggers an API request for recommendations.
```
GET /api/products/2/recommendations
```
2. Laravel API Receives the Request
Laravel exposes a clean API endpoint:
```php
Route::get('/products/{product}/recommendations', [
    RecommendationController::class,
    'getRecommendations',
]);
```
The controller retrieves the product and forwards its metadata to the AI service.
```php
$response = Http::post('http://127.0.0.1:5050/recommend', [
    'product_id'  => $product->id,
    'title'       => $product->name,
    'description' => $product->description,
    'tags'        => $product->tags,
]);
```
3. Flask AI Microservice Processes the Request
The Flask service exposes a /recommend endpoint and uses Hugging Face embeddings to understand product similarity.
```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/recommend", methods=["POST"])
def recommend():
    data = request.json
    product_text = model.prepare_product_text(
        data["title"],
        data["description"],
        data["tags"],
    )
    results = index.search(product_text, top_k=6)
    return jsonify(results)
```
Semantic Embeddings with Hugging Face
Instead of keyword matching, the system uses semantic embeddings to capture meaning.
The model used is sentence-transformers/all-MiniLM-L6-v2:

```python
from sentence_transformers import SentenceTransformer

self.model = SentenceTransformer(
    "sentence-transformers/all-MiniLM-L6-v2"
)
```
Each product is embedded using a combination of:
- Title
- Description
- Tags
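The `prepare_product_text` helper called in the Flask endpoint isn't shown above, but its job is to merge these fields into one string before embedding. A minimal sketch of what it might look like follows; the exact concatenation format and field ordering are assumptions, not the real service's implementation.

```python
def prepare_product_text(title, description, tags):
    """Combine product metadata into a single string for embedding.

    Assumption: the title leads, the description follows, and tags are
    joined with commas. The real service may weight fields differently.
    """
    tag_text = ", ".join(tags) if tags else ""
    return f"{title}. {description} Tags: {tag_text}"

# Example with hypothetical product metadata:
text = prepare_product_text(
    "Analytics admin panel template",
    "A dashboard template with charts and reports.",
    ["dashboard", "analytics"],
)
```

Embedding one combined string (rather than each field separately) keeps the index simple: one vector per product, with all metadata contributing to its position in embedding space.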
This allows the system to understand that a product like

"Dashboard UI kit with charts and widgets"

is similar to

"Analytics admin panel template"

even when the two share few keywords.
Fast Similarity Search with FAISS
Embeddings are indexed using FAISS, enabling fast cosine similarity search in memory.
```python
# Inner-product (IP) index; with L2-normalized embeddings,
# inner product is equivalent to cosine similarity.
self.index = faiss.IndexFlatIP(embedding_dim)
self.index.add(embeddings)
```
At runtime, the system performs nearest-neighbor search to find related products, ranked by similarity score.
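To make the FAISS step concrete, here is a dependency-free sketch of what an inner-product nearest-neighbor search does conceptually: normalize the vectors, score each stored vector by its dot product with the query, and return the top matches. The function names and toy 3-dimensional "embeddings" are illustrative only; FAISS performs the same computation over the full index far more efficiently.

```python
import math

def normalize(v):
    # Scale a vector to unit length so dot product == cosine similarity.
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def top_k(query, vectors, k=2):
    # Rank stored vectors by inner product with the query (both
    # unit-length), mirroring faiss.IndexFlatIP over a flat index.
    q = normalize(query)
    scored = [
        (i, sum(a * b for a, b in zip(q, normalize(v))))
        for i, v in enumerate(vectors)
    ]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

# Toy catalog of three 3-dimensional "embeddings":
catalog = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 1.0, 0.0]]
results = top_k([1.0, 0.05, 0.0], catalog, k=2)
```

The first two catalog vectors point in nearly the same direction as the query, so they rank above the third regardless of raw magnitude; that direction-over-magnitude behavior is exactly why cosine similarity suits embedding search.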
Returning Recommendations to the Frontend
The AI service returns ranked product IDs and scores:
```json
[
  { "id": 9, "score": 0.87 },
  { "id": 14, "score": 0.82 }
]
```
Laravel then:
- Hydrates full product records
- Applies visibility rules
- Returns frontend-safe JSON
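The hydration step above has one subtlety worth showing: the database lookup must preserve the AI-ranked order while dropping products that fail visibility rules. Here is a small sketch of that logic, written in Python for brevity; the field names and the `visible` flag are hypothetical stand-ins for the real Laravel models and rules.

```python
def hydrate_recommendations(ranked, products_by_id):
    """Turn ranked (id, score) results into frontend-safe records.

    `products_by_id` stands in for the database lookup; the schema
    shown here is illustrative, not the real application's.
    """
    hydrated = []
    for item in ranked:
        product = products_by_id.get(item["id"])
        if product is None or not product["visible"]:
            continue  # drop hidden or deleted products
        hydrated.append({
            "id": product["id"],
            "name": product["name"],
            "score": item["score"],
        })
    return hydrated

# The AI service's ranked IDs, joined against a mock product store:
ranked = [{"id": 9, "score": 0.87}, {"id": 14, "score": 0.82}]
db = {
    9:  {"id": 9,  "name": "Admin Panel", "visible": True},
    14: {"id": 14, "name": "Hidden Kit",  "visible": False},
}
recs = hydrate_recommendations(ranked, db)
```

Iterating over the ranked list (rather than over query results) is what keeps the AI's ordering intact after the database round trip.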
Vue renders the results in an "AI Suggested" section.
Vue Recommendation Component
The UI explains why recommendations were made, improving trust and transparency.
```html
<h2>You Might Also Like</h2>
<p>
  Based on your interest in {{ currentProduct.name }}
</p>
```
The result feels intelligent, not random.
Why This Matters
This system demonstrates:
- Real AI inference, not prompt engineering
- Vector similarity search at production speed
- Clean separation of concerns
- Scalable architecture
It mirrors how recommendations are built in real marketplaces—not demos or experiments.
Key Technologies Used
- Laravel – API orchestration and business logic
- Vue 3 – Frontend UI and user interaction
- Python 3 + Flask – AI microservice
- Hugging Face – Semantic embeddings
- FAISS – Vector similarity search
- Microservice Architecture – Scalable AI integration
Final Thoughts
AI features don't need to be monolithic or invasive. By isolating intelligence into a dedicated service, it's possible to add powerful capabilities to existing platforms without sacrificing maintainability or performance.
This project shows how modern web applications can integrate AI responsibly, scalably, and effectively.