How to build an AI Product (5) - FastAPI Backend Service: Return Similar Sentences
FastAPI is a modern web framework for building APIs with Python 3.6+ based on standard Python type hints. It's known for its speed and easy-to-use syntax.
Below, we showcase how to create a simple FastAPI backend service that accepts a sentence and returns similar sentences.
Setting up the environment: Start by installing FastAPI and an ASGI server, such as uvicorn.
pip install fastapi[all] uvicorn
Create a new FastAPI instance: Let's create a new Python file, main.py:
from typing import Dict

import numpy as np
from annoy import AnnoyIndex
from fastapi import FastAPI
from sentence_transformers import SentenceTransformer

# Initialize the embeddings model.
model = SentenceTransformer('paraphrase-MiniLM-L6-v2')

app = FastAPI()

# The number of dimensions must match the embeddings model used when creating the index;
# paraphrase-MiniLM-L6-v2 produces 384-dimensional vectors.
t = AnnoyIndex(384, 'angular')

# Load the pre-built Annoy index.
t.load('text_embeddings_index.ann')

# Create a function to convert text to an embedding. This will be used to encode the input text.
def text_to_vector(text: str) -> np.ndarray:
    print(text)
    return model.encode(text)

@app.get("/similar/")
def get_similar_sentences(sentence: str, num_results: int = 5) -> Dict[str, str]:
    # Convert the input sentence to a vector
    vector = text_to_vector(sentence)
    # Query the Annoy index for the most similar vectors
    indices = t.get_nns_by_vector(vector, num_results)
    # For simplicity, we're returning the indices of the most similar sentences.
    # In practice, you might want to map these indices back to the actual sentences or data.
    return {"similar_indices": '|'.join(str(k) for k in indices)}
Please note that in the above code snippet we load the Annoy index created in the previous tutorials.
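Purely as a reminder, here is a minimal sketch of how such an index could be built with the same embeddings model. The sentences list below is a hypothetical placeholder, not the corpus from the earlier tutorial; substitute your own data.
# build_index.py -- illustrative sketch only; the real index was built in a previous tutorial.
from annoy import AnnoyIndex
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('paraphrase-MiniLM-L6-v2')

# Placeholder corpus; substitute the sentences you actually want to index.
sentences = [
    "The cat sat on the mat.",
    "Dogs are loyal companions.",
    "FastAPI makes building APIs straightforward.",
]

t = AnnoyIndex(384, 'angular')  # 384 dimensions to match paraphrase-MiniLM-L6-v2
for i, text in enumerate(sentences):
    t.add_item(i, model.encode(text))

t.build(10)  # number of trees; more trees trade build time for accuracy
t.save('text_embeddings_index.ann')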
Let's now go over the main sections of the code and look at what each chunk does.
Embedding Function: To use vector search, you need to convert the input text into an embedding vector. Create a function text_to_vector that converts a sentence into an embedding vector. In practice, this could be a wrapper around a model such as Word2Vec, FastText, or BERT. In our case we use the sentence-transformers model, with the same embeddings model that was used to index the documents.
# Create a function to convert text to an embedding. This will be used to encode the input text.
def text_to_vector(text: str) -> np.ndarray:
    print(text)
    return model.encode(text)
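As a quick sanity check (hypothetical snippet, run from a Python shell with main.py imported), you can confirm that the embedding dimensionality matches the 384 dimensions used to create the Annoy index:
vec = text_to_vector("hello world")
print(vec.shape)  # (384,) for paraphrase-MiniLM-L6-v2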
Endpoint for Similar Sentences: Create an endpoint /similar/ to process the text, convert it to a vector, and find the most similar vectors in the Annoy index.
@app.get("/similar/")
def get_similar_sentences(sentence: str, num_results: int = 5) -> Dict[str, str]:
    # Convert the input sentence to a vector
    vector = text_to_vector(sentence)
    # Query the Annoy index for the most similar vectors
    indices = t.get_nns_by_vector(vector, num_results)
    # For simplicity, we're returning the indices of the most similar sentences.
    # In practice, you might want to map these indices back to the actual sentences or data.
    return {"similar_indices": '|'.join(str(k) for k in indices)}
The indices then need to be used to look up the actual sentences. This is left as an exercise for the reader, though one possible approach is sketched below.
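As an illustration only: assuming the original sentences were saved to a file (here a hypothetical sentences.json) in the same order they were added to the Annoy index, you could load that list at startup and index into it. The file name and the /similar-sentences/ route are placeholders, not part of the earlier tutorials.
# Hypothetical extension of main.py: map Annoy indices back to the original sentences.
# Assumes sentences.json holds the corpus as a JSON list, in the same order the items
# were added to the index (item i <-> sentences[i]); adjust to however your data is stored.
import json
from typing import List

with open('sentences.json') as f:
    sentences: List[str] = json.load(f)

@app.get("/similar-sentences/")
def get_similar_sentence_texts(sentence: str, num_results: int = 5) -> Dict[str, List[str]]:
    vector = text_to_vector(sentence)
    indices = t.get_nns_by_vector(vector, num_results)
    return {"similar_sentences": [sentences[i] for i in indices]}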
Run the FastAPI app: Use uvicorn to run the FastAPI application.
uvicorn main:app --host 0.0.0.0 --port 3000
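Optionally, before writing a Python client, you can sanity-check the endpoint from the command line with curl; the exact response depends on your index.
curl -G "http://127.0.0.1:3000/similar/" \
     --data-urlencode "sentence=Your sample sentence" \
     --data-urlencode "num_results=5"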
Now let's query the above service. We will use the requests library in Python.
Install the requests library, if you haven't already installed it:
pip install requests
Querying the FastAPI service: Create a new Python script, query_service.py, to make the request:
import requests

def query_similar_sentences(sentence: str, num_results: int = 5):
    # Call the /similar/ endpoint with the sentence and number of results as query parameters.
    url = "http://127.0.0.1:3000/similar/"
    params = {
        "sentence": sentence,
        "num_results": num_results
    }
    response = requests.get(url, params=params)
    if response.status_code == 200:
        return response.json()
    else:
        print(f"Error {response.status_code}: {response.text}")
        return None

if __name__ == "__main__":
    sentence = "Your sample sentence"
    results = query_similar_sentences(sentence)
    print(results)
Run the script: Execute the query_service.py script:
python query_service.py
With this query_service.py script, you're using Python's requests library to make an HTTP GET request to your FastAPI service, sending the desired sentence as a query parameter and receiving the indices of similar sentences from the Annoy index in response.