# Image Search Engine

A vector-based image search engine built on a Vision Transformer (ViT) model.

Built with Python, PyTorch, AWS, and Hugging Face.


## Embedding Model

The CLIP model 🤗 openai/clip-vit-base-patch32 is used to generate an embedding vector for each image. The embeddings are stored in a vector database, such as Pinecone, to provide the search capability.
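Generating an embedding with this model might look like the following minimal sketch, using the Hugging Face `transformers` library (the function names here are illustrative, not taken from the repository):

```python
# Sketch: generate a CLIP image embedding with Hugging Face transformers.
# Assumes the `transformers`, `torch`, and `Pillow` packages are installed.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

MODEL_ID = "openai/clip-vit-base-patch32"


def load_clip(model_id: str = MODEL_ID):
    """Load the CLIP model and its preprocessing pipeline once."""
    model = CLIPModel.from_pretrained(model_id)
    processor = CLIPProcessor.from_pretrained(model_id)
    model.eval()
    return model, processor


def embed_image(path: str, model, processor) -> list:
    """Return the image embedding (512-dim for this checkpoint) as a list."""
    image = Image.open(path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        features = model.get_image_features(**inputs)  # shape: (1, 512)
    return features[0].tolist()
```

The same vector can then be upserted to the index or used as a query vector at search time.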

*Fig: Pipeline diagram*

The dataset covers four classes (Airplane, Dog, Cat, and Car), around 120 images in total (30 per class), stored in an AWS S3 bucket. After an embedding is generated for each image, the embeddings are upserted to a Pinecone index with their respective S3 links as metadata.
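Pairing each embedding with its S3 link could be sketched as below; the record shape follows Pinecone's documented upsert format, while the IDs, bucket names, and helper name are hypothetical:

```python
# Sketch: build Pinecone upsert records that pair each embedding with its
# S3 URL as metadata. All names here are illustrative assumptions.
def build_upsert_payload(items):
    """items: iterable of (image_id, embedding, s3_url) triples.

    Returns records in the shape Pinecone's Index.upsert accepts:
    {"id": ..., "values": [...], "metadata": {...}}.
    """
    return [
        {"id": image_id, "values": vector, "metadata": {"s3_url": s3_url}}
        for image_id, vector, s3_url in items
    ]


# Usage against a live index (network call left commented out; requires
# a configured Pinecone client and index):
# from pinecone import Pinecone
# index = Pinecone(api_key="...").Index("image-search")
# index.upsert(vectors=build_upsert_payload(records))
```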

At query time, the embedding of the input image is generated and the top-k most similar embeddings are fetched from the Pinecone index. The corresponding images are then retrieved from the S3 bucket using the links stored in the metadata.
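The query path can be sketched as follows. The response shape matches Pinecone's documented query result; the helper name is an assumption, and the live calls are left commented out:

```python
# Sketch: pull the S3 links out of a Pinecone query response so the
# matched images can be fetched from the bucket. Helper name is illustrative.
def s3_urls_from_matches(response: dict) -> list:
    """Extract the s3_url metadata field from each match, best match first."""
    return [m["metadata"]["s3_url"] for m in response.get("matches", [])]


# Usage against a live index (requires pinecone + boto3 credentials):
# response = index.query(vector=query_embedding, top_k=5, include_metadata=True)
# for url in s3_urls_from_matches(response):
#     ...  # download the object via boto3 and display it in the app
```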

*Fig: Search engine demo*

## Getting started

1. Clone the repository.
2. Create a virtual environment and install the dependencies:

   ```bash
   python -m venv venv

   source venv/bin/activate   # Linux/macOS
   venv\Scripts\activate      # Windows

   pip install -r requirements.txt
   ```

3. Edit the `config.yaml` file and add the required fields, such as the S3 bucket name and the Pinecone index name.
4. Run the application:

   ```bash
   streamlit run app.py
   ```
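The exact keys in `config.yaml` depend on what `app.py` reads; an illustrative layout (all field names hypothetical) might be:

```yaml
# Illustrative config.yaml layout; field names are assumptions and must
# match whatever the application actually reads.
aws:
  s3_bucket: my-image-bucket
  region: us-east-1
pinecone:
  api_key: YOUR_PINECONE_API_KEY
  index_name: image-search
model:
  name: openai/clip-vit-base-patch32
```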