Article · Aug 12 · 8m read

An Overview of Vector Search functionality

Artificial intelligence (AI) has transformative potential for deriving value and insights from data. As we progress toward a world where nearly every application is AI-driven, the developers building those applications need the right tools to create compelling experiences. Tools like vector search are essential for efficient and accurate retrieval of relevant information from massive datasets when working with large language models. By converting text and images into high-dimensional vectors, these techniques allow quick comparisons and searches, even across millions of files from disparate datasets throughout an organization.

In this article, we will cover the following topics:

  1. What is Vector Search?

  2. Vectors and Embeddings

  3. Storing Vector Data

  4. Viewing Vector Data

  5. Performing Vector Search

 

1. What is Vector Search? 

Vector search is a method of information retrieval in which documents and queries are represented as vectors instead of plain text. Machine learning models generate these vector representations from source inputs that can be text, images, or other content. Having a mathematical representation of content provides a common basis for search scenarios: a query can find a match in vector space even if the original content is in a different medium or language. This method finds the documents most relevant to a given query by converting both the documents and the query into vectors and then computing the cosine similarity between them. The higher the cosine similarity, the more relevant the document.

At the core of vector search lies the concept of vector representation. In this context, a vector is an array of numbers that encapsulates the semantic meaning of a piece of content. Machine learning models such as Word2Vec, GloVe, or BERT are commonly used to generate these vectors. These models are trained on large datasets to learn the relationships and patterns between words, sentences, or entire documents.

The primary task in vector search is measuring the similarity between vectors. Various mathematical techniques can be used for this purpose, but cosine similarity and the dot product are the most common. Cosine similarity measures the cosine of the angle between two vectors, producing a value between -1 and 1, where 1 indicates identical directions (high similarity) and -1 indicates opposite directions (low similarity). The dot product, on the other hand, reflects both the alignment of two vectors and their magnitudes; for normalized vectors, it equals the cosine similarity.
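As a minimal illustration (a sketch using NumPy with toy values, not real embeddings), both measures can be computed in a few lines:

```python
import numpy as np

# Toy 4-dimensional vectors; real embeddings have hundreds to thousands of dimensions.
a = np.array([0.2, 0.5, 0.1, 0.8])
b = np.array([0.3, 0.4, 0.0, 0.9])

dot = np.dot(a, b)                                      # sensitive to magnitude
cosine = dot / (np.linalg.norm(a) * np.linalg.norm(b))  # angle only, in [-1, 1]

print(f"dot product: {dot:.4f}, cosine similarity: {cosine:.4f}")
```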

 

2. Vectors and Embeddings

Vectors can capture the semantic meaning of language in the form of embeddings. These embeddings are determined by an embedding model, a machine learning model that maps words to a high-dimensional geometric space. Modern embedding vectors typically range from hundreds to thousands of dimensions. Words with similar semantic meanings occupy nearby positions in this space, while words with different meanings are placed far apart. These spatial positions allow applications to algorithmically determine the similarity between two words or even sentences by performing operations on their embedding vectors.

Embeddings are a specific type of vector representation created by machine learning models that capture the semantic meaning of text or other types of content, e.g., images. Natural language machine learning models are trained on large datasets to identify patterns and relationships between words. During training, they learn to represent any input as a vector of real numbers in an intermediary step called the encoder. After training, these language models can be modified so that the intermediary vector representation becomes the model's output.

In vector search, a user can compare an input vector with vectors stored in a database using operations that determine similarity, e.g., the dot product. When the vectors represent embeddings, vector search enables the algorithmic determination of the most semantically similar pieces of text compared to an input. As a result, vector search is well-suited for tasks involving information retrieval.
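To make this concrete, here is a brute-force sketch of vector search in Python. It assumes the sentence-transformers package and its all-MiniLM-L6-v2 model (both are assumptions; any embedding model would do): embed a few documents, embed the query, and rank the documents by similarity.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed dependency

documents = [
    "IRIS supports a dedicated VECTOR datatype.",
    "Cosine similarity compares the angle between two vectors.",
    "The weather in Boston is mild today.",
]

# all-MiniLM-L6-v2 is a placeholder choice of embedding model.
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = model.encode(documents, normalize_embeddings=True)
query_vector = model.encode(
    ["How do I measure vector similarity?"], normalize_embeddings=True
)[0]

# With normalized vectors, the dot product equals cosine similarity.
scores = doc_vectors @ query_vector
for idx in np.argsort(scores)[::-1]:
    print(f"{scores[idx]:.3f}  {documents[idx]}")
```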

Vectors can also be added, subtracted, or multiplied to uncover meanings and relationships. One of the most popular examples is king − man + woman ≈ queen. Models can use this kind of vector arithmetic to capture analogies, such as the gender relationship in this example.
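As a sketch (assuming the gensim package and a small pretrained GloVe model fetched via gensim's downloader; both are assumptions), this analogy can be checked directly:

```python
import gensim.downloader as api  # assumed dependency

# Small pretrained GloVe word vectors (a placeholder choice of model).
kv = api.load("glove-wiki-gigaword-50")

# king - man + woman ≈ queen
print(kv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```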


3. Storing Vector Data

In vector search, an embedding model transforms unstructured data, e.g., text, into structured data called a vector. Users can then perform operations on those vectors that would be impossible on the unstructured data. The InterSystems IRIS® data platform supports a dedicated VECTOR type for performing such operations. There are three numeric vector types: decimal (the most precise), double, and integer (the least precise). Since VECTOR is a standard SQL datatype, vectors can be stored alongside other data in a relational table, transparently turning a SQL database into a hybrid vector database. Vector data can be added to a table with INSERT statements or through ObjectScript with a property of type %Library.Vector. IRIS Vector Search comprises the new VECTOR SQL datatype together with the VECTOR_DOT_PRODUCT() and VECTOR_COSINE() similarity functions for finding similar vectors. Users can access this functionality directly via SQL or through community-developed integrations with LangChain and LlamaIndex, popular Python frameworks for building generative AI applications. In this article, we will use the direct SQL functionality, as sketched below.
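As a quick illustration of those functions, the following sketch issues a standalone similarity query from Python. It assumes the intersystems-irispython DB-API driver and placeholder connection details; TO_VECTOR() converts a delimited string into a VECTOR value, so no table is needed yet.

```python
import iris  # assumed: the intersystems-irispython DB-API driver

# Placeholder connection details -- adjust for your own instance.
conn = iris.connect("localhost", 1972, "USER", "_SYSTEM", "SYS")
cursor = conn.cursor()

# Compare two literal vectors with both similarity functions.
cursor.execute(
    "SELECT VECTOR_COSINE(TO_VECTOR('1,2,3', double), TO_VECTOR('4,5,6', double)), "
    "VECTOR_DOT_PRODUCT(TO_VECTOR('1,2,3', double), TO_VECTOR('4,5,6', double))"
)
print(cursor.fetchone())  # cosine similarity and dot product of the two vectors
```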

To store vector data, we must create a table (or persistent class) containing a column (or property) of the vector datatype.

Below is an ObjectScript function that uses embedded Python to create the desired table:
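The original listing is not included in this excerpt; the following is a minimal sketch of such a function, assuming a Demo.Embedding table with a 384-dimensional DOUBLE vector column (the class, table, and column names, as well as the dimension, are placeholders):

```objectscript
Class Demo.VectorSetup Extends %RegisteredObject
{

/// Create a table with a VECTOR column using embedded Python.
ClassMethod CreateTable() [ Language = python ]
{
    import iris

    # Table and column names and the vector dimension are placeholders.
    sql = """
        CREATE TABLE Demo.Embedding (
            Description VARCHAR(2000),
            Embedding   VECTOR(DOUBLE, 384)
        )
    """
    iris.sql.exec(sql)
    print("Demo.Embedding created")
}

}
```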
