
Article
· Jul 22 5m read

Vector Search Performance

Test Objectives

InterSystems has been testing Vector Search since it was announced as an “experimental feature” in IRIS 2024.1. The first test cycle was aimed at identifying algorithmic inefficiencies and performance constraints during the Development/QD cycle. The next test cycle used simple vector searches for single-threaded performance analysis, modelling reliable, scalable and performant behaviour at production database scale in IRIS, and ran a series of comparative tests of key IRIS vector search features against PostgreSQL/pgvector. The current test cycle models the expected behaviour of real-world customer deployments using complex queries that span multiple indices (vector and non-vector) and run in up to 1,000 concurrent threads. These tests will be run against IRIS, PostgreSQL/pgvector, and Elasticsearch.

Test Platform

Testing has been carried out on a variety of Linux deployment platforms. At the high end we have utilised the facilities of the InterSystems Scalability Lab, where our primary test platform has been an HPE bare-metal server with 48 cores and 528 GB RAM running Ubuntu 24.04 and current versions of IRIS and PostgreSQL/pgvector.

Test Data

Access to appropriate volumes of high-quality embedding data has been a significant issue during all iterations of testing. Testing during the Development and QD phases was typically run against thousands of rows of synthetic (randomly generated) embedding data; production-scale testing requires a minimum of one million rows. Synthetic embedding data provides an unsatisfactory basis for production-scale testing because it does not support plausibility checking (there’s no source data to validate semantic proximity against) and does not reflect the clustering of data in vector space and its effects on indexed search performance and accuracy (specifically recall). We have used the following datasets during testing:

  • 40,000 embeddings generated from historical bid submissions by the internal RF-Eye project
  • a reference dataset downloaded from Hugging Face of embeddings generated from Simple Wikipedia articles at paragraph level using the Cohere multilingual-22-12 model; initially 1 million rows, then an additional 9 million rows (out of the 35 million available)
  • 25 million rows of embeddings generated by a customer against public-domain medical data using a custom model

To support our tests, data is loaded into a staging table in the database and then incrementally moved to vector data structures, supporting testing at increasing database sizes. Where required, indexing is deferred and run as a separate post-move step. A small percentage (generally 0.1%) of the staged data is written to a query data store and subsequently used to drive the test harness; depending on whether we want to model exact vector matching, this data may also be written to the vector data store.
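The staging flow above can be sketched in a few lines of Python. This is an illustrative model, not the actual harness code: the function and variable names are invented, and the rows are stand-ins for embedding records.

```python
import random

def move_staged_rows(staged_rows, sample_rate=0.001, model_exact_match=False):
    """Illustrative sketch of the staging step: every row moves to the
    vector store except the ~0.1% sampled into the query store, which
    also reach the vector store only when exact matching is modelled."""
    vector_store, query_store = [], []
    for row in staged_rows:
        if random.random() < sample_rate:
            query_store.append(row)          # drives the test harness
            if model_exact_match:
                vector_store.append(row)     # query vectors also exist in store
        else:
            vector_store.append(row)
    return vector_store, query_store

random.seed(7)  # deterministic for the sketch
vectors, queries = move_staged_rows(range(100_000))
```

With the default 0.1% sample rate, a 100,000-row staging table yields a query store of roughly 100 rows.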

We intend to scale our tests up to 500 million rows of embeddings when we can identify and access an appropriate dataset.

Test Methodology

Our standard vector search unit test performs 1,000 timed queries against the vector database using randomly selected (not generated) query parameters. For indexed searches we also run an equivalent unindexed query with the same parameters to establish the ground-truth result set, then compute recall at 1, 5 and 10 records. Each test run performs one hundred unit tests, and we perform multiple test runs against each configuration/database size.
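The recall computation described above can be sketched in a few lines of Python; the result-id lists here are invented purely for illustration.

```python
def recall_at_k(indexed_ids, ground_truth_ids, k):
    """Share of the top-k ground-truth rows (from the unindexed query)
    that the indexed query also returned in its top k."""
    return len(set(indexed_ids[:k]) & set(ground_truth_ids[:k])) / k

# hypothetical result-id lists from one indexed/unindexed query pair
indexed      = [3, 7, 2, 9, 11, 4, 5, 8, 1, 6]
ground_truth = [3, 2, 7, 9, 4, 11, 5, 1, 8, 10]
recall = {k: recall_at_k(indexed, ground_truth, k) for k in (1, 5, 10)}
```

Note that recall compares membership of the top-k sets, not ordering, which is why the swapped pairs above still score well.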

Unindexed Vector Search

In IRIS, unindexed COSINE and DOT_PRODUCT search performance demonstrates predictable linear scaling until we reach the compute constraints of the server (typically by exhausting global buffers). Running our test harness against the same dataset in PostgreSQL/pgvector demonstrates that unindexed search is faster in IRIS than in current versions of PostgreSQL/pgvector.
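Conceptually, an unindexed COSINE search is a full scan that scores every row, which is why cost grows linearly with row count. A minimal stdlib-only Python sketch with toy 2-dimensional embeddings:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def unindexed_search(store, query, top_k=3):
    """Full scan: every stored vector is scored, so cost is O(rows)."""
    ranked = sorted(store, key=lambda rid: cosine_similarity(store[rid], query),
                    reverse=True)
    return ranked[:top_k]

store = {1: (1.0, 0.0), 2: (0.0, 1.0), 3: (0.9, 0.1)}  # toy 2-d embeddings
top = unindexed_search(store, (1.0, 0.0))
```

Real embeddings have hundreds to thousands of components per row, but the scan structure is the same.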

 

Vector storage footprint and unindexed search performance are affected by the dimensionality of the embedding (the number of discrete components that define it in vector space), which is a function of the embedding model. IRIS DOT_PRODUCT and COSINE search cost per n dimensions shows better-than-linear scaling (the relative cost of searching n dimensions in an embedding decreases as the dimension count of the embedding increases), because the fixed overhead of accessing and processing the record is amortized over a greater number of component calculations.
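The amortization argument can be made concrete with a toy cost model (the overhead constants below are invented for illustration): per-row search cost is a fixed record-access overhead plus a per-component term, so the cost *per dimension* falls as dimensionality rises.

```python
def cost_per_dimension(dims, record_overhead=10.0, per_component=1.0):
    """Toy model: (fixed record overhead + per-component work) spread
    over the number of dimensions in the embedding."""
    return (record_overhead + per_component * dims) / dims

# common embedding sizes: cost per dimension falls as dims grow
costs = [cost_per_dimension(d) for d in (128, 384, 768, 1536)]
```

Total cost still rises with dimensionality, of course; only the relative (per-dimension) cost improves.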

   

Indexed Vector Search

We have tested Approximate Nearest Neighbor (ANN) search (implemented in IRIS using the Hierarchical Navigable Small World (HNSW) algorithm) with datasets up to 25 million rows of real-world embedding data and 100 million rows of synthetic data.
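HNSW builds a layered proximity graph and answers queries with a greedy best-first walk at each layer. The single-layer search routine at its core can be sketched as follows; this is a simplified teaching illustration, not the IRIS implementation.

```python
import heapq
import math

def layer_search(graph, vectors, query, entry, ef=2):
    """Greedy best-first search over one proximity-graph layer: keep the
    ef closest nodes seen so far, and stop once the frontier is farther
    away than the worst of them (the HNSW stopping rule)."""
    dist = lambda n: math.dist(vectors[n], query)
    visited = {entry}
    frontier = [(dist(entry), entry)]          # min-heap ordered by distance
    best = [(-dist(entry), entry)]             # max-heap of the current top-ef
    while frontier:
        d, node = heapq.heappop(frontier)
        if d > -best[0][0] and len(best) >= ef:
            break                              # frontier cannot improve top-ef
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                nd = dist(nb)
                if len(best) < ef or nd < -best[0][0]:
                    heapq.heappush(frontier, (nd, nb))
                    heapq.heappush(best, (-nd, nb))
                    if len(best) > ef:
                        heapq.heappop(best)    # evict current worst
    return [n for _, n in sorted((-d, n) for d, n in best)]

# toy chain graph: the greedy walk from node 0 reaches the area near (3, 0)
vectors = {0: (0, 0), 1: (1, 0), 2: (2, 0), 3: (3, 0)}
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
nearest = layer_search(graph, vectors, (3.0, 0.0), entry=0)
```

The `ef` parameter trades recall for speed: a larger candidate set explores more of the graph and is less likely to miss a true neighbour, which is one of the tuning levers behind the recall figures reported below.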

IRIS ANN index build times are predictable and consistent as the number of records being indexed increases; in our test environment (a 48-core physical server), each increment of 1 million records increases the index build time by around 3,000 seconds.
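Under that linear model, expected build time is simple arithmetic. A sketch using the ~3,000 s/million figure observed on our test server; actual times will vary with hardware and configuration.

```python
def estimated_build_seconds(rows, seconds_per_million=3_000):
    """Linear build-time model fitted to observations on the 48-core server."""
    return rows / 1_000_000 * seconds_per_million

# e.g. the 25-million-row dataset: 75,000 seconds, roughly 21 hours
t = estimated_build_seconds(25_000_000)
```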

IRIS ANN search performance is significantly better than unindexed search performance, with sub-second execution of our test harness at database sizes up to 25 million real-world records.

ANN search recall was consistently above 90% in our IRIS tests with real-world data.

Comparative testing against the same dataset in PostgreSQL/pgvector demonstrates that IRIS ANN search runs more slowly than an equivalent search in PostgreSQL/pgvector but has a more compact data footprint and quicker index builds.

What’s Next?

Our current benchmarking effort is focused on complex multi-threaded queries which span multiple indices (ANN and non-ANN). The ANN index is fully integrated into the InterSystems query optimizer with an assigned pseudo-selectivity of 0.5%. The query optimizer generates a query plan by comparing the selectivity of all indices available on the table being queried; query execution starts with the most selective index and evaluates the ANN index at the appropriate step.
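A toy model of that planning step (the index names and selectivity figures are invented; `None` marks the ANN index, which receives the fixed 0.5% pseudo-selectivity):

```python
ANN_PSEUDO_SELECTIVITY = 0.005   # 0.5%, the fixed value assigned to ANN indices

def plan_index_order(index_selectivities):
    """Toy planner: evaluate indices most-selective (lowest fraction) first."""
    resolved = {name: ANN_PSEUDO_SELECTIVITY if sel is None else sel
                for name, sel in index_selectivities.items()}
    return sorted(resolved, key=resolved.get)

# a highly selective id index runs first, then the ANN index, then region
plan = plan_index_order({"patient_id": 0.0001,
                         "ann_embedding": None,
                         "region": 0.02})
```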

Key Application Considerations (so far)

  • if you need guaranteed exact results, unindexed search is the only option – try to limit the size of the dataset to fit into global buffers for best performance
  • restrict dimensionality to the minimum value that will provide the search granularity required by your application (model choice)
  • if you’re using ANN indexing, understand the recall value your application needs and tune the index accordingly (remember this is data dependent)
  • the index-time resource requirement is likely to differ from the query-time requirement; you may need different environments
  • index build is resource intensive – maximize the number of worker jobs with Work Queue Manager and be aware of the constraints in your environment (IOPS in public cloud); ingest then index rather than indexing on insert for the initial data load
  • keep the index global in buffers at query time, and consider pre-fetching for optimal performance from first access
  • good IRIS system management remains key.
Question
· Jul 22

Check out ACE Tractor Models and Prices in India

 

ACE Tractor was established in 1995. It offers a complete agricultural solution worldwide. Ace tractors are used for various purposes, such as ploughing, tilling, planting, and hauling. Ace tractors require low maintenance and have high fuel efficiency. The most affordable model is the ACE Veer 20, and the most expensive is the ACE DI 9000 4WD. The ACE Tractor price in India ranges from Rs. 3,30,000* to Rs. 15,75,000*, varying with the tractor's features.

 

Key features of the ACE Tractor

  • The ACE Tractor is equipped with a powerful engine ranging from 15 to 88.4 HP.
  • These tractors are available in both 2WD and 4WD options.
  • Across India, there are over 100 ACE Tractor dealerships.

Popular Tractor Models of ACE Tractor

  • ACE DI 450 NG: The ACE DI 450 NG is powered by a 3-cylinder engine with an output of 45 HP at 2,000 RPM. This engine produces a maximum torque of 185 Nm.
  • ACE DI 6565: It features a 4-cylinder, 61.2 HP engine that operates at 2,200 RPM. This engine produces a maximum torque of 255 Nm and features a dry-type air filter.
  • ACE DI 550 NG: This tractor is equipped with a powerful 3-cylinder, 3065 cc engine that produces 50 HP at 2,100 RPM, and it features liquid cooling for smooth operation.

Get all the details about the ACE Tractor and explore other models at TractorKarvan.

Announcement
· Jul 22

Webinar | From FHIR to OMOP: Flexible Conversion Effectively Drives the Application of Data Assets

📣📣📣 On July 25, 2025 at 15:00, we have invited InterSystems Sales Engineer Kate Lau to give a talk on the "InterSystems FHIR to OMOP data pipeline" — everyone is welcome to attend!

🎉🎉🎉 Click here to register 🎉🎉🎉

 

Turning healthcare data into assets is vital to modern hospital management because it is reshaping how value is created in the healthcare industry, driving healthcare organizations to transform from traditional "providers of diagnosis and treatment services" into "data-driven participants in the health ecosystem". As the "oil" of the digital age, data asset-ization is redefining how value is distributed in healthcare: converting data into measurable, tradable, value-adding assets has become a necessary path for data holders (for example, healthcare organizations).

In the process of turning data into assets, however, data holders (for example, healthcare organizations) face four typical challenges:
  • Data silos: fragmented data formats across EHR, PACS, LIS and other systems leave the 360-degree patient view incomplete.
  • Missing standards: a lack of uniformity — many data sources adopt no effective standard, while the industry's many general-purpose models come with inconsistent data quality standards and vastly different application scenarios — hinders effective use of the data.
  • Time-consuming governance: data governance relies on manual mapping, making ETL excessively slow.
  • Sunk value: massive volumes of data cannot support AI model training or research translation because they lack standardized processing.

The key to these typical challenges lies in interoperability and data standards, which makes the two major standards FHIR and OMOP indispensable.

FHIR (Fast Healthcare Interoperability Resources) is a healthcare interoperability standard developed by HL7. Built around RESTful APIs and JSON/XML formats, it addresses real-time, lightweight, cross-system healthcare data exchange, and as a result FHIR currently dominates healthcare data asset-ization worldwide. Through standardized data interfaces and persistence capabilities, FHIR turns raw medical data into assets that can circulate and be reused, providing the underlying support for internet-based healthcare, health management, insurance actuarial work and other scenarios.

OMOP (Observational Medical Outcomes Partnership) is an international collaboration of multidisciplinary researchers. The OMOP Common Data Model (CDM) and its standardized vocabularies provide a unified framework and language for healthcare data analysis; hospitals, pharmaceutical companies, and data companies can all benefit from the standardized data assets OMOP provides. By unifying data structures and terminology, it supports large-scale cohort studies across databases and institutions. OMOP's primary scenarios are in real-world research — drug safety surveillance, multi-center clinical studies, and disease-specific databases — where its standardized data model and toolchain lower the barrier to research and accelerate the transformation of data into knowledge, making it core infrastructure for drug development and public health policy-making.

 

One diagram to understand the different scenarios of FHIR and OMOP and how they work together

If FHIR is the "starting point" of data asset-ization, turning scattered medical data into circulating assets through real-time exchange and standardized interfaces, then OMOP is the "end point" of research value, mining the knowledge within those data assets through standardized models and toolchains to support clinical decision-making and drug development. From FHIR to OMOP, flexible conversion effectively drives the application of data assets.

The InterSystems FHIR to OMOP data pipeline provides a solution. Through standards-based interoperability (data interfaces built on the FHIR R4 standard), automated mapping (pre-built OMOP CDM mapping rules that greatly shorten the traditional ETL development cycle), automated data quality analysis, and a cloud-native architecture (elastic scaling on AWS HealthLake), it helps users rapidly bring their data assets into OMOP and gain a head start in the digital age!

📣📣📣 On July 25, 2025 at 15:00, we have invited InterSystems Sales Engineer Kate Lau to give a talk on the "InterSystems FHIR to OMOP data pipeline", with a demo walking through the InterSystems FHIR to OMOP solution in detail. Click here to register!

InterSystems Official
· Jul 22

VS Code - ObjectScript Extension Now with Improved Telemetry

InterSystems is pleased to announce the release of version 3.0.5 of the VS Code - ObjectScript extension. This release includes numerous bug fixes as well as changes to the telemetry data collected. Collecting additional usage data helps InterSystems identify and prioritize the fixes and enhancements that will benefit you, our users, the most. Personally identifiable information (PII) will never be collected, and telemetry can be disabled via the telemetry.telemetryLevel setting. The full list of data collected is available here. Thank you for using our extensions, and please report any issues if you have feedback!
