During a consulting engagement, I found a CACHEAUDIT database of more than 100 GB at the client's site. The reason was simple: several processes produced a huge number of %System/%System/OSCommand audit records due to frequent external calls ($zf(-100,...)). As is well known, these events can easily be disabled system-wide, but that can hardly be considered secure enough. Reducing the number of days before audit cleanup from the default 62 to some reasonable figure (e.g., 15) seems to be a better solution, but...
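For reference, here is a minimal sketch of the "just disable it" option, using the Security.Events class in the %SYS namespace; Get/Modify are the documented calls, but please double-check the Enabled property name on your version before touching production (the retention period itself is changed via the audit purge settings, which is the safer route this post is about):

 // A sketch only: run in the %SYS namespace
 ZN "%SYS"
 // Read the current settings of the %System/%System/OSCommand event
 Set sc = ##class(Security.Events).Get("%System", "%System", "OSCommand", .props)
 If 'sc { Do $System.Status.DisplayError(sc) Quit }
 // Disabling the event stops the flood, but the audit trail is lost entirely
 Set props("Enabled") = 0
 Set sc = ##class(Security.Events).Modify("%System", "%System", "OSCommand", .props)
 If 'sc { Do $System.Status.DisplayError(sc) }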

As we all know, InterSystems IRIS offers an extensive range of tools for improving the scalability of application systems. In particular, much has been done to facilitate the parallel processing of data, including parallelism in SQL query processing and the most attention-grabbing feature of IRIS: sharding. However, many mature systems that started back on Caché and were carried over to IRIS actively use the multi-model features of this DBMS, that is, the ability of different data models to coexist within a single database. For example, the HIS qMS database contains semantic relational (electronic medical records), traditional relational (interaction with PACS), and hierarchical data models (laboratory data and integration with other systems). Most of these models are implemented using SP.ARM's qWORD tool, a mini-DBMS based on direct access to globals. Unfortunately, this means the new parallel query processing capabilities cannot be used for scaling, since those queries do not go through IRIS SQL access.
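As a side note for readers who have not used it: the SQL-level parallelism mentioned above is exposed, among other ways, through the documented %PARALLEL keyword. Below is a tiny sketch (the table name is purely illustrative) of how such a query would be issued from ObjectScript, i.e., exactly the path that direct-global-access code like qWORD never takes:

 // Illustrative table name; %PARALLEL asks the optimizer to parallelize the query
 Set stmt = ##class(%SQL.Statement).%New()
 Set sc = stmt.%Prepare("SELECT %PARALLEL COUNT(*) FROM Billing.Invoice WHERE InvoiceDate >= ?")
 If 'sc { Do $System.Status.DisplayError(sc) Quit }
 Set rs = stmt.%Execute("2020-01-01")
 If rs.%Next() { Write "Invoices since 2020: ", rs.%GetData(1), ! }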

Meanwhile, as the database grows, most of the problems inherent to large relational databases also become true for non-relational ones. This is a major reason why we are interested in parallel data processing as one of the tools for scaling.

In this article, I would like to discuss those aspects of parallel data processing that I have been dealing with over the years while solving tasks that are rarely mentioned in discussions of Big Data. I will focus on the technological transformation of databases, or rather, on technologies for transforming databases.

It's well known among Studio users that besides a few predefined code fragments (for ObjectScript, Basic, MV Basic), it's possible to add user-defined ones. I have found it rather convenient to use them as patterns that help follow certain conventions (internal standards) for writing, say, method descriptions.

But I haven't found a way to share these patterns other than plain copy-pasting. Has anybody succeeded with this task? Any help would be appreciated.

Introduction

Although InterSystems has long recommended using external backup tools, many users still opt for the internal Online Backup facility, which is included in all distributions of InterSystems products (IRIS Data Platform, Caché, etc.). The reasons are quite obvious:

Sometimes a global mapping for the same globals can be defined in different ways. For example, I need to define mappings for three globals, ^qAuditC, ^qAuditLog, and ^qAuditLogC, all pointing to the same database named APP-NOJOURN. Which approach is better from the performance point of view?

1) qAudit* => APP-NOJOURN (one record in global mapping table)
or

2) qAuditC => APP-NOJOURN
qAuditLog => APP-NOJOURN
qAuditLogC => APP-NOJOURN (three records in global mapping table)
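For completeness, here is a sketch of how option 1 (the single wildcard record) could be created programmatically, assuming the Config.MapGlobals class in the %SYS namespace (please verify the exact signature on your version); "APP" is just a placeholder for whatever namespace the mapping belongs to:

 // Run in %SYS; "APP" is a placeholder namespace name,
 // and the mapping name is given without the leading caret
 ZN "%SYS"
 Set props("Database") = "APP-NOJOURN"
 Set sc = ##class(Config.MapGlobals).Create("APP", "qAudit*", .props)
 If 'sc { Do $System.Status.DisplayError(sc) }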
