According to Databricks, Apache Parquet is an open source, column-oriented data file format designed for efficient data storage and retrieval. It provides efficient data compression and encoding schemes with enhanced performance to handle complex data in bulk. Apache Parquet is designed to be a common interchange format for both batch and interactive workloads. It is similar to other columnar-storage file formats available in Hadoop, namely RCFile and ORC. (Source: https://www.databricks.com/glossary/what-is-parquet)

New CACHE.DAT file

Hello,

I have a database drive that is getting full and is almost out of space. My retention policy is to keep 3 days' worth of data. It seems I have to modify my daily purge code, as it is not doing the complete job. In the meantime, I was looking to move the current CACHE.DAT files for a couple of my largest databases and create new .DAT files to replace them. Can anyone kindly guide me through the process?

Thank you,
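For reference, a minimal sketch of one way the swap could be scripted, assuming the databases can be briefly dismounted; the SYS.Database classmethods run in the %SYS namespace, and the directory path is a placeholder:

 // Run in the %SYS namespace; the directory path is a placeholder.
 // Sketch: dismount the database, move the full CACHE.DAT aside at
 // the OS level, then create and mount a fresh, empty database file.
 set dir = "c:\databases\mydb\"
 set sc = ##class(SYS.Database).DismountDatabase(dir)
 if $SYSTEM.Status.IsError(sc) { write $SYSTEM.Status.GetErrorText(sc)  quit }

 // the old data stays in CACHE.DAT.old and is no longer visible to
 // the instance unless that file is mounted again as its own database
 do ##class(%File).Rename(dir_"CACHE.DAT", dir_"CACHE.DAT.old")

 set sc = ##class(SYS.Database).CreateDatabase(dir)
 if $SYSTEM.Status.IsError(sc) { write $SYSTEM.Status.GetErrorText(sc)  quit }
 set sc = ##class(SYS.Database).MountDatabase(dir)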


Hello,

First of all thanks for your help, time, and answers.

We would like to know what we are doing wrong and how we could improve and fix it.

We need to convert the API Monitor metrics, which are a string in this format:

iris_cache_efficiency 13449.122
iris_cpu_pct{id="CSPDMN"} 0
iris_cpu_pct{id="ECPWorker"} 0

[...]

iris_wdwij_time 11
iris_wd_write_time 8
iris_wij_writes_per_sec 0

To JSON.

We would expect them to look like normal JSON, as follows:
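A minimal sketch of one way to do this in ObjectScript, assuming one metric per line in the form name value or name{labels} value (no spaces inside label values), and assuming the target shape is a flat object keyed by the metric string, since the expected output is not shown above:

 /// Sketch: turn Prometheus-style metric lines into a flat
 /// %DynamicObject keyed by metric name (labels kept in the key).
 ClassMethod MetricsToJSON(metrics As %String) As %DynamicObject
 {
     set json = ##class(%DynamicObject).%New()
     for i = 1:1:$length(metrics, $char(10)) {
         // strip surrounding whitespace and control chars (handles CRLF)
         set line = $zstrip($piece(metrics, $char(10), i), "<>WC")
         continue:line=""
         // the last space-delimited piece is the value; the rest is the key
         set key = $piece(line, " ", 1, $length(line, " ") - 1)
         set value = $piece(line, " ", $length(line, " "))
         do json.%Set(key, +value)
     }
     quit json
 }

Calling %ToJSON() on the result would give, for the sample above, something like {"iris_cache_efficiency":13449.122,"iris_cpu_pct{id=\"CSPDMN\"}":0,...}.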


I have a case where the Active Directory service account passwords are changed every 3 months, and the changes are shared via the LastPass application, which requires logging into the app to retrieve the new password and manually entering it into the Interoperability Credentials configuration or the Service Registry.
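The manual step could in principle be scripted once the new password has been retrieved; a minimal sketch, assuming the SetCredential() classmethod on Ens.Config.Credentials (ID, username, password, overwrite flag) and placeholder values throughout:

 // Sketch with placeholder values: overwrite the stored password of an
 // existing Interoperability credentials entry (trailing 1 = overwrite).
 set sc = ##class(Ens.Config.Credentials).SetCredential("ADServiceAccount", "svc-user", "new-password-here", 1)
 if $SYSTEM.Status.IsError(sc) { write $SYSTEM.Status.GetErrorText(sc) }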


Hello everyone!
Sorry for the vague title! I wonder what would be the best way to import a large XML file into a production, modify it by deleting elements and nodes depending on the values in those nodes/elements, and then create a whole new XML file from the result?

I have gone through this: Using Caché XML Tools | Caché & Ensemble 2018.1.4 – 2018.1.8 (intersystems.com)
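For a file too large to load into a DOM, one streaming option is to read with %XML.TextReader and write the filtered copy with %XML.Writer. A minimal sketch, where the element name "obsolete" is a placeholder for whatever value-based condition the real nodes need, and attribute handling is omitted for brevity:

 /// Sketch: stream a large XML file and write a filtered copy,
 /// dropping every <obsolete> element together with its children.
 ClassMethod FilterXML(inFile As %String, outFile As %String) As %Status
 {
     set sc = ##class(%XML.TextReader).ParseFile(inFile, .reader)
     quit:$$$ISERR(sc) sc

     set writer = ##class(%XML.Writer).%New()
     set sc = writer.OutputToFile(outFile)
     quit:$$$ISERR(sc) sc

     set skipDepth = 0
     while reader.Read() {
         // while inside a dropped element, ignore nodes until its end tag
         if skipDepth {
             if (reader.NodeType = "endelement") && (reader.Depth <= skipDepth) { set skipDepth = 0 }
             continue
         }
         if reader.NodeType = "element" {
             if reader.Name = "obsolete" { set skipDepth = reader.Depth  continue }
             do writer.Element(reader.Name)
         }
         elseif reader.NodeType = "chars" { do writer.WriteChars(reader.Value) }
         elseif reader.NodeType = "endelement" { do writer.EndElement() }
     }
     quit $$$OK
 }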


This question originally appeared in the comments of the post: Making use of Multiple Sub Transforms in a main map || HL7

I'm having a similar problem trying to get PRD(1) into PV1:ReferingDoctor and PRD(2) into PV1:ConsultingDoc.

Running the subtransform populates the referring doctor for the first PRD; the second run then deletes the PV1 and makes a new one with only the consulting doctor populated.
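In case it helps frame the question: a DTL subtransform's generated Transform() method takes its target by reference, so whether a call adds to an existing PV1 or replaces it depends on the subtransform's Create setting being "existing" rather than "new". A sketch with hypothetical class and variable names:

 // Sketch with hypothetical names: run the subtransform once per PRD
 // against the same target segment. With Create = "existing" in the
 // subtransform DTL, the second call adds to pv1 instead of replacing it.
 set sc = ##class(My.PRDtoPV1).Transform(prd1, .pv1)
 set sc = ##class(My.PRDtoPV1).Transform(prd2, .pv1)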
