Thanks for the explanation, I assumed it worked something like that.

Do you agree with that approach though or would you prefer something more like Robert's description?

In my mind you don't create a database based on a specific file, so the DAT should be swappable while the database retains its settings; other database settings work that way.

Hi, that was my understanding too, but I have seen resource issues caused by moving CACHE.DAT in the past, and recently had an odd issue on IRIS, so I ran the test below.

I copied the IRIS.DAT from the USER database folder, which has the %DB_USER resource, into another database folder which had %DB_%DEFAULT. Afterwards the other database showed %DB_USER instead of %DB_%DEFAULT, so it seems the resource does move with the DAT file.
The copy was done manually with Windows file copy while IRIS was down, and no configuration changes were made.

This is all from viewing the database resource in the portal under System > Configuration > Local Databases.
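
If you'd rather check outside the portal, something like the below from a %SYS terminal should show the same thing. This is only a sketch: it assumes the database directory is the SYS.Database ID, and the path is just an example.

%SYS>set db = ##class(SYS.Database).%OpenId("C:\InterSystems\IRIS\mgr\user\")
%SYS>write db.ResourceName
%DB_USER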

That removes all " characters. While these are indeed double quotes, someone could also mean "" where there are doubled quotes, i.e. an escaped quote inside a quoted string (str="this is a quote"".").

For that you need $replace rather than $translate, since $translate maps individual characters and cannot match a two-character sequence

$replace(str,"""""","""")

or, to remove all quote characters, I find it more readable to use the ASCII value

$translate(str,$c(34),"")

or to replace a doubled quote with a single "

$replace(str,$c(34,34),$c(34))
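
For example, from the terminal, using a value that contains a doubled quote (the sample string is just an illustration):

USER>set str = "this is a quote""""."
USER>write $replace(str,$c(34,34),$c(34))
this is a quote".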

Not 100% sure of the requirement, but if you write the output from the Cache code as if it were going to the terminal, then you can redirect the output to a file by appending >output.file. I don't think cterm is the way to do that though, as cterm will trap all the output itself; you would need to use csession for that.

You could also just write to a file in your Cache code, or run an external command/script from inside the Cache code; a quick sketch of writing to a file is below.
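
Something like this, using old-style sequential file I/O (the path and output text are placeholders; %Stream.FileCharacter would be the class-based alternative):

set file = "C:\temp\output.txt"
open file:("WNS"):10    // W=write, N=create new, S=stream format, 10s timeout
if '$test { write "could not open "_file,! quit }
use file
write "some output",!
close file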

Access Denied is generally either a user permissions or a resource issue. However, from one of your screenshots it looks like you are connecting to a Cache install running an evaluation license? These can have limitations, so it is also possible you are hitting a license (user/process) limit.

Most likely the user does not have permissions. Based on screenshots in other comments, it looks like you are using the wrong password for the Marco user and that the Admin user is disabled, but you have obviously been able to log into the portal and overcome the issue mentioned with selecting events? It would also be useful to check the console log for that time.

If this is still happening, perhaps update this question with details on the username, whether that user works in the portal, and the latest errors in the event log that relate to the Studio failure. Look at all entries for that time, not just the login failures, in case the issue is something different.

Microsoft used to have an Excel Viewer but it has been retired, though it may still work.
Download the latest online Excel Viewer - Office | Microsoft Docs

If you don't have an MS Office license I'd suggest LibreOffice.
Home | LibreOffice - Free Office Suite - Based on OpenOffice - Compatible with Microsoft

However, you mention CSV files, and these are not actually Excel spreadsheets: they can be viewed in a text editor if need be, and any spreadsheet software will open a formatted view. There are also CSV-specific viewers, such as Nirsoft's
CSV / Tab delimited file viewer and converter for Windows (nirsoft.net)

Ok, the below is from my limited knowledge so may need correcting.

The base file format is simple; the complication is the set of compression methods (which includes encodings) used, and making sure they are all supported, as they are part of the actual file format specification.

I have issues with some of the speed claims, as they seem to stem from the statement that the column-based (as opposed to row-based) layout means a reader doesn't have to read the whole file. That benefit is fairly limited: as soon as you need even a single record that includes the last field, you still end up reading through the whole file. It also ignores that for most of the usage I have had of CSV files, I have needed to ingest the whole file anyway.

Any other speed improvements are likely due to the code and database ingesting/searching the data, not the format itself.

That doesn't mean there aren't advantages. It may be easier to write more efficient code, and the built-in compression can reduce time and costs.

So the question is how popular this format is. Well, it's pretty new and so hasn't made great inroads as yet; however, given InterSystems' interest in "Big Data", ML, and its integration with Spark etc., it does make sense, as the format is part of the Hadoop and Spark ecosystems.

One thing to note, though, is that Python already supports this through the PyArrow package, so that might be the best solution: simply use embedded Python to process the files.
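
As a rough sketch of what that might look like from ObjectScript on IRIS (assuming pyarrow has been installed into the instance's Python environment; the file path is just an example):

set pq = ##class(%SYS.Python).Import("pyarrow.parquet")
set table = pq."read_table"("C:\data\sample.parquet")
write table."num_rows",!    // row count, just to confirm the file was read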

As far as I recall, this isn't logged at the point the status becomes "dead", only when the process is cleaned up.

We once wrote a scheduled task that looks for and logs these processes using IsGhost(), as they can sometimes sit there a while waiting to be cleaned up. The idea was something like the sketch below.
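
This is a sketch from memory: it assumes IsGhost() is a classmethod on Ens.Job (check the class reference for your version), and the console-log call is just one way to record the hit.

set pid = ##class(%SYS.ProcessQuery).NextProcess("")
while pid '= "" {
    // Ens.Job is an assumption here; the original task only used IsGhost()
    if ##class(Ens.Job).IsGhost(pid) {
        do ##class(%SYS.System).WriteToConsoleLog("Ghost process found: "_pid)
    }
    set pid = ##class(%SYS.ProcessQuery).NextProcess(pid)
}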

There is a mention in the IRIS documentation that these are logged to the Event Log, but I am not sure what happens if you aren't using Interoperability/Productions in IRIS; I suspect there is no logging in that case, as it is part of Production Monitoring.