We recently went through an audit of our security policies and procedures for IRIS. As a result of that audit, we need to make adjustments to the way security is set up within IRIS. I have already made my changes on our TEST and DEVELOPMENT environments, but now I am trying to plan out how to make these changes in Production.
These changes include moving away from the PWS, setting up Apache and the Web Gateway, moving to LDAP instead of Delegated Authentication, updating Web Applications, updating Resources, updating Services, etc.
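One way to make the Production change repeatable is to script the security pieces instead of re-entering them in the Portal. A minimal sketch, assuming the Export/Import classmethods on the %SYS Security.* classes (the exact signatures and the file paths shown are assumptions to verify against the class reference for your IRIS version):

// Run in %SYS on the instance that already has the new configuration (e.g. TEST)
ZN "%SYS"
Set sc = ##class(Security.Applications).Export("/tmp/apps.xml")
Set sc = ##class(Security.Resources).Export("/tmp/resources.xml")
Set sc = ##class(Security.Services).Export("/tmp/services.xml")

// Then in %SYS on Production during the change window
Set sc = ##class(Security.Applications).Import("/tmp/apps.xml")
Set sc = ##class(Security.Resources).Import("/tmp/resources.xml")
Set sc = ##class(Security.Services).Import("/tmp/services.xml")

The exported XML also gives you something to diff against Production's current state before the change.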
I recently had a company-enforced OS upgrade, and ever since going from macOS 14.x to 15.x I have been having issues with SSL in IRIS.
The setup: an ARM (M3 Pro) machine running macOS 15.2, with the latest Docker Desktop (4.37.0 at the time of writing). The Docker container runs IRIS for UNIX (Ubuntu Server LTS for x86-64 Containers) 2022.1.2 (Build 574_0_22161U). This container has not changed.
I followed the Documentation instructions to install Caché on my Linux box and it seems to be running fine except when I try to modify the configuration. I keep getting errors that a previous cache.cpf file (e.g., cache.cpf_8503) cannot be opened. Is my Linux "user who owns instance" account deficient in Linux permissions?
I want to run an online backup of a single Caché database from a Windows .bat file. I found material in the online documentation about terminal scripts, the ^DBACK utility, and the external entry points for ^DBACK, so I can invoke the external entry points from a terminal script like this: send: Do BACKUP^DBACK<CR>
Is there a better way to back up the database from a .bat/script?
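One alternative is an external backup driven entirely from the .bat: freeze writes, copy the CACHE.DAT files with OS tools, then thaw. A minimal sketch, assuming the documented Backup.General freeze/thaw API in the %SYS namespace (how the .bat invokes these methods, e.g. via cache -s<mgr-dir> -U%SYS "...", varies by version and should be verified):

/// Wrapper methods callable from a Windows .bat; both must run in %SYS.
Class MyApp.BackupHooks
{

/// Suspend database writes so the .bat can safely copy the CACHE.DAT files.
ClassMethod Freeze() As %Status
{
    Quit ##class(Backup.General).ExternalFreeze()
}

/// Resume normal write activity once the file copy has finished.
ClassMethod Thaw() As %Status
{
    Quit ##class(Backup.General).ExternalThaw()
}

}

Compared with driving BACKUP^DBACK through a terminal script, the freeze/copy/thaw approach is easier to error-check from the .bat side.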
We will be transitioning from a server running HP OpenVMS to one running RHEL 7. The main question some of the team had was what would be the best method for moving the globals to the new system.
Also, I was wondering whether others have transitioned from OpenVMS to RHEL. If so, were there any kinks we should be aware of prior to the transition?
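As far as moving the globals goes, a global export/import is platform-neutral, so it sidesteps any questions about copying CACHE.DAT files between platforms. A sketch using %Library.Global; the global name and paths are placeholders, and the exact argument list varies by version, so check the class reference before relying on it:

// On the OpenVMS side: export the chosen globals in block (%GOF-style) format
Set sc = ##class(%Library.Global).Export($namespace, "MyGlobal.gbl", "/path/out.gof", 7)

// On the RHEL side: import the same file into the target namespace
Set sc = ##class(%Library.Global).Import($namespace, "MyGlobal.gbl", "/path/out.gof", 7)

The interactive ^%GOF / ^%GIF utilities do the same job from a terminal session.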
Can someone tell me if intersystems-ru/deepsee-sysmon-dashboards is developed for a specific version of Ensemble? It looks like it could be useful to my group, but we aren't upgrading until later this year and we are on 2015.2.2.
I am getting the error below when purging record map batches and was wondering whether anyone out there has experienced this; if so, any advice would be appreciated.
Failed to purge body for header 9747192, BodyClassname='******.Batch':ERROR #5823: Cannot delete object, referenced by '*****.Record.%ParentBatch'
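That error generally means the generated Record objects still reference the batch through %ParentBatch, so the batch body cannot be deleted until those records are removed. A sketch of clearing the child records first and then re-running the purge; MyPkg.Record is a placeholder for the generated Record class, and the SQL column name for %ParentBatch should be confirmed against the generated table definition:

ClassMethod DeleteBatchRecords(pBatchId As %String) As %Status
{
    // Delete the records that still point at the batch, then purge the batch again
    Set stmt = ##class(%SQL.Statement).%New()
    Set sc = stmt.%Prepare("DELETE FROM MyPkg.Record WHERE %ParentBatch = ?")
    If $$$ISERR(sc) Quit sc
    Set rs = stmt.%Execute(pBatchId)
    If rs.%SQLCODE < 0 Quit $$$ERROR($$$GeneralError,"SQLCODE "_rs.%SQLCODE_": "_rs.%Message)
    Quit $$$OK
}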
We are seeing more and more customers being lured by the latest infrastructure technologies, particularly Composable Infrastructure, which comes with all sorts of promises about data center consolidation and cost savings.
The question is: are there any concerns with running HealthShare/TrakCare on these platforms, or things to look out for? Is anyone out there already running on them?
To be more specific, this is HPE Synergy with 480 compute blades booting as bare metal.
Not sure if anyone would know this... but I presented my team with a proof of concept of running SAM to monitor our IRIS Development and Test clusters. In talking with them, we decided we would like additional OS metrics beyond what is built into SAM. Looking for more OS detail, I found node_exporter from Prometheus. I added node_exporter to the server we want to monitor, then tried to configure isc_prometheus.yml to use it. That did not go well, and when I restarted SAM it would no longer collect even the built-in metrics.
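For context, a node_exporter scrape job in plain Prometheus terms looks roughly like the following (myhost.example.com:9100 is a placeholder for the monitored server; where such a job belongs inside the isc_prometheus.yml that SAM generates, and whether SAM preserves a hand-edited job across restarts, is exactly the part in question):

scrape_configs:
  - job_name: node
    scrape_interval: 15s
    static_configs:
      - targets: ['myhost.example.com:9100']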
I am working on our failover procedures as we move to a mirrored environment with an arbiter, two failover nodes, and an async (DR) node. There are some system commands I would like to call when the mirror fails over, and I am working on a ZMIRROR routine for that. I also want an additional step for when we manually shut a member down to force the mirror to fail over, so I was looking at using ZSTOP to call a couple of things during shutdown. The documentation has an example, but a couple of questions come to mind about using ZSTOP.
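For reference, the hook in question is the %ZSTOP routine in %SYS, whose SYSTEM entry point runs while the instance shuts down. A minimal sketch, where the shell script path is purely a placeholder for whatever system commands need to run:

%ZSTOP ; shutdown hooks for this mirror member
SYSTEM ;
    ; Runs once while the instance is shutting down.
    ; Keep it quick: anything that hangs here delays the shutdown itself.
    Do $ZF(-1,"/usr/local/bin/notify-mirror-failover.sh")
    Quit
LOGIN ;
    Quit
JOB ;
    Quit
CALLIN ;
    Quit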
In a Business Operation, how do we know which source sent the current request when multiple requests are coming in at the same time?
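Assuming a reasonably recent Ensemble, the header of the request the operation is currently processing is available on the operation itself, and that header carries the sender's config name. A sketch (verify that %RequestHeader is populated in your version; otherwise the same information can be read from the Ens.MessageHeader rows for the session):

Method HandleRequest(pRequest As Ens.Request, Output pResponse As Ens.Response) As %Status
{
    // SourceConfigName on the current message header names the sending config item
    Set tSource = ..%RequestHeader.SourceConfigName
    $$$TRACE("Request received from "_tSource)
    Set pResponse = ##class(Ens.Response).%New()
    Quit $$$OK
}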
In a customer project we started enforcing the "Inactivity Limit" defined in the System-Wide Security Parameters. The customer expects accounts to become Disabled after they have been inactive for the specified number of days. However, that doesn't happen; it seems the Inactivity Limit only takes effect after a login.
In other words, account inactivity only starts being tracked after the first login. Can you confirm that?
Lastly, for accounts that have been manually Disabled and have an expired password, we see the following weird behavior:
Hi All, Caché is not starting. I checked the cconsole.log and it shows the error below:
The following parameters are missing from section [Journal]: '2,12' at line 143
But I checked the last cpf file, and it does contain that [Journal] section:
[Journal]
AlternateDirectory=C:\InterSystems\CACHE16\mgr\journal\
BackupsBeforePurge=2
CurrentDirectory=C:\InterSystems\CACHE16\mgr\journal\
DaysBeforePurge=2
FileSizeLimit=1024
FreezeOnError=0
JournalFilePrefix=
JournalcspSession=0
I need to pull a .zip file and unzip it to access the files contained in the archive, using a web service call. Can anyone suggest an option for getting at the zipped files?
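If the archive can be fetched over plain HTTP(S), one approach is to pull it with %Net.HttpRequest, save the response stream to disk, and shell out to the OS unzip utility. A sketch; the host, the paths, and the use of $ZF(-100) (which needs OS callout privileges) are assumptions for illustration, and if the zip instead comes back as a SOAP attachment the same stream-to-file and unzip steps apply to that stream:

ClassMethod FetchAndUnzip() As %Status
{
    // Download the archive
    Set http = ##class(%Net.HttpRequest).%New()
    Set http.Server = "files.example.com"        // placeholder host
    Set sc = http.Get("/exports/batch.zip")      // placeholder path
    If $$$ISERR(sc) Quit sc

    // Save the response body (a binary stream) to disk
    Set file = ##class(%Stream.FileBinary).%New()
    Set sc = file.LinkToFile("/tmp/batch.zip")
    If $$$ISERR(sc) Quit sc
    Set sc = file.CopyFromAndSave(http.HttpResponse.Data)
    If $$$ISERR(sc) Quit sc

    // Unzip with the OS utility; $ZF(-100) takes the program and its arguments separately
    Set rc = $ZF(-100,"","unzip","-o","/tmp/batch.zip","-d","/tmp/batch")
    Quit $Select(rc=0:$$$OK,1:$$$ERROR($$$GeneralError,"unzip returned "_rc))
}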
I'm trying to write an installer manifest that can create a namespace, a resource (%DB_namespace), and a role (with that resource), based on the namespace. So you could pass in "ABC" or "XYZ", and it would create the %DB_ABC resource and an ABC role with %DB_ABC:RW permissions, or the %DB_XYZ resource and an XYZ role with %DB_XYZ:RW permissions, accordingly.
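A sketch of what that manifest could look like, assuming the standard %Installer tags and predefined variables (${MGRDIR} and the attribute names should be double-checked against the %Installer class reference for your version). Calling setup(.vars) with vars("NAMESPACE")="ABC" would then create the ABC database and namespace, the %DB_ABC resource, and an ABC role holding %DB_ABC:RW:

Class App.SecurityInstaller
{

XData Install [ XMLNamespace = INSTALLER ]
{
<Manifest>
  <Default Name="NAMESPACE" Value="ABC"/>
  <Namespace Name="${NAMESPACE}" Create="yes" Code="${NAMESPACE}" Data="${NAMESPACE}">
    <Configuration>
      <Database Name="${NAMESPACE}" Dir="${MGRDIR}${NAMESPACE}" Create="yes"
                Resource="%DB_${NAMESPACE}"/>
    </Configuration>
    <Resource Name="%DB_${NAMESPACE}" Description="Database resource for ${NAMESPACE}"/>
    <Role Name="${NAMESPACE}" Description="Role for ${NAMESPACE}"
          Resources="%DB_${NAMESPACE}:RW"/>
  </Namespace>
</Manifest>
}

/// Standard generator method that turns the XData block above into runnable code.
ClassMethod setup(ByRef pVars, pLogLevel As %Integer = 3, pInstaller As %Installer.Installer, pLogger As %Installer.AbstractLogger) As %Status [ CodeMode = objectgenerator, Internal ]
{
    Quit ##class(%Installer.Manifest).%Generate(%compiledclass, %code, "Install")
}

}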
We are using Caché version 5.0.15 and are running into an error. When I try to copy the .dat file, I get a cyclic redundancy error.
I have run the integrity check and the repair utility; nothing works.
Now my question is how to recover the .dat file with its data.
Hi All, I'm getting an error when I try to test LDAP authentication:
"Connect error: 81 - Server Down". I am a beginner at connecting LDAP with InterSystems Caché.
Please provide some information on how to proceed.
We are currently using Ensemble 2015.2.2 on AIX. If I install the Field Test on a Windows desktop, is it possible to import the CACHE.DAT from my AIX server so I can do some proof-of-concept development?
I am trying to work through our upgrade steps from 2015.2.2 to Health Connect 2019.1. Is there a place online where I can access previous versions of the Health Connect documentation? I am specifically looking for the Health Connect 2018.1.2 documentation so I can understand the upgrade process on AIX.