I am not a system administrator, but it used to be very simple to install the CSP Gateway on a Linux system running Apache. I would run the CSP Gateway installation program, and after it was done all I had to do was fine-tune some configuration in the CSP Gateway portal at http://<ip>/csp/bin/Systems/Module.cxw and I was up and running.
We are accessing an InterSystems Caché database via unixODBC and displaying the data on a PHP website. Recently we upgraded PHP to version 5.6, and now we are getting non-displayable characters (�), but only for strings; number and date fields display correctly.
When querying the database via isql, everything works fine (no special characters).
I have looked around the internet and found that PHP 5.6 changed the default character set to UTF-8.
Is there anything we can do on the Caché side to address this?
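One thing worth trying is pointing the PHP DSN at the Unicode build of the Caché ODBC driver so the whole chain speaks UTF-8. A minimal unixODBC sketch, assuming the Unicode library is named libcacheodbcu.so as in the Caché ODBC kit; the paths, DSN name, port, and credentials are placeholders, and the key names follow the sample odbc.ini shipped with the kit, so check yours:

    ; /etc/odbc.ini -- placeholder values throughout
    [CacheUTF8]
    Driver      = /usr/cachesys/bin/libcacheodbcu.so
    Description = Cache via the Unicode ODBC driver
    Host        = 127.0.0.1
    Port        = 1972
    Namespace   = USER
    UID         = _SYSTEM
    Password    = SYS

On the PHP side it may also be worth pinning default_charset in php.ini, since 5.6 changed that default to UTF-8.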
I've configured my Ensemble instance to use IIS 7 according to the instructions in the CSP Gateway Configuration Guide, and I've configured the CSP virtual application.
According to the documentation, ^ISCSOAP is the logging global for the SOAP service, but why are these messages being sent to cconsole.log?
04/04/18-01:00:00:597 (10608) 2 ^ISCSOAP in Namespace %SYS has been active for 348 day(s).
04/04/18-01:00:00:598 (10608) 2 ^ISCSOAP in Namespace X has been active for 165 day(s).
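That console entry looks like the periodic reminder that SOAP logging has been left switched on for a long time. If the logging is no longer needed, a minimal sketch to inspect and turn it off, run in each namespace named in the message (%SYS and X here):

    // show what is currently configured
    zwrite ^ISCSOAP
    // stop SOAP logging entirely in this namespace
    kill ^ISCSOAP
    // or keep the log-file setting but log nothing:
    // set ^ISCSOAP("Log")=""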
I'm trying to schedule a task hourly. It's currently set to execute "Daily" and to run every 60 minutes, but it's only running once a day. Do I need to change "Daily" to "On Demand" in order to get it to run hourly?
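For what it's worth, "Daily" plus a within-day frequency should be the right combination; "On Demand" would stop it running on a schedule at all. A hedged sketch of the corresponding %SYS.Task properties, with names taken from the class reference — verify the property names and value meanings on your version, and make sure the daily start/end window covers the whole day:

    // run in %SYS; taskId is the ID shown on the Task Schedule page
    set task=##class(%SYS.Task).%OpenId(taskId)
    set task.TimePeriod=0           // 0 = daily, per the class reference
    set task.TimePeriodEvery=1      // every 1 day
    set task.DailyFrequency=1       // run several times per day
    set task.DailyFrequencyTime=0   // interval expressed in minutes
    set task.DailyIncrement=60      // every 60 minutes
    set task.DailyStartTime=0       // 00:00:00
    set task.DailyEndTime=86340     // 23:59:00 -- a too-small window may limit it to one run
    do task.%Save()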
Is there any way (on a Windows machine) to rename an installed instance? I installed my instance as 20201, and now that I've upgraded it in place to 2020.4 I would like to change the instance name to 20204, but I'm not sure how, where, or even whether it is possible to rename an instance.
I recently deleted several Caché namespaces to free up space, shut Caché down, and then this started happening: the Caché cube is grey, and only the "Start Caché" option is unavailable.
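For reference, a minimal check from a Windows command prompt while the cube is misbehaving; the instance name below is a placeholder:

    rem run from the instance's bin directory, or with ccontrol on the PATH
    ccontrol list
    ccontrol start CACHE
    rem if the start fails, the reason is usually written to <install-dir>\mgr\cconsole.log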
I am taking over a production system in which some routines and classes in HSLIB and other databases were modified. However, I do not know what was modified.
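As a hedged starting point, class definitions record when they were last changed, so a query like the one below (run in a namespace that maps HSLIB) at least shows what was touched most recently, though not what the change was; the TOP value is illustrative:

    // list the 50 most recently modified class definitions in this namespace
    set stmt=##class(%SQL.Statement).%New()
    set sc=stmt.%Prepare("SELECT TOP 50 Name, TimeChanged FROM %Dictionary.ClassDefinition ORDER BY TimeChanged DESC")
    if $system.Status.IsOK(sc) { do stmt.%Execute().%Display() }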
Hi all! As far as I know, InterSystems recommends the use of huge pages. If the number of huge pages is sufficient, we'll see something like this in cconsole.log during Caché startup:
12/29/17-14:40:50:360 (3625) 0 Allocated 4630MB shared memory using Huge Pages: 4096MB global buffers, 256MB routine buffers
But if the number of huge pages is not large enough to hold all of the global and routine buffers, Caché won't use huge pages at all.
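For reference, the Linux side of this is just reserving enough 2 MB huge pages to cover the shared memory Caché allocates — the 4630 MB in the message above needs roughly 2,315 pages. A sketch with a placeholder value (leave some headroom and verify against /proc/meminfo):

    # /etc/sysctl.conf (or a drop-in under /etc/sysctl.d/)
    vm.nr_hugepages = 2400

    # apply and verify
    sysctl -p
    grep Huge /proc/meminfo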
Hi all, when I try to run a CSP page from Studio it shows a "Server Availability Error", and the Management Portal and documentation throw the same error. How can I recover from this?
Currently, we have an application running in one namespace ("Database B") that has globals and routines mapped to another database ("Database A"). After cleaning up Database A, we found that 90% of the space is free. We would like to compact Database A and release the unused space. However, we are running OpenVMS, which seems to be the issue.
For databases consisting of only globals, we are able to use ^GBLOCKCOPY; however, we need to ensure that the routines and mappings are also copied.
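One point worth noting: routines are themselves stored in globals, so a full database-to-database ^GBLOCKCOPY run should carry them across, while global and routine mappings live in the instance configuration (CPF), not in the database. A hedged terminal sketch:

    // run in %SYS; ^GBLOCKCOPY is interactive -- pick the database-to-database
    // copy option and select Database A as the source
    zn "%SYS"
    do ^GBLOCKCOPY
    // routine source and object code live in globals such as ^ROUTINE, ^rMAC,
    // ^rOBJ, ^rINC (and ^oddDEF/^oddCOM for classes), so a full copy includes them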
Our Caché server (version 5.0.15) recently crashed, and we are in the process of recovery. We replaced the server's hard disk and are trying to reinstall Caché (5.0.15), but we are getting the following error. Can anybody shed some light on the issue?
In the System Management Portal I'm logged in as UnknownUser (from which I accidentally removed the %All role), so I log out of UnknownUser and try to log in as root or Admin, but I only see the following screen:
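For reference, a hedged way back in when no usable account is left is emergency access startup, described in the Caché security documentation; the instance name, user, and password below are placeholders:

    rem from a Windows command prompt
    ccontrol stop CACHE
    ccontrol start CACHE /EmergencyId=TempAdmin,TempPassword
    rem log in as TempAdmin, restore %All on UnknownUser (or your own account),
    rem then restart the instance normally without the flag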
I'm having a problem with the Task Manager: the tasks simply stopped running. I had a problem with queued messages, and while trying to figure out what to do I'm afraid I may have messed up something else. Can someone help me?
I was wondering whether there is a procedure or documentation for securing (HTTPS) the web portal in IRIS/Ensemble.
Currently we are using LDAP delegated authentication to access the web portal. However, as more and more emphasis is put on securing applications within networks, I can see management/security asking us to make sure that the web portal is more secure.
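In case it helps frame the request: the Management Portal is served through the CSP/Web Gateway, so the usual approach is to front the Gateway with a TLS-enabled web server and serve /csp only over 443. A minimal Apache sketch, assuming an existing Gateway configuration; the server name and certificate paths are placeholders:

    # placeholder names and paths throughout
    <VirtualHost *:443>
        ServerName iris.example.com
        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/iris.example.com.crt
        SSLCertificateKeyFile /etc/ssl/private/iris.example.com.key
        # re-use the CSP/Web Gateway directives from the existing port-80
        # virtual host here unchanged
    </VirtualHost>
    # optionally redirect plain HTTP to HTTPS
    <VirtualHost *:80>
        ServerName iris.example.com
        Redirect permanent / https://iris.example.com/
    </VirtualHost>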
I have a problem with an Ensemble instance on Windows accessing a network shared directory. The Ensemble service (services.msc) runs under a user account that has access to this network share:
- When I try to copy or access files from a terminal, it works: the command w ##class(%SYS.ProcessQuery).%OpenId($Job).OSUserName returns the user defined on the Ensemble service's Log On tab.
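To narrow it down, a small test routine can record which OS user a background (JOB'd) process runs as and whether that process can see the share, since background processes may not match what an interactive terminal reports. The routine name, global, and UNC path below are placeholders:

    ShareCheck ; compare terminal vs. background access to the share
        set ^ShareCheck("terminal","OSUser")=##class(%SYS.ProcessQuery).%OpenId($Job).OSUserName
        job Check^ShareCheck   ; spawn a background process and let it report
        quit
    Check ; runs in the JOB'd (background) process
        set ^ShareCheck("job","OSUser")=##class(%SYS.ProcessQuery).%OpenId($Job).OSUserName
        set ^ShareCheck("job","DirExists")=##class(%File).DirectoryExists("\\server\share\inbound")
        quit

After running ShareCheck from a terminal, zwrite ^ShareCheck shows both results side by side.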
I have a web service, and I need only a specific user to be able to consume it. I also need this user to be able to consume ONLY the web service and nothing more (no access to the Management Portal).
Currently, we are utilizing batch jobs at the OS level to kick off routines that watch for files. We are trying to convert these processes to be performed by the Task Manager.
The routines contain while loops and keep looping as long as their time parameters are met.
What's the best way to ensure the Task Manager kicks them off after the nightly shutdown/backup/restart process completes? I want to ensure that they start regardless of the time we've set.
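A hedged sketch of how one of those watcher routines could become a custom task: the class and property names below (Demo.Task.FileWatcher, Directory) are illustrative, and the idea is to do one pass per invocation and let the schedule supply the repetition instead of an endless while loop:

    Class Demo.Task.FileWatcher Extends %SYS.Task.Definition
    {

    Parameter TaskName = "FileWatcher";

    /// directory to poll; task-definition properties appear as settings in the scheduler
    Property Directory As %String [ InitialExpression = "/data/inbound" ];

    /// the Task Manager calls OnTask() at each scheduled time; return a %Status
    Method OnTask() As %Status
    {
        set rs=##class(%File).FileSetFunc(..Directory)
        while rs.%Next() {
            set name=rs.%Get("Name")
            // ... process the file, then move or archive it ...
        }
        quit $$$OK
    }

    }

For the ordering concern, the scheduler's option to run a task after another task completes (or simply a start time safely after the restart window) is worth looking at.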
If I have a cache.dat file from a Windows 2012 (64-bit) machine and I want to mount it on a Caché instance running on RHEL, will it work? Assume the versions of Caché are the same.
Can a Caché mirror be used in the cloud? (i.e., stand up primary and backup member instances in a high-availability Caché mirroring configuration)
I'm investigating the validity of this configuration because I was under the impression that it may not be possible, since cloud servers do not (typically) have fixed IP addresses, which interferes with the virtual IP (VIP) settings for the mirror set.
Is this correct, and if there are workarounds (such as load balancing), can I have details on how this should be configured?
In one of our projects, with ECP and 10 ECP application servers, we face an issue from time to time where our journals fail to purge due to open transactions. Since we generate about 100-150 GB of journal files per day, this quickly becomes a big issue, and with mirroring a very big one. Mostly we have just rebooted our ECP data server so that it rolls back any open transactions, but that process takes too long and can steal a few hours. I have not found any way to get the list of open transactions in one place on the ECP data server. We just migrated our data server to 2018.1.
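A hedged sketch for listing processes that are inside a transaction on the data server; it assumes the 2018.1 %SYS.ProcessQuery class exposes an InTransaction column (check the class reference, as the column name may differ), and note that transactions opened on ECP application servers are owned by processes on those servers, so the same query may need to be run there as well:

    // run in %SYS on the data server
    set stmt=##class(%SQL.Statement).%New()
    set sc=stmt.%Prepare("SELECT Pid, NameSpace, Routine, UserName, ClientNodeName FROM %SYS.ProcessQuery WHERE InTransaction = 1")
    if $system.Status.IsOK(sc) { do stmt.%Execute().%Display() }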