We have a new requirement being pushed down by our Data Security team: no longer use local SQL accounts to access our databases. They asked me to create a service account on the domain for our connections to each database.

I tried simply changing my JDBC connection to use this service account and password, but I am not having any luck connecting to the database.

" Connection failed.
Login failed for user 'osumc\CPD.Intr.Service'. ClientConnectionId:ade97239-c1c8-4ed1-8230-d274edb2e731 "
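If the service account is a Windows domain account, this failure is expected with plain SQL authentication: SQL Server checks a user/password pair against SQL logins only, so 'osumc\CPD.Intr.Service' is rejected. The JDBC connection has to request integrated (Kerberos) authentication instead. A minimal sketch, assuming the Microsoft JDBC driver and hypothetical host, database, and realm names:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class DomainAuthCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical server/database names. integratedSecurity=true makes the
        // Microsoft JDBC driver authenticate as a Windows/domain identity rather
        // than a SQL login; authenticationScheme=JavaKerberos uses pure-Java
        // Kerberos, and recent driver versions accept an explicit principal and
        // password with it, so the JVM need not run as the service account.
        String url = "jdbc:sqlserver://dbhost:1433;databaseName=MyDb;"
                   + "integratedSecurity=true;authenticationScheme=JavaKerberos";
        try (Connection conn = DriverManager.getConnection(
                url, "CPD.Intr.Service@OSUMC.EXAMPLE", "servicePassword")) {
            System.out.println("Connected: " + !conn.isClosed());
        }
    }
}
```

The domain account also needs to be granted a login on the SQL Server itself; without that, the same "Login failed" message comes back even when authentication succeeds at the domain level.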

Question · Mar 12, 2019
String to date

I am trying to convert a string to a date but cannot get it to work. I have a function that should take a date string and convert it to a date object.

Here is the example so far; I cannot get it to work. Any help is appreciated.

set p="12/03/2019"
 
w $System.SQL.TODATE(p,"YYYY-MM-DD")
 
<ILLEGAL VALUE>todate+32^%qarfunc
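; TODATE's mask must describe the input string: "YYYY-MM-DD" cannot parse
; "12/03/2019", which is what raises <ILLEGAL VALUE>. A mask that matches the
; input (day/month or month/day, whichever the string really is) parses fine:
w $System.SQL.TODATE(p,"DD/MM/YYYY")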

If I try this instead, I still get the wrong value returned:

set p="12/03/2019" 
w $ZDATE(p,3)
1841-01-12
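; $ZDATE expects a $HOROLOG day count, not a string: "12/03/2019" is coerced to
; the number 12, i.e. 12 days after day 0 (31 Dec 1840), hence 1841-01-12.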

Hi Cache team, I need to list all the user-defined schemas present in my Caché database, along with the user-defined tables, views, and the columns of those tables and views, through queries, so that I can write some JDBC code to run the queries and fetch this metadata. Any help is appreciated.

Thanks in Advance,

Kranthi kiran.
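JDBC can surface all of this through the standard java.sql.DatabaseMetaData API, so no hand-written catalog queries are strictly required. A minimal sketch against the Caché JDBC driver, with hypothetical connection details; the "user defined" filter is an assumption (system schemas in Caché generally begin with "%"):

```java
import java.sql.*;

public class ListUserSchemas {
    public static void main(String[] args) throws Exception {
        // Hypothetical host, namespace, and credentials; older drivers may need
        // Class.forName("com.intersys.jdbc.CacheDriver") first.
        String url = "jdbc:Cache://localhost:1972/USER";
        try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
            DatabaseMetaData md = conn.getMetaData();
            // All tables and views; a null schema pattern means every schema.
            try (ResultSet rs = md.getTables(null, null, "%",
                                             new String[] {"TABLE", "VIEW"})) {
                while (rs.next()) {
                    String schema = rs.getString("TABLE_SCHEM");
                    String table  = rs.getString("TABLE_NAME");
                    // Assumption: skip system schemas, which begin with "%".
                    if (schema == null || schema.startsWith("%")) continue;
                    System.out.println(schema + "." + table);
                    try (ResultSet cols = md.getColumns(null, schema, table, "%")) {
                        while (cols.next()) {
                            System.out.println("    " + cols.getString("COLUMN_NAME")
                                    + " : " + cols.getString("TYPE_NAME"));
                        }
                    }
                }
            }
        }
    }
}
```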


We have noticed that over the last 18 days our CACHE.dat has grown by 20 GB. Is there a way to break down the data in CACHE.dat to see what could be growing in size?

Let me state it another way: is there a way to see how much space an Operation/Service/Process is taking up within a certain Production?

Thanks

Scott Roth

The Ohio State University Wexner Medical Center
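In an Ensemble production, growth like this is often accumulated message traffic. A hedged sketch, assuming JDBC access to the namespace and that the standard Ens.MessageHeader table is what is filling up: it counts messages per configuration item since a given date (a count approximates activity, not bytes, so it only points at the busiest Operations/Services/Processes):

```java
import java.sql.*;

public class MessageVolumeByConfigItem {
    public static void main(String[] args) throws Exception {
        // Hypothetical host, namespace, and credentials.
        String url = "jdbc:Cache://localhost:1972/ENSEMBLE";
        String sql = "SELECT SourceConfigName, COUNT(*) AS Msgs "
                   + "FROM Ens.MessageHeader "
                   + "WHERE TimeCreated >= ? "
                   + "GROUP BY SourceConfigName ORDER BY Msgs DESC";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            // Look only at the recent growth window.
            ps.setTimestamp(1, java.sql.Timestamp.valueOf("2019-01-01 00:00:00"));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("%-40s %d%n",
                            rs.getString("SourceConfigName"), rs.getLong("Msgs"));
                }
            }
        }
    }
}
```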


I am looking for a database management tool; I would have expected to find something like this in the SMP.

Aim:

- show current database usage (i.e. size allocation) by database, then table, etc., and allow continued drill-down
- show the information as a table, so it can be sorted by size to find the biggest item easily
- also show it graphically

And then have the ability to track and trend growth in size over time:

- identify a normal growth pattern
- alert on variation (higher or lower) from normal, based on the recent trend


Currently, we have an application running in one namespace ("Database B") that has globals and routines mapped to another database ("Database A"). After cleaning up Database A, we found that 90% of the disk is free. We would like to compact Database A and release the unused space. However, we are running OpenVMS, which seems to be the issue.

For databases consisting of only globals, we are able to use ^GBLOCKCOPY; however, we need to ensure that the routines and mappings are also copied.

Question · Apr 26, 2017
Restore Database issue

Hi Everybody,

I'm trying to restore a database onto a 2016.2.2.853 Caché version, but I have some problems ...

My backup file contains 6 namespaces. After running Do ^DBREST and configuring all the namespaces in the portal, I can only reach 2 of the 6.

When I write zn "blabla" in the terminal, I get this error message:

ZN "blabla"
^
<DIRECTORY> *r:\data\blabla


Of course, the database and namespace are correctly defined.


Cache 2016.1.1.107 was installed on AIX 7.1 as the root user, with the cacheusr account created during installation. After changing cacheusr's umask at the operating-system level, files written by the database after compilation (e.g. .js, .csp) still show unchanged permissions when inspected on the AIX host (-rwxrw-r-- cacheusr cacheusr test.js).

Goal: files produced by database compilation should grant the "other" user class read and write permission.


Hello, I would like to know if Caché has any limitations on uploading files.

Why am I asking this? Because I am running into a problem here.

What happens is that when uploading files or images larger than 2.7 MB, Caché limits the size to 2.7 MB, and any file larger than that is saved corrupted.

At the moment the file or image is uploaded, Caché is limiting the file size to 2.6 MB.


We don't often use SQL within our org, mostly due to the performance issues we experience given the quantity of data we are reviewing.

Aside from the standard performance measures for non-Caché databases, are there any recommended approaches when querying large tables?

The table would have roughly 50M records, and the number of sub-nodes per record is not bounded.
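Server-side, the usual first steps are indexing the fields used in WHERE clauses and refreshing table statistics (e.g. with $SYSTEM.SQL.TuneTable). On the client side, the fetch pattern also matters at this scale. A minimal JDBC sketch, with hypothetical table and column names, that streams the scan in batches rather than buffering 50M rows:

```java
import java.sql.*;

public class LargeTableScan {
    public static void main(String[] args) throws Exception {
        // Hypothetical connection details, table, and column names.
        String url = "jdbc:Cache://localhost:1972/USER";
        String sql = "SELECT ID, ItemValue FROM MySchema.MyTable "
                   + "WHERE ItemDate >= ?";   // filter on an indexed column
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setFetchSize(1000);             // fetch in batches, not all at once
            ps.setDate(1, java.sql.Date.valueOf("2019-01-01"));
            try (ResultSet rs = ps.executeQuery()) {
                long count = 0;
                while (rs.next()) {
                    count++;                   // process each row as it arrives
                }
                System.out.println(count + " rows scanned");
            }
        }
    }
}
```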
