Thanks. But I get the error "set gid failure: Operation not permitted" when I execute "ccontrol stop cache" after I changed the user's group to cacheusr.
No locks in the View Locks page. I backed up with
BACKUP^DBACK("","F","backup","tplusn_db.cbk","Y","%bakPath%\backup.log","NOISY","Y","Y","",60,"F")
and restored with
do EXTSELCT^DBREST(1,0,"plusn_db.cbk","list.txt",4,"","")
and did not apply journal files.
There is no index on this column, and I tried rebuilding all the indices of this table, but that did not resolve the issue.
Hi everyone,
Thanks for your help. I replaced the JDBC driver with version 2017.1, and the issue appears to be resolved.
Thanks again!
I found that this issue was resolved when I replaced the JDBC driver with version 2017.1. Thanks!
Great! The new JDBC driver has resolved my issue. Thanks!
I think upgrading is impossible for me. Is there a patch for this issue?
I have pasted the demo code. Very simple; I get the same error every time I run it (16 MB -> ~100,000 rows, 49 MB -> ~300,000 rows).
I have tested this code, but the result was the same as with my demo code.
My code is in Java, and I query with JDBC:

String sql = "Select ID,Text from eprinstance.isegment";
Statement st = dbconn.createStatement();
java.sql.ResultSet rs = st.executeQuery(sql);
while (rs.next()) {
    String c = rs.getString("Text"); // "Text" is a LONGVARCHAR column
    System.out.println(c);
}
rs.close(); // close the ResultSet before its Statement
st.close();
dbconn.close();

This is the demo code. The "Text" column type is LONGVARCHAR.
Yes, I read rows one by one, and the query includes a LONGVARCHAR column. I can read about 100,000 rows when the process memory size is 16 MB, and about 300,000 rows when it is 49 MB.
Not recursive. Only that many rows, and the query includes a LONGVARCHAR column.
The query includes a LONGVARCHAR column. When the process memory (bbsiz) is the default 16 MB, I can read about 100,000 rows before a <STORE> error is raised.
After I change the process memory to 49 MB (the maximum in version 2010.2), I can read about 300,000 rows before the <STORE> error.
So I need some way to release the process's memory.
Is there a method to release the memory used by the rows I have already read? I read the records of the ResultSet one by one, forward only.
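Since the driver seems to accumulate per-row state for the life of the statement, one possible workaround is to read the table in fixed-size chunks keyed on ID, closing and reopening the statement between chunks so nothing accumulates. This is only a sketch of the idea, not a tested fix: the loop below runs against an in-memory TreeMap standing in for eprinstance.isegment, and in the real code each chunk would be its own JDBC query (something like SELECT TOP 1000 ID, Text FROM eprinstance.isegment WHERE ID > ? ORDER BY ID, assuming ID is an ordered integer key).

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class ChunkedRead {
    static final int CHUNK = 1000; // rows fetched per "query"

    // Visit every row in ID order, CHUNK rows at a time, remembering only
    // the last ID seen (a keyset cursor) between chunks.
    static List<Integer> readAll(TreeMap<Integer, String> table) {
        List<Integer> seenIds = new ArrayList<>();
        Integer lastId = null;
        while (true) {
            // Stand-in for: SELECT TOP 1000 ID, Text ... WHERE ID > ? ORDER BY ID
            NavigableMap<Integer, String> rest =
                (lastId == null) ? table : table.tailMap(lastId, false);
            if (rest.isEmpty()) break;
            int count = 0;
            for (Map.Entry<Integer, String> row : rest.entrySet()) {
                if (++count > CHUNK) break;
                seenIds.add(row.getKey()); // process row.getValue() (the Text) here
                lastId = row.getKey();     // cursor for the next chunk
            }
        }
        return seenIds;
    }

    public static void main(String[] args) {
        TreeMap<Integer, String> table = new TreeMap<>();
        for (int i = 1; i <= 2500; i++) table.put(i, "text-" + i);
        System.out.println(readAll(table).size()); // prints 2500
    }
}
```

Because each chunk is a fresh statement, the process only ever holds one chunk's worth of rows, which is the property you want when the per-statement memory cannot be released.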
Thanks
I found that there were CRC check errors on some blocks when I restored again. I will back up again and restore it.
Thanks!
Thanks! For the 2010 version, use ##class(%SQL.Statement).%ExecDirect() for executing SQL.
Thanks! But rs.Get("NewValue") returns a string, not a $List.
I convert it to a list like this:

set columnValues = $EXTRACT(s,5,*) // s is the string from rs.Get("NewValue"); remove the 4-character header
set delimiter = $CHAR(4)_$CHAR(1)
set columnCount = $LENGTH(columnValues, delimiter)
set list = $LISTFROMSTRING(columnValues, delimiter)

Is there a better method to convert it to a list?
Thanks very much!
Now I can get the records per your answer, and I can get the old/new value for each record like this:
w rs.Get("NewValue")
The output looks like this (aa and jj are the values of the 2 columns in my table):

My question is: how do I split these old/new values back into the individual columns?
Thanks!
Thanks for your answer! But how do I configure this? I found nothing about it in the backup/restore documentation.
Could you tell me how to do this, or point me to the documentation link for it? And can I use a script to do this configuration?
thanks!