How to release memory for a process?


Caché

Hi,

I query a large number of rows from one table with JDBC and get a <STORE> error. I found that the process was out of process memory (16MB by default, 49MB max in version 2010.2).

So my question is: how can I release memory for a query over a large number of rows? Or how else can I resolve this error?


Replies

You have two options: query fewer rows at once, or switch to a newer version of Caché. Since version 2012.2, much more memory is available per process, 256MB by default.
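For illustration, a minimal sketch of the first option on the client side, using the standard JDBC fetch-size hint to transfer rows in batches. Whether the 2010.2 Caché JDBC driver honors this hint, and whether it reduces the server-side process memory behind the <STORE> error, is an assumption on my part; dbconn is an open java.sql.Connection and the table comes from the demo code later in this thread:

// requires: import java.sql.*;
Statement st = dbconn.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                      ResultSet.CONCUR_READ_ONLY);
st.setFetchSize(1000);                  // ask the driver to transfer rows in batches; drivers may ignore the hint

ResultSet rs = st.executeQuery("Select ID,Text from eprinstance.isegment");
while (rs.next())
{
  String text = rs.getString("Text");   // process each row and keep no reference to it
  System.out.println(text);
}

rs.close();
st.close();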

Is there some method to release the memory used for the rows I have already read? I read the records of the ResultSet one by one, forward only.

Just curious, does this <STORE> error indicate that process private memory is not enough?

If so, does GROUP BY use process private memory to store the intermediate results?

Not recursive, just a lot of rows, and the query includes a LONGVARCHAR column.

I found that this issue seems to be resolved after I replaced the JDBC driver with version 2017.1. Thanks.

The query includes a LONGVARCHAR column. When the process memory (bbsize) is the default 16MB, I can read about 100,000 rows and then get the <STORE> error.

After I change the process memory to 49MB (the maximum in version 2010.2), I can read about 300,000 rows and then get the <STORE> error.

So I need some method to release memory for the process.

Sadly I think we need to fall back on the first answer you got: upgrade. This sounds like a bug in 2010, as we should not be holding the LongVarChar in process memory. Most likely this problem is fixed in a later version of Caché, and later versions also allow processes to use more memory.

 

Brendan

 

 

I think upgrading is impossible for me. Is there a patch for this issue?

There is a method, $system.Memory.Clean(<cleancache>, <defragment>), but unfortunately it appeared only in version 2011.1.

Try this code (consider that LONGVARCHAR = %Stream.GlobalCharacter) to read a CLOB through JDBC:

// requires: import java.io.Reader; import java.sql.*;
try
{
  StringBuilder sb = new StringBuilder();
  Statement st = dbconn.createStatement();
  ResultSet rs = st.executeQuery("Select ID,Text from eprinstance.isegment");

  char[] buf = new char[4096];
  while (rs.next())
  {
    // Stream the LONGVARCHAR column instead of materializing it with getString()
    Reader reader = rs.getCharacterStream("Text");
    int len;
    while ((len = reader.read(buf)) != -1)
      sb.append(buf, 0, len);

    System.out.println(sb);

    reader.close();
    sb.setLength(0);   // reuse the buffer for the next row
  }

  rs.close();
  st.close();

  System.out.println("OK!");
} finally
{
  dbconn.close();
}

I have tested this code, but the result was the same as with my demo code.

Great! The new JDBC driver has resolved my issue! Thanks!

My code is in Java and queries through JDBC.

At what point do you get this error? Just by reading rows one by one? If you run the same query multiple times with the same number of rows, does the error happen after the same number of rows have been read?

Yes, I read rows one by one, and the query includes a LONGVARCHAR column. I can read about 100,000 rows when the process memory size is 16MB, and about 300,000 rows when it is 49MB.

I have pasted the demo code. It is very simple, and I get the same error every time I run it (16MB → 100,000 rows, 49MB → 300,000 rows).

Can you give a little more info about the query format and the details of the <STORE> error? In simplest terms:

SELECT * FROM VeryLargeTable

 

will not result in a <STORE> error. So you either have a GROUP BY or you are doing other things with the results, and that is what leads to the <STORE> error.

Brendan

 

If you can't release memory by adjusting your own source code, then what you need to do is expand the process's memory size by using $ZSTORAGE or the configuration.

Also, is that a recursive call?

There are a couple of versions that use process private memory for GROUP BY, but I don't think Caché 2010.2 is one of them; I was second-guessing myself as I was typing my first entry.

Looking at your post again, you don't say what value you have for process memory. When working with SQL, we strongly recommend that you change this to the maximum value, 49MB.

 

I would still like to see the full error message.

 

Brendan

 

 

String sql = "Select ID,Text from eprinstance.isegment";
Statement st = dbconn.createStatement();
java.sql.ResultSet rs = st.executeQuery(sql);

while (rs.next())
{
  String c = rs.getString("Text");
  System.out.println(c);
}

rs.close();
st.close();
dbconn.close();

This is the demo code. The "Text" column type is LONGVARCHAR.

I wonder if adding the following to your OnInit() would help...

Kill $$$EnsRuntimeAppData(..%ConfigName)

2010.2 is 7 years old at this point. Even if we could identify a fix for this problem, back-porting a fix that far is not recommended.

 

 

What about chopping your query up like this or something similar:

lastID=0

"Select TOP 10000 ID,Text from eprinstance.isegment where ID > ?"

passing lastID as a query parameter.

Now you loop on it until you have collected all the records.

The basic idea is to limit your result set with TOP to what your partition allows.

;-) It's definitely independent of Caché versions.
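
For illustration, a minimal Java sketch of this TOP + lastID loop, assuming a numeric ID column and reusing the table and column names from the demo code above. ORDER BY ID is added so that the "ID > ?" paging is well defined, dbconn is an open java.sql.Connection, and the sketch assumes the server frees each query's working memory once its result set is closed:

// requires: import java.sql.*;
String sql = "Select TOP 10000 ID,Text from eprinstance.isegment where ID > ? order by ID";
PreparedStatement ps = dbconn.prepareStatement(sql);

long lastID = 0;
boolean more = true;
while (more)
{
  ps.setLong(1, lastID);
  more = false;

  ResultSet rs = ps.executeQuery();
  while (rs.next())
  {
    lastID = rs.getLong("ID");           // remember where this batch ended
    String text = rs.getString("Text");
    System.out.println(text);
    more = true;                         // got at least one row, so ask for the next batch
  }
  rs.close();
}

ps.close();
dbconn.close();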
 

Hi everyone,

Thanks for your help. I replaced the JDBC driver with version 2017.1, and this issue now appears to be resolved.

Thanks again!