For cases like this, a possible solution could be:

%Stream.Global has a FindAt method that can give you the position of "\u00":

[Find the first occurrence of target in the stream starting the search at position. ]

http://docs.intersystems.com/latest/csp/documatic/%25CSP.Documatic.cls?P...

But: if you are working on the decoded stream, all non-printables are just single characters, so there is no issue with cutting it into pieces:
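As a rough illustration of the difference (in Python, assuming JSON-style escapes in the encoded stream): a non-printable is a six-character \u00xx sequence while encoded, but a single character once decoded, so a character-based read can never split it.

```python
import json

# Hypothetical 3-character payload containing a NUL (non-printable).
encoded = json.dumps("a\u0000b")   # the NUL becomes the 6-char sequence \u0000
decoded = json.loads(encoded)      # back to a single character

print(len(encoded))  # → 10  ("a\u0000b" plus the surrounding quotes)
print(len(decoded))  # → 3
```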

  • read your source stream in reasonably sized chunks
  • clean out whatever you need
  • append it to a temporary stream
  • loop on the source until you hit the AtEnd condition
  • finally, replace your source either with the CopyFrom method [temp -> source]
    or replace the source stream reference by the temp stream reference

I guess the whole code is shorter than this description.
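A minimal Python sketch of that loop, with file-like objects standing in for the Caché streams (the real implementation would use %Stream methods such as Read, Write, and CopyFrom):

```python
import io

CHUNK_SIZE = 32000  # a reasonable chunk size

def clean_stream(source):
    """Copy `source` into a temporary stream, dropping non-printable characters."""
    temp = io.StringIO()
    source.seek(0)
    while True:
        chunk = source.read(CHUNK_SIZE)          # read a reasonably sized chunk
        if not chunk:                            # AtEnd condition
            break
        cleaned = "".join(c for c in chunk
                          if c.isprintable() or c in "\r\n\t")
        temp.write(cleaned)                      # append to the temporary stream
    temp.seek(0)
    return temp  # caller replaces the source reference with the temp stream

src = io.StringIO("abc\x00def\x07ghi")
print(clean_stream(src).read())  # → abcdefghi
```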

I'd suggest not touching the global underneath the source stream.

The default for %String is MAXLEN=50.

If you write %String(MAXLEN="") in your definition, and also in method signatures, this should be enough:

Query Methode(data1 As %Library.String(MAXLEN=""), data2 As %Library.String(MAXLEN=""), data3 As %Library.String(MAXLEN="")) As %Library.Query(CONTAINID = 1, ROWSPEC = "Result,Par2:%String") [ SqlProc ]

  and so on.

Or you can make your own data type inheriting from %String, overriding Parameter MAXLEN="".

Or just use %Library.VarString, which does exactly this (MAXLEN=""):
http://docs.intersystems.com/latest/csp/documatic/%25CSP.Documatic.cls?P...

Query Methode(data1 As %Library.VarString, data2 As %Library.VarString, data3 As %Library.VarString) As %Library.Query(CONTAINID = 1, ROWSPEC = "Result,Par2:%VarString") [ SqlProc ]

  and so on.

Of course it makes sense!

But then you know which application is using it, and you can use the application's cleaning method / routine that takes care of all kinds of dependencies.
I remember well the times when routines used to start with KILL ^CacheTemp*($JOB).
I expect that over time most applications have moved to process-private globals (PPG, ^||myGlobal...) to avoid this, or have a clean-up routine.

Hi Evgeny,

IF you can afford a short OFFLINE state:

#5)  dismount the DB / copy cache.dat to a fast local device / remount it
      move the copy to a secure place: #2, #1

ELSE IF you have to remain online all the time:
#3)  on a fast local device + move the backup to a secure place by #2, #1


NEVER #4) a fair chance of massive inconsistency

Robert
[semper fidelis]

Great explanation of the issue. Thanks!
So we have a nice example of what the proleptic Gregorian calendar used for $H calculations means:

 

write $zd($zdh("1492-10-12",3,,,,,-600000)#7,11)
Wed

 

And that's definitely not correct, as you demonstrated very precisely.
But it is common usage in most programming and DB systems.
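Python's datetime, which also uses the proleptic Gregorian calendar, agrees with the $ZDATE result above:

```python
from datetime import date

# Proleptic Gregorian weekday for 1492-10-12 (0 = Monday)
d = date(1492, 10, 12)
print(d.weekday())       # → 2, i.e. Wednesday
print(d.strftime("%a"))  # → Wed (in the C locale)
```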

But the date as such is questionable for 2 reasons:

  • There is a 5..6 hour time gap between Spain and the Caribbean Sea.
  • At the end of the Middle Ages, every kingdom and smaller realm typically dated their documents
    by the years their current king had been in power. A common date as we know it was not in place at all.

So Oct. 12 is most likely a date back-calculated by historians hundreds of years later.
So we should interpret this date as a commonly agreed convention that by luck fell on a Friday.

Thanks again for the contribution.

 

It's possible but counterproductive:
if you switch off journaling, this gets logged in Audit and generates at least one journal entry,
and switching it back on for the rest of your application generates another one.

I just don't recommend things with a negative impact on performance.

Anyhow, IF you insist, it's your fate:

http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=...

You may do it as well with SQL

select count(*) cnt , ID from (
  select 'PERS' Typ, ID from %Dictionary.ClassDefinition
  where Super [ 'Persistent'
    union all
  select 'XML' Typ, ID from %Dictionary.ClassDefinition
  where Super [ 'XML.Adaptor'
)
group by ID
order by cnt desc

Result: a cnt of 2 means the class inherits both superclasses:

cnt  ID
2    %BI.Blog
2    %BI.BlogPost
2    %BI.DashBoard
2    %BI.DetailList
2    %BI.DocMag
2    %BI.ImageList
2    %BI.KPI
2    %BI.ListField
2    %BI.Measure
2    %BI.PerfMet
2    %BI.PerfMetNode
2    %BI.PivotData
2    %BI.PivotTable

Athanassios,

I googled for some time around the Python console:
it is single-threaded.
But your expectation seems to be that it behaves like a terminal.

To achieve this you have to run 2 Python consoles / shells, just as you would need 2 terminals:
a) 1 passive, to receive messages from WRITE (see the attached example, started before b)
b) 1 active, to trigger actions in Caché

Your initial code from your question, using the Caché Python binding, covers b) OK!

For a) you may use a listener similar to this Python example, with the correct port, buffer, ...  ToDo

#!/usr/bin/env python
import socket

TCP_IP = '127.0.0.1'
TCP_PORT = 5005
BUFFER_SIZE = 20  # normally 1024, but we want a fast response

# passive listener: accept one connection and echo whatever arrives
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind((TCP_IP, TCP_PORT))
s.listen(1)

conn, addr = s.accept()
print('Connection address:', addr)
while True:
    data = conn.recv(BUFFER_SIZE)
    if not data:
        break
    print('received data:', data)
    conn.send(data)  # echo
conn.close()

Dashboards are always freshly calculated.

Compare the performance in .NET to IE. They should be rather close to each other, being based on similar technology.

If there is a significant difference, then dig at the .NET end (buffers, ...).

If Chrome is significantly faster than IE, it's most likely the faster JS engine in Chrome.

Next you could look into the global buffers of Caché and the concurrent use of the Caché instance.
And (rarely) the complexity of your dashboard.