It seems to be the IP of its first network adapter, probably a virtual one.
47 (pure COS w/o undocumented features :)
a q:$i(r($a(w,$i(i))#32))=20q:i>$l(w) 1 g a

Thank you, Vitaliy. Corrected.
> The suggestion on the transactions was to avoid the need for locking and the blocking of read processes
Hi Alex,
How can transactions help to avoid locking/blocking by read processes if those processes need to see a consistent database state, taking into account the well-known fact that transactions are not isolated in IRIS?
> This avoids the need to wait with locks / blocking
If so, why is it common practice to write something like this:

tstart
lock +^smth
set ^var1(x1,...,xn)=...
...
set ^varN(x1,...,xm)=...
lock -^smth
tcommit

Robert,
I guess you meant `Lock +^myReload`
58

sr=1 f i=1:1:$l(a){sr=$i(r($a(a,i)#32))<2q:'r} qr

Hi Ben and John, now I've got some food for thought, thank you.
It would be nice if this were not hidden when the View -> Output panel is visible.
> Standard queues provide at-least-once delivery, which means that each message is delivered at least once. FIFO queues provide exactly-once processing, which means that each message is delivered once and remains available until a consumer processes it and deletes it. Duplicates are not introduced into the queue.
...so I guess you mean a standard queue with several worker processes dequeuing items from it. In this case, the CPU utilization would likely depend on the number of workers, wouldn't it?
> Then if you have a preinstalled IRIS, you will keep the private server
Do you mean that in the case of a preinstalled IRIS 2022.x the private server would be kept after the upgrade to 2023.1?
Dmitry inspired me to ask the next question:
what will happen if an IRIS 2022.2 development instance (with the PWS inside) is upgraded to 2023.1? Will the PWS be wiped out during the upgrade or not?
If not, will VS Code and the SMP still be operational with that PWS?
Agreed. IMHO, the original question has no definite answer: to get the currently executed code line offset of the current process, this (arbitrary!) line of code should contain a call to some "checker". Or there should be a limited number of self-sufficient "points of interest" in the code.
ROUTINE ztest
line ; the entry point
 w $Stack($stack,"MCODE"),!
 q

If run, it would apparently print the current source line... which does nothing but print itself:

USER> do line^ztest
 w $Stack($stack,"MCODE"),!

Hi Anna,
You can look at the property CurrentSrcLine of the %SYS.ProcessQuery class (I guess the SMP uses it to show process details on the Process page). Keep in mind that it makes more sense to examine the state of some other process rather than the current one, as in the latter case you will see the very code line where you are examining the process :).
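A minimal sketch of that approach, assuming it runs in the %SYS namespace; the PID value is hypothetical, and the Routine property is assumed as the companion of CurrentSrcLine:

```objectscript
 ; open the target process by its PID (hypothetical value)
 set pid = 12345
 set proc = ##class(%SYS.ProcessQuery).%OpenId(pid)
 if $isobject(proc) {
     ; routine name and the source line being executed right now
     write proc.Routine, " : ", proc.CurrentSrcLine, !
 } else {
     write "no such process", !
 }
```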
When I inserted your sample call of %apiOBJ, nothing bad happened with the parser; maybe an update would help yours. Meanwhile, a more standard way of doing the same:

do $system.Status.DecomposeStatus(sc,.err,"-d")

is apparently better supported by VS Code: pop-up help and parameter prompting are rather handy, aren't they?
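For illustration, a hypothetical usage sketch (the error code and text are made up): DecomposeStatus unpacks a %Status value into a local array, where the unsubscripted node holds the error count and err(1) holds the first error's text.

```objectscript
 ; build an error status, then unpack it into a local array
 set sc = $system.Status.Error(5001, "something went wrong")
 do $system.Status.DecomposeStatus(sc, .err)
 write err, !      ; number of errors packed into the status
 write err(1), !   ; text of the first error
```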
I guess it would not, as the ZRemove / ZLoad / ZInsert / ZSave commands are mostly used as tools for the legacy way of editing routines (using the routine buffer) rather than for code execution. Of course, one can execute code inside a previously loaded routine: `zload routine do sub(...)`, but this looks strange compared with the normal way of doing the same using `do sub^routine(...)` or `set sc=$$fun^routine(...)`. In short, virtually nobody runs code this way.
/// 48 characters:
ClassMethod MinLength(s As %String) As %Integer
{
F i=1:1:$L(s){S v($L($P(s," ",i)))=i} Q $O(v(0))
}
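Ungolfed, the same logic might read as follows (a sketch; note that the golfed version loops up to $Length(s) instead of the word count to save characters: the extra iterations only set v(0), which $Order(v(0)) skips anyway):

```objectscript
ClassMethod MinLengthVerbose(s As %String) As %Integer
{
    // store each word's length as a subscript of a local array
    for i=1:1:$length(s, " ") {
        set v($length($piece(s, " ", i))) = i
    }
    // the smallest subscript above 0 is the length of the shortest word
    quit $order(v(0))
}
```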
Hello Stefan,
Thanks for the reference, though I'm still not sure about step #2, as ^JRNRESTO provides a (default) option to disable journaling of updates during the restore to make the operation faster; see step #10 of Restore Globals From Journal Files Using ^JRNRESTO. Besides, this is the only option compatible with parallel dejournaling. So the idea of switching off journaling system-wide looks excessive.
Regards,
Alexey
> The manual says I should (in short):
> - Stop journaling with ^JRNSTOP
Which manual says that? Every manual I've read over the last 20 years says quite the opposite: never stop journaling unless you want to get your system into trouble.
It seems that you should not bother about this at all: during the database restore it can't be involved in any user activity, so no journal records of its changes will exist.
Evgeniy, thank you for sharing your experience.
> there is one table that contains 11,330,263 rows at the time of writing. Not so critically much, but it creates delays. Even the query to count the number of rows takes almost 30 seconds
Judging just by the number of rows, one can't consider such a table a big one.
What was the size (in GB) of the underlying global?
Alas, it has the same problem as my original solution:

w ##class(z.Scratch).Detector("azazz","zaaza") ; should be 0
1

So, my feeling was right... It's time to quit this golf club for now, at least for me )))
Yes, it can be, though this approach can't win, as I already understood. Size = 67:

ClassMethod Detector(a As %String, b As %String) As %Boolean{
s a=$zu(28,a,6),b=$zu(28,b,6) q $tr(a,b)=$tr(b,a)&($l(a)=$l(b))}

Size = 83:

ClassMethod Detector(a As %String, b As %String) As %Boolean{
f c="a","b"{s @c=$zstrip($zcvt(@c,"U"),"*W")} q $tr(a,b)=$tr(b,a)&($l(a)=$l(b))}

This change seems to be applicable to IRIS only.
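For readers outside the golf context, an ungolfed sketch of the second variant: it returns 1 when, after uppercasing and stripping whitespace, the two strings have the same length and are built from the same set of characters.

```objectscript
ClassMethod DetectorVerbose(a As %String, b As %String) As %Boolean
{
    // normalize: uppercase, then strip all whitespace ("*W")
    set a = $zstrip($zconvert(a, "U"), "*W")
    set b = $zstrip($zconvert(b, "U"), "*W")
    // $Translate(a,b) removes from a every character occurring in b;
    // the two results are equal only when both are empty, i.e. each
    // string's characters all occur in the other
    quit ($translate(a, b) = $translate(b, a)) && ($length(a) = $length(b))
}
```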
In Caché, the log data is still in ^%ISCLOG("Data"); just checked in Caché 2018.1.6.
If you are interested in moving globals without downtime, there is live-global-mover.
Thank you; it's already not an issue for us, as new deployments of our HIS use separate document storage from the very beginning.
Your solution is beautiful as it allows placing the ECP-enabled "moved data" server on some less expensive disk storage. In the case that I've briefly described, ECP & Mirror were already in use, so we couldn't place the document DB on a separate data server, as having several independent data servers would be a bad decision for many reasons.
> To reduce the backup time, it could be interesting to move these data to a database dedicated to the archive and make a backup of this database only after an archive process
> ...
> Copy data older than 30 days to the ARCHIVE database
Hi Lorenzo,
We had a similar problem with our largest customer's site, whose total database size had grown beyond 2 TB in those days (now they have more than 5 TB). Our solution was more complex than yours: as the data move process lasted several days, we couldn't stop the users for such a long time. We placed a reference to the new storage location of the (already moved) document data instead of the document itself. After the data move was finished, the namespace mapping was changed. The references were left in the original DB because they took much less space than the original documents.
This rather sophisticated approach allowed us to move the document data without stopping the users' activity. And what was the total win? Should the document data be backed up? Yes. Should it be journaled / mirrored? Definitely yes. The only advantage achieved with this data separation was nothing more than the ability to deploy testing and/or learning environments without the document database. That's all.
Set LocalTimestamp=$now()

And to launch IRIS do...
Thanks, Evgeny.
I can't see any difference between lines 1 and 3, or 4 and 6. Just a typo?