Acting this way, you are just quitting your method on error. What if your method needs finalization?
Both approaches discussed above (try/catch and $ztrap/$etrap) allow it easily.
Evgeny,
Is the question really try/catch (or another way of error handling) versus %Status? If we try to follow solid design principles, each method should perform only one function, have only one exit point, and should inform the caller whether it succeeded. So we should use both. Extending your sample:
ClassMethod solid1(parameter, ...) as %Status
{
set sc=$$$OK
try {
$$$TOE(sc,##class(x.y).a())
$$$TOE(sc,##class(x.y).b())
...
$$$TOE(sc,obj.NormalMethod(parameter))
...
} catch e {
// error handling
set sc=e.AsStatus()
}
// finally... common finalization for both (good or bad) cases
return sc
}

One could write it in another fashion, e.g.:
ClassMethod solid2(parameter, ...) as %Status
{
set sc=$$$OK
new $estack,$etrap
set $etrap="if $estack=0 goto errProc"
set sc=##class(x.y).a() if 'sc goto solid2Q
set sc=##class(x.y).b() if 'sc goto solid2Q
...
set sc=obj.NormalMethod(parameter) if 'sc goto solid2Q
...
solid2Q
// finally... common finalization for both (good or bad) cases
return sc
errProc // common error handler
// error handling
set sc=$$$ERROR(5001,"solid2 failed: "_$ze)
goto solid2Q
}

Which version is better? If we don't need individual error handling of the ##class(x.y).a(), ##class(x.y).b(), etc. calls, I'd prefer solid1. Otherwise, solid2 seems more flexible: it's easy to add some extra processing of each call's return value. To achieve the same flexibility in solid1, we would have to wrap each call in an individual try/catch pair, making the coding style too heavy.
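To illustrate that last point, here is a sketch (using the same hypothetical x.y class as in the samples above) of what solid1 would look like with individual handling of each call; the repeated try/catch scaffolding is what makes this style heavy:

```objectscript
ClassMethod solid1individual(parameter) As %Status
{
 set sc=$$$OK
 try {
  $$$TOE(sc,##class(x.y).a())
 } catch e {
  set sc=e.AsStatus()
  // individual processing of a() failure goes here
 }
 quit:$$$ISERR(sc) sc
 try {
  $$$TOE(sc,##class(x.y).b())
 } catch e {
  set sc=e.AsStatus()
  // individual processing of b() failure goes here
 }
 // ...and so on, one try/catch pair per call
 return sc
}
```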
...as much as possible avoid the need to use subscript level mapping (SLM) to manage growth of a single global across multiple databases.
Ray, may I ask you: why should we avoid it?
Several ##class references can be eliminated, making the code shorter and easier to read and write:
ClassMethod CRUDUser(id, name) As %String
{
q:'$d(name) "Name required" s x=$classmethod("Data.User",$s($g(id)="":"%New",1:"%OpenId"),$g(id)) q:'x "User not found"
s x.Name=name q $s(x.%Save()=1:"User saved",1:"Error saving")
}

Nigel,
I'm OK with the answers given by Eduard and Danny. After moving to VS Code, the situation with snippets should be even better.
Thank you again.
Thank you, Nigel.
Multi-line macros don't meet my needs. What I really need are fillable patterns (templates) to prompt developers to write method (function) descriptions in a standardized manner, something like this:
/// --------------------------------------------------------------------------------------------
/// Method purpose
///
/// **Arguments**
///
/// #. *pArg1*:
/// #. *pArg2*:
///
/// **Returns**
///
///
/// **Notes**
///
///
/// **Call sample**
/// ::
///
///   ; code line 1
///   ; code line 2
///
Many thanks for the help! Nice day to everybody!
Daniel,
Not talking about whether $zerror/$ztrap is good or bad, false positives and negatives can be avoided if it is used the right way. The well-known pattern is as follows:
rouA
 set $ze="",$zt="rouAerr"
 ...
 set sc=$$funB() if 'sc quit sc
 ...
 quit sc
rouAerr
 set $zt="" ; that's of great importance!
 set sc=$$$ERROR(5002,$ze) ; just a sample of processing $ze...
 quit sc
funB()
 set $ze="",$zt="funBerr" ; while not necessary, a local $ztrap can be defined
 ...
 quit sc
funBerr
 set $zt=""
 set sc=$$$ERROR(5002,$ze)
 quit sc
I am not putting my hopes on GBLOCKCOPY due to its limitations
This precaution sounds strange in your context: if you are simply copying a set of globals that are actively being modified, you will get an unpredictable result regardless of the utility you choose. To make a consistent copy, one should apply journal records or (if copying to another Caché instance) use some cross-system technique, such as Mirroring or Shadowing; both require a Multi-Server cache.key.
If you tell us more about the task you are trying to solve, we could advise you better.
Public recognition by developer level
Other vendors from your list have some kind of developer certification as well. E.g., https://docs.microsoft.com/en-us/learn/certifications/
Or you meant something else, didn't you?
it is a part of admin-panel of our web-application
I guessed it was not ad hoc writing ;)
Even with some universal tool like your admin panel, you can generate only those tasks whose code you have prepared in advance with d code.Write() commands. The reasons for preparing code this way in this particular case are still unclear to me: I can hardly imagine what extra functionality you can add to the traditional approach demonstrated in Evgeny's reply.
Alexandr,
why did you prefer generating the task class instead of just writing it in some text editor? Looks like overkill for such a small task.
Yes, it's possible. You can use %SYS.Task as a Task API. Its methods and properties are well documented in its superclass, %SYS.TaskSuper.
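For instance, here is a minimal sketch of creating a scheduled task programmatically; the task class MyApp.MyTask is hypothetical, and you should check %SYS.TaskSuper for the exact property names and defaults on your version:

```objectscript
 set task=##class(%SYS.Task).%New()
 set task.Name="My nightly task"     ; display name in Task Manager
 set task.NameSpace="USER"           ; namespace the task runs in
 set task.TaskClass="MyApp.MyTask"   ; hypothetical %SYS.Task.Definition subclass
 set task.TimePeriod=0               ; 0 = run daily
 set sc=task.%Save()                 ; returns a %Status
 if 'sc write $system.Status.GetErrorText(sc),!
```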
Yone,
if you are really moving the files every day, you don't need to check the date: there are no old files in your in-folders, because they have been removed by the mv (move) command. Most pieces of software that perform similar tasks (e-mail clients and servers, SMS processors, etc.) do it this way, moving files rather than just copying them. The simpler, the better, isn't it?
Please look at
https://cedocs.intersystems.com/latest/csp/documatic/%25CSP.Documatic.cls?APP=1&CLASSNAME=%25SYSTEM.License
query ConnectionAppList()
The data source is the license server. The license server maintains counts of ISC.Appname license sections but does not manage other application license sections. Usage of other license sections can be examined with the ApplicationUserList query which returns license use for all applications on the current Cache instance.
"...other application license sections" is just our case, so the ISC license server can't help much.
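Per-instance usage of application license sections can still be examined locally, though. A sketch of running the ApplicationUserList class query mentioned above via dynamic SQL (the column positions are an assumption; check the query's ROWSPEC in the class reference):

```objectscript
 set stmt=##class(%SQL.Statement).%New()
 set sc=stmt.%PrepareClassQuery("%SYSTEM.License","ApplicationUserList")
 if 'sc quit
 set rs=stmt.%Execute()
 while rs.%Next() {
  ; application name and usage (assumed column order)
  write rs.%GetData(1),": ",rs.%GetData(2),!
 }
```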
Eduard,
If you are still interested: application licensing doesn't support distributed license accounting; that was the stopper for us, as our largest customer runs a Caché-based application (SP.ARM's HIS qMS) in a distributed environment.
I'd like cm, which stands for community. Everybody knows that we are all developers here, so there is no great need to be reminded of it. Just IMHO :)
Do you plan to allow subpackages, e.g. dc.myapp?
What should be done with those apps that were already uploaded to Open Exchange before this naming convention was established?
Hi James,
A colleague of mine developed the JDBC-based solution in question, which works with Oracle, MySQL, and Caché, a while ago.
It's based on the following classes:
%Net.Remote.Java.JDBCGateway (This class is used internally by Caché. You should not make direct use of it within your applications.)
Despite the last remark, InterSystems follows a similar approach in its Ensemble / IRIS outbound adapters.
Our solution is compatible with current versions of Caché and IRIS. Regretfully, it's too tightly bound to our app, so I'm not sure it is the best source of sample code at the moment.
It depends.
Switch 10, which inhibits all global/routine access except by the process that sets the switch, should meet your needs, though setting it can interfere with your _own_ activity.
Switch 12, which disables logins, can be insufficient for disabling web access, which is easier to restrict by stopping the web server.
I haven't personally experimented with those switches, as we have no such problem: our application implements its own "disable logins" flag by locking a variable.
to avoid unexpected problems with other user operations
... it is usually easier to disable user sign-on by setting switch 12, or to restrict users' activity with some other appropriate combination of switches (see Using Switches).
Another possible approach:
Hi Graham,
the code published above is a Task Manager task. If you need a flexible task to purge .cbk and .log files created by internal online backup tasks of any kind, you may also want to look at cmPurgeBackup.
Just adding 2c to Kevin's reply.
Most hosts that support TCP also support TCP Keepalive
Besides, the server application should support it. A 3-hour keepalive time setting is not typical; it sounds like your server app is not tuned for keepalive support, or doesn't support it at all.
In the case of IRIS/Caché, you should explicitly set some options on the connected server socket, e.g.:
start(port) ; start port listener
 set io="|TCP|"_port
 open io:(:port:"APSTE"):20 else  quit
 while 1 {
  use io read x use $p
  ; connection is accepted: fork a child process
  job child:(:5:io:io)
 }
child
 use $p:(:/KEEPALIVE=60:/POLLDISCON)
 ...
/KEEPALIVE=60 sets the keepalive time to 60 seconds;
/POLLDISCON switches on TCP probes.
Stuart,
Unless you publish the failing code fragment, it will be difficult to help you.
Congrats!
Can we expect your code aimed "...to simulate NYSE data processing (Order/Fill)..." to be published as an Open Exchange app?
As to Supported Server Platforms, it might be even Ubuntu 18.04 LTS for x86-64 rather than 16.04.
And what about RHEL 8? Which IRIS version will be supported under this OS?
Paul's warning sounded as if this setting had changed after the instance re-installation, though it hardly could have, as even in IRIS, according to the docs:
System-Wide Security Parameters
Allow multiple security domains ... [Default is a single domain]
Walter,
Why not configure different ports for both of your instances?