Jon Willeke · Apr 1, 2019

The first $zf(-100) call doesn't work because you're trying to redirect with both the /STDOUT flag and the ">>" operator. You can do one or the other, but not both. If you add the /LOGCMD flag to the second $zf(-100) call, you should see something like the following in messages.log:

$ZF(-100) cmd=type "" file1.txt

I suggest that you not put an empty string in your options array.
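To illustrate, here's a minimal sketch of a corrected call. The file names are placeholders, and /STDOUT performs the redirection itself, so ">>" doesn't appear in the argument list; check the exact $ZF(-100) flag syntax against your version's documentation:

```
 ; append file1.txt's contents to out.txt (file names are hypothetical)
 ; /STDOUT+= appends to the file; /STDOUT= would overwrite it
 ; /SHELL is needed because "type" is a cmd.exe builtin on Windows
 set status = $zf(-100, "/SHELL /STDOUT+=out.txt /LOGCMD", "type", "file1.txt")
```

Passing each argument separately, rather than embedding the whole command in one string, also avoids quoting problems.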
Jon Willeke · Mar 15, 2019

I believe that what you're looking for is available in Caché 2018.1 and should be available soon in InterSystems IRIS. Take a look at "Support for Microsoft Integrated Windows Authentication for HTTP Connections" in the release notes here: https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY...
Jon Willeke · Feb 14, 2019

I'm not that familiar with adapters, but the documentation suggests that you need to use the SkipBodyAttrs property to send a header like Content-Type.

Also, I think you need to decide whether you're sending form data or a body. When tFormVar is "Content-Type,apikey", the documentation says that your third data argument will be assigned to the last form variable, apikey, which is almost certainly not what you want. Your second try with three variables looks more likely to work, depending on what the service expects in the body.

I don't know anything about the duplicate apikey. That's presumably specific to the service you're calling.
Jon Willeke · Feb 13, 2019

Take a look at the reference for the %DynamicArray and %DynamicObject classes in the InterSystems IRIS 2019.1 preview: https://irisdocs.intersystems.com/iris20191/csp/documatic/%25CSP.Documat...

In 2019.1, you can get and set the value of a field as a stream:

USER>w ["abc"].%Get(0,,"stream").Size
3
Jon Willeke · Feb 12, 2019

I would use the InsertFormData() method on the HttpRequest object. There's an example in the class reference: https://docs.intersystems.com/latest/csp/documatic/%25CSP.Documatic.cls
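As a minimal sketch (the server, path, and form-field names here are hypothetical), each InsertFormData() call adds one form variable, and %Net.HttpRequest takes care of encoding the body:

```
 set req = ##class(%Net.HttpRequest).%New()
 set req.Server = "www.example.com"      ; hypothetical host
 ; each call adds one form variable to the request body
 do req.InsertFormData("name", "ESB")
 do req.InsertFormData("apikey", "abc123")
 set sc = req.Post("/api/endpoint")      ; hypothetical path
 if $$$ISOK(sc) { write req.HttpResponse.StatusCode, ! }
```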
Jon Willeke · Feb 6, 2019

Take heart, I think you're getting close. I think all you need now is to quote the arguments to the --before switch, as shown in the documentation:

--before "/usr/irissys/dev/Cloud/ICM/changePassword.sh /IRIS/pwd.txt"

If you're still having trouble, build up the command incrementally, as I previously suggested, being careful to follow the correct form of the command. Dmitry's answer about looking at log files will also give you more information than simply noticing in docker ps that the container has exited.
Jon Willeke · Feb 5, 2019

I'm finding it hard to read the output images, but every one of the commands that you listed has a Docker option after the image name. The form of the command should be:

docker run <Docker opts> image_name <IRIS args>

To be clear, the -e (--env), -p (--publish), and -v (--volume) switches are Docker options; they go before the image name. The --key and --before switches are IRIS arguments; they go after the image name.
Jon Willeke · Feb 1, 2019

You're sort of back to where you were before, with Docker switches occurring after the image name, although you now have the image name in there twice. I don't have your environment, so I can't test this exact command, but I think you want something like this:

docker run -d --privileged -v /nfs/IRIS:/IRIS \
  --env ISC_DATA_DIRECTORY=/IRIS/iconfig \
  --env ICM_SENTINEL_DIR=/license \
  -p 52774:52774 --name IRIS4 efca5c59cbb7 \
  --key /IRIS/iris.key ...

If you're still having trouble, back up and build the command line incrementally. Start simple:

docker run -d --name IRIS4 efca5c59cbb7

Then add in your Docker switches (-v, -e, -p, etc.), and finally add in the IRIS arguments (--key, etc.). That way you can tell which switch or argument is causing a problem.
Jon Willeke · Feb 1, 2019

It looks like you're trying to run an image identified as efca5c59cbb7, but in this segment of the command line, iris is taken as the image name:

--env ICM_SENTINEL_DIR=/license iris
Jon Willeke · Jan 31, 2019

The --volume and --env switches are handled by the docker command. They should come before the image name.
Jon Willeke · Jan 18, 2019

Even if you store the current password encrypted, consider storing just the hashes for old passwords. You might use $system.Encryption.PBKDF2() for this purpose, perhaps with fewer iterations than you'd otherwise use for a live password.
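As a sketch, assuming the argument order shown in the $SYSTEM.Encryption class reference (data, iteration count, salt, output length) — verify this against your version's documentation — the idea is to keep only a salted hash in the history:

```
 ; hypothetical storage of a retired password's salted hash;
 ; the global name and argument order are assumptions, not a tested recipe
 set salt = $system.Encryption.GenCryptRand(16)
 set hash = $system.Encryption.PBKDF2(oldPassword, 1024, salt, 32)
 set ^PasswordHistory(username, historySlot) = $listbuild(salt, hash)
```

To test a candidate new password against the history, re-derive the hash with each stored salt and compare; the plaintext of the old password never needs to be kept.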
Jon Willeke · Dec 4, 2018

I don't think that your test method is being run. I'm pretty sure that it has to start with "Test" (with a capital "T") for the manager to discover it.
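For example, in a test class like this (the class and method names are made up), only the first method should be discovered:

```
Class MyApp.MyTests Extends %UnitTest.TestCase
{

/// Runs automatically: the name starts with "Test"
Method TestAddition()
{
    Do $$$AssertEquals(1 + 1, 2, "1+1 equals 2")
}

/// NOT discovered: the name doesn't start with "Test"
Method CheckSubtraction()
{
    Do $$$AssertEquals(2 - 1, 1, "2-1 equals 1")
}

}
```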
Jon Willeke · Nov 19, 2018

You've set the TranslateTable for both stream and tNewStream to "cp1252". If the input file is UTF-8, then stream.TranslateTable should be "UTF8". Otherwise, each UTF-8 code unit (i.e., byte) is read in separately as a CP1252 character.
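A minimal sketch of the fix, with a hypothetical file path:

```
 set stream = ##class(%Stream.FileCharacter).%New()
 set stream.Filename = "/data/input.txt"   ; hypothetical path
 set stream.TranslateTable = "UTF8"        ; decode the file's UTF-8 bytes into characters
 ; tNewStream can stay "cp1252" if the output really is meant to be CP1252
```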
Jon Willeke · Oct 29, 2018

Apart from just redirecting to a file, and then reading from that file, the closest you can get is to use a command pipe device: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...

The main drawback, relative to $zf(-1) or $zf(-100), has been that you couldn't get the exit status of the command. I think that is now possible, but I'm not sure offhand in what versions. Note that command pipes are not supported in Caché for VMS.
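A rough command-pipe sketch (the command and timeout are placeholders; check the exact CPIPE device syntax in the I/O device guide for your version):

```
 set dev = "|CPIPE|1"
 open dev:"ls -l":10          ; run the command; 10-second open timeout
 try {
     for {
         use dev read line    ; an <ENDOFFILE> error ends the loop
         use 0 write line, !  ; echo each line to the principal device
     }
 }
 catch ex {}                  ; reached when the command's output is exhausted
 close dev
```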
Jon Willeke · Oct 24, 2018

%DynamicAbstractObject is not intended to be subclassed by end users. It serves as a base for %DynamicArray and %DynamicObject, which provide JSON-style dynamic objects without a fixed schema.

You seem to be looking for a way to map between JSON and plain old registered objects. We haven't yet released such a feature, but you can do something kind of similar using DocDB, which I think first shipped in IRIS 2018.1: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls...

This feature helps to define a class that stores a JSON document in a persistent object and extract fields from the document into properties of the object.
Jon Willeke · Sep 26, 2018

I second the recommendation to comb through the upgrade checklist archive. A couple of big ones are that 2014.1 dropped support for database extents and for 2 KB databases.
Jon Willeke · Sep 10, 2018

Your code creates a property when getting the property returns a 404. However, I'm getting a 400 when a property doesn't exist:

$ curl --user _system:SYS -w "\n***\n%{http_code}\n" 'http://127.0.0.1:52773/api/docdb/v1/DEMO/prop/DEMO.TEST11/Ergebniszuf'
{"status":{"errors":[{"error":"ERROR #25541: DocDB Property 'Ergebniszuf' does not exist in 'Demo.TEST11'","code":25541,"domain":"%ObjectErrors","id":"DocumentDatabasePropertyNotValid","params":["Demo.TEST11","Ergebniszuf"]}],"summary":"ERROR #25541: DocDB Property 'Ergebniszuf' does not exist in 'Demo.TEST11'"},"content":null}
***
400
Jon Willeke · Aug 22, 2018

In some cases I prefer the multiple-set form for readability and maintainability, as it makes explicit that all of the variables should get the same value. With separate set commands, you could change one without changing the other.

It is known/expected that multiple set is slower than separate set commands, although you shouldn't see as big of a difference when the target is a global or a subscripted local. You could also abuse set $listbuild for a task like this, although it's probably even slower:

s $lb(v1,v2,v3)=$lb("v1","v1","v2")

For benchmarking, you may want to use a high-precision timer like $zh, rather than $h.
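A rough micro-benchmark sketch along those lines (the loop count is arbitrary; timings will vary by platform and version):

```
 ; compare the two forms with the $zh high-precision timer
 set t = $zh
 for i = 1:1:1000000 { set (v1, v2, v3) = i }
 write "multiple set:  ", $zh - t, "s", !

 set t = $zh
 for i = 1:1:1000000 { set v1 = i, v2 = i, v3 = i }
 write "separate sets: ", $zh - t, "s", !
```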
Jon Willeke · Aug 16, 2018

The %UnitTest package is designed such that each suite stands alone: it is loaded, compiled, run, and deleted, independent of any other suite. Deleting the test classes is just part of cleaning up. You can pass the /nodelete qualifier, or even set it as a default using $system.OBJ.SetQualifiers().

Given that deletion is the default behavior, however, I suggest that you adjust your workflow accordingly. I edit XML export files directly. Some people maintain a development and a test namespace, with the classes being exported from development and imported into test.
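For example (the suite name here is hypothetical):

```
 ; run a suite without deleting the test classes afterward
 do ##class(%UnitTest.Manager).RunTest("MySuite", "/nodelete")

 ; or, if you want it as a default for later operations,
 ; set it via the system qualifier defaults
 do $system.OBJ.SetQualifiers("/nodelete")
```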
Jon Willeke · Aug 9, 2018

I'm not sure I'm reading this correctly, but I believe the key difference in the cold runs is 10,399 vs 5,853, again suggesting that ^ListData went to disk more often.

I'm surprised that it makes such a big difference, but I suspect what's happening here is that your copy of ^ListData into ^StringData resulted in a more space-efficient organization. You might want to look at the packing column of a detailed report from the %GSIZE utility. It's possible that something about your data causes $list to store it less efficiently, but your data hasn't convinced me of that. If you copied ^ListData unchanged into, say, ^ListData2, my guess is that you would see a similar improvement.