go to post Timothy Leavitt · Aug 10, 2020 I'll also note - the only thing that really matters from the class query is the ID. If nothing else is using the query, you could just change it to SELECT ID FROM ... - the framework will construct the model instances from the returned IDs alone. (This is handy because it allows reuse of class queries with different representations.)
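To make that concrete, here's a minimal sketch of a class query pared down to SELECT ID - the class, property, and query names are hypothetical, not from the original poster's code:

```objectscript
Class Sample.Model.Widget Extends %Persistent
{

Property Name As %String;

Property Active As %Boolean;

/// Only ID is selected; the REST layer constructs full model instances
/// from the returned IDs, so the same query can back different
/// representations of this resource.
Query ActiveWidgets() As %SQLQuery
{
    SELECT ID FROM Sample_Model.Widget WHERE Active = 1
}

}
```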
go to post Timothy Leavitt · Aug 10, 2020 What's the MEDIATYPE parameter in Lookups.Terms (the model class)? The Accept header should be set to that. Also, you shouldn't need to set Content-Type on a GET, because you're not supplying any content in the request. (It's possible that it's throwing things off.) If you can reproduce a simple case independent of your code (that you'd be comfortable to share), feel free to file a GitHub issue and I'll try to knock it out soon.
go to post Timothy Leavitt · Aug 10, 2020 @Richard Schilke , yes, having a separate proxy for each mapping would be best practice. You could also have Data.DocHead extend Adaptor for the primary use case and have proxies for the more niche cases (if one case is more significant - typically this would be the most complete representation).
go to post Timothy Leavitt · Aug 7, 2020 @Richard Schilke, I'm glad to hear that you're planning on using this, and we're grateful for your feedback. The quick fix should just be:

Do ##class(AppS.REST.ResourceMap).ModelClassDelete("Data.DocHead")

Background: metadata on REST resources and actions is kept in the AppS.REST.ResourceMap and AppS.REST.ActionMap classes. These are maintained by projections, and it seems there's an edge case where data isn't getting cleaned up properly. I've created a GitHub issue as a reminder to find and address the root cause: https://github.com/intersystems/apps-rest/issues/5
go to post Timothy Leavitt · Aug 6, 2020 I've had a few times where I've needed to do a targeted restore based on a journal (e.g., restoring a week of work an intern accidentally reverted; this would work for class definition changes if you could find the right window). Just to add to what Dmitriy and Erik have said, assuming your case is eligible, here's a code sample using the %SYS.Journal classes (modified from one of the times I had to do this):

Class DC.Demo.JrnFix
{

/// Intended to be run from terminal. Find the right values to put in the variables at the top first.
/// Also, use at your own risk.
ClassMethod Run()
{
    // Path to journal file (find this based on timestamps)
    Set file = "/path/to/journal/file"

    // Path to database containing data that was killed
    // (assuming killed during transaction so individual nodes are journalled as ZKILL)
    Set dbJrn = "/path/to/database/directory/"

    // First problem offset/address (find a real value for this via management portal or further
    // %SYS.Journal scripting - e.g., output from below with full range of addresses used)
    Set addStart = 0

    // Last problem offset/address (find a real value for this via management portal or further
    // %SYS.Journal scripting - e.g., output from below with full range of addresses used)
    Set addEnd = 1000000000

    // Global that you're looking to restore - as much of the global reference as is possible
    Set global = "MyApp.DataD"

    Set jrn = ##class(%SYS.Journal.File).%OpenId(file)
    #dim rec As %SYS.Journal.SetKillRecord
    TSTART
    Set rec = jrn.GetRecordAt(addEnd)
    Do {
        If ((rec.%IsA("%SYS.Journal.SetKillRecord")) && (rec.DatabaseName = dbJrn)) {
            If (rec.GlobalNode [ global) {
                Write rec.Address,!
                Set @rec.GlobalNode = rec.OldValue
            } Else {
                // Keep track of other globals we see (optional)
                Set skippedList($Piece(rec.GlobalNode,"(")) = ""
            }
        }
        Set rec = rec.Prev
    } While (rec.Address > addStart)
    ZWrite skippedList

    Break // At this point, examine things, TCOMMIT, and quit if things look good.
    TROLLBACK
}

}
go to post Timothy Leavitt · Aug 4, 2020 A good approach is adding application and/or matching roles for the web application (in the web application's security configuration). An application role is granted to users of the web application while in that context only. A matching role provides additional privileges to users holding a particular specified role. A lazy approach would be adding %All as an application role, but that likely exposes too much. This is better than giving UnknownUser %All, for sure, but it's best to provide more granular roles than %All (in this case and more generally) - say, a role that provides Read access on the namespace's default routine DB and R/W on the namespace's default global/data DB.
go to post Timothy Leavitt · Jul 8, 2020 +1 for upgrading to IRIS - there's a lot more than just improved JSON support to be gained. If you can't, you can use custom datatype classes with the JSONTYPE parameter set appropriately (e.g., to "boolean" or "number"):

Class MyApplication.DataType.Boolean Extends %Library.Boolean
{

Parameter JSONTYPE = "boolean";

}
go to post Timothy Leavitt · Jul 8, 2020 Thanks! This is pretty much the only case in which I use zbreak (because there's a clear question and a simple way to get the answer); the rest of the time a full interactive debugger is more helpful.
go to post Timothy Leavitt · Jun 23, 2020 In this case it's also important to use foreign keys (see: https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=ROBJ_classdef_foreignkey) to ensure referential integrity. (Especially worth considering: what happens to the associated record in B/A when the other is deleted?)
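As a minimal sketch of the idea (class names are hypothetical), here's a foreign key whose delete rule answers that question explicitly:

```objectscript
Class Demo.FK.ClassB Extends %Persistent
{

/// Reference to the associated ClassA record
Property A As Demo.FK.ClassA;

/// Enforces referential integrity: a ClassA record can't be deleted while
/// leaving orphaned ClassB rows behind. OnDelete = cascade deletes them too;
/// other options include noaction (the default) and setnull.
ForeignKey AFK(A) References Demo.FK.ClassA() [ OnDelete = cascade ];

}
```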
go to post Timothy Leavitt · Jun 23, 2020 Another option is to use a one-to-many relationship with a Unique index on the "many" side:

Class DC.Demo.OneToOne.ClassA Extends %Persistent
{

Relationship ClassB As DC.Demo.OneToOne.ClassB [ Cardinality = many, Inverse = ClassA ];

}

Class DC.Demo.OneToOne.ClassB Extends %Persistent
{

Relationship ClassA As DC.Demo.OneToOne.ClassA [ Cardinality = one, Inverse = ClassB ];

Index ClassAIndex On ClassA [ Unique ];

}
go to post Timothy Leavitt · Jun 18, 2020 @Eduard Lebedyuk is probably right on this. If you add auditing for <PROTECT> events you'll probably see one before the 404.
go to post Timothy Leavitt · Jun 17, 2020 @Henrique Dias , do you have a record with ID 1? If not, you can populate some data with the following (since you extend %Populate): Do ##class(NPM.Model.Task).Populate(10)
go to post Timothy Leavitt · Jun 8, 2020 @Henrique Dias is right - that's the reason for the minimum requirement. IMO, getting an old app running on the new version of the platform is a relatively small effort compared to a Zen -> Angular migration (for example).
go to post Timothy Leavitt · Jun 5, 2020 Thank you for your interest, and for pointing out that issue. I saw it after publishing and fixed it in GitHub right away. The Open Exchange updates from GitHub at midnight, so it should be all set now.
go to post Timothy Leavitt · May 12, 2020 As an update on this topic, the approach described in earlier comments is also handy for serving a built Angular application using PathLocationStrategy (https://angular.io/api/common/PathLocationStrategy) as an alternative to webserver configuration. Our dispatch class for this purpose has:

XData UrlMap [ XMLNamespace = "http://www.intersystems.com/urlmap" ]
{
<Routes>
<Route Url="/(.*)" Method="GET" Call="ServeStaticFile" />
</Routes>
}

ClassMethod ServeStaticFile(pPath As %String) As %Status
{
    #dim %request As %CSP.Request
    // Anything that isn't a static asset falls back to index.html,
    // so Angular's router can handle the path client-side.
    If '$Match(pPath,"^(assets/.*|.*\.(js|map|html|css|woff|woff2))$") {
        Set pPath = "index.html"
    }
    Do %request.Set("FILE",%request.Application_pPath)
    Quit ##class(%CSP.StreamServer).Page()
}
go to post Timothy Leavitt · Mar 9, 2020 Agreed, I tend to use zpm for my own projects even if I don't intend to distribute. Between declaring dependencies, simpler running of unit tests, ability to script more things with my project than just "install" - it's just generally handy.
go to post Timothy Leavitt · Feb 13, 2020 Hi Javier,

There are a few topics for running builds and unit tests via Jenkins (or really any CI tool):
- Calling in to Caché (or IRIS; the approaches are very similar)
- Reporting unit test results
- Test coverage measurement and reporting

Here's a quick intro; if you have questions on any details I can drill down further.

Calling in to Caché: The most common approach I've seen is writing out to a file and then using that as input to csession / iris session. You can see some examples of this (for IRIS, with containers, but quite transferable) here: https://github.com/timleavitt/ObjectScript-Math/blob/master/.travis.yml - I'm planning to write an article on this soon. Some rules for this:
- Either enable OS authentication or put the username/password for the build user in the script or environment variables.
- End the script with Halt (in case of success) or $System.Process.Terminate($Job,1) (to signal an OS-level error you can pick up from errorlevel/etc.); alternatively, always end with Halt and create a "flag file" in the case of error, the existence of which indicates that the build failed.
- Keep the script short - ideally, put the meat of the build logic in a class/routine that is loaded at the beginning, then run that.

Sample for Windows:

:: PREPARE OUTPUT FILE
set OUTFILE=%SRCDIR%\outFile
del "%OUTFILE%"

:: NOW, PREPARE TO CALL CACHE
::
:: Login with username and password
ECHO %CACHEUSERNAME%>inFile
echo %CACHEPASSWORD%>>inFile

:: MAKE SURE LATEST JENKINS BUILD CLASS HAS BEEN LOADED
echo do $system.OBJ.Load("","cb") >>inFile

:: RUN JENKINS BUILD METHOD
echo do ##class(Build.Class).JenkinsBuildAndTest("%WORKSPACE%") >>inFile

:: THAT'S IT
echo halt >>inFile

:: CALL CACHE
csession %INSTANCENAME% -U %NAMESPACE% <inFile

echo Build completed. Press enter to exit.
:: PAUSE
pause > nul

:: TEST IF THERE WAS AN ERROR
IF EXIST "%OUTFILE%" EXIT 1

:: Clear the "errorlevel" variable that (it looks like) csession sets,
:: causing successful builds to be marked as failure
(call )

Sample for Linux:

# PREPARE OUTPUT FILE
OUTFILE=${WORKSPACE}/outFile
rm -f $OUTFILE

# PREPARE TO CALL IRIS
# Login with username and password
echo $IRISUSERNAME > infile.txt
echo $IRISPASSWORD >> infile.txt

# MAKE SURE LATEST JENKINS BUILD CLASS HAS BEEN LOADED
echo 'do $system.OBJ.Load("'${WORKSPACE}'/path/to/build/class","cb")' >>infile.txt

# RUN JENKINS BUILD METHOD
echo 'do ##class(Build.Class).JenkinsBuildAndTest("'${WORKSPACE}'")' >>infile.txt

# THAT'S IT
echo halt >> infile.txt

# CALL IRIS
# csession is the equivalent for Caché
iris session $IRISINSTANCE -U $NAMESPACE < infile.txt

# TEST IF THERE WAS AN ERROR
if [ -f $OUTFILE ] ; then exit 1 ; fi

The next question is: what does Build.Class do? Given the Jenkins workspace root (WORKSPACE variable), it should load the code appropriately (likely after blowing away the code database to start with a clean slate; %Installer can help with this), then set ^UnitTestRoot based on the workspace directory, then run the tests, then report on results. Best to wrap the whole thing in a Try/Catch and throw/handle exceptions appropriately to ensure the error flag file / exit code is set.

Reporting Unit Test Results: See https://github.com/intersystems-community/zpm/blob/master/src/cls/_ZPM/PackageManager/Developer/UnitTest/JUnitOutput.cls (feel free to copy/rename this if you don't want the whole community package manager) for a sample of a jUnit export; Jenkins will pick this up and report on it quite easily. Just pass an output filename to the method, then add a post-build action in Jenkins to pick up the report. (You'll want to call this from your build script class.)
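For concreteness, here's a rough sketch of what a build class along those lines might look like - the class name, paths, and approach to signaling failure are illustrative, not from a real project:

```objectscript
Class Build.Class
{

/// Called from the CI script with the Jenkins workspace root.
ClassMethod JenkinsBuildAndTest(pWorkspace As %String)
{
    Try {
        // Load and compile all sources from the workspace
        $$$ThrowOnError($System.OBJ.LoadDir(pWorkspace_"/src","ck",,1))

        // Point the unit test framework at the workspace's test directory
        Set ^UnitTestRoot = pWorkspace_"/tests"

        // Run tests; /noload/nodelete because they're already loaded from disk.
        // (Inspect the results here - or export a jUnit report - to decide pass/fail.)
        Do ##class(%UnitTest.Manager).RunTest(,"/noload/nodelete")
    } Catch e {
        Do e.Log()
        // Signal failure to the calling script via the OS exit code
        Do $System.Process.Terminate($Job,1)
    }
    Halt
}

}
```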
Measuring Test Coverage: Seeing how much of your code is covered by unit tests helps to close the feedback loop and enable developers to write better tests - I presented on this at Global Summit a few years ago. See https://openexchange.intersystems.com/package/Test-Coverage-Tool - we've successfully used this with Jenkins for both HealthShare and internal applications at InterSystems. It can produce reports in the Cobertura format, which Jenkins will accept.

Instead of using %UnitTest.Manager, call TestCoverage.Manager. The parameters detailed in the readme can be passed into the third argument of RunTest as subscripts of an array; to produce a Cobertura-style export (including re-exporting all source in UDL for coverage reporting in the Jenkins UI), add a "CoverageReportFile" subscript pointing to an appropriate place in the Jenkins workspace, and set the "CoverageReportClass" subscript to "TestCoverage.Report.Cobertura.ReportGenerator".

If you want to use the Jenkins coverage/complexity scatter plot, use https://github.com/timleavitt/covcomplplot-plugin rather than the original; I've fixed some issues there and made it a bit more resilient to some oddities of our Cobertura-style export (relative to the data Cobertura actually produces).
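As a sketch of wiring that up (the workspace path and test package name are illustrative):

```objectscript
// TestCoverage.Manager is a drop-in replacement for %UnitTest.Manager;
// coverage parameters go in the third (user parameter) argument as subscripts.
Set workspace = "/var/jenkins/workspace/MyApp"
Set userParams("CoverageReportFile") = workspace_"/coverage.xml"
Set userParams("CoverageReportClass") = "TestCoverage.Report.Cobertura.ReportGenerator"
Do ##class(TestCoverage.Manager).RunTest("MyApp.Tests","/noload/nodelete",.userParams)
```

A Jenkins Cobertura post-build action can then pick up coverage.xml from the workspace.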
go to post Timothy Leavitt · Feb 7, 2020 Oof - by "newer tricks" you meant "objects." Yikes. Really, it'd be significantly lower risk to use the object-based approach than to roll your own without objects. (e.g., see my comment on automatic cleanup via %OnClose) I don't have bandwidth to provide an object-free version, but you might look at the code for %IO.ServerSocket for inspiration.