My view of this solution started as "it will not work in practice", but the more I thought through the scenarios, the more feasible it looks. My thinking was as follows.

When is the temp table built? When the page is opened? That will cause a waiting time for the first load, especially if no filters have been applied yet. But in that scenario you can just select, say, the top 200 rows or some other limit, as long as it is not all the rows.

Also, every time a filter or the page size selection changes, you will have to rebuild the temp table. This means the temp table's lookup key needs to include the filters and the page size, so you can detect that they have changed for the session. That is not in the solution above, but it is not difficult to implement.
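Roughly, the lookup key could be built something like this (tFilterString and tPageSize are hypothetical variables, and %session is only available in a CSP/REST request context):

// Build the cache key from the CSP session, the serialized filters and the page size.
// If the key stored against the session's cached rows differs from this one,
// drop those rows and rebuild the temp table for the new filters/page size.
Set tKey = %session.SessionId _ "|" _ tFilterString _ "|" _ tPageSize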


If you hit the indexes correctly and get a small result set, this may be useful.

What about REST APIs, which are usually built to end the session after each request? This will not work for REST APIs that require pagination. It can be worked around by letting the front end pass in a value that identifies the session, so the caching is tied to the front-end session instead.

You will also need to build some cleanup mechanism to purge rows after a session has ended. The front end can send you an instruction on logout, but not if the browser is simply closed. It will have to be a task that runs at night, or at some other quiet time, and truncates the stale rows.
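As a rough sketch, the purge could be a scheduled task along these lines (the table MyApp.PagingCache and its LastTouched column are assumptions, not part of the solution above):

Class MyApp.Task.PurgePagingCache Extends %SYS.Task.Definition
{

/// Delete cached paging rows that have not been touched for a day.
Method OnTask() As %Status
{
	// Remove rows whose session has most likely ended
	&sql(DELETE FROM MyApp.PagingCache
	     WHERE LastTouched < DATEADD('hh', -24, CURRENT_TIMESTAMP))
	Quit $$$OK
}

}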

I have also been playing around with SQL inserts and updates to see how the DB reacts to uncommitted data.

If I insert a record after a TSTART and it has not been committed yet, I then go to another session, set IsolationMode to 1, and do a select on the table for that specific record.

It does not give me SQLCODE 100 as I would have expected, since it is a new record that hasn't been committed yet.
What it did give is -114 (unable to acquire a shared lock), BUT the values were still put into the bound variables I selected into.

With an update it is the same thing: I got a SQLCODE of -114, but the new values were actually put into the bound variables.
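Roughly, the experiment looks like this (the table and column names are made up; SET TRANSACTION is one way to flip the isolation mode from embedded SQL):

// Session A: start a transaction, insert, and do NOT commit yet
TSTART
&sql(INSERT INTO Test.Person (Name) VALUES ('Uncommitted Row'))

// Session B: read committed (IsolationMode 1), then select the new row
&sql(SET TRANSACTION ISOLATION LEVEL READ COMMITTED)
&sql(SELECT Name INTO :tName FROM Test.Person WHERE Name = 'Uncommitted Row')
Write "SQLCODE: ", SQLCODE, !   // -114 instead of the expected 100
Write "Name: ", tName, !        // the uncommitted value still came back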

I guess there is no real uncommitted data functionality then.

I do want it in the transaction. If the commit of the records related to the event being published fails, the event should be rolled back too.

The problem is that it may have been picked up by the other process already.

I like the idea of using a PPG and merging after the commit. It does, however, add more complexity to the code in having to "remember" to do the merge after the commit.
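A rough sketch of that flow (global and variable names are made up):

TSTART
// ... the business updates that are part of the transaction ...
// Stage the event in a process-private global: other processes cannot see it,
// so nothing can pick it up before the commit
Set ^||Event($Increment(^||Event)) = $ListBuild("OrderShipped", tOrderId)
TCOMMIT

// Only after a successful commit, move the staged events to the real global.
// On a rollback, just Kill ^||Event and never merge.
Set tKey = ""
For {
	Set tKey = $Order(^||Event(tKey), 1, tData)
	Quit:tKey=""
	Set ^MyApp.Event($Increment(^MyApp.Event)) = tData
}
Kill ^||Event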

Here is an example of this concept working. It is on Ensemble, but it should be the same for IRIS.
Note that I configured password authentication on the web application and set up authentication in Postman.

Code:

Class Tests.RESTFormData Extends %CSP.REST
{

ClassMethod Test() As %Status
{
	w "Posting: ",$Get(%request.Data("Posting",1))
	q 1
}

XData UrlMap
{
<Routes>
	<Route Url="/" Method="POST" Call="Test" />
</Routes>
}

}

Web Application Config:

[screenshot: web application configuration]

Postman:

[screenshot: Postman request]

I went the class route.

I did a lot of abstraction for the UI, like the page header, sidebar (menu) and footer: re-usable classes for each of these page sections, plus a "template" page to extend from, so each specific page only adds its own content. I also needed security on menu items and resource-based access control across pages, so abstraction made sense.

I did it a bit like you would with Django templates.

I might post an example of how I did this at some point. I used the Material Dashboard Pro UI framework, so I can't share that specific content due to copyright issues on the CSS and JS included.
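The rough shape of it was something like this (class names are made up, there is no Material Dashboard content in it, and the Render() helpers on the header/sidebar/footer classes are assumed):

Class MyApp.UI.TemplatePage Extends %CSP.Page [ Abstract ]
{

ClassMethod OnPage() As %Status
{
	// Shared page shell: header, sidebar (menu) and footer are re-usable classes
	Do ##class(MyApp.UI.Header).Render()
	Do ##class(MyApp.UI.Sidebar).Render()
	// Each concrete page only supplies its own content
	Do ..RenderContent()
	Do ##class(MyApp.UI.Footer).Render()
	Quit $$$OK
}

/// Overridden by each specific page to render its own content.
ClassMethod RenderContent() As %Status
{
	Quit $$$OK
}

}

Class MyApp.UI.HomePage Extends MyApp.UI.TemplatePage
{

ClassMethod RenderContent() As %Status
{
	&html<<h1>Home</h1>>
	Quit $$$OK
}

}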

Can you provide the full class, including what it extends and so forth? Also the code where you construct and send the request.

The following is a working example of uploading a file using a CSP Page.

Class Test.FileUpload Extends %CSP.Page
{

ClassMethod OnPage() As %Status
{
	&html<<html>
<head>
</head>
<body>>
	// Handle the uploaded file on POST; otherwise just render the form below
	If %request.Method = "POST" {
		Set tFile = ##class(%Stream.FileBinary).%New()
		// Link the stream to a file on disk, keeping the uploaded file's name
		Set tSC = tFile.LinkToFile("C:\Tmp\" _ %request.MimeData("FileStream",1).FileName)
		If 'tSC {
			Write !,"Upload Failed"
		} Else {
			// Copy the uploaded stream into the linked file and save it
			Do tFile.CopyFrom(%request.MimeData("FileStream",1))
			Set tSC = tFile.%Save()
			If 'tSC {
				Write !,"Upload Failed"
				Do tFile.%Close()
			} Else {
				Write !,"File Posted",!
			}
		}
	}
	&html<
<h2>Upload a File</h2>
<form enctype="multipart/form-data" method="post" action="">
    Enter a file to upload here: <input type=file size=30 name=FileStream>
    <hr />
    <ul><input type="submit" value="Upload file"></ul>
</form>
</body>
</html>>
	Quit $$$OK
}

}

I haven't considered it.

It is not a deadlock. An update fails in a process, I log the exception, and the process moves on to the next record. This sometimes happens during batch runs between 00:00 and 05:00.

When that exception occurs, I need to programmatically find the contending process's information and the locked node, and log that information so that I can investigate it later.
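Something along these lines might be a starting point (this assumes IRIS, where ^$LOCK takes a second "info type" argument and %SYS.ProcessQuery can be opened by PID; the node name and property choices are illustrative):

// pNode holds the global node the failed update tried to lock,
// e.g. "^MyApp.DataD(12345)" - purely illustrative
Set tOwnerPid = ^$LOCK(pNode, "OWNER")
If tOwnerPid '= "" {
	// Look up the contending process so its details can be logged
	Set tProc = ##class(%SYS.ProcessQuery).%OpenId(tOwnerPid)
	If $IsObject(tProc) {
		Write "Locked node: ", pNode, !
		Write "Held by PID: ", tOwnerPid, !
		Write "Routine: ", tProc.Routine, !
		Write "User: ", tProc.UserName, !
	}
}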

Hi,

I am not sure I know what the issue is exactly. Is it speed, is it DB writes, or is it memory (FRAMESTACK) errors?

In the meantime, here are some things we've done that help with large files.

We have BPLs that process CSV files with 100k rows, and these things helped.

Agree. Use the default settings.

They can be inserted with SQL statements.

Also, if you want to add Configuration Items to the Production post-deployment, without having to recompile or change the production manually, use a session script, or create a classmethod or routine that can import a CSV containing the relevant information.

E.g.

&sql( INSERT INTO Ens_Config.DefaultSettings (Deployable, Description, HostClassName, ItemName, ProductionName, SettingName, SettingValue) 
VALUES (1, 'My setting description', 'My.Operation', 'My Nifty Operation', 'My.Production', 'TheSetting', 'SettingValue'))
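And a rough sketch of the CSV-import idea (the column order and class/method names are assumptions):

/// Import lines of "HostClassName,ItemName,ProductionName,SettingName,SettingValue"
/// into Ens_Config.DefaultSettings. Minimal sketch: no quoting or error handling.
ClassMethod ImportDefaultSettings(pFile As %String) As %Status
{
	Set tStream = ##class(%Stream.FileCharacter).%New()
	Set tSC = tStream.LinkToFile(pFile)
	Quit:$$$ISERR(tSC) tSC
	While 'tStream.AtEnd {
		Set tLine = tStream.ReadLine()
		Continue:tLine=""
		Set tHost = $Piece(tLine, ",", 1)
		Set tItem = $Piece(tLine, ",", 2)
		Set tProd = $Piece(tLine, ",", 3)
		Set tName = $Piece(tLine, ",", 4)
		Set tValue = $Piece(tLine, ",", 5)
		&sql(INSERT INTO Ens_Config.DefaultSettings
		     (Deployable, HostClassName, ItemName, ProductionName, SettingName, SettingValue)
		     VALUES (1, :tHost, :tItem, :tProd, :tName, :tValue))
		If SQLCODE < 0 Write "Insert failed with SQLCODE ", SQLCODE, !
	}
	Quit $$$OK
}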

and for the Production:

// Open the production, create and configure the new item, then insert it
Set tProduction = ##class(Ens.Config.Production).%OpenId("My.Production")
Set tObj = ##class(Ens.Config.Item).%New()
Set tObj.Name = "My Nifty Operation"
Set tObj.ClassName = "My.Operation"
Set tObj.PoolSize = 1
Set tObj.Enabled = 1
Set tObj.LogTraceEvents = 0
Set tSC = tProduction.Items.Insert(tObj)
Set tSC = tProduction.%Save()
Write $System.Status.GetErrorText(tSC)
// Tell the running production to pick up the configuration change
Set tSC = ##class(Ens.Director).UpdateProduction()
Write $System.Status.GetErrorText(tSC)

You can also open existing config items in order to update the Pool Size or any of those other settings that are not configurable in "System Default Settings" for some reason.
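For example, something like this (the item name is made up; if I recall correctly, Ens.Config.Production has a FindItemByConfigName method):

// Open the production, find the existing item by name, and change its pool size
Set tProduction = ##class(Ens.Config.Production).%OpenId("My.Production")
Set tItem = tProduction.FindItemByConfigName("My Nifty Operation")
If $IsObject(tItem) {
	Set tItem.PoolSize = 4
	Set tSC = tProduction.%Save()
	Write $System.Status.GetErrorText(tSC)
	// Apply the change to the running production
	Set tSC = ##class(Ens.Director).UpdateProduction()
	Write $System.Status.GetErrorText(tSC)
}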