I have also been experimenting with SQL inserts and updates to see how the DB reacts to uncommitted data.

If I insert a record after a TSTART and it has not been committed yet, then in another session I set IsolationMode to 1 and do a SELECT on the table for that specific record.

It does not give me an SQLCODE of 100 as I would have expected, given that the record is new and hasn't been committed yet.
What it did give is -114 (unable to acquire a shared lock), BUT the values were still put into the bound variables I selected into.

With an update, the same thing: I got a SQLCODE of -114, but the new values were actually put into the bound variables.

I guess there is no real uncommitted data functionality then.
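
For anyone wanting to reproduce it, this is roughly the test (a sketch; Demo.TestTable and its column are made up):

// Session A: open a transaction and insert, but do not commit yet
TSTART
&sql(INSERT INTO Demo.TestTable (Name) VALUES ('uncommitted'))

// Session B: request READ COMMITTED (isolation mode 1), then read the row
&sql(SET TRANSACTION ISOLATION LEVEL READ COMMITTED)
&sql(SELECT Name INTO :name FROM Demo.TestTable WHERE Name = 'uncommitted')
Write "SQLCODE: ", SQLCODE, !  // -114 here, yet :name was still populated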

I do want it in the transaction. If the commit of the records related to the event being published fails, the event should be rolled back too.

The problem is that by then it may already have been picked up by the other process.

I like the idea of using the PPG (process-private global) and merging after the commit. This does, however, add complexity to the code in "remembering" to do the merge after the commit.
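
Roughly what I have in mind (a sketch; ^EventQueue and the event data are made up):

TSTART
// ... the normal inserts/updates for the business data ...
// Stage the event in a process-private global: PPGs are not journalled,
// so they are not rolled back, and other processes cannot see them
Set ^||Events($Increment(^||Events)) = $ListBuild("MyEvent", $Horolog)
TCOMMIT
// Only after a successful commit: publish the staged events where the
// polling process can pick them up, then clear the staging area
Merge ^EventQueue($Job) = ^||Events
Kill ^||Events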

Here is an example of this concept working. It is on Ensemble, but should be the same for IRIS.
Note that I configured password authentication on the web application and set up authentication in Postman.

Code:

Class Tests.RESTFormData Extends %CSP.REST
{

ClassMethod Test() As %Status
{
	// Form-data fields arrive in %request.Data(name, index)
	Write "Posting: ", $Get(%request.Data("Posting",1))
	Quit $$$OK
}

XData UrlMap
{
<Routes>
	<Route Url="/" Method="POST" Call="Test" />
</Routes>
}

}

Web Application Config: (screenshot not included)

Postman: (screenshot not included)

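For completeness, the same form-data POST can be sent from ObjectScript instead of Postman (a sketch; server, port, path, and credentials are made up):

Set req = ##class(%Net.HttpRequest).%New()
Set req.Server = "localhost", req.Port = 57772
// Password authentication, matching the web application config
Set req.Username = "myuser", req.Password = "mypass"
Do req.InsertFormData("Posting", "Hello world")
// "/csp/test/" stands in for whatever path the web application uses
Set tSC = req.Post("/csp/test/")
Write req.HttpResponse.StatusCode,!
Do req.HttpResponse.OutputToDevice()
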
I want the class route.

I did a lot of abstraction for the UI, like the page header, sidebar (menu), and footer: re-usable classes for each of these page sections, plus the ability to create a "template" page to extend from, letting each specific page add its own content. I also needed security on menu items and resource-based access control across pages, so the abstraction made sense.

I did it a bit like you would with Django templates.
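
As a rough illustration of the pattern (a sketch; all names are made up, and the real thing carries more logic):

Class App.UI.TemplatePage Extends %CSP.Page
{

ClassMethod OnPage() As %Status
{
	Do ..DrawHeader()
	Do ..DrawSidebar()
	// Each concrete page overrides DrawContent() with its own body
	Do ..DrawContent()
	Do ..DrawFooter()
	Quit $$$OK
}

ClassMethod DrawHeader()
{
	&html<<header>...</header>>
}

ClassMethod DrawSidebar()
{
	// Menu items can be filtered per user, e.g. via
	// $SYSTEM.Security.Check(resource, "USE")
	&html<<nav>...</nav>>
}

ClassMethod DrawContent()
{
}

ClassMethod DrawFooter()
{
	&html<<footer>...</footer>>
}

}

A specific page then only supplies its content:

Class App.UI.HomePage Extends App.UI.TemplatePage
{

ClassMethod DrawContent()
{
	&html<<h1>Home</h1>>
}

}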

I might post an example of how I did this at some point. I used the Material Dashboard Pro UI framework, so I can't share that specific content due to copyright on the included CSS and JS.

Can you provide the full class, including what it extends and so forth? Also the code where you construct and send the request.

The following is a working example of uploading a file using a CSP Page.

Class Test.FileUpload Extends %CSP.Page
{

ClassMethod OnPage() As %Status
{
	&html<<html>
<head>
</head>
<body>>
	If %request.Method = "POST" {
		// The uploaded file arrives as a MIME part named after the form field
		Set tFile = ##class(%Stream.FileBinary).%New()
		Set tSC = tFile.LinkToFile("C:\Tmp\" _ %request.MimeData("FileStream",1).FileName)
		If 'tSC {
			Write !,"Upload Failed"
		} Else {
			// Copy the posted stream into the linked file and save it
			Do tFile.CopyFrom(%request.MimeData("FileStream",1))
			Set tSC = tFile.%Save()
			If 'tSC {
				Write !,"Upload Failed"
				Do tFile.%Close()
			} Else {
				Write !,"File Posted",!
			}
		}
	}
	&html<
<h2>Upload a File</h2>
<form enctype="multipart/form-data" method="post" action="">
    Enter a file to upload here: <input type=file size=30 name=FileStream>
    <hr />
    <p><input type="submit" value="Upload file"></p>
</form>
</body>
</html>>
	Quit $$$OK
}

}

I haven't considered it.

It is not a deadlock. An update fails in a process, I log the exception, and the process moves on to the next record. This sometimes happens when batch runs occur between 00:00 and 05:00.

When that exception occurs, I need to programmatically find the contending process's information and the locked node, so I can log that information and investigate it at a later time.
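
Something along these lines could capture that at the point of failure (a sketch, assuming a recent IRIS where the ^$LOCK SSVN accepts an info-type subscript such as "OWNER" or "MODE"; check the documentation for your version, and note that the lock reference and log global here are made up):

// lockRef is the lock node the failed update contended on
Set lockRef = "^My.DataD(42)"
Set ownerPid = ^$LOCK(lockRef, "OWNER")  // pid of the process holding the lock
Set mode = ^$LOCK(lockRef, "MODE")
If ownerPid '= "" {
	// %SYS.ProcessQuery exposes details of the owning process
	Set proc = ##class(%SYS.ProcessQuery).%OpenId(ownerPid)
	Set detail = ""
	If $IsObject(proc) Set detail = proc.Routine _ " / " _ proc.UserName
	// Log everything needed to investigate later
	Set ^MyLockLog($Increment(^MyLockLog)) = $ListBuild($ZDateTime($Horolog,3), lockRef, ownerPid, mode, detail)
}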

Hi,

I am not sure I know what the issue is exactly. Is it speed, DB writes, or memory (FRAMESTACK) errors?

In the meantime, here are some things we've done to help with large files.

We have BPLs that process CSV files with 100k rows, and these things helped.

Agree. Use the default settings.

They can be inserted with SQL statements.

Also, if you want to add Configuration Items to the Production post-deployment, without having to recompile or change the production manually, use a session script, or create a class method or routine that can import a CSV containing the relevant information (a sketch of that follows the code below).

E.g.

&sql( INSERT INTO Ens_Config.DefaultSettings (Deployable, Description, HostClassName, ItemName, ProductionName, SettingName, SettingValue) 
VALUES (1, 'My setting description', 'My.Operation', 'My Nifty Operation', 'My.Production', 'TheSetting', 'SettingValue'))

and for the Production:

Set tProduction = ##class(Ens.Config.Production).%OpenId("My.Production")
Set tObj = ##class(Ens.Config.Item).%New()
Set tObj.Name = "My Nifty Operation"
Set tObj.ClassName = "My.Operation"
Set tObj.PoolSize = 1
Set tObj.Enabled = 1
Set tObj.LogTraceEvents = 0
Set tSC = tProduction.Items.Insert(tObj)
Set tSC = tProduction.%Save()
Write $System.Status.GetErrorText(tSC)
// Apply the updated definition to the running production
Set tSC = ##class(Ens.Director).UpdateProduction()
Write $System.Status.GetErrorText(tSC)
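
And a minimal sketch of the CSV-import approach mentioned above (the file layout and names are made up):

ClassMethod ImportDefaultSettings(pFile As %String) As %Status
{
	Set tStream = ##class(%Stream.FileCharacter).%New()
	Set tSC = tStream.LinkToFile(pFile)
	Quit:$$$ISERR(tSC) tSC
	While 'tStream.AtEnd {
		Set tLine = tStream.ReadLine()
		Continue:tLine=""
		// Expected columns: HostClassName,ItemName,ProductionName,SettingName,SettingValue
		Set tHost = $Piece(tLine,",",1)
		Set tItem = $Piece(tLine,",",2)
		Set tProd = $Piece(tLine,",",3)
		Set tName = $Piece(tLine,",",4)
		Set tValue = $Piece(tLine,",",5)
		&sql(INSERT INTO Ens_Config.DefaultSettings
			(Deployable, HostClassName, ItemName, ProductionName, SettingName, SettingValue)
			VALUES (1, :tHost, :tItem, :tProd, :tName, :tValue))
	}
	Quit $$$OK
}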

You can also open existing config items to update the Pool Size, or any of those other settings that are not configurable in the "System Default Settings" for some reason.
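
E.g. (a sketch; names made up):

Set tProduction = ##class(Ens.Config.Production).%OpenId("My.Production")
// FindItemByConfigName returns the existing Ens.Config.Item
Set tItem = tProduction.FindItemByConfigName("My Nifty Operation")
If $IsObject(tItem) {
	Set tItem.PoolSize = 4
	Set tSC = tProduction.%Save()
	Set tSC = ##class(Ens.Director).UpdateProduction()
}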

If you want to use the GROUP BY, then you should probably do the count where the grouping is being done, and use DISTINCT BY as you had it:

select distinct by(serial,hq) hq, count(1)
from thetable
group by hq

If you want it per that grouping.

There are no filters in it, so it is going to do a full table scan, comparing and calculating values for each row. Given the amount of work that is, the time it takes is actually fast.

Maybe look into bitslice indexes. They might help, but at a cost of performance on insert and update:
InterSystems Documentation
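
For reference, a bitslice index is declared on the class like this (a sketch; class and property names are made up):

Class My.Data Extends %Persistent
{

Property Amount As %Numeric;

/// Bitslice index: speeds up SUM/AVG/COUNT-style aggregates over Amount,
/// at the cost of extra work on every insert and update
Index AmountBS On Amount [ Type = bitslice ];

}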