The InterSystems UDP implementation (e.g. EnsLib.UDP.OutboundAdapter) assumes that any stream is smaller than the MTU size: it uses stream.OutputToDevice(), so it will not work with streams larger than the MTU.

When splitting a message into parts, keep in mind that packets can arrive in random order, so you should collect them at the receiver and rebuild the incoming message in the correct order once you know you have all of them. A sketch of this idea follows.
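For illustration, a minimal sketch of the receiver side, assuming each packet carries a hypothetical header of the form msgId^seq^total^payload (the header format, the global ^ReassemblyBuf, and the method name are my own assumptions, not part of any InterSystems API):

ClassMethod CollectPacket(packet As %String) As %String
{
    // Parse the assumed header: msgId^seq^total^payload
    Set msgId = $Piece(packet, "^", 1)
    Set seq = +$Piece(packet, "^", 2)
    Set total = +$Piece(packet, "^", 3)
    Set ^ReassemblyBuf(msgId, seq) = $Piece(packet, "^", 4, *)
    // Count the parts received so far (duplicate packets are not handled here)
    Set count = $Increment(^ReassemblyBuf(msgId))
    // Not all parts have arrived yet: return an empty string
    Quit:count<total ""
    // All parts are present: concatenate them in sequence order
    Set msg = ""
    For i = 1:1:total { Set msg = msg_$Get(^ReassemblyBuf(msgId, i)) }
    Kill ^ReassemblyBuf(msgId)
    Quit msg
}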

Some additional notes:

1. If you have many variations in query structure (e.g. different column names, or different conditions in the WHERE clause), dynamic SQL is the choice. However, if your queries do not change in structure, only in parameters, then embedded SQL will be a better choice (e.g. SELECT id FROM ref.table WHERE name = :name).

2. Embedded SQL builds the cached query ONCE, when you compile; dynamic SQL builds and compiles a cached query every time your SQL structure changes.

3. Over time, the speed of dynamic SQL will be nearly identical to embedded SQL, since most of the possible SQL combinations will have cached queries in place. Note that each time you compile a class/table, its cached queries are purged, so expect a slight degradation after releases or changes to a class/table.

4. If you can use embedded SQL, consider giving your client access to a VIEW or an SP instead of running SQL against the original table. This way, changes you make to the class/table will not affect the client.

5. As mentioned, security is very important here: if you intend to let the client send and (dynamically) execute arbitrary SQL, try to limit this to a specific table and sanitize the input to avoid potential "SQL injection" (using an SP with parameters is a good way to secure your backend); see the sketch after this list.
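A minimal sketch of the parameterized approach with %SQL.Statement (the table and column names are placeholders of my own):

Set sql = "SELECT ID FROM ref.table WHERE name = ?"
Set stmt = ##class(%SQL.Statement).%New()
Set sc = stmt.%Prepare(sql)
If $System.Status.IsError(sc) Quit
// The user-supplied value travels as a bound parameter,
// never concatenated into the SQL text
Set rs = stmt.%Execute(userInput)
While rs.%Next() { Write rs.%Get("ID"), ! }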

SETs to a global, with or without a postconditional (PC), single or multiple assignments on one SET command, or inside an XECUTE, are relatively easy to find. The issue comes with indirection (@); for that I recommend writing code that does the searches (a real cross-reference). Over time, this code can be improved.

Another option is Visual Studio Code (VS Code), where you can search with regex; this will let you find most of the easy places. Consider also matching 0-n whitespace characters, so extra spaces, tabs, etc. do not hide a match.

For example, a reference to a global could be:
set ^global( ... )= 
s ^global( ... )=
s:cond ^global( ... )=

If combined with other commands, you should also search for the comma (,), e.g.
set var=something,^global( ... )=
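In VS Code, a single regex (with case-insensitive search enabled) can cover most of the forms above; this pattern is only a sketch and assumes the global is literally named ^global:

\bs(et)?(:\S+)?\s+([^,\s]+=[^,\s]+,)*\s*\^global\s*\(

It will still miss sets split across lines and anything done through indirection, as noted below.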
 

However, use of indirection is very complex to find this way: you need to skip 0-n of any characters, including new lines, between the SET and the use of @.

1. This sounds like a perfect candidate for the InterSystems Ideas portal: the ability to search inside streams.
2. Another option: you could use request/response messages that store their data in normal properties (e.g. classes that extend Ens.Request or Ens.Response), and convert those properties back to JSON or a (compressed) stream at the BS level (after the response). That way, all your messages inside Ensemble would be searchable; see the sketch below.
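A minimal sketch of such a searchable request class, assuming a Caché/IRIS version with %DynamicObject ({} syntax); the class, property, and method names are placeholders of my own:

Class My.App.CustomerRequest Extends Ens.Request
{

/// Normal properties are stored as columns of the message table,
/// so they are visible in the Message Viewer and searchable with SQL
Property CustomerId As %String;

Property CustomerName As %String;

/// Rebuild the JSON payload at the BS boundary when talking to the external system
Method ToJSON() As %String
{
    Set obj = {}
    Set obj.customerId = ..CustomerId
    Set obj.customerName = ..CustomerName
    Quit obj.%ToJSON()
}

}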

Hello Caio,

There is no DECLARE @variable (SQL Server) or DECLARE variable (Oracle) in Caché, but there are a few options:

1. Use host variable(s) in embedded SQL:

SET variable = 2000
&SQL(SELECT Column INTO :result FROM Table WHERE ID = :variable)
WRITE:SQLCODE=0 result

2. Do the same with dynamic SQL, preferably with a bound parameter:

SET variable = 2000
SET sql = "SELECT Column FROM Table WHERE ID = ?"
SET result = ##class(%SQL.Statement).%ExecDirect(, sql, variable)
WHILE result.%Next() { WRITE result.%Get("Column"), ! }

3. Write a stored procedure (SP):

CREATE PROCEDURE Procedure(IN variable INT)
RESULT SETS 1
LANGUAGE SQL
BEGIN
    SELECT Column FROM Table WHERE ID = :variable;
END
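You can then call it from the SQL shell or over ODBC/JDBC, e.g.:

CALL Procedure(2000)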

Hello Frank,

You can correlate the pid in the O/S with the pid of the Caché processes (in the Management Portal). There you can see which namespace that pid is running in. You can also get more information about the process: the last line in the source code, the last global reference, and whether it holds any locks, all of which might give you a good clue about what this component is doing.
If you are using Ensemble, you can also match the same pid to an Ensemble job.
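If you prefer to do this in code rather than in the portal, here is a minimal sketch, assuming the %SYS.ProcessQuery class is available to you and you run it with sufficient privileges (the pid value is hypothetical):

SET pid = 12345  // the O/S pid you observed (hypothetical value)
SET proc = ##class(%SYS.ProcessQuery).%OpenId(pid)
IF $ISOBJECT(proc) {
    WRITE "Namespace: ", proc.NameSpace, !
    WRITE "Routine:   ", proc.Routine, !
}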

Hello Gabriel,

It seems that updates to the other database (PostgreSQL) need to be "close to real-time," though a slight delay is acceptable. What matters most to you is ensuring stability and preventing any loss of updates.

I would consider the following:
1. Use the "SQL Gateway Connection" capability to connect remote tables directly to Caché. The benefit is that you keep all logic on the Caché side (a remote REST/API approach would also need some remote logic to report the status of the operation when a local update fails).
2. Loosely couple the local updates (Caché) and the remote updates:
a. Create a "staging area" (which could be a global or a class/table) to hold all updates destined for the remote table. These entries are written by a trigger, ensuring that updating the local object/table in Caché is not delayed by updates to the remote database. The staging area deletes its entries only on a successful remote update (on failure they are kept), so you might want a monitoring process that alerts when the staging area is not draining (e.g. the remote DB is down, or there are network issues).
b. Use a separate component to handle the updates. If you have interoperability (Ensemble), this might be easier to implement. However, it is not mandatory; you could also use a background job (or a task) to periodically scan the "staging area" and perform the updates, as in the sketch below.
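A minimal sketch of the staging idea, assuming a global ^Staging as the queue (the global name, the node layout, and the SendToRemote helper are my own assumptions):

// Trigger side: queue the update instead of calling the remote DB inline
Set ^Staging($Increment(^Staging)) = $ListBuild(id, newValue)

// Scanner side: a background job/task drains the queue
Set seq = ""
For {
    Set seq = $Order(^Staging(seq), 1, data)
    Quit:seq=""
    // SendToRemote is a hypothetical helper that performs the remote update
    Set ok = ..SendToRemote($List(data, 1), $List(data, 2))
    // Delete the entry only on success; failures stay for retry/monitoring
    Kill:ok ^Staging(seq)
}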