I think there is a little confusion here.  That quoted description of lookups is how Multivalue works with its hashed storage structure.  Cache does NOT use a hashed storage model.

The point I really wanted to make is that Cache data files, which we refer to as globals, do not require you to perform any kind of resizing the way you did with Universe/Unidata, reducing your operational maintenance.  In addition, the database as a whole, which contains all the globals, will grow dynamically as more data is loaded.

If you want to PM me, I could set aside some time to do a short Teams call with you.

Regards,

Rich Taylor

Robert, 

Sorry for the slow reply.  I was tied up last week and didn't get a chance to look at the community. 

Not really sure what you are looking for when you say "non-linear".  As far as how records are stored in the database, a multivalued record is a delimited string.  In Cache, this string is placed in a global node when stored in the database.  The key for the node is the item ID of the record stored in the MV file.  So if I create a file called TEST and add a record to it, when viewed with a LIST-ITEM command you might see this:

     1
0001 ATTR1
0002 ATTR2
0003 ATT3
0004 ATTR4.1ýATTR4.2
 

If you were to view the ^TEST global where this data is stored via the System Management Portal you would see:

^TEST    =    $lb(0)
^TEST(1)    =    "ATTR1þATTR2þATT3þATTR4.1ýATTR4.2"
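
To make the delimiters concrete, here is a small Python sketch (my illustration, not InterSystems code) of how an MV record's attributes and multivalues collapse into the single delimited string you see in the global node.  The attribute mark þ is character 254 and the value mark ý is character 253:

```python
# Conceptual sketch: packing/unpacking an MV record into the single
# delimited string stored in a global node such as ^TEST(1).
AM = chr(254)  # attribute mark, displays as 'þ'
VM = chr(253)  # value mark, displays as 'ý'

record = [
    "ATTR1",
    "ATTR2",
    "ATT3",
    ["ATTR4.1", "ATTR4.2"],   # attribute 4 holds two values
]

def pack(attrs):
    """Join attributes with AM; join multivalues within an attribute with VM."""
    return AM.join(VM.join(a) if isinstance(a, list) else a for a in attrs)

def unpack(s):
    """Split the stored string back into attributes and multivalues."""
    return [a.split(VM) if VM in a else a for a in s.split(AM)]

stored = pack(record)
print(stored)            # matches the string shown in ^TEST(1) above
print(unpack(stored))
```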

See the CREATE-FILE example below.

By default, when no sort is indicated, a SELECT will return the records in key order.   Note that ALL records within the database are stored the same way; it is the content and key structure that changes.   In this case, that is an MV record.

The speed of selects, or any query in Cache, is going to depend on several factors.  What is the nature of the SELECT command?  Are you using any BY-EXP options?   If this is a serious reduction in performance, I would suggest calling into the WRC or consulting with your Sales Engineer.   You can use the (Y and/or (Z options to get more information on how this command will be executed, which may help.
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=RVCL_commands#RVCL_commands_select

As to what CREATE-FILE does, there is also some nuance depending on what type of file you are creating.  A basic MV file created like this:  CREATE-FILE TEST
will result in a VOC (MD) entry that looks like this:

    TEST
0001 F
0002 ^TEST
0003 ^DICT.TEST
0004
0005
0006
0007
0008
0009
0010

Attributes 2 and 3 identify the globals in the Cache database that contain the data and dictionary respectively.   There is a lot more information that may be recorded here too, depending on the type of file, indexes, and so on.  I would recommend reviewing the documentation at

https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls

Under Technology on the left side pick 'Multivalue'

Hope all this helps.

Robert,

I'd be happy to respond.  First I want to confirm/clarify some things.  When you say that you have a Universe database "attached" to Cache, I am assuming you mean that you have migrated said database to Cache and the data is now stored natively in Cache.

So the short answer is that you never need to do resizing in Cache as was required in Multivalued databases like Universe and Unidata.  The reason for this is that the fundamental storage methodology is quite different.  

In Multivalue (MV) each table/file/global (to use a Cache term) is a hashed entity.  When created, it starts with a known number of groups that comprise the storage for that table.  Every record stored is "hashed" to determine which group it belongs in.   Another part of defining a hashed table is what MV calls the separation, which indicates the database block size.  As data is inserted into the table, groups tend to fill up; when that happens, a new block is added to the group so that new records can be stored.  In a hashed table a given record key will only ever hash to the same existing group.  Hashing to the group is very fast; however, once in the group, the lookup is a linear function that examines every record in every block of the group to find the record being retrieved. Inserts are always appended to the end of the group.   This is what tends to slow down an MV system: pick a poor initial group count or separation and performance will suffer.  This is the reason for the need to RESIZE.
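
As a toy model of that behavior (a Python sketch under my own assumptions; the real MV hashing algorithms differ), here is a hashed file where a fixed group count is chosen at creation time and every lookup must scan its group linearly:

```python
# Toy model of a hashed MV file: records hash to a fixed set of groups,
# and lookups scan the chosen group linearly.
class HashedFile:
    def __init__(self, modulo):
        self.modulo = modulo                        # fixed at creation time
        self.groups = [[] for _ in range(modulo)]

    def _group(self, key):
        # Any deterministic hash illustrates the point; real MV hashing differs.
        return sum(key.encode()) % self.modulo

    def insert(self, key, record):
        # Inserts append to the end of the group, growing it block by block.
        self.groups[self._group(key)].append((key, record))

    def lookup(self, key):
        # Hashing to the group is fast, but then every record in the
        # group is examined linearly until the key is found.
        for k, rec in self.groups[self._group(key)]:
            if k == key:
                return rec
        return None

f = HashedFile(modulo=3)        # a poorly chosen (too small) group count...
for i in range(1000):
    f.insert(f"ID{i}", f"record {i}")
# ...leaves hundreds of records per group, so each lookup scans a long chain:
print(f.lookup("ID999"))        # "record 999"
```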

Cache, at its base table storage level, behaves similarly to a key-value store, which is internally implemented as a high-performance binary structure.   No hashing is performed, and key lookups and inserts are consistently extremely fast.  I would also like to correct one thing you stated: you don't have to be concerned with the initial size of the Cache database as you indicated, though getting the initial size close to what is needed is always recommended.  The database can grow dynamically as records are added, up to the maximum size of a Cache database within any OS-level limitations.  The only performance concern would be disk-level fragmentation.
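
By way of contrast, here is a matching Python sketch (again my analogy, not Cache internals) of an ordered key-value structure: lookups stay fast regardless of initial sizing, and walking the keys naturally yields them in order, which is also why an unsorted SELECT returns records in key order:

```python
# Analogy for ordered key-value storage: keys kept sorted, lookups by
# binary search, and traversal yields keys in order (no hashing involved).
import bisect

class OrderedStore:
    def __init__(self):
        self.keys, self.vals = [], []

    def set(self, key, val):
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            self.vals[i] = val                 # update existing key
        else:
            self.keys.insert(i, key)           # insert keeps keys sorted
            self.vals.insert(i, val)

    def get(self, key):
        # Binary search: cost depends only on data volume, never on an
        # initial sizing decision made at creation time.
        i = bisect.bisect_left(self.keys, key)
        if i < len(self.keys) and self.keys[i] == key:
            return self.vals[i]
        return None

    def select(self):
        # Like an unsorted SELECT: records emerge in key order.
        return list(self.keys)

s = OrderedStore()
for k in ["3", "1", "2"]:
    s.set(k, "ATTR1" + chr(254) + "ATTR2")
print(s.select())   # ['1', '2', '3']
```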

I hope this helps.  If you have any further questions please reach out. 

Regards,

Rich Taylor

David,

The issue you will face is that the browser features Raj has mentioned are core to the ZEN framework.  These features have been deprecated for many years now but have not been removed because applications, such as ZEN, still rely on them.   However, there will come a time when the technical debt of maintaining long-deprecated features becomes too much.  It's not a matter of if; it is a question of when.   This may not happen for another 10 years, or the announcement could come tomorrow.  This ambiguity would certainly make me nervous.  Once one major browser vendor removes these features, the rest will quickly follow.

Further, attempting to modernize ZEN to work with current web technology would be a quite sizable task, and it could never hope to reach feature parity with more broadly accepted frameworks like Angular and the others listed above.

My advice would be to start looking at which framework best meets your needs (I'm an Angular fan) and begin an effort to convert your screens to that framework, converting any back-end code that needs to be accessed from the front end to REST services.  Do this over time, converting one module or even sub-module at a time.  Start with several smaller areas so you can gain knowledge and experience.  As you build out your REST services you will undoubtedly discover improvements that require refactoring previous work.  The smaller modules will make this easier to swallow.

Adam,

First a disclaimer.  I have not worked with Polybase in MS SQL Server. 

From a purely ODBC access perspective, my initial thought is that the driver you are using is wrong.  Try the InterSystems ODBC driver, which you can obtain from the Software Downloads area of the Worldwide Response Center web page (https://wrc.intersystems.com/wrc/coDistGen.csp).

Here is the documentation for creating an IRIS ODBC data source on Windows, just in case:  https://docs.intersystems.com/iris20241/csp/docbook/Doc.View.cls?KEY=BNETODBC_winodbc

Hope that helps.

Regards,

Rich Taylor

Chris,

In my experience these represent previous jobs that were created for this business host and aborted.  If those processes were unable to perform any cleanup, the leftover information provides the list for this display.  Try restarting this business host, which should clean up these "ghosts".   If this is happening continually, then you should look at the logs, both the Event Log and the IRIS logs, to trace down why jobs are ending in this manner.

Regards,

Rich

Without seeing the entire method it is hard to tell.  Accepting that you have proven the data is being retrieved, I would look for issues with how it is being passed on.   The GetMasterOPDataExecute method of a custom query only performs setup for the query: tasks such as initialization or defining a SQL cursor (if that is what you are using).

The GetMasterOPDataFetch method is what returns rows of data.  If this value is needed at that point, you should include it in the state carried in the qHandle parameter.

In case you are not familiar with this I am including a link to the documentation for custom class queries.

Defining Custom Class Queries
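
As a rough analogy for the Execute/Fetch split (a Python sketch with names of my own invention, not the actual class query API), the setup method only populates the shared state, and the fetch method pulls one row per call from that state, the way qHandle threads state between calls:

```python
# Rough analogy for a custom class query's Execute/Fetch split.
# 'qhandle' is a plain dict standing in for the qHandle argument:
# execute only initializes it; fetch returns one row per call.
def execute(qhandle, min_value):
    # Setup only: record whatever later fetch calls will need, including
    # parameters like min_value that arrived at Execute time.
    qhandle["min_value"] = min_value
    qhandle["rows"] = iter(range(10))   # stand-in for a cursor over data
    return True                          # analogous to returning a success status

def fetch(qhandle):
    # Returns (row, at_end).  Everything it needs must live in qhandle,
    # because no other state survives between calls.
    for v in qhandle["rows"]:
        if v >= qhandle["min_value"]:
            return v, False
    return None, True

qh = {}
execute(qh, min_value=7)
rows = []
while True:
    row, at_end = fetch(qh)
    if at_end:
        break
    rows.append(row)
print(rows)   # [7, 8, 9]
```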

You can try signing out of the account; then you would have to go through the sign-in process again.  Look for the Accounts icon in the lower left corner of VS Code.

 

Click it and find the namespace you are logging into in the list.  Click the arrow to the right of the name and sign out.

If you have a folder associated with a namespace that is still blocking you, that folder should appear in the Explorer window on the left side of VS Code.  Right-click the top level of the folder and find the "Remove Folder from Workspace" option; it should be in the middle of the menu.  That will remove it from your workspace and allow you to switch.

There are two thoughts I have on this example:

  • The use of %All is a very bad idea.  This gives maximum permissions to the connection.  Having just attended a seminar on penetration testing, I can say it is frightening how small gaps in security can be exploited to compromise a system.  It is better to identify the specific resources and roles needed for this action to complete and grant only those.
  • Authentication only takes place after the connection has "breached the castle walls".   The code is already running inside your environment, and with %All privileges.

Your utility methods are fine.  However, I would implement them using Delegated Authentication.  This feature of IRIS allows you to provide your own code to perform authentication, with the difference that it executes as part of the normal authentication process.  The connection has not yet gained access to the environment and will not gain access if authentication does not pass.  Any failure or attempted breach of the code causes an "Access Denied" message to be returned and the connection to be terminated.  It is also possible to combine this with other authentication methods, so it only needs to be enabled on the REST services that require it.  This would also remove the need to add any special permissions like %All to the Web Application definition.  Here is the documentation for this.

Delegated Authentication
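
To illustrate just the ordering difference, here is a conceptual Python sketch (my illustration, not IRIS code; real delegated authentication is implemented in ObjectScript and would never compare plain-text passwords like this):

```python
# Conceptual sketch: the custom check runs BEFORE any session exists,
# so a failure means the caller never gets inside the environment.
def delegated_authenticate(username, password, user_db):
    """Custom check plugged into the normal authentication step.
    (Real code would verify a salted hash, not compare plain text.)"""
    return user_db.get(username) == password

def connect(username, password, user_db):
    # Gate first: on failure the only thing returned is "Access Denied",
    # and no privileged session is ever created.
    if not delegated_authenticate(username, password, user_db):
        return "Access Denied"
    # Grant only the specific role needed, never a blanket %All.
    return {"user": username, "roles": ["AppRole"]}

users = {"alice": "s3cret"}
print(connect("alice", "s3cret", users))
print(connect("mallory", "guess", users))   # Access Denied
```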

Whether to use the Adapter or an instantiation of %Net.HttpRequest really is a matter of your needs.  The adapter seeks to make things "simpler" in some ways.  However, if you need greater control over the process and the response, using the HttpRequest directly is also a reasonable direction.  I have done both, depending on my needs.

Glad it is working for you in a manner that is maintainable.  That is what is important.

First, I have had success just leaving GetCredentials out, but you are correct that the documentation says you should have it.  Change your GetCredentials code to be just

return $$$GetCredentialsFailed

This will cause the process to revert to normal username and password prompting.   You only need to implement GetCredentials if you are pulling the username and password from somewhere else, such as taking the authentication header out of a REST call.

Link to the docs:  https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GAUTHN_delegated#GAUTHN_delegated_zauthgetcreds

Sorry if I am a bit late to the discussion.  Let me address your issues in order.

  • This is not necessarily a problem, but I noticed that the method SendPostRequest is executed again from start to finish for each reply, reinitializing all variables as if it were the first execution, except for the RetryCount variable, which indicates the current resend count.
    • This is expected behavior.  Messages are handled as a specific unit of work (they are atomic).  When a message errors and is going to be retried, it is basically put back on the queue to process.  When the retry occurs it is treated as a new message with the exception, as you noted, of the retry count.
  • Setting Retry = 0 prevents message resending, even with active Reply Code Actions (e.g., E=R).
    • Also expected.  With this configuration the reply action code may be set to retry, but the setting indicates that no (0) retries are to be attempted.
  • The timeouts are not respected. Instead, the message is resent every 30 seconds instead of 18 (I also tried other Response Timeout values like 20 or 35 seconds, but nothing changes). The Retry Interval is not respected either; the resending is attempted a number of times equal to FailureTimeout/30 instead of FailureTimeout/RetryInterval, as written in the documentation.
    • This seems to be the main issue.  The ResponseTimeout is a property of the adapter, and its default value is 30 seconds, which is what you are seeing.  It appears that you have a custom Business Operation that uses the EnsLib.HTTP.OutboundAdapter and that you assign this adapter in code rather than using the ADAPTER parameter.  When you initialize it, you should assign the reference to the Adapter property of the Business Operation; then, when you want to set the response timeout in code, you update that property on the adapter.  I would highly recommend using the ADAPTER parameter unless you need to change the adapter dynamically at runtime.  If it is done in code at runtime, you need to be sure that everything is initialized properly.
  • After the Failure Timeout expires, an error is generated even if one or more responses have been received. Despite the responses arriving, the BO does not detect them and generates the error: "ERROR 5922: Timed out waiting for response". The message continues to be resent until the Failure Timeout expires, and finally the "ERROR <Ens>ErrFailureTimeout: FailureTimeout of 90 seconds exceeded in lombardia.OMR.BO.Sender; status from last attempt was OK".
    • What is being reported here is the last known status of the operation; in this case, that is the error indicated.   There is a difference between a reply and a response.  The Visual Trace is indicating that the operation received a reply, and this reply was an error.   The 'response' is what is received from the web server; it never arrived, hence the error.  Business Operations will always receive a reply of some kind, and that reply may indicate an error, as in this case, or success.
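
To make the timing arithmetic concrete, here is a small simulation of the documented retry scheduling (my sketch, not Ensemble code): with a given FailureTimeout and RetryInterval, roughly FailureTimeout/RetryInterval attempts occur, but if the 30-second default interval is silently in effect you see FailureTimeout/30 attempts instead:

```python
# Sketch of retry scheduling: attempts occur every retry_interval seconds
# until failure_timeout elapses (simulated clock, no real waiting).
def count_attempts(failure_timeout, retry_interval):
    elapsed, attempts = 0, 0
    while elapsed < failure_timeout:
        attempts += 1              # send (or resend) the message
        elapsed += retry_interval  # wait before the next attempt
    return attempts

# Intended configuration: FailureTimeout=90, RetryInterval=18 -> 5 attempts
print(count_attempts(90, 18))
# If the 30-second default is in effect instead -> only 3 attempts
print(count_attempts(90, 30))
```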

If you continue to have issues, I would encourage you to reach out to the Worldwide Response Center (WRC) for support.  Additionally, you could contact your assigned Sales Engineer.

Regards,

Rich Taylor

Julio,

Well, the short answer is that you don't.  Solutions like Health Connect/IRIS for Health/Ensemble work autonomously.  There is no one sitting there to respond to the MFA Request.

The way that this is normally handled is by requesting from the MFA provider something that is referred to by several names.   For example, in GitLab you go to your profile and request an "Access Token"; in Gmail you would go to Security and get an "App Password".

The SFTP server administrator would likely have to provide this to you.

Usually these are set up specifically for your application, or even specific to the operation if you want.  What this provides is a token that you use as the password for authentication.

Hope this helps.

Regards,

Rich Taylor