A few things you could try:

  1. Restore any changes you made to the registry using a backup
  2. Use a system restore point to restore your system to a point before the Caché installation
  3. Use a newer build of Caché, e.g. Caché 2018.1.3.414, if you are able to.

Alternatively, you might want to contact InterSystems WRC directly, or try an installation on a clean system to see whether you get the same error. We have a number of small teams in our organisation that use Caché. Our application support team wanted to simplify Caché upgrades, so they designed a simple batch script and published it through SCCM for Windows 10 clients. The script was based on the 'unattended installation' commands described in the installation guide and removed previous Caché versions before installing the desired one. You also might not need the full kit for your needs, particularly if you are connecting to a remote Caché instance from the Windows 10 client and using Atelier or VS Code for development.

Try this approach, based on npm link.

File Structure
/projects/my-scss
/projects/my-existingproject

Create a new project
cd /projects
mkdir my-scss
cd my-scss

Initialize the project and answer the prompts
npm init

Drop an SCSS file in there

// _base.scss
$font-stack:    Helvetica, sans-serif;
$primary-color: #333;
body {
  font: 100% $font-stack;
  color: $primary-color;
}

Navigate to your existing project
cd ../my-existingproject
npm link ../my-scss

Verify the my-scss folder exists in the node_modules of your existing project.

Now suppose you want to take all the *.scss files in your my-scss project and put them in the /wwwroot/scss folder of my-existingproject. The Gulpfile.js within my-existingproject would look something like this:

const { src, dest } = require('gulp');
const merge = require('merge-stream');

const deps = {
    "my-scss": {
        "**/*.scss": ""
    }
};

function scripts() {

    const streams = [];

    // For each dependency, copy the matching files from node_modules
    // into wwwroot/scss/<package>/<target sub-folder>.
    for (const prop in deps) {
        console.log("Prepping Scripts for: " + prop);
        for (const itemProp in deps[prop]) {
            streams.push(src("node_modules/" + prop + "/" + itemProp)
                .pipe(dest("wwwroot/scss/" + prop + "/" + deps[prop][itemProp])));
        }
    }

    return merge(streams);
}

exports.scripts = scripts;
exports.default = scripts;

Then, provided you have installed gulp and all the required gulp modules, run 'gulp' from the project directory's command line. This will run the default task.

Adding the scope 'offline_access' to the 'password' grant_type generates a refresh_token in the JSON response. 

endpoint: https://{{SERVER}}:{{SSLPORT}}/oauth2/token

{
    "grant_type": "password",
    "username": "test1",
    "password": "P@ssw0rd",
    "client_id": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
    "client_secret": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
    "response_type": "token",
    "state": "myapp",
    "scope": "myapp openid profile offline_access"
}

Response JSON

{
    "access_token": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
    "token_type": "bearer",
    "expires_in": 180,
    "refresh_token": "7NJ7tQbFBLFcUftZr9j4n6o99Og03QeM6rx51L05eIU",
    "scope": "myapp offline_access openid profile",
    "account_enabled": 1,
    "account_never_expires": 1,
    "account_password_never_expires": 1,
    "change_password": 0,
    "comment": "Test User",
    "full_name": "test1",
    "roles": "%DB_CODE,createModify,publish"
}

So, if you detect that the access_token is no longer valid, you can use the refresh_token to generate a new one without prompting the user for input. It seems a good idea to make the refresh_token lifetime significantly longer than the access_token lifetime. I will need to do more experimentation to find the ideal intervals and review the impact on license usage.
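The refresh call itself isn't shown above. As a sketch, the standard OAuth2 refresh_token grant (RFC 6749) looks like this; the function and variable names are mine, and the token endpoint is the one from the example above:

```javascript
// Sketch: build the form body for an OAuth2 refresh_token grant.
// The client credentials are placeholders, as in the example above.
function buildRefreshRequest(refreshToken, clientId, clientSecret) {
  return new URLSearchParams({
    grant_type: "refresh_token",
    refresh_token: refreshToken,
    client_id: clientId,
    client_secret: clientSecret
  }).toString();
}

// POST this body to https://{{SERVER}}:{{SSLPORT}}/oauth2/token with
// Content-Type: application/x-www-form-urlencoded.
const body = buildRefreshRequest(
  "7NJ7tQbFBLFcUftZr9j4n6o99Og03QeM6rx51L05eIU", "XXXX", "XXXX");
console.log(body.includes("grant_type=refresh_token")); // true
```

A successful response should carry a fresh access_token (and, typically, a new refresh_token) in the same JSON shape as above.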

This is a common problem. Please bear in mind your system config is specific to you so what is described below may not be the answer.

Initially I used a 3rd-party app called CNTLM and pointed Eclipse to the CNTLM process port, which in turn points to the corporate proxy. I would no longer recommend this option, as it doesn't account for passwords that expire regularly.

I later discovered that Basic Proxy Authentication was disabled by default as part of the JRE 8u111 update, under the heading 'Disable Basic authentication for HTTPS tunneling'. As the document describes, you can override this behavior either 1) globally on your machine, if you have the necessary permissions, or 2) locally, if Eclipse is installed on a file system to which you have write access.

Try changing your eclipse.ini file to include this after -vmargs:

-Djdk.http.auth.tunneling.disabledSchemes="" 

Leave your Network Connections set to 'Native'; only 'HTTP Dynamic' should be ticked.

I would not recommend updating from 1.0 to 1.3, because there have been so many changes since then and projects will need to be migrated. It would be safer to download a fresh install, follow the instructions to install the plugin, and then test your 'Check for Updates' button.

Using a fresh workspace is also recommended.

On a different note, you can pass proxy login details in the target URL if you encode them properly. I've used this trick for Node Package Manager (npm) configuration in the .npmrc file:

proxy=http://DOMAIN%5Cusername:password@myproxyserver.net:8080/
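The %5C in that URL is just the URL-encoded backslash. If you need to derive the encoded form yourself, encodeURIComponent does the work; a small sketch, with a made-up username and password:

```javascript
// Encode DOMAIN\username and a password for use inside a proxy URL.
// Reserved characters such as '\', '@' and ':' must be percent-encoded,
// or they will be mistaken for URL delimiters.
const user = encodeURIComponent("DOMAIN\\username"); // "DOMAIN%5Cusername"
const pass = encodeURIComponent("p@ss:word");        // "p%40ss%3Aword"
const proxyUrl = "http://" + user + ":" + pass + "@myproxyserver.net:8080/";
console.log(proxyUrl);
```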

The other common issue you might encounter is a 'PKIX path building failed' error. This relates to HTTPS connections from the JRE running Eclipse and a CA certificate missing from your cacerts certificate store.

Consider logging the issue with WRC if you are looking for a more bespoke solution.

If you have never used Wireshark before, or don't have a deep understanding of the TCP/IP suite of protocols, then Wireshark might be overkill for your needs. I have only used it in a lab or development environment. You also have to consider the disk storage requirements and governance issues around full-packet capture, e.g. HIPAA, PCI-DSS. You might want to check Eduard Lebedyuk's article on Debugging Web for additional tools and tips.

For troubleshooting these issues, we enable a monitoring global that records every character received within the TCP stream:

START
 do USE read *CHAR:2 else  do WAIT
 do CHARMONITOR
 goto START
CHARMONITOR
 if ^AC=1,CHAR'=-1 set X=^AC1,^AC1(X+1)="*"_$char(CHAR)_"*"_CHAR_"*",^AC1=X+1 ; monitor character by character
 set ^CALLED("CHARMON")=$ZD(+$H,4)_"*"_$ZT($P($H,",",2))_"*"_CHAR
 quit

You can attempt a re-connect using $ZTRAP or try-catch. In this example, we make a maximum of 10 re-connect attempts:

 set $ZT="ERROR"
ERROR
 if ($ZE["<READ>") {
     set ERRORCOUNT=ERRORCOUNT+1
     if (ERRORCOUNT<11) {
         set ^TCPLOG("ERROR",$ZD(+$H,4),$ZT($P($H,",",2)),ERRORCOUNT)=$ZE
         close "|TCP|"_PORT
         hang 30
         goto OPEN
     }
     else { do ^%ET }
 }
 else { do ^%ET }
 quit

You might also look at extending the TCP timeout value to see if that makes a difference to the volume of errors. Check out this Ensemble example of a SOAP web service.

Your code is quite difficult to read without proper styling; I recommend the 'Special Container'. We created a DLL from a C# class library generated by the .NET Object Binding Wizard and placed the DLL in our bin/ folder.

Assuming lcontainerImco.Connection is a CacheConnection object from 'InterSystems.Data.CacheClient' and you have imported 'InterSystems.Data.CacheTypes', the following should work:

CacheConnection CacheConnect = new CacheConnection();
CacheConnect.ConnectionString = server + "Port = " + connectionPort + "; " + "Namespace = USER; "
    + "Password = " + password + "; " + "User ID = " + username + "; " + "pooling = false;";
CacheConnect.Open();

There is also a little known issue about certain special characters in the password causing parsing problems, which is identifiable from the following stack trace:

at InterSystems.Data.CacheClient.CacheADOConnection.createConnectionKeyString()
at InterSystems.Data.CacheClient.CacheADOConnection.ParseConnectionStringInternal(String connectionString)
at InterSystems.Data.CacheClient.CacheADOConnection.set_ConnectionString(String value)

Bad characters in the password include pairing and delimiting characters, e.g. the equals sign, single quotes, backslashes, the pound sign and the not sign. This was reported to WRC two years ago but never resolved, so we created a workaround.
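Our workaround isn't shown here, but as a rough illustration, a pre-flight check for the characters we found problematic might look like this. The character list reflects our own testing, not official documentation, and I'm taking 'pound sign' and 'not sign' as '£' and '¬':

```javascript
// Characters we found to break Caché ADO connection-string parsing
// (from our testing; '£' and '¬' are my reading of the report above).
const BAD_CHARS = ["=", "'", "\\", "£", "¬"];

// Returns true if the password can safely be concatenated into a
// connection string, false if it contains a problematic character.
function isSafeForConnectionString(password) {
  return !BAD_CHARS.some(function (ch) {
    return password.indexOf(ch) !== -1;
  });
}

console.log(isSafeForConnectionString("P@ssw0rd")); // true
console.log(isSafeForConnectionString("pa=ss"));    // false
```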

Refer to the documentation relevant to your Caché version for all the valid connection string parameters (I have linked the latest release) and be sure to wrap your connection in a try-catch structure.  

I am really intrigued by what this device is and what kind of data you are looking to capture... Is your host device a member of a specific multicast group? In Unix, 'netstat -g' can show the multicast forwarding cache; in Windows, 'route print' can be useful, but might only apply if IP Routing is enabled on your interface in the 'ipconfig /all' output.

In a lab environment, when you ping a multicast address you would expect devices registered with that multicast address to reply with a single unicast response. Routers can also use specific multicast addresses for routing protocols like EIGRP and OSPF.

I have used https://www.codeproject.com/Articles/63201/TelnetSocket in a .NET web application to develop a proof-of-concept. The code makes use of the System.Net.Sockets.TcpClient .NET class to establish and manage a telnet connection. In my case, I wanted something simple to allow an authenticated user to execute a non-interactive shell program over telnet to change their password, using an internally hosted web application (not accessible to the public Internet). This had already been done in PHP, but I wanted something a bit different that integrated well with our existing .NET web application. The concept worked, but I was never completely happy with the design approach.

Cache Web Terminal does provide an interactive shell with intellisense but limitations mean you can't use it like you would use something like PuTTY and the current issue log presents quite a few challenges.

For reference, you can get a list of error codes from General Error Codes and SQL Error Codes. From the error description, it seems to be complaining about the first parameter.

If the 'gc' object is a %SQL.Statement then the %Prepare method only takes one parameter. What happens when you pass in pQuery to the %Prepare method?

Have you tried executing the SQL in the SQL Shell or System Management Portal?

do $System.SQL.Shell()

Thanks Nikita Savchenko for the mention. Excellent detail in the article; I really appreciate you putting it out to the community for comments. Interesting note about libssh2.dll/.so. I would love to see an early proof-of-concept of libssh2 working with WebTerminal. I would also like the option to use Telnet or SSH rather than being forced into one or the other.

To summarise, the answer is twofold:

1) Use the OnCreateResultSet event of the tablePane to get your filter value and pass it into your custom SQL. I have appended any filter values onto the end of my WHERE clause in each SQL fragment.

2) Use the 'onrefresh' event of the tablePane to call JavaScript to hide other ZEN components when the table is updated.
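For the second point, the handler can be as small as this sketch. The component ids are hypothetical, and it assumes ZEN's client-side zen() component lookup and the component's setHidden() method:

```javascript
// Called from the tablePane's onrefresh event, e.g.
//   onrefresh="hideWhileFiltered();"
// zen(id) is ZEN's client-side component lookup, and setHidden()
// toggles a component's visibility. The ids here are hypothetical.
function hideWhileFiltered() {
  ["detailGroup", "exportButton"].forEach(function (id) {
    var comp = zen(id);
    if (comp) comp.setHidden(true);
  });
}
```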

Thanks Wolf Koelling. I should have made it more explicit that %vars are shared across NAMESPACES, not processes. When a user logs into our system, their process remains active for as long as they are logged in. During their login session they can call any number of COS routines to perform a wide range of different functions. Variables not explicitly killed off still reside in memory, and this was the problem I had to solve.

I had to be sure that

a) The %ZLOG COS routine would not crash out because of missing variables the routine expects.

b) All the calls or entry points into %ZLOG would still work as normal otherwise a system-wide crash would occur across all our databases.

c) I could identify whether my background process had called %ZLOG and, if so, use the PPG created before the call to %ZLOG. No other code in our system uses PPGs, whereas there is a plethora of globals, %variables and non-percent variables.

I could have used a scratch or temporary global such as ^CacheTempUser.DataExtract($JOB). This type of global is killed off when the instance is brought down for our daily backup job. A PPG was very easy to implement.

I have placed a strikethrough on the erroneous statement in my answer.