Java Database Connectivity (JDBC) is an application programming interface (API) for the Java programming language that defines how a client may access a database.
Any news about the JDBC driver and Hibernate dialect being published to a public Java repository, like mvnrepository? Today I have to download the JDBC driver and Hibernate jar and add them as external resources in my Maven config file to get things to work.
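For reference, a minimal Maven sketch of what that would look like once the driver is on a public repository. The coordinates below are placeholders, not confirmed values; check mvnrepository for the actual groupId/artifactId/version:

<!-- Hypothetical coordinates; verify the real groupId/artifactId/version on mvnrepository -->
<dependency>
    <groupId>com.intersystems</groupId>
    <artifactId>intersystems-jdbc</artifactId>
    <version>3.x.x</version>
</dependency>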
I'm trying to use LOAD DATA to insert 11k (11,377) rows of data. LOAD BULK DATA is not available for the version of IRIS I am using.
After calling LOAD DATA it says only 5,500 rows have been inserted. The LOAD DATA docs say any error rows are skipped and a count of skipped rows can be found in %SQL_Diag.Result, but there are no results there. There are no errors in the xDBC error log either.
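As a sanity check, here is a minimal JDBC sketch (connection details are placeholders) that dumps whatever %SQL_Diag.Result currently holds, since that is where the docs say the skipped-row counts should land; it prints every column generically rather than assuming specific column names:

import java.sql.*;

public class DiagCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder host, port, namespace and credentials
        try (Connection conn = DriverManager.getConnection(
                "jdbc:IRIS://localhost:1972/USER", "_SYSTEM", "password");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT * FROM %SQL_Diag.Result")) {
            ResultSetMetaData md = rs.getMetaData();
            while (rs.next()) {
                // Print every column of every diagnostic row
                for (int i = 1; i <= md.getColumnCount(); i++) {
                    System.out.print(md.getColumnLabel(i) + "=" + rs.getString(i) + "  ");
                }
                System.out.println();
            }
        }
    }
}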
I work on deploying IRIS using the Kubernetes operator and Red Hat OpenShift. I encouraged another team working on a Java application to consider using IRIS as their database. My team deployed an IRIS cluster with two mirrored data pods for the other team, and the other team asked me for the connection information.
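For the Java team, the connection details typically boil down to a JDBC URL of the form jdbc:IRIS://<host>:<superserver-port>/<NAMESPACE> plus a username and password. A minimal sketch, where the host, port, namespace and credentials are placeholders (with mirroring, the host should be whatever virtual IP or Kubernetes service fronts the current primary, not an individual pod):

import java.sql.Connection;
import java.sql.DriverManager;

public class IrisConnect {
    public static void main(String[] args) throws Exception {
        // Placeholder values: point the host at the mirror VIP / service, not a single data pod
        String url = "jdbc:IRIS://iris-svc.example:1972/APPDATA";
        try (Connection conn = DriverManager.getConnection(url, "appuser", "secret")) {
            System.out.println("Connected: " + conn.getMetaData().getDatabaseProductName());
        }
    }
}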
To learn how to use Java with IRIS, I attempted to deploy two apps from Open Exchange:
I also have a Caché server with a "downloadedposts" table.
The two are connected, Caché to MySQL, via the SQL Gateway.
I want to keep the Caché table synced with the MySQL one (the MySQL "posts" table is the master copy), so Caché periodically queries the MySQL server and downloads data. So far so good: if a record appears or changes in the MySQL table, Caché downloads the changes.
The problem I'm encountering is that sometimes rows would be deleted from MySQL "posts" table.
I'm trying to access my Community installation via JDBC, but the connection is always rejected. The same code (Python code) works fine against the standalone version of InterSystems IRIS.
Is it planned that LOAD DATA will take into account several DATE/DATETIME formats, with, for example, a parameter indicating the format used in the source data?
Example:

LOAD DATA .../...
USING
{
  "from": {
    "file": {
      "dateformat": "DD/MM/YYYY"
    }
  }
}
Ran into an issue this morning that I am having a hard time tracking down. We have a Business Rule that sends HL7 ADT to a Business Process, which inserts the data into MS SQL Server using a custom Business Operation (SQL Outbound Adapter).
I am trying to replace one of our SQL Integration Services jobs with Ensemble, and I am running into an issue executing a query against a MS SQL database using JDBC drivers.
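To help isolate whether the problem is the query itself or the Ensemble plumbing, here is a bare-bones Java sketch that runs a query directly through the Microsoft JDBC driver; the server, database, credentials, table and SQL are all placeholders:

import java.sql.*;

public class MsSqlQueryTest {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string; adjust server, databaseName and TLS settings for your site
        String url = "jdbc:sqlserver://sqlhost:1433;databaseName=Staging;encrypt=true;trustServerCertificate=true";
        try (Connection conn = DriverManager.getConnection(url, "svc_user", "secret");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT TOP 10 * FROM dbo.SomeTable WHERE UpdatedOn > ?")) {
            ps.setTimestamp(1, Timestamp.valueOf("2024-01-01 00:00:00"));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));   // print first column of each row
                }
            }
        }
    }
}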
I'm trying to set up the JDBC Gateway Server so customers can connect to IRIS remotely using JDBC rather than ODBC. But I'm facing a problem connecting: our systems department tells me IRIS is using the loopback address (127.0.0.1), which means remote systems cannot connect to port 53773 (the default port for the gateway).
So I would like to change this 127.0.0.1 host to the hostname, but I cannot see where to do it:
Is it possible to authenticate an xDBC (ODBC/JDBC) connection to InterSystems IRIS via a (3rd-party) OAuth server?
For REST APIs this is possible, but could the same be achieved for xDBC connections with OAuth?
Out of the box the ODBC/JDBC drivers don't seem to have this option, but maybe some custom code could enable it? Perhaps via Delegated Authentication and some customization of the OAuth classes, or some other way?
Has anyone done this already and can share how it was implemented, or does anyone have suggestions or guidelines?
I was trying to create a query that can be exposed as a stored procedure (a function, actually) that would return a result set with a dynamic number of columns.
Unfortunately, it seems that unless I specify the ROWSPEC annotation on the Query method, I won't get any columns exposed. I was hoping to implement the QueryNameGetInfo method and specify the names and number of columns I would be returning dynamically, but it seems that the GetInfo information is simply ignored.
I'm trying to authenticate a user (a Health Share clinician) from a Java application.
I'm already connected to Caché and able to run SQL commands.
My question is: how can I authenticate a user using only SQL? In fact, what I want is to verify whether the user exists in the database and whether the given password is the same one used in Health Share.
There is a password column in the Security.Users table, but I'm not able to see its content, and even so, I don't know which hash function to use to compare against it.
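Since the stored password hashes aren't readable or comparable from SQL, one common workaround, sketched here as an assumption rather than an official method, is to let Caché itself do the check by briefly opening a JDBC connection with the user's own credentials: if the connection succeeds, the username/password pair is valid. The URL and namespace below are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class CredentialCheck {
    // Returns true if Caché/Health Share accepts the username/password pair.
    static boolean authenticate(String user, String password) {
        // Placeholder URL; point it at the Health Share instance and namespace
        String url = "jdbc:Cache://cachehost:1972/HSACCESS";
        try (Connection ignored = DriverManager.getConnection(url, user, password)) {
            return true;
        } catch (SQLException e) {
            return false;   // bad credentials, disabled account, or a connectivity problem
        }
    }
}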
I have a general query regarding developers' experience extracting data from Caché databases and the most efficient way to do so. I work with a number of clients who have applications with Caché databases and need the data taken off the host system and onto data warehouse platforms for research and analysis. Often they require the data in its source state, which means the extracts are often simply a full table scan of each database table without any aggregation or manipulation.
On one of my team's systems, we utilize a business operation with the EnsLib.SQL.OutboundAdapter to make SQL queries to another IRIS system using JDBC. To authenticate the connection, we utilize a user account on the target system.
We have a new requirement being pushed down by our Data Security team to no longer use local SQL accounts to access our databases. So they asked me to create a Service Account on the domain for our connections to each database.
I tried just changing my JDBC connection to use this Service Account and password, but I am not having any luck connecting to the database.
" Connection failed. Login failed for user 'osumc\CPD.Intr.Service'. ClientConnectionId:ade97239-c1c8-4ed1-8230-d274edb2e731 "
Hi,
We retrieve a large amount of data from an external database (SQL Server, about 1 million rows over JDBC). However, we have a processing time issue: this process takes more than 30 minutes, whereas the same query run from a "classic" SQL Server Management Studio session takes less than a minute.
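One thing worth checking, sketched below under the assumption that the rows are read through a plain JDBC Statement: the driver's fetch size. With small-batch fetching, a million-row result set can spend most of those 30 minutes on network round trips, and a larger fetch size often helps, though how much depends on the driver. All connection details, the table name and the batch size are placeholders:

import java.sql.*;

public class BulkRead {
    public static void main(String[] args) throws Exception {
        // Placeholder connection string and query
        String url = "jdbc:sqlserver://sqlhost:1433;databaseName=Source;encrypt=true;trustServerCertificate=true";
        try (Connection conn = DriverManager.getConnection(url, "reader", "secret");
             Statement st = conn.createStatement()) {
            st.setFetchSize(5000);               // ask the driver to pull rows in large batches
            long count = 0;
            try (ResultSet rs = st.executeQuery("SELECT * FROM dbo.BigTable")) {
                while (rs.next()) {
                    count++;                     // replace with real row processing
                }
            }
            System.out.println("Rows read: " + count);
        }
    }
}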
I am currently experiencing frustration trying to authenticate an Active Directory account through JDBC as the hospital system moves from on-prem SQL Server to Azure SQL Server with Microsoft Entra authentication.
Microsoft cannot give me a straight answer about what is required, from a JDBC standpoint, to authenticate from a Linux environment.
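For what it's worth, a hedged sketch of what the Microsoft JDBC driver documents for Entra ID: the authentication connection property selects the Azure AD flow (ActiveDirectoryPassword, ActiveDirectoryServicePrincipal, and so on), and the MSAL4J library has to be on the classpath alongside mssql-jdbc. The server, database and credentials below are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class EntraConnect {
    public static void main(String[] args) throws Exception {
        // Placeholder values; requires mssql-jdbc plus the msal4j dependency on the classpath
        String url = "jdbc:sqlserver://myserver.database.windows.net:1433;"
                   + "databaseName=MyDb;encrypt=true;"
                   + "authentication=ActiveDirectoryServicePrincipal";
        Properties props = new Properties();
        props.setProperty("user", "<application-client-id>");   // app registration client ID
        props.setProperty("password", "<client-secret>");
        try (Connection conn = DriverManager.getConnection(url, props)) {
            System.out.println("Connected, current schema: " + conn.getSchema());
        }
    }
}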
I'm looking for an efficient way in DBeaver to filter out system tables (e.g. those belonging to a schema starting with "%").
With a user that has the %All role, DBeaver shows a long list of system schemas, which forces us to scroll down the list before reaching the user tables.
Hi, I would like to read a row out of an external SQL table and reference the returned results directly from my Caché class. I've set up a linked table and a SQL gateway connection, but I'm not really sure how to use them in COS. Does anyone have examples, or some assistance? Thank you.
We are trying to write a High Availability shutdown/start script so that, if we need to fail over to one of our other servers, we can be back up within minutes. Is there a way to configure the startup procedure to automatically stop/start the JDBC server when shutting down or starting up Caché? Is there an auto setting we can change?
Is it possible to access (read, write) an external Oracle database via the Caché SQL Gateway using JDBC in Caché ObjectScript? I am currently using ODBC successfully but wanted to see if JDBC is an option too. If it is possible, does anyone have a basic ObjectScript example I can review?
When I use the JDBC driver to query column info, the REMARKS field always shows the same value as the COLUMN_NAME field.
When I use the SQL "select * from INFORMATION_SCHEMA.columns a where a.table_name='some table name'" to query column info, there is a DESCRIPTION field whose value is the comment for the current field.
So is there any way to return the description in the JDBC REMARKS field? And is there any way to update the description or remarks field? Or is there some other way, that I don't know of, to manage column comment info?
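For comparison, a small sketch showing both paths mentioned above: the standard DatabaseMetaData.getColumns() call, whose REMARKS column is what generic tools display, and a direct query of INFORMATION_SCHEMA.COLUMNS for the DESCRIPTION field. The connection details, schema and table name are placeholders:

import java.sql.*;

public class ColumnComments {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:IRIS://localhost:1972/USER";   // placeholder URL and credentials
        try (Connection conn = DriverManager.getConnection(url, "_SYSTEM", "password")) {
            // Path 1: standard JDBC metadata; REMARKS is what most tools read as the comment
            try (ResultSet rs = conn.getMetaData().getColumns(null, "SQLUser", "MyTable", "%")) {
                while (rs.next()) {
                    System.out.println(rs.getString("COLUMN_NAME") + " remarks=" + rs.getString("REMARKS"));
                }
            }
            // Path 2: the DESCRIPTION column exposed by INFORMATION_SCHEMA.COLUMNS
            String sql = "SELECT COLUMN_NAME, DESCRIPTION FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, "MyTable");
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString(1) + " description=" + rs.getString(2));
                    }
                }
            }
        }
    }
}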
I am having issues trying to obtain a JDBC driver that is backwards compatible with Java 6 for a solution that will call my IRIS instance. I have already logged a WRC ticket, and it looks unlikely that any development will be done to create a driver for such legacy tech.
The Java 6 app is end of life, but it won't be replaced until after my project goes live, hence why I ask the question.
I am trying to pull data from a Caché database and push it into Elasticsearch using Logstash. In the configuration file I am specifying the following, but it is throwing the error "No Suitable Driver Found for jdbc:Cache://ipaddress:port/namespace". Could anyone please help resolve this? I tried both JDK17 and JDK18 but had no luck.
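"No suitable driver" from DriverManager usually means the driver jar isn't on the classpath Logstash is using, or the driver class never got registered for that URL prefix. A quick standalone Java check, under the assumption that the Caché driver class is com.intersys.jdbc.CacheDriver and lives in the JDBC jar shipped under the instance's dev/java/lib directory (verify both against your installation); host, port, namespace and credentials are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;

public class CacheDriverCheck {
    public static void main(String[] args) throws Exception {
        // Assumed driver class name; throws ClassNotFoundException if the jar is not on the classpath
        Class.forName("com.intersys.jdbc.CacheDriver");
        // Placeholder connection details, matching the jdbc:Cache:// URL from the error message
        String url = "jdbc:Cache://ipaddress:1972/NAMESPACE";
        try (Connection conn = DriverManager.getConnection(url, "_SYSTEM", "password")) {
            System.out.println("Driver loads and connects: " + conn.getMetaData().getDriverName());
        }
    }
}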