Announcement
Evgeny Shvarov · Nov 15, 2021

Technology Bonuses for InterSystems Security Contest 2021

Hi Developers! Here are the technology bonuses for the Security Contest 2021 that will give you extra points in the voting:

- Basic Authentication usage - 2
- Bearer/JWT Authentication usage - 3
- OAuth 2.0 usage - 5
- Authorization components usage - 2
- Auditing usage - 2
- Data Encryption usage - 2
- Docker container usage - 2
- ZPM Package deployment - 2
- Online Demo - 2
- Code Quality pass - 1
- Article on Developer Community - 2
- Video on YouTube - 3

See the details below.

Basic Authentication usage - 2 points
Implement basic authentication (user, password) in your application with InterSystems IRIS as a backend. The backend can be, e.g., a REST API web application.

Bearer/JWT Authentication usage - 3 points
Implement bearer or token authentication in your application with InterSystems IRIS as a backend. The backend can be a REST API web application.

OAuth 2.0 usage - 5 points
Implement OAuth 2.0 authentication in your application as a client and collect 5 bonus points! We expect sign-in to your app via, e.g., Google or GitHub users. Learn more in the Documentation.

Authorization components usage - 2 points
InterSystems authorization is built on the concepts of Users, Roles, and Resources (learn more in the documentation). Implement it in your app to collect 2 more expert points.

Auditing usage - 2 points
InterSystems IRIS provides an auditing capability - logging of system or user-defined events (see the Documentation). Your application can earn 2 more bonus points if it contains code that uses the Auditing feature of InterSystems IRIS. A minimal sketch of logging a user-defined audit event appears at the end of this post.

Data Encryption usage - 2 points
InterSystems IRIS provides the option of database data encryption. Use it in your application programmatically and collect 2 extra bonus points. Learn more in the documentation.

Docker container usage - 2 points
The application gets a 'Docker container' bonus if it uses InterSystems IRIS running in a Docker container. Here is the simplest template to start from.

ZPM Package deployment - 2 points
You can collect the bonus if you build and publish a ZPM (ObjectScript Package Manager) package for your Full-Stack application so it can be deployed with the command zpm "install your-multi-model-solution" on IRIS with the ZPM client installed. ZPM client. Documentation.

Online Demo of your project - 2 points
Collect 2 more bonus points if you provision your project to the cloud as an online demo. You can do it on your own or you can use this template - here is an Example. Here is the video on how to use it.

Code quality pass with zero bugs - 1 point
Include the code quality GitHub action for static code analysis and make it show 0 bugs for ObjectScript.

Article on Developer Community - 2 points
Post an article on Developer Community that describes the features of your project. Collect 2 points for each article. Translations into different languages count too.

Video on YouTube - 3 points
Make a YouTube video that demonstrates your product in action and collect 3 bonus points for each. Example.

The list of bonuses is subject to change. Stay tuned!

Good luck in the competition!

An idea for a new bonus criterion: 50% of the code in the GitHub repository written in ObjectScript.
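As promised above, here is a minimal sketch of the auditing bonus in code, assuming a user-defined audit event has already been created and enabled under System Administration > Security > Auditing (the event source/type/name "MyApp"/"Contest"/"Login" are hypothetical names, not part of the contest materials):

```objectscript
// Log a user-defined audit event from application code.
// The ("MyApp","Contest","Login") event must already exist and be enabled
// in the audit configuration, otherwise nothing is recorded.
Set ok = $SYSTEM.Security.Audit("MyApp","Contest","Login","user="_$USERNAME,"Demo application login")
If 'ok Write "Audit event was not recorded (is the event enabled?)",!
```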
Announcement
Anastasia Dyubaylo · Jun 23, 2022

[Webinar] InterSystems IRIS Tech Talk: Kafka

Hey Community,

In this webinar, we'll discuss how you can now easily integrate Apache Kafka with the InterSystems IRIS data platform, both via interoperability productions and programmatically via API calls, acting as both a producer and a consumer of Apache Kafka messages:

⏯ InterSystems IRIS Tech Talk: Kafka

Presenters:
🗣 @Kinshuk.Rakshith, InterSystems Sales Engineer
🗣 @Robert.Kuszewski, InterSystems Product Manager

Enjoy watching the new video on InterSystems Developers YouTube and stay tuned!
Article
Timothy Leavitt · Jun 28, 2022

Unique indices and null values in InterSystems IRIS

An interesting pattern around unique indices came up recently (in internal discussion re: isc.rest) and I'd like to highlight it for the community.

As a motivating use case: suppose you have a class representing a tree, where each node also has a name, and we want nodes to be unique by name and parent node. We want each root node to have a unique name too. A natural implementation would be:

```objectscript
Class DC.Demo.Node Extends %Persistent
{
Property Parent As DC.Demo.Node;

Property Name As %String [ Required ];

Index ParentAndName On (Parent, Name) [ Unique ];

Storage Default
{
<Data name="NodeDefaultData">
<Value name="1">
<Value>%%CLASSNAME</Value>
</Value>
<Value name="2">
<Value>Parent</Value>
</Value>
<Value name="3">
<Value>Name</Value>
</Value>
</Data>
<DataLocation>^DC.Demo.NodeD</DataLocation>
<DefaultData>NodeDefaultData</DefaultData>
<IdLocation>^DC.Demo.NodeD</IdLocation>
<IndexLocation>^DC.Demo.NodeI</IndexLocation>
<StreamLocation>^DC.Demo.NodeS</StreamLocation>
<Type>%Storage.Persistent</Type>
}
}
```

And there we go! But there's a catch: as it stands, this implementation allows multiple root nodes to have the same name. Why? Because Parent is not (and shouldn't be) a required property, and IRIS does not treat null as a distinct value in unique indices. Some databases (e.g., SQL Server) do, but the SQL standard says they're wrong [citation needed; I saw this on StackOverflow somewhere but that doesn't really count - see also @Daniel.Pasco's comment below on this and the distinction between indices and constraints].

The way to get around this is to define a calculated property that's set to a non-null value if the referenced property is null, then put the unique index on that property. For example:

```objectscript
Property Parent As DC.Demo.Node;

Property Name As %String [ Required ];

Property ParentOrNUL As %String [ Calculated, Required, SqlComputeCode = {Set {*} = $Case({Parent},"":$c(0),:{Parent})}, SqlComputed ];

Index ParentAndName On (ParentOrNUL, Name) [ Unique ];
```

This also allows you to pass $c(0) to ParentAndNameOpen/Delete/Exists to identify a root node uniquely by parent (there isn't one) and name.

As a motivating example where this behavior is very helpful, see https://github.com/intersystems/isc-rest/blob/main/cls/_pkg/isc/rest/resourceMap.cls. Many rows can have the same set of values for two fields (DispatchOrResourceClass and ResourceName), but we want at most one of them to be treated as the "default", and a unique index works perfectly to enforce this if we say the "default" flag can be set to either 1 or null and then put a unique index on it and the two other fields.
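To make the generated index methods concrete, here is a minimal usage sketch (assuming the revised DC.Demo.Node class above is compiled; the node name "Top" is hypothetical):

```objectscript
// Look up a root node (no parent) by name via the unique index methods.
// $c(0) stands in for the null Parent, matching the ParentOrNUL computation.
If ##class(DC.Demo.Node).ParentAndNameExists($c(0), "Top") {
    Set node = ##class(DC.Demo.Node).ParentAndNameOpen($c(0), "Top")
    Write "Found root node with ID ", node.%Id(), !
}
```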
Maybe I'm missing something, but, beyond how nulls are treated, if you want Parent to be unique within this definition you must define a unique index on Parent (alone). The index you have defined only guarantees that the combination (Parent, Name) will be unique. Even if you declare the property as required, it still wouldn't solve the uniqueness requirement.

parameter INDEXNULLMARKER;
Override this parameter value to specify what value should be used as a null marker when a property of the type is used in a subscript of an index map. The default null marker used is -1E14, if none is specified for the datatype. However, the %Library.PosixTime and %Library.BigInt datatypes could have values that collate before -1E14, which means null values would not sort before all non-NULL values.

For beauty, I would also use the value of this parameter, for example:

```objectscript
Class dc.test Extends %Persistent
{
Property idp As dc.test;

Property idpC As %Integer(INDEXNULLMARKER = "$c(0)") [ Calculated, Private, Required, SqlComputeCode = {s {*}=$s({idp}="":$c(0),1:{idp})}, SqlComputed ];

Property Name As %String [ Required ];

Index iUnq On (idpC, Name) [ Unique ];

ClassMethod Test()
{
  d ..%KillExtent()
  &sql(insert into dc.test(Name,idp)values('n1',1)) w SQLCODE,!     ;0
  &sql(insert into dc.test(Name,idp)values('n2',1)) w SQLCODE,!     ;0
  &sql(insert into dc.test(Name,idp)values('n2',1)) w SQLCODE,!!    ;-119
  &sql(insert into dc.test(Name,idp)values('n1',null)) w SQLCODE,!  ;0
  &sql(insert into dc.test(Name,idp)values('n2',null)) w SQLCODE,!  ;0
  &sql(insert into dc.test(Name,idp)values('n2',null)) w SQLCODE,!! ;-119
  zw ^dc.testD,^dc.testI
}
}
```

Output:

```
USER>d ##class(dc.test).Test()
0
0
-119

0
0
-119

^dc.testD=4
^dc.testD(1)=$lb("",1,"n1")
^dc.testD(2)=$lb("",1,"n2")
^dc.testD(3)=$lb("","","n1")
^dc.testD(4)=$lb("","","n2")
^dc.testI("iUnq",1," N1",1)=""
^dc.testI("iUnq",1," N2",2)=""
^dc.testI("iUnq",$c(0)," N1",3)=""
^dc.testI("iUnq",$c(0)," N2",4)=""
```

Or:

```objectscript
Property idpC As %Integer [ Calculated, Private, Required, SqlComputeCode = {s {*}=$s({idp}="":$$$NULLSubscriptMarker,1:{idp})}, SqlComputed ];
```

Output:

```
USER>d ##class(dc.test).Test()
0
0
-119

0
0
-119

^dc.testD=4
^dc.testD(1)=$lb("",1,"n1")
^dc.testD(2)=$lb("",1,"n2")
^dc.testD(3)=$lb("","","n1")
^dc.testD(4)=$lb("","","n2")
^dc.testI("iUnq",-100000000000000," N1",3)=""
^dc.testI("iUnq",-100000000000000," N2",4)=""
^dc.testI("iUnq",1," N1",1)=""
^dc.testI("iUnq",1," N2",2)=""
```

What about defining a super parent, which is the only node with a NULL parent? In that case every other root node has this node as a parent and filtering is easy (WHERE Parent IS NULL filters only the super parent). And there's no need for an additional calculated property in that case.

Thank you for pointing this out! I saw this in the docs but believe it wouldn't work for object-valued properties. There would still need to be some enforcement of the super parent being the only node with a NULL parent (and the point here is that the unique index wouldn't do that). Also, finding all of the top-level nodes (assuming we could have multiple independent trees) would be slightly more complicated.

I have to throw in my opinions and possibly a few facts regarding nulls and unique constraints.

IRIS Unique index - this is primarily a syntactical shortcut, as it defines not only an index but also a unique constraint on the index key. Most pure SQL implementations don't merge the two concepts, and the SQL standard doesn't define indexes. The SQL Standard does define unique constraints. Keep in mind that both IDKEY and PRIMARYKEY are modifiers of a unique constraint (and, in our world, the index defined as IDKEY is also special). There can be at most one index flagged as IDKEY and one that is flagged as PRIMARYKEY. An index can be both PRIMARYKEY and IDKEY.

There was once an SQL implementation that defined syntax for both "unique index" and "unique constraint" with different rules. The difference between them was simple - if an index is not fully populated (not all rows in the table appear in the index - we call this a conditional index), then the unique index only checked for uniqueness in the rows represented in the index. A unique constraint applies to all rows.

Also keep in mind that an index exists for a singular purpose - to improve the performance of a subset of queries.
Any SQL constraint can be expressed as a query. The SQL Standard is a bit inconsistent when it comes to null behavior. In the Framework document there is this definition:

A unique constraint specifies one or more columns of the table as unique columns. A unique constraint is satisfied if and only if no two rows in a table have the same non-null values in the unique columns.

In the Foundation document, there exist two optional features, F291 and F292. These features define a unique predicate (291) and unique null treatment (292). They appear to provide syntax where the user can define the "distinct-ness" of nulls. Both are optional features, and both are relatively recent (2003? 2008?). The rule when these features are not supported is left to the implementor.

IRIS is consistent with the Framework document statement - all constraints are enforced on non-null keys only. A "null" key is defined as a key in which any key column value is null.

Thank you Dan! The index/constraint distinction and SQL standard context are particularly helpful facts for this discussion. :)

The best part of this article is "IRIS does not treat null as a distinct value in unique indices".
Announcement
Olga Zavrazhnova · Jul 11, 2022

Boston InterSystems Developer Meetup on Python Interoperability

Hi Community,

We are excited to announce that InterSystems Developers Meetups are finally back in person! The first Python-related meetup will take place on July 21 at 6:00 p.m. at Democracy Brewing, Boston, MA. There will be 2-3 short presentations related to Python, Q&A and networking sessions, as well as free beer with snacks and brewery tours.

AGENDA:
6:00-6:20 pm First round of drinks, appetizers
6:20-6:40 pm "Run Your Python Code & Your Data Together" by @Robert.Kuszewski, Product Manager - Developer Experience at InterSystems
6:40-7:00 pm "Enterprise Application Integration in Python" by @Michael.Breen, Senior Support Specialist at InterSystems
7:00-7:20 pm Open Mic & Project Discussion: Working on something exciting? Feeling the need to share it? Drop us a line and we can add you to the lineup!
7:20-8:30 pm Brewery tours & Networking

Don't miss out on this excellent opportunity to discuss new solutions in the great company of like-minded peers. Networking is strongly recommended! :)

>> REGISTER HERE <<

⏱ Time: July 21, 6:00 - 8:30 p.m.
📍 Place: Democracy Brewing, 35 Temple Place in Downtown Crossing, Boston, MA
https://www.democracybrewing.com/
Announcement
Evgeny Shvarov · Jul 25, 2022

HealthConnect and InterSystems IRIS For Health Comparison

Hi Community! There is a new PDF resource published on our official site depicting the key features and a comparison of InterSystems healthcare interoperability products: Health Connect and InterSystems IRIS for Health. I think this could be useful for the Community. The PDF is also attached to the post.
Announcement
Anastasia Dyubaylo · Aug 16, 2022

InterSystems Interoperability Contest: Building Sustainable Solutions

Hello Developers!

Want to show off your interoperability skills? Take part in our next exciting contest:

🏆 InterSystems Interoperability Contest: Building Sustainable Solutions 🏆

Duration: August 29 - September 18
More prizes: $13,500 – the prize distribution has changed!

>> Submit your application here <<

The topic

💡 Interoperability solutions for InterSystems IRIS and IRIS for Health 💡

Develop any interoperability solution, or a solution that helps to develop and/or maintain interoperability solutions, using InterSystems IRIS or InterSystems IRIS for Health.

In addition, we invite developers to try their hand at solving one of the global issues. This time it will be a Sustainable Development problem. We encourage you to join this competition and build solutions aimed at solving sustainability issues:

1) You will receive a special bonus if your application can address sustainability issues: ESG, alternative energy sources, optimum utilization, etc.
2) There will also be another bonus if you prepare and submit a dataset related to sustainability, ESG, alternative energy sources, or optimum utilization.

General Requirements:

- Accepted applications: apps new to Open Exchange, or existing ones with a significant improvement. Our team will review all applications before approving them for the contest.
- The application should work on IRIS Community Edition, IRIS for Health Community Edition, or IRIS Advanced Analytics Community Edition.
- The application should be Open Source and published on GitHub.
- The README file of the application should be in English, contain the installation steps, and contain either a video demo and/or a description of how the application works.

🆕 Contest Prizes:

You asked – we did it! This time we've increased the prizes and changed the prize distribution!

1. Experts Nomination – winners will be selected by the team of InterSystems experts:

🥇 1st place - $5,000
🥈 2nd place - $3,000
🥉 3rd place - $1,500
🏅 4th place - $750
🏅 5th place - $500
🌟 6th-10th places - $100

2. Community winners – applications that receive the most votes in total:

🥇 1st place - $1,000
🥈 2nd place - $750
🥉 3rd place - $500

✨ Global Masters badges for all winners included!

Note: If several participants score the same number of votes, they are all considered winners, and the money prize is shared among them.

Important Deadlines:

🛠 Application development and registration phase:
August 29, 2022 (00:00 EST): Contest begins.
September 11, 2022 (23:59 EST): Deadline for submissions.

✅ Voting period:
September 12, 2022 (00:00 EST): Voting begins.
September 18, 2022 (23:59 EST): Voting ends.

Note: Developers can improve their apps throughout the entire registration and voting period.

Who Can Participate?

Any Developer Community member, except for InterSystems employees (ISC contractors allowed). Create an account!

Developers can team up to create a collaborative application. Teams of 2 to 5 developers are allowed. Don't forget to highlight your team members in the README of your application – DC user profiles.
Helpful Resources:

✓ Instruqt plays:
InterSystems Interoperability: Reddit Example

✓ Example applications:
interoperability-embedded-python
IRIS-Interoperability-template
ETL-Interoperability-Adapter
HL7 and SMS Interoperability Demo
UnitTest DTL HL7
Twitter Sentiment Analysis with IRIS
Healthcare HL7 XML
RabbitMQ adapter
PEX demo
iris-megazord

✓ Online courses:
Interoperability for Business
Interoperability QuickStart
Interoperability Resource Guide - 2019

✓ Videos:
Intelligent Interoperability
Interoperability for Health Overview

✓ For beginners with IRIS:
Build a Server-Side Application with InterSystems IRIS
Learning Path for beginners

✓ For beginners with ObjectScript Package Manager (ZPM):
How to Build, Test and Publish ZPM Package with REST Application for InterSystems IRIS
Package First Development Approach with InterSystems IRIS and ZPM

✓ How to submit your app to the contest:
How to publish an application on Open Exchange
How to submit an application for the contest

Need Help?

Join the contest channel on InterSystems' Discord server or talk with us in the comments on this post.

We can't wait to see your projects! Good luck 👍

By participating in this contest, you agree to the competition terms laid out here. Please read them carefully before proceeding.

This article from MIT Enterprise Forum might help you get some inspiration about sustainability issues and spark ideas on what to develop: https://mitefcee.org/sustainability-challenges/

Is it required to use DTL, BPL, and adapters to be considered an interoperability solution?

You should have a production that receives data in business services and sends it to business operations via messages, which transmit it further. BPL/DTL usage is not mandatory, but it will give you bonus points. (A minimal sketch of this service-to-operation pattern appears at the end of this post.)

Added an Instruqt resource to try InterSystems Interoperability!

Developers! The technology bonuses have been announced! You can gain extra points in the voting with them! Please check here!

Also added iris-megazord by @Henrique.GonçalvesDias, @Henry.HamonPereira, and @José.Pereira. It has a very beautiful Flow Editor for interoperability productions. Not sure if the team will apply again with the solution, but it can at least inspire participants.

Please help me with the below.

Fixed with the proper link https://play.instruqt.com/embed/intersystems/tracks/interop?token=em_XtBbRAsjw_hy2-c2 - thank you!

Dear Developers, registration for the contest ends this Sunday. Hurry up to submit your application, and you will still have a whole voting week to improve it! Happy coding! 🤩

Developers! Tomorrow is the last day to register for the contest! We are waiting for your applications!

Hello Irina, I'm happy to say that I have joined the competition and that my application has been submitted!! Here is the link to my app and the link to the DC post about it:
https://openexchange.intersystems.com/package/Sustainable-Machine-Learning
https://community.intersystems.com/post/sustainable-machine-learning-intersystems-interoperability-contest

Guess who's back?! Yaaay!! ;)))
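As referenced in the Q&A above, here is a minimal sketch of the service-to-operation pattern (the class names Demo.Contest.Service and Demo.Contest.Operation are hypothetical, not part of the contest materials):

```objectscript
/// Minimal sketch of a business service that wraps incoming data in a
/// message and sends it asynchronously to a business operation.
Class Demo.Contest.Service Extends Ens.BusinessService
{

Method OnProcessInput(pInput As %RegisteredObject, Output pOutput As %RegisteredObject) As %Status
{
    // Wrap the payload in a standard message container
    Set request = ##class(Ens.StringContainer).%New("sample payload")
    // "Demo.Contest.Operation" must be the configured name of a
    // business operation in the same production
    Quit ..SendRequestAsync("Demo.Contest.Operation", request)
}

}
```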
Article
Murray Oldfield · Mar 11, 2016

InterSystems Data Platforms and performance – Part 2

In the last post we scheduled 24-hour collections of performance metrics using pButtons. In this post we are going to be looking at a few of the key metrics that are being collected and how they relate to the underlying system hardware. We will also start to explore the relationship between Caché (or any of the InterSystems Data Platforms) metrics and system metrics, and how you can use these metrics to understand the daily beat rate of your systems and diagnose performance problems.

[A list of other posts in this series is here](https://community.intersystems.com/post/capacity-planning-and-performance-series-index)

***Edited Oct 2016...*** *[Example of script to extract pButtons data to a .csv file is here.](https://community.intersystems.com/post/extracting-pbuttons-data-csv-file-easy-charting)*

***Edited March 2018...*** Images had disappeared, added them back in.

# Hardware food groups

![Hardware Food Groups](https://community.intersystems.com/sites/default/files/inline/images/foodgroups_0.png "Hardware Food Groups")

As you will see as we progress through this series of posts, the server components affecting performance can be itemised as:

- CPU
- Memory
- Storage IO
- Network IO

If any of these components is under stress then system performance and user experience will likely suffer. These components are all related to each other as well; changes to one component can affect another, sometimes with unexpected consequences. I have seen an example where fixing an IO bottleneck in a storage array caused CPU usage to jump to 100%, resulting in even worse user experience as the system was suddenly free to do more work but did not have the CPU resources to service increased user activity and throughput.

We will also see how Caché system activity has a direct impact on server components. If there are limited storage IO resources, a positive change that can be made is increasing system memory and increasing memory for __Caché global buffers__, which in turn can lower __system storage read IO__ (but perhaps increase CPU!).

One of the most obvious system metrics to monitor regularly, or to check when users report problems, is CPU usage: looking at _top_ or _nmon_ on Linux or AIX, or _Windows Performance Monitor_. Because most system administrators look at CPU data regularly, especially if it is presented graphically, a quick glance gives you a good feel for the current health of your system -- what is normal, or a sudden spike in activity that might be abnormal or indicate a problem. In this post we are going to look quickly at CPU metrics, but will concentrate on Caché metrics. We will start by looking at _mgstat_ data and how looking at the data graphically can give a feel for system health at a glance.

# Introduction to mgstat

mgstat is one of the Caché commands included and run in pButtons. mgstat is a great tool for collecting basic performance metrics to help you understand your system's health. We will look at mgstat data collected from a 24-hour pButtons run, but if you want to capture data outside pButtons, mgstat can also be run on demand interactively or as a background job from Caché terminal.

To run mgstat on demand from the %SYS namespace, the general format is:

    do ^mgstat(sample_time,number_of_samples,"/file_path/file.csv",page_length)

For example, to run a background job for a one-hour run with a 5 second sample period and output to a csv file:

    job ^mgstat(5,720,"/data/mgstat_todays_date_and_time.csv")

For example, to display to the screen but drop some columns, use the dsp132 entry:
    do dsp132^mgstat(5,720,"",60)

I will leave it as homework for you to check the output to understand the difference.

> Detailed information on the columns in mgstat can be found in the _Caché Monitoring Guide_ in the most recent Caché documentation: [InterSystems online documentation](https://docs.intersystems.com)

# Looking at mgstat data

pButtons has been designed to be collated into a single HTML file for easy navigation and packaging for sending to WRC support specialists to diagnose performance problems. However, when you run pButtons for yourself and want to graphically display the data, it can be separated again to a csv file for processing into graphs, for example with Excel, by command line script or simple cut and paste. In this post we will dig into just a few of the mgstat metrics to show how even a quick glance at data can give you a feel for whether the system is performing well or there are current or potential problems that will affect the user experience.

## Glorefs and CPU

The following chart shows database server CPU usage at a site running a hospital application at a high transaction rate. Note the morning peak in activity when there are a lot of outpatient clinics, with a drop-off at lunch time, then tailing off in the afternoon and evening. In this case the data came from Windows Performance Monitor `(_Total)\% Processor Time` - the shape of the graph fits the working day profile - no unusual peaks or troughs, so this is normal for this site. By doing the same for your site you can start to get a baseline for "normal". A big spike, especially an extended one, can be an indicator of a problem; there is a future post that focuses on CPU.

![CPU Time](https://community.intersystems.com/sites/default/files/inline/images/cpu_time.png "CPU Time")

As a reference, this database server is a Dell R720 with two E5-2670 8-core processors; the server has 128 GB of memory and 48 GB of global buffers.

The next chart shows more data from mgstat — Glorefs (Global references), or database accesses, for the same day as the CPU graph. Glorefs indicates the amount of work that is occurring on behalf of the current workload; although global references consume CPU time, they do not always consume other system resources such as physical reads because of the way Caché uses the global memory buffer pool.

![Global References](https://community.intersystems.com/sites/default/files/inline/images/glorefs_0.png "Global References")

Typical of Caché applications, there is a very strong correlation between Glorefs and CPU usage.

> Another way of looking at this CPU and gloref data is to say that _reducing glorefs will reduce CPU utilisation_, enabling deployment on lower core count servers or scaling further on existing systems. There may be ways to reduce global references by making an application more efficient; we will revisit this concept in later posts.

## PhyRds and Rdratio

The shape of data from graphing the mgstat metrics _PhyRds_ (Physical Reads) and _Rdratio_ (Read ratio) can also give you an insight into what to expect of system performance and help you with capacity planning. We will dig deeper into storage IO for Caché in future posts.

_PhyRds_ are simply physical read IOPS from disk to the Caché databases; you should see the same values reflected in operating system metrics for logical and physical disks. Remember that operating system IOPS may include IOPS coming from non-Caché applications as well.
Sizing storage without accounting for expected IOPS is a recipe for disaster; you need to know what IOPS your system is doing at peak times for proper capacity planning.

The following graph shows _PhyRds_ between midnight and 15:30.

![Physical Reads](https://community.intersystems.com/sites/default/files/inline/images/phyrds.png "Physical Reads")

Note the big jump in reads between 05:30 and 10:00, with other shorter peaks at 11:00 and just before 14:00. What do you think these are caused by? Do you see these types of peaks on your servers?

_Rdratio_ is a little more interesting — it is the ratio of logical block reads to physical block reads: a ratio of how many reads are from global buffers (logical, from memory) versus how many are from disk, which is orders of magnitude slower. A high _Rdratio_ is a good thing; dropping close to zero for extended periods is not good.

![Read Ratio](https://community.intersystems.com/sites/default/files/inline/images/rdratio.png "Read Ratio")

Note that at the same time as the high reads, _Rdratio_ drops close to zero. At this site I was asked to investigate when the IT department started getting phone calls from users reporting the system was slow for extended periods. This had been going on seemingly at random for several weeks when I was asked to look at the system.

> _**Because pButtons had been scheduled for daily 24-hour runs, it was relatively simple to go back through several weeks of data to start seeing a pattern of high _PhyRds_ and low _Rdratio_ which correlated with support calls.**_

After further analysis the cause was tracked to a new shift worker who was running several reports, entering 'bad' parameters combined with badly written queries without appropriate indexes, causing the high database reads. This accounted for the seemingly random slowness. Because these long running reports were reading data into global buffers, the result was that interactive users' data was being fetched from physical storage rather than memory, while storage was being stressed to service the reads.

Monitoring _PhyRds_ and _Rdratio_ will give you an idea of the beat rate of your systems and may allow you to track down bad reports or queries. There may be valid reasons for high _PhyRds_ -- perhaps a report must be run at a certain time. With modern 64-bit operating systems and servers with large physical memory capacity you should be able to minimise _PhyRds_ on your production systems.

> If you do see high _PhyRds_ on your system there are a couple of strategies you can consider:
> - Improve performance by increasing the number of database (global) buffers (and system memory).
> - Move long running reports or extracts out of business hours.
> - Run long running read-only reports, batch jobs or data extracts on a separate shadow server or asynchronous mirror to minimise the impact on interactive users and to offload system resource use such as CPU and IOPS.

Usually low _PhyRds_ is a good thing, and it's what we aim for when we size systems. However, if you have low _PhyRds_ and users are complaining about performance, there are still things that can be checked to ensure storage is not a bottleneck - the reads may be low because the system cannot service any more. We will look at storage more closely in a future post.

# Summary

In this post we looked at how graphing the metrics collected by pButtons can give a health check at a glance. In upcoming posts I will dig deeper into the relationship between the system and Caché metrics and how you can use these to plan for the future.
Murray, thank you for the series of articles. A couple of questions I have:

1) The documentation (2015.1) states that Rdratio is a ratio of physical block reads to logical block reads, while one can see in mgstat logs Rdratio values >> 1 (usually 1000 and more). Don't you think that the definition should be reversed?

2) You wrote: "If you do see high PhyRds on your system there are a couple of strategies you can consider: ... Long running read only reports, batch jobs or data extracts can be run on a separate shadow server or asynchronous mirror to minimise impact on interactive users and to offload system resource use such as CPU and IOPS." I have heard this advice many times, but how do you return report results back to the primary member? (ECP) mounting of a remote database that resides on the primary member is prohibited on the backup member, and vice versa. Or do these restrictions not apply to asynchronous members (never played with them yet)?

Murray, thanks for your articles. But I think metrics related to the Write Daemon should be mentioned too, such as WDphase and WDQsz. Sometimes when our system seems to work too slowly, it may depend on how quickly our disks can write, and in that case these are very useful metrics. In my own experience, one ordinary day our server started to work too slowly, and we saw that the write daemon was in phase 8 the whole time. With PhyWrs we could count how many blocks were really written to disk, and it was not such a big count at that time, so we found a problem in our storage, something related to snapshots. When the storage was reconfigured, our write daemon went back to working as quickly as before.

I believe the correct statement is that Rdratio is the ratio of logical block reads to physical block reads, but is zero if physical block reads is zero.

Thanks! Yes, I wrote it the wrong way around in the post. I have fixed this now.

The latest Caché documentation has details and examples for setting up a read-only or read/write asynchronous report mirror. The async reporting mirror is special because it is not used for high availability; for example, it is not a DR server.

At the highest level, running reports or extracts on a shadow is possible simply because the data exists on the other server in near real time. Operational or time-critical reports should be run on the primary servers. The suggestion is that resource-heavy reports or extracts can use the shadow or reporting server. While setting up a shadow or reporting async mirror is part of Caché, how a report or extract is scheduled or run is an application design question, and not something I can answer - hopefully someone else can jump in here with some advice or experience.

Possibilities may include web services, or if you use ODBC your application could direct queries to the shadow or a reporting async mirror. For batch reports or extracts, routines could be scheduled on the shadow/reporting async mirror via Task Manager. Or you may have a separate application module for this type of reporting. If you need to have results returned to the application on the primary production server, that is also application dependent.

You should also consider how to handle (e.g. via global mapping) any read/write application databases such as audit or logs which may be overwritten by the primary server. If you are going to do reporting on a shadow server, search the online documentation for special considerations for "Purging Cached Queries".

There are several more articles to come before we are done with storage IO. I will focus more on IOPS and writes in coming weeks,
and I will show some examples and solutions to the type of problem you mentioned. Thanks for the comment. I have quite a few more articles (in my head) for this series; I will be using the comments to help me decide which topics you all are interested in.

"Rdratio is a little more interesting — it is the ratio of logical block reads to physical block reads." Don't you think that zero values of Rdratio are a special case, as David Marcus mentioned? In the mgstat (per second) logs I have at hand, I've found them always accompanied by zero values of PhyRds.

Just one thing: one good tool to use on Linux is dstat. It is not installed by default, but once you have it (apt-get install dstat on Debian and derivatives, yum install dstat on RHEL), you can observe the live behavior of your system as a whole with:

    dstat -ldcymsn

It gives quite a lot of information!

Would it be possible to fix the image links in this post?

Hi Mack, sorry about that, the images are back!
Announcement
Paul Gomez · Apr 11, 2016

InterSystems Global Summit 2016 - Session Content

Please use the following link to see all Global Summit 2016 sessions and links to additional content, including session materials:
https://community.intersystems.com/global-summit-2016
Announcement
Janine Perkins · Apr 13, 2016

Featured InterSystems Online Course: Troubleshooting Productions

Learn how to start troubleshooting productions.

Troubleshooting Productions
Learn how to start troubleshooting productions, with a focus on locating and understanding some of the key Ensemble Management Portal pages used when troubleshooting. Learn More.
Question
sansa stark · Aug 24, 2016

How to access InterSystems Caché after installation

Hi All,

I have installed Caché 5.0, but the terminal will not open. Does anyone know the default username and password for Caché 5.0?

Try username SYS, password XXX. You can change the password using Control Panel | Security | User accounts (TRM: account). Thanks.

As I remember, you could create a user TRM or TELNET, with routine %PMODE, and get access without authentication. I don't remember how it was in 5.0, but currently the default login/password is _SYSTEM/SYS.
Announcement
Evgeny Shvarov · Mar 14, 2017

Join InterSystems Global Masters Gamification Platform!

Hi Community!

We want to invite you to join the InterSystems Gamification Platform called Global Masters Advocate Hub!

The Global Masters Advocacy Hub is our customer engagement platform where you will be invited to have some fun completing entertaining challenges, earning badges for your contribution to the Developer Community, communicating with other advocates, and accumulating points which you can redeem for a variety of rewards and special honors. In addition to the challenges, we have prepared special Global Masters rewards for you. How to get rewards? It is simple — just redeem your points and get the reward you want. Here are some prizes from our Rewards Catalog.

What's more? There is also the Global Masters Leaderboard, which counts your advocacy activity and contribution to the InterSystems Developer Community.

Update: Join Global Masters now: use your InterSystems SSO credentials to access the program. See you on InterSystems Global Masters today!

There is a problem with the Global Masters link - the certificate is invalid.
Thanks, Jon! I think I've fixed that.
I also have a problem with the certificate.
Nope, the certificate error still remains. Our web security policy blocks access to sites with certificate errors. Error details are as follows: VERIFY DENY: depth=0, CommonName "*.influitive.com" does not match URL "globalmasters.intersystems.com"
Hi, Stephen! Thanks for the comment! We are looking into that.
Should this post be a sticky post on the homepage? Should there be a link to it in the main menu?
I assume Evgeny's initial fix was to change the hyperlink in the article so it's an http one rather than an https one.
Hi, I'd like to join. Cheers, Juergen
I would like a join code. Thanks. Stephen
No more certificate errors. I believe the issue with the certificate is now resolved.
Great! The invitation has been sent.
Hi, Stephen! You are invited!
Stephen, are you still using the original https URL Evgeny posted, i.e. https://globalmasters.intersystems.com/ ? My browser still reports an issue with the certificate for that.
Hi, John and Steve! The issue is not resolved yet, but it should be solved in a few days.
Thanks, Jon! The link to Global Masters has been added to the Community menu.
I would like a join code too, I'm very interested.
Hi, Francisco! You're invited!
I would like to join :)
I would like to join.
I would like to join! My e-mail on the community is Amir.Samary@intersystems.com.
Hi, Chris! You are invited!
Hi, Scott! You are invited!
Hi, Amir! You are invited!
I'll give it a try!
Hi, Jack! You are invited!
I would like a join code. Thanks. Dietmar
Hi, Dietmar! You are invited!
I want to join to gain knowledge on Caché.
I would like a join code. Thanks.
Hi, Kishan! You are invited!
Hi, Henrique! See the invitation link in your inbox!
I am not able to get into Global Masters. Is there any code to get me logged in?
Hi, Kishan! I sent the personal invitation to your email. Maybe your anti-spam filter is too cruel? Resent it now.
I would like a join code. Thanks.
Hi, Felix! You are invited!
I would like a join code as well. Thanks!
If still available, I'd like to join. Many thanks!
Hi, Stephen! You are invited!
Hi, John! It is, and you are invited. Welcome to the club!
I would like a join code. Thanks.
Hi, Charles! You are invited!
I'd like to join as well. Thx
Hi, Josef! You are invited!
I'd like to join. Thanks
Hi, Thiago! You are invited!
I'd like to join!
Hi, Anna! You are invited! Welcome to Global Masters!
Hi, could I join the Global Masters Lonely Hearts Club Band? Thx. Michal
Hi, Michal! Sure! The invitation has been sent.
Welcome to the club! :)
I'd like to join as well, Evgeny. Thank you.
Sure, Jeffrey! You are invited.
Hi, I would like to join too!
Hi, I would like to join the community. I have got two invites previously but I couldn't join. Can you please help me? Thanks, Kishan R
I'd like to join please :)
Hi, Paul! You are invited!
Hi, Kishan! You are invited!
If still available, I'd like to join.
Hi, Thiago! It is! You are invited!
Hi Evgeny, could you send me a join code? Regards, Cristiano José da Silva
Hi, Cristiano! You are invited!
Hi Evgeny, could you send me the invitation again? Thanks.
Hi, Cristiano! Did it again :)
I'd like to join as well. Thanks :D
Hi, Wesley! You are invited!
Hi. I'd like to join!
Hi, Gilberto! You are invited!
Hi, Community! We introduced Single Sign On for Global Masters (GM), so you can now use your WRC credentials to join Global Masters. The first time, it will also ask you for a GM-specific password. How it works: for those who have no active WRC account yet, the invite is the way to join! Comment on this post to get a personal invite to the InterSystems Global Masters Advocates Hub. And you can use your login and password if you are already a member.
Please add me too. Peter
Hi, Peter! You are invited!
Hi, I am in, can you please add me. Thanks
Hi, Aman! You are invited.
I'd like to join, Evgeny. Thanks.
Thank you Evgeny!
Hi, Joe! You are invited! And you can use your WRC account if you want to join GM.
Hi, I would like to join. Thanks.
Hi, Guilherme! You are invited! You are also able to join with your WRC account, as shown here. Welcome to the club!
Hi, I'd like to join.
Hi, Josnei! You are invited!
I would like to join. Thank you
I want to join.
Very nice! :) I will get some information on the site.
Hi, Minsu! Sent you the invite to Global Masters.
I want to join to gain knowledge on Caché.
I want to join to gain knowledge on Caché.
Hi, Ron! You are invited! And to gain knowledge of InterSystems Caché I recommend InterSystems online Learning and this Developer Community: ask your questions!
Hi, I want to join Global Masters. Thanks
Hi, Kuldeep! You are invited.
Hi. I'd like to join!
Hi, Derek! You are invited!
I want to join
You are invited!
I would like to join Global Masters. Regards
I joined Global Masters ages ago, but still can't find a way to suppress its notifications. How can I do it?
Hi Goran! You are invited
Alexey, sorry for disturbing you with GM notifications. You can turn them off in Profile settings. See the gif:
Evgeny -- thanks, I had already tried that before writing here. My current settings were "Prospect Accepted" / "Unsubscribe from all except privacy, security, and account emails", while I still received emails about new challenges, etc. I just tried to turn off "Prospect Accepted", guessing that it could help, but this setting seems to be unswitchable (still active). It is hardly the source of the "problem" as I was never notified about the "prospect" stuff; I don't even know its meaning. All these emails are not a great deal of disturbance; I just dislike it when I can't control their frequency. I don't insist on a correction if it causes extra effort for your team - just sign me off from GM.
I would like to join :)
Sounds fun. May I have the link? Thanks
Hi Jimmy! You're invited. Please check your mail!
Hi, thank you for your invitation. Cheers
I want to join Global Masters. Thanks!
Hi Rayana! You are invited!
Hi Rayana, you're invited. Please check your mail!
Hi. If it's possible, I would like a join code too. Thanks! Félix
Hi Félix, you're invited, please check your mail. Welcome to Global Masters!
I would like to join.
Thanks Evgeny! I did it.
Please add me.
Hi Suman, you are invited. Welcome to the club! :)
Hi Walter, you are invited. Please check your mail! :)
Please send an invite, thanks!
Hi Drew, welcome to the club!
Please invite me ;)
Hi Jan, you're invited. Please check your mail!
I would like to join. Thanks!
Hi Daniel, you are invited! Please check your mail.
I'd like to join.
Hi Relton, please check your mail and welcome to the club! 😉
I just got a badge! Evidently a post of mine was added to favorites 10 times. How does one know which post this was, though? Best, Mike
Hi, Mike! I think we have a problem with this badge. Investigating. Thanks for the feedback!
Hi, I would like to join Global Masters. Please guide me. Thanks, Hasmathulla
I would like to join.
I would like a join code too, I'm very interested.
Announcement
Janine Perkins · Oct 12, 2016

Featured InterSystems Online Course: Designing Productions

Learn to successfully design your HL7 production.

After you have built your first HL7 production in a test environment, you can start applying what you have learned and begin building in a development environment. Take this course to learn to create namespaces and databases for your production, apply recommended design principles to your production, and use appropriate naming conventions in your production. Learn More.
Article
Murray Oldfield · Mar 8, 2016

InterSystems Data Platforms and performance – Part 1

Your application is deployed and everything is running fine. Great, hi-five! Then out of the blue the phone starts to ring off the hook – it's users complaining that the application is sometimes 'slow'. But what does that mean? Sometimes? What tools do you have, and what statistics should you be looking at to find and resolve this slowness? Is your system infrastructure up to the task of the user load? What infrastructure design questions should you have asked before you went into production? How can you capacity plan for new hardware with confidence and without over-spec'ing? How can you stop the phone ringing? How could you have stopped it ringing in the first place?

A list of other posts in this series is here

This will be a journey

This is the first post in a series that will explore the tools and metrics available to monitor, review and troubleshoot system performance, as well as the system and architecture design considerations that affect performance. Along the way we will head off down quite a few tracks to understand performance for Caché, operating systems, hardware, virtualization and other areas that become topical from your feedback in the comments.

We will follow the feedback loop where performance data gives a lens to view the advantages and limitations of the applications and infrastructure that are deployed, and then back to better design and capacity planning.

It should go without saying that you should be reviewing performance metrics constantly; it is unfortunate how often customers are surprised by performance problems that would have been visible for a long time, if only they were looking at the data. But of course the question is - what data?

We will start the journey by collecting some basic Caché and system metrics so we can get a feel for the health of your system today. In later posts we will dive into the meaning of key metrics.

There are many options available for system monitoring – from within Caché and external – and we will explore a lot of them in this series. To start we will look at my favorite go-to tool for continuous data collection, which is already installed on every Caché system – ^pButtons.

To make sure you have the latest copy of pButtons please review the following post: https://community.intersystems.com/post/intersystems-data-platforms-and-performance-%E2%80%93-how-update-pbuttons

Collecting system performance metrics - ^pButtons

The Caché pButtons utility generates a readable HTML performance report from log files it creates. Performance metrics output by pButtons can easily be extracted, charted and reviewed.

Data collected in the pButtons html file includes:

- Caché set up: configuration, drive mappings, etc.
- mgstat: Caché performance metrics - most values are averages per second.
- Unix: vmstat and iostat: operating system resource and performance metrics.
- Windows: performance monitor: Windows resource and performance metrics.
- Other metrics that will be useful.

pButtons data collection has very little impact on system performance; the metrics are already being collected by the system, and pButtons simply packages them for easy filing and transport.

To keep a baseline, for trend analysis and for troubleshooting, it is good practice to collect a 24-hour pButtons run (midnight to midnight) every day for a complete business cycle. A business cycle could be a month or more, for example to capture data from end-of-month processing. If you do not have any other external performance monitoring or collection, you can run pButtons year-round.
The following key points should be noted:

- Change the log directory to a location away from production data to store the accumulated output files, to avoid disk-full problems!
- Run an operating system script, or otherwise compress and archive the pButtons files regularly. This is especially important on Windows as the files can be large.
- Review the data regularly!

In the event of a problem needing immediate analysis, pButtons data can be previewed (collected immediately) while metrics continue to be stored for collection at the end of the day's run.

For more information on pButtons, including preview, stopping a run and adding custom data gathering, please see the Caché Monitoring Guide in the most recent Caché documentation: http://docs.intersystems.com

The pButtons HTML file data can be separated and extracted (to CSV files for example) for processing into graphs or other analysis by scripting or simple cut and paste. We will see examples of the output in graphs in the next post.

Of course, if you have urgent performance problems, contact the WRC.

Schedule 24 hour pButtons data collection

^pButtons can be started manually from the terminal prompt or scheduled. To schedule a 24-hour daily collection:

1. Start Caché terminal, switch to the %SYS namespace and run pButtons manually once to set up the pButtons file structures:

    %SYS>d ^pButtons
    Current log directory: /db/backup/benchout/pButtonsOut/
    Available profiles:
    1  12hours - 12 hour run sampling every 10 seconds
    2  24hours - 24 hour run sampling every 10 seconds
    3  30mins  - 30 minute run sampling every 1 second
    4  4hours  - 4 hour run sampling every 5 seconds
    5  8hours  - 8 hour run sampling every 10 seconds
    6  test    - A 5 minute TEST run sampling every 30 seconds

Select option 6, the 5 minute TEST run sampling every 30 seconds. Note your numbering may be different, but the test should be obvious. During the run, run Collect^pButtons (as shown below); you will see information including the runid, in this case "20160303_1851_test".

    %SYS>d Collect^pButtons
    Current Performance runs:
    20160303_1851_test ready in 6 minutes 48 seconds
    nothing available to collect at the moment.
    %SYS>

Notice that this 5 minute run has 6 minutes and 48 seconds to go? pButtons adds a 2 minute grace period to all runs to allow time for collection and collation of the logs into html format.

2. IMPORTANT! Change the pButtons log output directory – the default output location is the <cache install path>/mgr folder. For example on unix the path to the log directory may look like this:

    do setlogdir^pButtons("/somewhere_with_lots_of_space/perflogs/")

Ensure Caché has write permissions for the directory and there is enough disk space available for accumulating the output files.

3. Create a new 24 hour profile with 30 second intervals by running the following:

    write $$addprofile^pButtons("My_24hours_30sec","24 hours 30 sec interval",30,2880)

Check the profile has been added to pButtons:

    %SYS>d ^pButtons
    Current log directory: /db/backup/benchout/pButtonsOut/
    Available profiles:
    1  12hours - 12 hour run sampling every 10 seconds
    2  24hours - 24 hour run sampling every 10 seconds
    3  30mins  - 30 minute run sampling every 1 second
    4  4hours  - 4 hour run sampling every 5 seconds
    5  8hours  - 8 hour run sampling every 10 seconds
    6  My_24hours_30sec - 24 hours 30 sec interval
    7  test    - A 5 minute TEST run sampling every 30 seconds
    select profile number to run:

Note: You can vary the collection interval – 30 seconds is fine for routine monitoring.
I would not go below 5 seconds for a routine 24 hour run (…",5,17280) as the output files can become very large, since pButtons collects data at every tick of the interval. If you are troubleshooting a particular time of day and want more granular data, use one of the default profiles or create a new custom profile with a shorter time period, for example 1 hour with a 5 second interval (…",5,720). Multiple pButtons runs can happen at the same time, so you could have a short pButtons run with a 5 second interval going at the same time as the 24-hour pButtons run.

4. Tip: For UNIX sites, review the disk command. The default parameters used with the 'iostat' command may not include disk response times. First display what disk commands are currently configured:

    %SYS>zw ^pButtons("cmds","disk")
    ^pButtons("cmds","disk")=2
    ^pButtons("cmds","disk",1)=$lb("iostat","iostat ","interval"," ","count"," > ")
    ^pButtons("cmds","disk",2)=$lb("sar -d","sar -d ","interval"," ","count"," > ")

In order to collect disk statistics, use the appropriate command to edit the syntax for your UNIX installation. Note the trailing space. Here are some examples:

    LINUX: set $li(^pButtons("cmds","disk",1),2)="iostat -xt "
    AIX:   set $li(^pButtons("cmds","disk",1),2)="iostat -sadD "
    VxFS:  set ^pButtons("cmds","disk",3)=$lb("vxstat","vxstat -g DISKGROUP -i ","interval"," -c ","count"," > ")

You can create very large pButtons html files by having both the iostat and sar commands running. For regular performance reviews I usually only use iostat. To configure only one command:

    set ^pButtons("cmds","disk")=1

More details on configuring pButtons are in the online documentation.

5. Schedule pButtons to start at midnight in the Management Portal > System Operation > Task Manager:

    Namespace: %SYS
    Task Type: RunLegacyTask
    ExecuteCode: Do run^pButtons("My_24hours_30sec")
    Task Priority: Normal
    User: superuser
    How often: Once daily at 00:00:01

Collecting pButtons data

pButtons shipped in more recent versions of InterSystems data platforms includes automatic collection. To manually collect and collate the data into an html file, run the following command in the %SYS namespace to generate any outstanding pButtons html output files:

    do Collect^pButtons

The html file will be in the logdir you set at step 2 (if you did not set it, go and do it now!). Otherwise the default location is <Caché install dir>/mgr. Files are named <hostname_instance_Name_date_time_profileName.html>, e.g. vsan-tc-db1_H2015_20160218_0255_test.html

Windows Performance Monitor considerations

If the operating system is Windows, then Windows Performance Monitor (perfmon) can be used to collect data in sync with the other metrics collected. On older Caché distributions of pButtons, Windows perfmon needs to be configured manually. If there is demand in the post comments, I will write a post about creating a perfmon template to define the performance counters to monitor and schedule it to run for the same period and interval as pButtons.

Summary

This post got us started collecting some data to look at. Later in the week I will start to look at some sample data and what it means. You can follow along with data you have collected on your own systems. See you then.

Great article Murray. I'm looking forward to reading subsequent ones.

Thanks for this Murray. I am just sending it to our system manager! (We don't run pButtons all the time, but perhaps we should.)

Thank you, Murray! Great article!
People, see also the other articles in the InterSystems Data Platform Blog and subscribe so as not to miss new ones.

When that's not enough, there are a ton of tools to go deeper and look at the OS side of things: http://www.brendangregg.com/linuxperf.html

A nice intro; moving on to part two. Thank you.

This one is so popular it's showing up three times in the posting list. Will try to figure out why...

Just adding my 2c to "4. Tip For UNIX sites": all necessary unix/linux packages should be installed before the first invocation of Do run^pButtons(<any profile>), otherwise some commands may be missing from ^pButtons("cmds"). I recently faced this on a server where sysstat wasn't installed: the `sar -d` and `sar -u` commands were absent. If you decide to install it later (`sudo yum install sysstat` in my case), ^pButtons("cmds") will not be automatically updated without a little help from you: just kill it before calling run^pButtons(). This is true at least for pButtons v.1.15c-1.16c and v.5 (which recently appeared on ftp://ftp.intersys.com/pub/performance/), in Caché 2015.1.2.

Excellent article. I have a question about options for running pButtons. Please note: it has been a while since I last used pButtons, but I can attest that it is a very useful tool. My question: is there an option to run pButtons in "basic mode" versus "advanced mode"? I thought I recalled such a feature, but cannot seem to remember how to select the run mode. From what I recall, basic mode collects less data/information than advanced mode, which can be helpful for support teams and others when they only need the high-level details. Thanks! And I look forward to reading the next 5 articles in this series.

Basic and advanced mode were in an old version of another tool named ^Buttons. With ^pButtons you have the option to reduce the number of OS commands being performed, as shown in Tip #4.

It is a pretty good idea to run pButtons all the time. That way you know you'll have data for any acutely weird performance behavior at the time of the problem. The documentation has an example of setting up a 24 hour pButtons run every week at a specific time using Task Manager: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GCM_pbuttons#GCM_pButtons_runsmp You can also just set the task to run every day rather than weekly. If you're worried about space, the reports don't take up too much room, and (since they're fairly repetitive) they zip pretty well.

Fantastic article. Congrats.

Thanks :)

Hi @Murray.Oldfield, we have the same need for Caché on Windows OS. We would really appreciate it if you could do one.
Question
Evgeny Shvarov · Apr 24, 2016

Import/export data from InterSystems Caché

Hi! Suppose I have full access to Caché database instance A and want to export a consistent part of the data and import it into another Caché instance B. The classes are identical. What are the most general and convenient options for me? TIA!

Thank you, Kenneth! But what if you need only part of the data? Say, the records only from the current year or from a particular customer? And what if you need not all the classes but only some of them - which globals should I choose to export? I believe in these cases we should use SQL to gather the data. The question is how to export/import it.

If so, create a new database on instance A and then use GBLOCKCOPY to copy from the existing database to the new one. Then just move the new database to instance B. That can help sometimes.

Thank you. "Just move" - you mean unmount and download the cache.dat file?

Is this a one-time migration of data from instance A to instance B?

My question is a request for general approaches. But my task now is to extract some part of consistent data from a large database to use as test data in my local database for development purposes.

One approach would be to use %XML.DataSet to convert SQL results into XML:

    Set result=##class(%XML.DataSet).%New()
    Do result.Prepare("SELECT TOP 3 ID, Name FROM Sample.Person")
    Do result.Execute()
    Do result.WriteXML("root",,,,,1)

Outputs:

    <root>
    <s:schema id="DefaultDataSet" xmlns="" attributeFormDefault="qualified" elementFormDefault="qualified" xmlns:s="http://www.w3.org/2001/XMLSchema" xmlns:msdata="urn:schemas-microsoft-com:xml-msdata">
    <s:element name="DefaultDataSet" msdata:IsDataSet="true">
    <s:complexType>
    <s:choice maxOccurs="unbounded">
    <s:element name="SQL">
    <s:complexType>
    <s:sequence>
    <s:element name="ID" type="s:long" minOccurs="0" />
    <s:element name="Name" type="s:string" minOccurs="0" />
    </s:sequence>
    </s:complexType>
    </s:element>
    </s:choice>
    </s:complexType>
    <s:unique name="Constraint1" msdata:PrimaryKey="true">
    <s:selector xpath=".//SQL" />
    <s:field xpath="ID" />
    </s:unique>
    </s:element>
    </s:schema>
    <diffgr:diffgram xmlns:msdata="urn:schemas-microsoft-com:xml-msdata" xmlns:diffgr="urn:schemas-microsoft-com:xml-diffgram-v1">
    <DefaultDataSet xmlns="">
    <SQL diffgr:id="SQL1" msdata:rowOrder="0">
    <ID>96</ID>
    <Name>Adam,Wolfgang F.</Name>
    </SQL>
    <SQL diffgr:id="SQL2" msdata:rowOrder="1">
    <ID>188</ID>
    <Name>Adams,Phil H.</Name>
    </SQL>
    <SQL diffgr:id="SQL3" msdata:rowOrder="2">
    <ID>84</ID>
    <Name>Ahmed,Edward V.</Name>
    </SQL>
    </DefaultDataSet>
    </diffgr:diffgram>
    </root>

There is also the %SQL.Export.Mgr class, which does SQL export.

Thank you, Ed. And I can import the result on instance B with class ... ?

%XML.Reader and %SQL.Import.Mgr, respectively.

I am glad to see that you subsequently posted this as a new question here and it has already received an answer.

You could use %GOF for export and %GIF for import from Terminal. These tools export block-level data. The ultimate size of the export will be much less than with other tools.

Is this a one-time migration of data from instance A to instance B? If so, create a new database on instance A and then use GBLOCKCOPY to copy from the existing database to the new one. Then just move the new database to instance B.

If it is more complex to determine the data set, because you have specific parameters in mind, it makes sense to select the data via SQL and insert the selected records into the other instance via SQL. You can either use linked tables, which allow you to write this simple logic in Caché ObjectScript, or you can write a simple Java application and go directly via JDBC.
Obviously, any supported client-side language can solve this challenge; Java is just one option. In case you have to migrate data where the model includes foreign-key constraints, you have to use the %NOCHECK keyword in your SQL INSERT statement: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=RSQL_insert This approach is definitely more work than just exporting/importing the data, but it allows you to easily add simple logic with some benefits, e.g. anonymization, batch-loading and parallelization. Depending on your use case, some of these topics may be relevant.

Hi, another option is to try the SQL Data Migration Wizard. You can copy just the data and/or create the schema as well. To select the data for a specific year, customer, etc., you can create a view on the source side and then use the migration wizard to import the data. I hope it helps. Fábio.

Hi All, I need urgent help: I want to export the values from a global to a CSV file. The values in the global are:

    ^Global1(1)="1,2,3,4"
    ^Global1(2)="5,6,7,8"
    ...
    ^Global1(n)="n,n,n,n"

I want the output in the CSV file as:

    1,2,3,4
    5,6,7,8
    ...
    n,n,n,n

I made a class:

    ClassMethod ExportNewSchemaGlobals(pFile)
    {
      Set ary("^Global1")=""
      Set pFile = "C:/Test.csv"
      Set ary = ##class(%Library.Global).Export(,.ary,pFile)
    }

But it's not giving the expected output.

Cannot make an answer on my own question. Anyway, here are some answers from the Russian forum: DbVisualizer and Caché Monitor can export/import InterSystems Caché data partially via SQL queries. There is also the %Global class wrapper for the %GI, %GIF, etc. routines, which can help to export/import global nodes partially. Documentation.
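Regarding the CSV export question above, here is a minimal sketch (the method name ExportGlobal1ToCSV is hypothetical). %Library.Global.Export writes globals in a global-exchange format rather than CSV, so a simple $Order loop over the subscripts does the job instead:

    ClassMethod ExportGlobal1ToCSV(pFile As %String = "C:/Test.csv") As %Status
    {
      // Open a file stream for the CSV output
      Set stream = ##class(%Stream.FileCharacter).%New()
      Set sc = stream.LinkToFile(pFile)
      If $$$ISERR(sc) Quit sc
      // Walk the first-level subscripts of ^Global1 and
      // write each node's value as one CSV line
      Set key = ""
      For {
        Set key = $Order(^Global1(key))
        Quit:key=""
        Do stream.WriteLine($Get(^Global1(key)))
      }
      Quit stream.%Save()
    }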
Announcement
Evgeny Shvarov · Oct 19, 2016

InterSystems UK Technology Summit 2016 started!

Hi, Community!

I'm glad to announce that the UK Technology Summit 2016 has started! The #ISCTech2016 tag will help you stay in touch with everything happening at the Summit. You are very welcome to discuss the Summit here in the comments too!