Announcement
Evgeny Shvarov · Jun 15, 2017

Meet John Murray, InterSystems Developer Community Moderator

Hi, Community!

I'm pleased to introduce John Murray as a new Developer Community moderator. John constantly posts helpful articles and answers questions, showing great knowledge and experience in InterSystems technology. Recently John kindly agreed to become an InterSystems Developer Community moderator and joined the InterSystems moderators team!

Here are some details John wants to share:

I am a Senior Product Engineer at George James Software (http://georgejames.com), where I have worked for nearly two decades. I have specialized in InterSystems technologies and their predecessors ever since my IT career began almost thirty years ago. I develop and support tools covering design, development, and deployment, including:

• Yuzinji – code metrics, structure and dependency analysis.
• Umlanji – UML visualization of classes.
• Deltanji – rule-driven versioning, process control, deployment and auditing across the software cycle.
• Serenji – rich code editing and unrivaled debugging, including at Ensemble message level.

We also provide consulting, programming and support services. Our tools are designed specifically for people working with InterSystems products. We are immersed in the technology we build for, and we use the tools we create. Our services focus specifically on the InterSystems community.

I like to learn new things, solve problems, and assist people. So I'm pleased to have been invited to become a DC Moderator. I live in London, UK.

Thank you and congratulations, John!

Congratulations, John! To be a moderator is a big responsibility, so I wish you a lot of patience.

John, thanks for all your valuable contributions. I can say first hand that John is very patient and thorough.

Good move! Good news indeed! Sounds great!

Congratulations, John! As others have said, and I'd like to reaffirm, John's not just got the knowledge, experience, and expertise - he's got the heart too.

Glad to hear that!

I met John more than 15 years ago in Cambridge, when he kindly helped me with MSM-Workstation issues... Yes, he was an MSM authority too :) Congratulations, John!
Article
AndreClaude Gendron · Sep 19, 2017

UnitTest: A Mocking Framework for InterSystems ObjectScript Classes

It is with great pleasure that the CIUSSS de l'Estrie - CHUS shares the mocking framework it developed and presented at the InterSystems Summit 2017. I will update this post with more detailed instructions in the next few weeks, but I wanted to share the code and presentation quickly: https://gitlab.com/ciussse-drit-srd-public/Mocking-Framework I hope you'll find this useful for your unit testing. We have been using this extensively for the last two years, and it really works well! The repo is public; feel free to submit enhancements! Do not forget to enable %UnitTest in your SMP. Instructions are online at http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=TUNT_ExampleTestPortal

Kind regards,
André-Claude Gendron

Hi Andre-Claude,

Thank you for sharing your framework, and for the great presentation at Global Summit. I'm looking forward to seeing your detailed article in the Developer Community! We have documentation available online; you can link directly to it: http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=TUNT_ExampleTestPortal Also, a video recording of Andre's presentation and a PDF of his slides are available at https://learning.intersystems.com/course/view.php?id=664

A Mocking Framework video is available on the InterSystems Developers YouTube Channel. Old thread, but here's the link to that video: https://www.youtube.com/watch?v=Bn5VPKAUs0U (start at 3:50)

Hi @AndreClaude.Gendron! What a wonderful tool! Do you want to share it on InterSystems Open Exchange?

Sure! Is there any "how-to"?

Here it is! And the video.

Is it only on GitLab, or do you have a GitHub mirror too?

Hi Andre-Claude,

Thank you for the contribution! We've begun to fold the use of this mocking framework into our daily activities. I know this post is rather old, but I do have a question. Some of the classes in the repository use classes that are not present in the repo anywhere (Tests.Fw.Mock.CIsEqualObjectParamValidator, for example). If you are willing, would you be able to commit those to the GitLab site? Again, thank you for sharing this with the rest of us.

Sorry about that, I will commit them in the next few days, or perhaps tonight. At the time they were not added to the repo because of dependencies, but I'm sure I can arrange something. Plus, I'll add the latest development we did at CIUSSSE-CHUS. I'm glad someone else uses it! Kind regards, AC

Hi, I know it is an old thread, but I thought I'd ask if there are any updates to this framework. The repo is 4 years old, and I'm looking for a mocking framework solution at the moment. I also noticed that some classes are missing, as Jonathan Lent stated. Is there any possibility for you to update the repo with the latest changes, if there are any? I'm working on unit tests and trying to find a way to mock different business hosts, and I found this framework to be a promising one. Cheers, Kari

Yes, the mocking framework is still under active development here. I'm sorry there are missing classes in the provided example. I completely forgot about my reply a year ago... Yes, testing business hosts works very well with it. I'll try to arrange something today and I'll reply here. If I don't, reply back!

I just remembered: I fixed this but was waiting for a review. I merged it this morning. https://github.com/GendronAC/InterSystems-UnitTest-Mocking/pull/3 (The code was moved to GitHub per InterSystems' request.) The latest code is here: https://github.com/GendronAC/InterSystems-UnitTest-Mocking Let me know if you need something else. Have a look at the CTestCustomPassthroughOperation.cls class.

Hi @AndreClaude.Gendron! What about publishing it on OEX? There will be more exposure! Here is the documentation! Thanks in advance!

Hi, Thank you very much for such a quick reply! I will take a look at the repo you provided and see if I can get the framework up and running on my end. Thanks again and have a good day! //Kari

Hi @AndreClaude.Gendron! Raising this topic again: could you please share your project on Open Exchange? Thanks in advance!
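The framework itself is written in ObjectScript, but the core idea, injecting a dependency so a test can substitute a mock and assert on interactions, is language-neutral. Here is a hedged sketch of that idea using Python's standard unittest.mock (the InvoiceService class and its mailer are hypothetical illustrations, not part of the framework):

```python
from unittest.mock import MagicMock

# Hypothetical service with an external dependency we want to mock.
class InvoiceService:
    def __init__(self, mailer):
        self.mailer = mailer  # injected, so a test can pass a mock instead

    def send_invoice(self, customer, amount):
        body = f"Invoice for {customer}: ${amount:.2f}"
        self.mailer.send(customer, body)  # side effect we want to verify
        return body

# In a test, replace the real mailer with a mock and assert interactions.
mailer = MagicMock()
service = InvoiceService(mailer)
result = service.send_invoice("ACME", 99.5)

mailer.send.assert_called_once_with("ACME", "Invoice for ACME: $99.50")
print(result)  # Invoice for ACME: $99.50
```

The same pattern of injecting a test double and asserting on the calls it received is what a mocking framework automates for ObjectScript classes.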
Announcement
Anastasia Dyubaylo · Sep 28, 2017

Two new videos of the week: InterSystems Cloud Manager

Hi, Community! This week we have two videos. Check all new videos on the InterSystems Developers YouTube Channel:

1. What is InterSystems Cloud Manager? This video provides an introduction to InterSystems Cloud Manager (ICM) and its capabilities.

2. Instant Gratification: Pick Your Cloud. In this video, learn how to quickly define and create a cloud infrastructure on the top three cloud IaaS providers. You can provision a cloud application within any one of those environments. The approach is to use containers and InterSystems' new ICM tool with a DevOps approach to define, create, and provision an application. See additional resources to this video here. You can also read this article about InterSystems Cloud Manager.

What's new on the InterSystems Developers YouTube Channel? We have created two dedicated playlists:

• InterSystems Cloud Manager playlist
• InterSystems Data Platform playlist

Enjoy and stay tuned!
Question
Soufiane Amroun · Sep 8, 2017

How to Edit Fields in an InterSystems Ensemble Business Rule

Hi, world :) I have a question about editing an Ensemble rule: how can I edit fields in a BPL rule programmatically? (I need to modify the target value for a condition's "send" label.) So far I can only edit a condition value. Thank you for your collaboration.

Hi Soufiane,

The graphical rules editor used in the UI generates a class, and an XML block in that class, to represent your rules. If you want a target to be different based on some code, why not create multiple conditions for the different targets you have, then base each condition on some database setting? The rule itself remains static (except when you need to define a new target), and it shows all the possible paths the rule can take. Then, programmatically, you change the value in the database that is behind all the conditional statements, and thus effect a target change.

The other alternative is to use a Business Process. You can send your message to be handled and routed by a Business Process in BPL. The process can programmatically resolve the name of the target component in a context variable (let's say, context.TargetName). After context.TargetName has been resolved, make a BPL Call action, and for the Call action's "target" property don't hardcode a value. Instead supply "@context.TargetName", and the message will be sent to whatever the value of context.TargetName is at that point in time.

Steve
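The "@context.TargetName" technique is an instance of dispatching on a name resolved at runtime rather than hardcoded. A minimal language-neutral sketch of the idea in Python (the handler names and registry below are hypothetical stand-ins for business operations, not Ensemble APIs):

```python
# Hypothetical handlers standing in for two business operations.
def send_to_lab(msg):
    return f"LAB received: {msg}"

def send_to_pharmacy(msg):
    return f"PHARMACY received: {msg}"

# The registry plays the role of the production: targets looked up by name.
targets = {"Lab": send_to_lab, "Pharmacy": send_to_pharmacy}

def route(msg, target_name):
    # Equivalent of calling "@context.TargetName": resolve the target by
    # a name computed at runtime instead of hardcoding it in the rule.
    handler = targets[target_name]
    return handler(msg)

# The "process" resolves the target name from the message content.
message = "order: CBC panel"
target_name = "Lab" if "CBC" in message else "Pharmacy"
print(route(message, target_name))  # LAB received: order: CBC panel
```

The design benefit is the same as in the BPL case: the routing logic that picks the name can change freely, while the set of possible targets stays explicit in one place.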
Question
Mike Kadow · Nov 8, 2017

Documentation type, standard InterSystems type or Atelier type?

In trying to understand Atelier, I am directed to go through its hierarchical documentation. Is the Atelier documentation going to continue as a separate hierarchy, or will it at some point be integrated into the InterSystems-style documentation? When looking for an answer, it would be nice to use only one method. On a side note, the attached Relevant Articles seem to have nothing to do with the subject of my query.

There are currently no plans to merge the Atelier documentation with the docs for other InterSystems technologies (Caché, Ensemble, HealthShare, InterSystems IRIS Data Platform). Atelier is a separate product and will continue to have its own documentation that follows industry standards for Eclipse plug-ins.
Announcement
Derek Robinson · Nov 22, 2017

New InterSystems Online Course: Getting Started with ICM

Hi all! We have just released a new online course, Getting Started with ICM, that provides an introduction to InterSystems Cloud Manager (ICM) -- one of the new technologies coming with the release of InterSystems IRIS! After taking this one-hour course, you will be able to:

• Explain what ICM is and the business benefits that come with it
• Identify the major cloud computing providers and the benefits of cloud computing
• Provision a multi-node infrastructure on your selected cloud platform
• Deploy your InterSystems IRIS applications to your provisioned infrastructure
• Unprovision your infrastructure to avoid costly charges
• Run additional commands to further manage and modify your cloud deployments with ICM

We hope you enjoy the course!
Article
Mark Bolinsky · Jan 29, 2016

Linux Transparent HugePages and the impact to InterSystems IRIS

** Revised Feb-12, 2018

While this article is about InterSystems IRIS, it also applies to Caché, Ensemble, and HealthShare distributions.

Introduction

Memory is managed in pages. The default page size is 4KB on Linux systems. Red Hat Enterprise Linux 6, SUSE Linux Enterprise Server 11, and Oracle Linux 6 introduced a method to provide an increased page size of 2MB or 1GB, depending on system configuration, known as HugePages. At first, HugePages had to be assigned at boot time, and if not managed or calculated appropriately they could result in wasted resources. As a result, various Linux distributions introduced Transparent HugePages, enabled by default, with the 2.6.38 kernel. This was meant as a means to automate creating, managing, and using HugePages. Prior kernel versions may have this feature as well; however, it may not be marked as [always] and may be set to [madvise] instead.

Transparent HugePages (THP) is a Linux memory management feature that reduces the overhead of Translation Lookaside Buffer (TLB) lookups on machines with large amounts of memory by using larger memory pages. However, in current Linux releases THP can only map individual process heap and stack space.

The Problem

The majority of memory allocation in any Caché system goes to the shared memory segments (the global and routine buffer pools), and THP does not handle these shared memory segments. As a result, THP is not used for shared memory, only for each individual process. This can be confirmed using a simple shell command. The following is an example from a test system at InterSystems which shows 2MB THP allocated to Caché processes:

# grep -e AnonHugePages /proc/*/smaps | awk '{ if($2>4) print $0} ' | awk -F "/" '{print $0; system("ps -fp " $3)} '
/proc/2945/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 2945 1 0 2015 ? 01:35:41 /usr/sbin/rsyslogd -n
/proc/70937/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70937 70897 0 Jan27 pts/0 00:01:58 /bench/EJR/ycsb161b641/bin/cache WD
/proc/70938/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70938 70897 0 Jan27 pts/0 00:00:00 /bench/EJR/ycsb161b641/bin/cache GC
/proc/70939/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70939 70897 0 Jan27 pts/0 00:00:39 /bench/EJR/ycsb161b641/bin/cache JD
/proc/70939/smaps:AnonHugePages: 4096 kB
UID PID PPID C STIME TTY TIME CMD
root 70939 70897 0 Jan27 pts/0 00:00:39 /bench/EJR/ycsb161b641/bin/cache JD
/proc/70940/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70940 70897 0 Jan27 pts/0 00:00:29 /bench/EJR/ycsb161b641/bin/cache SWD 1
/proc/70941/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70941 70897 0 Jan27 pts/0 00:00:29 /bench/EJR/ycsb161b641/bin/cache SWD 2
/proc/70942/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70942 70897 0 Jan27 pts/0 00:00:29 /bench/EJR/ycsb161b641/bin/cache SWD 3
/proc/70943/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70943 70897 0 Jan27 pts/0 00:00:33 /bench/EJR/ycsb161b641/bin/cache SWD 7
/proc/70944/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70944 70897 0 Jan27 pts/0 00:00:29 /bench/EJR/ycsb161b641/bin/cache SWD 4
/proc/70945/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70945 70897 0 Jan27 pts/0 00:00:30 /bench/EJR/ycsb161b641/bin/cache SWD 5
/proc/70946/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
root 70946 70897 0 Jan27 pts/0 00:00:30 /bench/EJR/ycsb161b641/bin/cache SWD 6
/proc/70947/smaps:AnonHugePages: 4096 kB

In addition, there are potential performance penalties in the form of memory allocation delays at runtime, especially for applications that have a high rate of job or process creation.
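The shell pipeline above can be approximated in a self-contained script. This sketch parses embedded smaps-style text rather than reading /proc, so the PIDs and values below are illustrative samples, not real measurements:

```python
# A minimal sketch of what the shell pipeline does: scan smaps-style
# content for AnonHugePages totals above 4 kB. The sample is embedded so
# the script runs anywhere; on a real Linux host you would iterate over
# /proc/<pid>/smaps instead.
sample_smaps = {
    2945:  "Size: 4096 kB\nAnonHugePages: 2048 kB\n",
    70937: "Size: 8192 kB\nAnonHugePages: 2048 kB\n",
    70947: "Size: 8192 kB\nAnonHugePages: 4096 kB\n",
    101:   "Size: 128 kB\nAnonHugePages: 0 kB\n",   # no THP in use
}

def thp_users(smaps_by_pid, threshold_kb=4):
    """Return {pid: total_kB} for processes with AnonHugePages above threshold."""
    result = {}
    for pid, text in smaps_by_pid.items():
        total = 0
        for line in text.splitlines():
            if line.startswith("AnonHugePages:"):
                total += int(line.split()[1])  # value is always in kB
        if total > threshold_kb:
            result[pid] = total
    return result

print(thp_users(sample_smaps))  # {2945: 2048, 70937: 2048, 70947: 4096}
```

A process appearing in the output means THP pages were granted to its private heap or stack; the shared memory segment never shows up here, which is the point the article makes.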
The Recommendation

InterSystems recommends disabling THP for the time being, as the intended performance gain does not apply to the IRIS shared memory segments, and there is potential for a negative performance impact in some applications. Check whether your Linux system has Transparent HugePages enabled by running one of the following commands:

For Red Hat Enterprise Linux kernels:
# cat /sys/kernel/mm/redhat_transparent_hugepage/enabled

For other kernels:
# cat /sys/kernel/mm/transparent_hugepage/enabled

The above command will display whether the [always], [madvise], or [never] flag is enabled. If THP has been removed from the kernel, then the /sys/kernel/mm/redhat_transparent_hugepage or /sys/kernel/mm/transparent_hugepage file does not exist.

To disable Transparent HugePages at boot, perform the following steps:

1. Add the following entry to the kernel boot line in the /etc/grub.conf file: transparent_hugepage=never
2. Reboot the operating system.

There is also a method to disable THP on the fly; however, this may not provide the desired result, as it only stops the creation and use of THP for new processes. THP already created will not be disassembled into regular memory pages. It is advised to reboot the system so that THP is disabled at boot time.

*Note: It is recommended to check with your respective Linux distributor to confirm the methods used for disabling THP.

One clarifying comment I would like to add: the use of "traditional" HugePages through boot-time reservation is still highly recommended for optimal performance. This process is detailed in the Caché Installation Guide: http://docs.intersystems.com/cache20152/csp/docbook/DocBook.UI.Page.cls?KEY=GCI_unixparms#GCI_unixparms_huge_page

I think further clarification is also needed. You mention that various Linux distributions introduced this with the 2.6.38 kernel. However, this starts with the RHEL 6.0/CentOS 6.0 General Availability release.
6.8 currently has only kernel 2.6.32-642, and THP is available in it. Additional information about its availability in version 6.0 can be found on page 2 of the RHEL slideshow http://www.slideshare.net/raghusiddarth/transparent-hugepages-in-rhel-6 and on page 102 of the Red Hat 6.0 technical documentation https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/pdf/6.0_Technical_Notes/Red_Hat_Enterprise_Linux-6-6.0_Technical_Notes-en-US.pdf. I have not researched when this was rolled into Fedora prior to 2.6.38, but as Fedora tends to be a precursor to RHEL, it might also have been before kernel 2.6.38. It might be better to suggest that people run the check to see whether it is enabled, and that they should not be surprised if they are running a Linux with a kernel earlier than 2.6.38 that does not support it.

Mark, may I ask you for some clarification? You wrote: "As a result THP are not used for shared memory, and are only used for each individual process." What's the problem here? Shared memory can use "normal" huge pages, while individual processes use THP.
The memory layout on our developers' server shows that it's possible.

# uname -a
Linux ubtst 4.4.0-59-generic #80-Ubuntu SMP Fri Jan 6 17:47:47 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
# cat /sys/kernel/mm/transparent_hugepage/enabled
[always] madvise never
# tail -11 /proc/meminfo
AnonHugePages: 131072 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 1890
HugePages_Free: 1546
HugePages_Rsvd: 898
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 243808 kB
DirectMap2M: 19580928 kB
DirectMap1G: 49283072 kB
# ccontrol list
Configuration 'CACHE1' directory: /opt/cache1 versionid: 2015.1.4.803.0.16768 ...
# cat /opt/cache1/mgr/cconsole.log | grep Allocated
...
01/27/17-16:41:57:276 (1425) 0 Allocated 1242MB shared memory using Huge Pages: 1024MB global buffers, 64MB routine buffers
# grep -e AnonHugePages /proc/*/smaps | awk '{ if($2>4) print $0} ' | awk -F "/" '{print $0; system("ps -fp " $3)} '
...
/proc/165553/smaps:AnonHugePages: 2048 kB
UID PID PPID C STIME TTY TIME CMD
cacheusr 165553 1524 0 фев07 ? 00:00:00 cache -s/opt/cache1/mgr -cj -p18 SuperServer^%SYS.SERVER
...

Hi Alexey,

Thank you for your comment. Yes, both THP and traditional/reserved huge pages can be used at the same time; however, there is no benefit, and in fact systems with many (thousands of) Caché processes, especially with a lot of process creation, have shown a performance penalty in testing. The overhead of instantiating the THP for those processes at a high rate can be noticeable. Your application may not exhibit this scenario and may be fine. The goal of this article is to provide guidance for those who may not know which option is best to choose, and to point out that this is a change in recent Linux distributions. You may find that THP usage is perfectly fine for your application. There is no replacement for actually testing and benchmarking your application. :)

Kind regards,
Mark B-

Of course there is no replacement for actual testing.
What I am trying to say is that, had I started reading the article straight through instead of skimming and jumping to the how-to-check section, I probably would have read at the top "various Linux distributions introduced Transparent HugePages with the 2.6.38 kernel" and stopped, because my kernel is older than that. I really think the current wording will lead people who work at shops still rolling out new builds on RHEL or CentOS 6 not to use the ideal settings. Maybe a rearrangement of the first three paragraphs would make this clearer, with a sentence that reads something like: "This was first introduced in Red Hat Enterprise Linux 6, SUSE Linux Enterprise Server 11, and Oracle Linux 6, and then later introduced in many other Linux variants with the 2.6.38 kernel."

Additionally, it might make things clearer to mention that the item in brackets is the current setting; on my Red Hat the line reads "[always] madvise never". It might also be useful to mention what to do when the Transparent HugePages setting is madvise.

Hi Alexander,

Thank you for your post. We are only relying on what the RH documentation states as to when THP was introduced into the mainstream kernel (2.6.38) and enabled by default, as noted in the RH post you referenced. The option may have existed in previous kernels (although I would not recommend trying it), where it may not have been enabled by default. All the documentation I can find on THP support in RH references the 2.6.38 kernel, where it was a merged feature. If you are finding it in previous kernels, confirm whether THP is enabled by default there. That would be interesting to know. Unfortunately, there isn't much we can do other than the checks for enablement mentioned in the post.
As the ultimate confirmation, RH and the other Linux distributors would need to update their documentation to confirm when this behavior was enacted in the respective kernel versions. As I mentioned in other comments, the use of THP is not necessarily a bad thing and won't cause "harm" to a system, but there may be performance impacts for applications that do a large amount of process creation.

Kind regards,
Mark B-

I will revise the post to be clearer that THP is enabled by default in the 2.6.38 kernel but may be available in prior kernels, and to reference your respective Linux distribution's documentation for confirming and changing the setting. Thanks for your comments.

I do not claim to be a huge pages expert, but I have been doing some more reading on Transparent HugePages and the madvise option. The following is untested and unverified. It seems that if you are running kernel 2.6.38 or newer, you may be able to use madvise instead of never for the Transparent HugePages setting. According to http://manpages.ubuntu.com/manpages/trusty/man2/madvise.2.html, the 2.6.38 kernel's madvise has a MADV_HUGEPAGE option that allows applications to enable Transparent HugePages. If no MADV_* flag is set, it defaults to MADV_NORMAL, i.e. no special treatment. I believe this means that transparent huge pages should be off by default. If you are using RHEL 6, or probably most of its derivatives, even though they have a madvise setting for Transparent HugePages, it appears RHEL did not backport the MADV_HUGEPAGE option to their madvise/kernel (at least 2.6.32-504.81 and lower), so you have to set the box's Transparent HugePages to never.
(See the man page in RHEL 6 with kernel 2.6.32-504.8.1, which lacks MADV_HUGEPAGE, and Bradley Kuszmaul's 5/8/13 post at https://groups.google.com/forum/#!topic/tokudb-dev/_1YNBMlHftU.) RHEL 7 and its derivatives run the 3.x kernel, and their man pages show a MADV_HUGEPAGE option, so it looks like you can set the box to madvise and it will not use transparent huge pages. Once again, I am not a Transparent HugePages expert and have not done any testing to verify the validity of this.

Please update this a bit, as RHEL 8.6 keeps track of whether Transparent HugePages is enabled in cat /sys/kernel/mm/transparent_hugepage/enabled, and not in the redhat_transparent_hugepage path that older versions did.
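As the comments above note, the active mode is the bracketed token in the enabled file. A small self-contained sketch of extracting it (the sample strings are illustrative; on a real host you would read /sys/kernel/mm/transparent_hugepage/enabled, or the redhat_transparent_hugepage path on older RHEL):

```python
# Determine the active THP mode from a line like "[always] madvise never".
# The kernel marks the currently active mode with square brackets.
def active_thp_mode(enabled_line):
    """Return the bracketed mode, or None if no mode is marked."""
    for token in enabled_line.split():
        if token.startswith("[") and token.endswith("]"):
            return token[1:-1]
    return None

print(active_thp_mode("[always] madvise never"))  # always
print(active_thp_mode("always madvise [never]"))  # never
```

On a system following the article's recommendation, the parsed mode after a reboot with transparent_hugepage=never should come back as "never".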
Article
Sergey Mikhailenko · Jan 23, 2018

Recommendations on installing the InterSystems Caché DBMS for a production environment

This article was written as an attempt to share the experience of installing the InterSystems Caché DBMS for a production environment. We all know that the development configuration of a DBMS is very different from real-life conditions. As a rule, development is carried out in "hothouse conditions" with a bare minimum of security measures, but when we publish our project online, we must ensure its reliable and uninterrupted operation in a very aggressive environment.

## The process of installing the InterSystems Caché DBMS with maximum security settings

**OS security settings**

**The first step is the operating system. You need to do the following:**

> * Minimize the rights of the technical account of the Caché DBMS
> * Rename the administrator account of the local computer
> * Leave only the necessary minimum of users in the OS
> * Install security updates for the OS and used services in a timely manner
> * Use and regularly update anti-virus software
> * Disable or delete unused services
> * Limit access to database files
> * Limit the rights to Caché data files (leave owner and DB admin rights only)

__For UNIX/Linux systems, create the following group and user types prior to installation:__

> * Owner user ID
> * Management group ID
> * Internal Caché system group ID
> * Internal Caché user ID

__InterSystems Caché installation-time security settings__

__InterSystems, the DBMS developer, strongly recommends deploying applications on Caché 2015.2 and newer versions only.__

__During installation, you need to perform the following actions:__

> * Select the "Locked Down" installation mode
> * Select the "Custom Setup" option, then select only the bare minimum of components required for the application
> * Choose a SuperServer port different from the standard TCP port 1972
> * Specify a port for the internal web server different from the standard TCP port 57772
> * Specify a Caché instance location path that is different
from the standard one (the default for Windows systems is C:\InterSystems\Cache, for UNIX/Linux systems /usr/Cachesys)

**Post-installation Caché security settings**

**The following actions need to be performed after installation (most of them are already performed in the "Locked Down" installation mode):**

> * All services and resources that are not used by the application should be disabled
> * For services using network access, the IP addresses that can be used for remote interaction must be explicitly specified
> * Unused CSP web applications must be disabled
> * Access without authentication and authorization must be disabled
> * Access to the CSP Gateway must be password-protected and restricted
> * Audit must be enabled
> * The data encryption option must be enabled for the configuration file
> * To ensure the security of system settings, Security Advisor must be launched from the management portal and its recommendations must be followed

__[Home] > [Security Management] > [Security Advisor]__

[For services (Services section):](http://docs.intersystems.com/ens20172/csp/docbook/DocBook.UI.Page.cls?KEY=GCAS_secmgmt_secadv_svcs)

>**Ability to set % globals should be turned off** — the ability to modify % globals must be disabled, since such globals are often used for system code, and modification of such variables can lead to unpredictable consequences.

>**Unauthenticated should be off** — unauthenticated access must be disabled. Unauthenticated access to a service makes it accessible to all users.

>**Service should be disabled unless required** — if a service is not used, it must be disabled. Access to any service that is not used by an application can provide an unjustifiably high level of access to the entire system.
>**Service should use Kerberos authentication** — access through any other authentication mechanism does not provide the maximum level of security.

>**Service should have client IP addresses assigned** — the IP addresses of connections to the services must be specified explicitly. Limiting the list of IP addresses that are allowed to connect gives you greater control over connections to Caché.

>**Service is Public** — public services allow all users, including the UnknownUser account that requires no authentication, to get unregulated access to Caché.

[Applications (CSP, Privileged Routine, and Client Applications)](http://docs.intersystems.com/ens20172/csp/docbook/DocBook.UI.Page.cls?KEY=GCAS_secmgmt_secadv_apps)

>**Application is Public** — public applications allow all users, including the UnknownUser account that requires no authentication, to get unregulated access to Caché.

>**Application conditionally grants the %All role** — a system cannot be considered secure if an application can potentially delegate all privileges to its users. Applications should not delegate all privileges.

>**Application grants the %All role** — the application explicitly delegates all privileges to its users. Applications should not delegate all privileges.

# 1. Managing users

## 1.1 Managing system accounts

You need to make sure that unused system accounts are disabled or deleted, and that the passwords of used system accounts are changed. To identify such accounts, use the Security Advisor component of the management portal. Go to the management portal here: __[Home] > [Security Management] > [Security Advisor]__. Change the corresponding users' passwords for all records in the Users section where Recommendations = "Password should be changed from default password".
![Form_Advisor](https://habrastorage.org/webt/0v/mi/z6/0vmiz6lojt41q5ps66bkhgqg5py.jpeg)

## 1.2 Managing privileged accounts

If the DBMS has several administrators, a personal account should be created for each of them, with just the minimum of privileges required for their job.

## 1.3 Managing rights and privileges

When delegating access rights, use the minimum-privileges principle: forbid everything, then grant only the bare minimum of rights required for a particular role. When granting privileges, use a role-based approach: assign rights to a role, not to a user, then assign the role to the necessary user.

## 1.4 Delegation of access rights

To check the security settings related to access rights delegation, launch Security Advisor. You need to perform the following actions, depending on the recommendations provided by Security Advisor.

For roles:

>**This role holds privileges on the Auditing database** — this role has privileges for accessing the auditing database. Read access makes it possible to use audit data in an inappropriate way; write access makes it possible to compromise the audit data.

>**This role holds the %Admin_Secure privilege** — this role includes the %Admin_Secure resource, which allows its holder to change access privileges for any user.

>**This role holds WRITE privilege on the CACHESYS database** — this role allows users to write to the CACHESYS system database, making it possible to change the system code and Caché system settings.

For users:

>**At least 2 and at most 5 users should have the %All role** — at least 2 and no more than 5 users should have the %All role. Too few users with this role may cause access problems during emergencies; too many may jeopardize the overall security of the system.

>**This user holds the %All role** — this user has the %All role. You need to verify the necessity of assigning this role to the user.
>**UnknownUser account should not have the %All role** — the system cannot be considered secure if UnknownUser has the %All role.

>**Account has never been used** — this account has never been used. Unused accounts may be used for unauthorized access to the system.

>**Account appears dormant and should be disabled** — the account is inactive and must be disabled. Inactive accounts (ones that haven't been used for 30 days) may be used for unauthorized access.

>**Password should be changed from default password** — the default password value must be changed.

After deleting a user, make sure that the roles and privileges created by this user have also been deleted, if they are no longer required.

## 1.5 Configuring the password policy

Password case sensitivity is enabled in Caché by default. The password policy is applied through the following section of the management portal: __[Home] > [Security Management] > [System Security Settings] > [System-wide Security Parameters]__. The required password complexity is configured by specifying a password template in the Password Pattern parameter. By default, maximum security uses Password Pattern = 8.32ANP, which means that passwords must be 8 to 32 characters long and consist of numbers, letters, and punctuation marks. The "Password validation routine" parameter is used for invoking specific password validity checking algorithms. A detailed description is provided in [1], section "Password Strength and Password Policies". In addition to the internal mechanisms, authentication in Caché can be delegated to the operating system, Kerberos, or LDAP servers.

Just recently, I had to check whether the Caché DBMS complied with the new edition of PCI DSS 3.2, the main security standard of the bank card industry, adopted in April 2016.
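For illustration only, here is a rough Python approximation of the 8.32ANP pattern described above. This is not Caché's pattern engine; it simply checks a length of 8 to 32 and that every character is alphabetic (A), numeric (N), or punctuation (P):

```python
import string

# Characters permitted by the A (alpha), N (numeric), and P (punctuation)
# classes, approximated with Python's ASCII character sets.
ALLOWED = set(string.ascii_letters + string.digits + string.punctuation)

def matches_8_32_anp(password):
    """Rough stand-in for the Caché pattern 8.32ANP: 8-32 allowed chars."""
    return 8 <= len(password) <= 32 and all(c in ALLOWED for c in password)

print(matches_8_32_anp("S3cure!Pass"))     # True
print(matches_8_32_anp("short"))           # False: fewer than 8 characters
print(matches_8_32_anp("has spaces 123"))  # False: space is not in A, N, or P
```

A custom "Password validation routine", as mentioned above, is where stricter rules (for example, requiring at least one character from each class) would actually be enforced on the server.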
**Compliance of Caché DBMS security settings with the requirements of the PCI DSS version 3.2 [5] standard**

![Table1](https://community.intersystems.com/sites/default/files/inline/images/tt1.jpg)

![Table2](https://community.intersystems.com/sites/default/files/inline/images/t3.jpg)

## 1.6 Terminating inactive database connections

The disconnect settings for inactive user sessions depend on the type of connection to Caché. For SQL and object access via TCP, the parameter is set in the **[Home] > [Configuration] > [SQL Settings] > [General SQL Settings]** section of the management portal. Look for the "TCP Keep Alive interval (in seconds)" parameter and set it to 900, which corresponds to 15 minutes. For web access, this parameter is specified in the "No Activity Timeout" field under **[Home] > [Configuration] > [CSP Gateway Management]**. Replace the default value with 900 seconds and enable the "Apply time-out to all connections" option.

# 2 Event logging

## 2.1 General settings

Auditing must be enabled for the entire Caché DBMS instance. To check it, open the system management portal, navigate to the auditing management page (**[Home] > [Security Management] > [Auditing]**) and make sure that the "Disable Auditing" option is available and "Enable Auditing" is unavailable; the opposite means that auditing is disabled. If auditing is disabled, enable it by selecting the "Enable Auditing" command from the menu.

You can view the event log through the system management portal: **[Home] > [Security Management] > [Auditing] > [View Audit Database]**

![Form_Audit](https://hsto.org/webt/ih/2u/2m/ih2u2mmhb8_dddk6vjstyfhqmqw.jpeg)

There are also system classes (utilities) for viewing the event log. The log contains, among others, the following fields:

- Date and time
- Event type
- Account name (user identification)

Access to audit data is managed by the %DB_CACHEAUDIT resource.
To disable public access to this resource, make sure that both Read and Write operations are unchecked for Public access in its properties. The list of resources is available through the system management portal: **[Home] > [Security Management] > [Resources] > [Edit Resource]**; select the necessary resource, then click the Edit link. By default, the %DB_CACHEAUDIT resource has a role of the same name, %DB_CACHEAUDIT. To limit access to the logs, define the list of users holding this role, which can be done in the system management portal: **[Home] > [Security Management] > [Roles] > [Edit Role]**, then use the Edit button for the %DB_CACHEAUDIT role.

## 2.2 List of logged event types

### 2.2.1 Logging of access to tables containing bank card details (PCI DSS 10.2.1)

Logging of access to tables (datasets) containing bank card data is performed with the help of the following mechanisms:

>1. The system auditing mechanism, which makes records of the "ResourceChange" type whenever access rights are changed for a resource responsible for storing bank card information (the audit log is available from the system management portal: [Home] > [Security Management] > [Auditing] > [View Audit Database]);
>2. On the application level, it is possible to log access to a particular record by registering an application event in the system and calling it from your application when the corresponding event takes place: **[System] > [Security Management] > [Configure User Audit Events] > [Edit Audit Event]**

### 2.2.2 Logging attempts to use administrative privileges (PCI DSS 10.2.2)

The Caché DBMS logs the actions of all users; logging is configured by specifying the events that should be audited under **[Home] > [Security Management] > [Auditing] > [Configure System Events]**. Logging of all system events needs to be enabled.
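The resource/role/user chain described above (the %DB_CACHEAUDIT resource is granted to the %DB_CACHEAUDIT role, which in turn is held by named users) can be pictured with a toy model. The names and functions below are purely illustrative and not part of any Caché security API:

```python
# Toy model of the resource -> role -> user scheme described above
# (illustrative only; not the Caché security API).
roles = {"%DB_CACHEAUDIT": {"auditor", "admin"}}    # role -> its members
resources = {"%DB_CACHEAUDIT": {"%DB_CACHEAUDIT"}}  # resource -> roles that hold it

def can_access(user: str, resource: str) -> bool:
    """A user may access a resource only via membership in a role holding it;
    with Public access removed, everyone else is denied by default."""
    return any(user in roles.get(role, set())
               for role in resources.get(resource, set()))

print(can_access("auditor", "%DB_CACHEAUDIT"))  # member of the role: allowed
print(can_access("guest", "%DB_CACHEAUDIT"))    # no role, no Public access: denied
```

The point of the model: once Public Read/Write is removed, the role membership list is the single place that decides who can see the audit data.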
### 2.2.3 Logging of event log changes (PCI DSS 10.2.3)

The Caché DBMS uses a single audit log that cannot be modified, except for the natural growth of its content, error entries, log purging, and changes to the set of audited events, all of which add corresponding entries to the log. Logging of the AuditChange event is accomplished by enabling the auditing of all events (see 2.2.2).

### 2.2.4 Logging of all unsuccessful attempts to obtain logical access (PCI DSS 10.2.4)

Logging of unsuccessful attempts to obtain logical access is accomplished by enabling the auditing of all events (see 2.2.2). When an unsuccessful attempt is registered, a "LoginFailure" event is created in the audit log.

### 2.2.5 Logging of attempts to obtain access to the system (PCI DSS 10.2.5)

Logging of attempts to access the system is accomplished by enabling the auditing of all events (see 2.2.2). An unsuccessful attempt creates a "LoginFailure" event in the audit log; a successful log-in creates a "Login" event.

### 2.2.6 Logging of audit parameter changes (PCI DSS 10.2.6)

Logging of changes in audit parameters is accomplished by enabling the auditing of all events (see 2.2.2). When the audit settings are changed, an "AuditChange" event is created in the audit log.

### 2.2.7 Logging of the creation and deletion of system objects (PCI DSS 10.2.7)

The Caché DBMS logs the creation, modification, and removal of the following system objects: roles, privileges, resources, and users. This is accomplished by enabling the auditing of all events (see 2.2.2). When a system object is created, changed, or removed, the following events are added to the audit log: "ResourceChange", "RoleChange", "ServiceChange", "UserChange".

## 2.3 Protection of event logs

You need to make sure that access to the %DB_CACHEAUDIT resource is restricted.
That is, only the administrator and those responsible for log monitoring should have read and write rights to this resource.

Following the recommendations above, I managed to install Caché in the maximum security mode. To demonstrate compliance with the requirements of PCI DSS section 8.2.5, "Forbid the use of old passwords", I created a small program that the system launches whenever a user attempts to change the password, validating that the new password has not been used before. To install this program, import the source code using Caché Studio, Atelier, or the class import page of the management portal.

```
ROUTINE PASSWORD
PASSWORD ; password verification program
#include %occInclude
CHECK(Username,Password) PUBLIC {
	if '$match(Password,"(?=.*[0-9])(?=.*[a-zA-Z]).{7,}") quit $$$ERROR($$$GeneralError,"Password does not match the standard PCI_DSS_v3.2")
	set Remember=4 ; the number of most recent passwords that cannot be reused according to PCI DSS
	set GlobRef="^PASSWORDLIST" ; the name of the global
	set PasswordHash=$System.Encryption.SHA1Hash(Password)
	if $d(@GlobRef@(Username,"hash",PasswordHash)) {
		quit $$$ERROR($$$GeneralError,"This password has already been used")
	}
	set hor=""
	for i=1:1 {
		; traverse the nodes chronologically from newest to oldest
		set hor=$order(@GlobRef@(Username,"datetime",hor),-1)
		quit:hor=""
		; delete the old entries that are over the limit
		if i>(Remember-1) {
			set hash=$g(@GlobRef@(Username,"datetime",hor))
			kill @GlobRef@(Username,"datetime",hor)
			kill:hash'="" @GlobRef@(Username,"hash",hash)
		}
	}
	; save the current one
	set @GlobRef@(Username,"hash",PasswordHash)=$h
	set @GlobRef@(Username,"datetime",$h)=PasswordHash
	quit $$$OK
}
```

Let's save the name of the program in the management portal.

![Form_Password](https://habrastorage.org/webt/fh/27/s_/fh27s_lr75atpdmgg1enviw9klg.jpeg)

It so happened that my production configuration differed from the test one not only in terms of security but also in terms of users.
In my case, there were thousands of them, which made it impossible to create a new user by copying settings from an existing one.

![Form_EditUser](https://habrastorage.org/webt/fz/bu/jx/fzbujxdud_5fi40qlbxixtf20_i.jpeg)

The DBMS developers limited the list output to 1000 elements. After talking to [the InterSystems WRC technical support service](https://login.intersystems.com/login/SSO.UI.Login.cls?referrer=https%253A//wrc.intersystems.com/wrc/login.csp), I learned that the limit could be raised by creating a special global node in the system area using the following command:

```
%SYS>set ^CacheTemp.MgtPortalSettings($Username,"MaxUsers")=5000
```

This is how you can increase the number of users shown in the dropdown list. I explored this global a bit and found a number of other useful settings of the current user. However, there is a certain inconvenience: this global is mapped to the temporary CacheTemp database and is removed when the system restarts. The problem can be solved by saving this global before shutting down the system and restoring it after the system restarts. To this end, I wrote [two programs](http://docs.intersystems.com/ens20172/csp/docbook/DocBook.UI.Page.cls?KEY=GSTU_customize#GSTU_customize_startstop), ^%ZSTART and ^%ZSTOP, with the required functionality.

The source code of the %ZSTOP program:

```
%ZSTOP() {
	Quit
}
/// save users' preferences in a non-killable global
SYSTEM() Public {
	merge ^tmpMgtPortalSettings=^CacheTemp.MgtPortalSettings
	quit
}
```

The source code of the %ZSTART program:

```
%ZSTART() {
	Quit
}
/// restore users' preferences from a non-killable global
SYSTEM() Public {
	if $data(^tmpMgtPortalSettings) merge ^CacheTemp.MgtPortalSettings=^tmpMgtPortalSettings
	quit
}
```

Going back to security and the requirements of the standard, we can't ignore the backup procedure. The PCI DSS standard imposes certain requirements for backing up both data and event logs.
In Caché, all logged events are saved to the CACHEAUDIT database, which can be included in the list of backed up databases along with the other ones. The Caché DBMS comes with several pre-configured backup jobs, but they didn't always work for me. Every time I needed something particular for a project, it wasn't there in the "out-of-the-box" jobs. In one project, I had to automate the control over the number of backup copies with an option of automatic purging of the oldest ones. In another project, I had to estimate the size of the future backup file. In the end, I had to write my own backup task.

CustomListBackup.cls

```
Include %occKeyword

/// Backup task class
Class App.Task.CustomListBackup Extends %SYS.Task.Definition [ LegacyInstanceContext ]
{

/// If ..AllDatabases=1, include all databases in the backup; ..PrefixIncludeDB and ..IncludeDatabases are ignored
Property AllDatabases As %Integer [ InitialExpression = 0 ];

/// If ..AllDatabases=1, include all databases in the backup, excluding those in ..IgnoreForAllDatabases (comma-delimited)
Property IgnoreForAllDatabases As %String(MAXLEN = 32000) [ InitialExpression = "Not applied if AllDatabases=0 " ];

/// If ..IgnoreTempDatabases=1, exclude temporary databases
Property IgnoreTempDatabases As %Integer [ InitialExpression = 1 ];

/// If ..IgnorePreparedDatabases=1, exclude pre-installed databases
Property IgnorePreparedDatabases As %Integer [ InitialExpression = 1 ];

/// If ..AllDatabases=0 and PrefixIncludeDB is not empty, back up all databases whose names start with ..PrefixIncludeDB
Property PrefixIncludeDB As %String [ SqlComputeCode = {S {*}=..ListNS()}, SqlComputed ];

/// If ..AllDatabases=0, back up all databases from ..IncludeDatabases (comma-delimited)
Property IncludeDatabases As %String(MAXLEN = 32000) [ InitialExpression = {"Not applied if AllDatabases=1"_..ListDB()} ];

/// Name of the task on the general list
Parameter TaskName = "CustomListBackup";

/// Path for the backup file
Property DirBackup As %String(MAXLEN = 1024) [ InitialExpression = {##class(%File).NormalizeDirectory("Backup")} ];

/// Path for the log
Property DirBackupLog As %String(MAXLEN = 1024) [ InitialExpression = {##class(%File).NormalizeDirectory("Backup")} ];

/// Backup type (Full, Incremental, Cumulative)
Property TypeBackup As %String(DISPLAYLIST = ",Full,Incremental,Cumulative", VALUELIST = ",Full,Inc,Cum") [ InitialExpression = "Full", SqlColumnNumber = 4 ];

/// Backup file name prefix
Property PrefixBackUpFile As %String [ InitialExpression = "back" ];

/// The maximum number of backup files; the oldest ones are deleted
Property MaxBackUpFiles As %Integer [ InitialExpression = 3 ];

ClassMethod DeviceIsValid(Directory As %String) As %Status
{
	If '##class(%Library.File).DirectoryExists(Directory) quit $$$ERROR($$$GeneralError,"Directory does not exist")
	quit $$$OK
}

ClassMethod CheckBackup(Device, MaxBackUpFiles, del = 0) As %Status
{
	set path=##class(%File).NormalizeFilename(Device)
	quit:'##class(%File).DirectoryExists(path) $$$ERROR($$$GeneralError,"Folder "_path_" does not exist")
	set max=MaxBackUpFiles
	set result=##class(%ResultSet).%New("%File:FileSet")
	set st=result.Execute(path,"*.cbk",,1)
	while result.Next() {
		If result.GetData(2)="F" {
			continue:result.GetData(3)=0
			set ts=$tr(result.GetData(4),"-: ")
			set ts(ts)=$lb(result.GetData(1),result.GetData(3))
		}
	}
	#; Traverse all the files starting from the newest one
	set i=""
	for count=1:1 {
		set i=$order(ts(i),-1)
		quit:i=""
		#; Get the increase in bytes as the size difference with the previous backup
		if $data(size),'$data(delta) set delta=size-$lg(ts(i),2)
		#; Get the size of the most recent backup file in bytes
		if '$data(size) set size=$lg(ts(i),2)
		#; If the number of backup files is larger than or equal to the upper limit, delete the oldest ones along with their logs
		if count'<max,del {
			set name=$lg(ts(i),1)
			do ##class(%File).Delete(name)
			do ##class(%File).Delete($e(name,1,*-4)_".log")
		}
	}
	#; Make sure the estimated size of the new backup file fits into the available disk space (in bytes)
	do ##class(%File).GetDirectorySpace(path,.free,.total,0)
	if ($g(size)+$g(delta))>$g(free) quit $$$ERROR($$$GeneralError,"Estimated size of the new backup file is larger than the available disk space:("_$g(size)_"+"_$g(delta)_")>"_$g(free))
	quit $$$OK
}

Method OnTask() As %Status
{
	do $zu(5,"%SYS")
	set list=""
	merge oldDBList=^SYS("BACKUPDB")
	kill ^SYS("BACKUPDB")
	#; Adding new properties for the backup task
	set status=$$$OK
	try {
		##; Check the number of database copies and delete the oldest one, if necessary
		##; Check the remaining disk space and estimate the size of the new file
		set status=..CheckBackup(..DirBackup,..MaxBackUpFiles,1)
		quit:$$$ISERR(status)
		#; All databases
		if ..AllDatabases {
			set vals=""
			set disp=""
			set rss=##class(%ResultSet).%New("Config.Databases:List")
			do rss.Execute()
			while rss.Next(.sc) {
				if ..IgnoreForAllDatabases'="",(","_..IgnoreForAllDatabases_",")[(","_$zconvert(rss.Data("Name"),"U")_",") continue
				if ..IgnoreTempDatabases continue:..IsTempDB(rss.Data("Name"))
				if ..IgnorePreparedDatabases continue:..IsPreparedDB(rss.Data("Name"))
				set ^SYS("BACKUPDB",rss.Data("Name"))=""
			}
		}
		else {
			#; If the PrefixIncludeDB property is not empty, back up all databases with names starting with ..PrefixIncludeDB
			if ..PrefixIncludeDB'="" {
				set rss=##class(%ResultSet).%New("Config.Databases:List")
				do rss.Execute(..PrefixIncludeDB_"*")
				while rss.Next(.sc) {
					if ..IgnoreTempDatabases continue:..IsTempDB(rss.Data("Name"))
					set ^SYS("BACKUPDB",rss.Data("Name"))=""
				}
			}
			#; Include particular databases in the list
			if ..IncludeDatabases'="" {
				set rss=##class(%ResultSet).%New("Config.Databases:List")
				do rss.Execute("*")
				while rss.Next(.sc) {
					if ..IgnoreTempDatabases continue:..IsTempDB(rss.Data("Name"))
					if (","_..IncludeDatabases_",")'[(","_$zconvert(rss.Data("Name"),"U")_",") continue
					set ^SYS("BACKUPDB",rss.Data("Name"))=""
				}
			}
		}
		do ..GetFileName(.backFile,.logFile)
		set typeB=$zconvert($e(..TypeBackup,1),"U")
		set:"FIC"'[typeB typeB="F"
		set res=$$BACKUP^DBACK("",typeB,"",backFile,"Y",logFile,"NOINPUT","Y","Y","","","")
		if 'res set status=$$$ERROR($$$GeneralError,"Error: "_res)
	} catch {
		set status=$$$ERROR($$$GeneralError,"Error: "_$ze)
		set $ze=""
	}
	kill ^SYS("BACKUPDB")
	merge ^SYS("BACKUPDB")=oldDBList
	quit status
}

/// Get file names
Method GetFileName(aBackupFile, ByRef aLogFile) As %Status
{
	set tmpName=..PrefixBackUpFile_"_"_..TypeBackup_"_"_$s(..AllDatabases:"All",1:"List")_"_"_$zd($h,8)_$tr($j($i(cnt),3)," ",0)
	do {
		s aBackupFile=##class(%File).NormalizeFilename(..DirBackup_"/"_tmpName_".cbk")
	} while ##class(%File).Exists(aBackupFile)
	set aLogFile=##class(%File).NormalizeFilename(..DirBackupLog_"/"_tmpName_".log")
	quit 1
}

/// Check if the database is pre-installed
ClassMethod IsPreparedDB(name)
{
	if (",ENSDEMO,ENSEMBLE,ENSEMBLEENSTEMP,ENSEMBLESECONDARY,ENSLIB,CACHESYS,CACHELIB,CACHETEMP,CACHE,CACHEAUDIT,DOCBOOK,USER,SAMPLES,")[(","_$zconvert(name,"U")_",") quit 1
	quit 0
}

/// Check if the database is temporary
ClassMethod IsTempDB(name)
{
	quit:$zconvert(name,"U")["TEMP" 1
	quit:$zconvert(name,"U")["SECONDARY" 1
	quit 0
}

/// Get a comma-delimited list of databases
ClassMethod ListDB()
{
	set list=""
	set rss=##class(%ResultSet).%New("Config.Databases:List")
	do rss.Execute()
	while rss.Next(.sc) {
		set list=list_","_rss.Data("Name")
	}
	quit list
}

ClassMethod ListNS() [ Private ]
{
	set disp=""
	set tRS = ##class(%ResultSet).%New("Config.Namespaces:List")
	set tSC = tRS.Execute()
	While tRS.Next() {
		set disp=disp_","_tRS.GetData(1)
	}
	set %class=..%ClassName(1)
	$$$comSubMemberSet(%class,$$$cCLASSproperty,"PrefixIncludeDB",$$$cPROPparameter,"VALUELIST",disp)
	quit ""
}

ClassMethod oncompile() [ CodeMode = generator ]
{
	$$$defMemberKeySet(%class,$$$cCLASSproperty,"PrefixIncludeDB",$$$cPROPtype,"%String")
	set updateClass=##class("%Dictionary.ClassDefinition").%OpenId(%class)
	set updateClass.Modified=0
	do updateClass.%Save()
	do updateClass.%Close()
}

}
```

All our major concerns are addressed here: limiting the number of copies, removing old copies, estimating the size of the new file, and different ways of selecting or excluding databases from the list. Let's import it into the system and create a new task using the Task manager.
And include the database into the list of copied databases.

All of the examples above are provided for Caché 2016.1 and are intended for educational purposes only. They should only be used in a production system after serious testing. I will be happy if this code helps you do your job better or avoid making mistakes.

[Github repository](https://github.com/SergeyMi37/ExampleBackupTask)

>The following materials were used for writing this article:
>1. Caché Security Administration Guide (InterSystems)
>2. Caché Installation Guide. Preparing for Caché Security (InterSystems)
>3. Caché System Administration Guide (InterSystems)
>4. Introduction to Caché. Caché Security (InterSystems)
>5. PCI DSS.RU. Requirements and the security audit procedure. Version 3.2

Great article! I found this too...

"The Caché DBMS comes with several pre-configured backup jobs, but they didn't always work for me. Every time I needed something particular for a project, it wasn't there in the 'out-of-the-box' jobs. In one project, I had to automate the control over the number of backup copies with an option of automatic purging of the oldest ones."

I thought I'd share the class we use - this only deletes old backups:

```
Class App.PurgeOldBackupFiles Extends %SYS.Task.Definition
{

Property BackupsToKeep As %Integer(MAXVAL = 30, MINVAL = 1) [ InitialExpression = 30, Required ];

Property BackupFolder As %String [ Required ];

Property BackupFileType As %String [ Required ];

Method OnTask() As %Status
{
	//s BackupsToKeep = 2
	//s Folder = "c:\backupfolder"
	//s BackupFileType = "FullAllDatabases" // or "FullDBList"
	s SortOrder = "DateModified"
	If ..BackupsToKeep<1 Quit $$$ERROR($$$GeneralError,"Invalid - Number of Backups to Keep must be greater than or equal to 1")
	If ..BackupFolder="" Quit $$$ERROR($$$GeneralError,"Invalid - BackupFolder - not supplied")
	if ..BackupFileType = "" Quit $$$ERROR($$$GeneralError,"Invalid - BackupFileType - not supplied")
	if (..BackupFileType '= "FullAllDatabases")&&(..BackupFileType '= "FullDBList") Quit $$$ERROR($$$GeneralError,"Invalid - BackupFileType")
	s BackupCount=0
	k zPurgeOldBackupFiles(..BackupFileType)
	Set rs=##class(%ResultSet).%New("%Library.File:FileSet")
	w !,"backuplist",!
	s BackupFileWildcard = ..BackupFileType _ "*.cbk"
	set status=rs.Execute(..BackupFolder, BackupFileWildcard, SortOrder)
	WHILE rs.Next() {
		Set FullFileName=rs.Data("Name")
		Set FName=##class(%File).GetFilename(FullFileName)
		Set FDateTime=##class(%File).GetFileDateModified(FullFileName)
		w "File "_FName_" "_FDateTime,!
		Set FDate=$PIECE(FDateTime,",")
		Set CDate=$PIECE($H,",")
		s BackupCount=$I(BackupCount)
		s zPurgeOldBackupFiles(..BackupFileType, BackupCount)=FullFileName
	}
	s zPurgeOldBackupFiles(..BackupFileType, "BackupCount")=BackupCount
	do rs.Close()
	if BackupCount > ..BackupsToKeep {
		for i=1:1:BackupCount-..BackupsToKeep {
			s FullFileName = zPurgeOldBackupFiles(..BackupFileType, i)
			d ##class(%File).Delete(FullFileName)
			w "File Purged "_FullFileName_" Purged",!
		}
	}
	q status
}

}
```

Thanks!
Announcement
David Reche · Jan 22, 2018

InterSystems Iberia Summit February 15th in Barcelona

Come to Barcelona and join us!

**Agenda**

09:00 – 09:30 Registration

**General/Management Sessions**

- 09:30 – 09:45 Welcome (Jordi Calvera)
- 09:45 – 10:15 Your Success – Our Success (Jordi Calvera)
- 10:15 – 11:00 Choosing a DBMS to Build Something that Matters of the Third Platform (IDC, Philip Carnelley)
- 11:00 – 11:45 InterSystems IRIS Data Platform (Industries & in Action) (Joe Lichtenberg)
- 11:45 – 12:15 Café
- 12:15 – 13:00 InterSystems Guide to the Data Galaxy (Benjamin de Boe)
- 13:00 – 13:20 InterSystems Worldwide Support: A Competitive Advantage (Stefan Schulte Strathaus)
- 13:20 – 13:50 Developers Community Meet Up (Evgeny Shvarov & Francisco J. López)
- 13:50 – 14:00 Morning Sessions Closing (Jordi Calvera)
- 14:00 – 15:15 Lunch & Networking

**Technical Sessions**

- 15:15 – 16:00 Docker Containers (José Tomás Salvador)
- 16:00 – 16:30 ICM – InterSystems Cloud Manager (Luca Ravazzolo)
- 16:30 – 17:00 Scalability & Sharding (Pierre Yves Duquesnoy)
- 17:00 – 17:15 Coffee Break
- 17:15 – 17:45 Interacting Faster and with More Technologies (David Reche)
- 17:45 – 18:15 Atelier: Fast and Intuitive Development Environment (Alberto Fuentes)
- 18:15 Q&A Panel (Jordi Calvera), Closing & Build Walkout Video

David, I edited your post, just to give a bit more information, and to be more than just Twitter.

Don't miss my session ;)

Thanks Dmitry, it's a good idea.

Of course!! For sure the best one.

I'll be there, sure.
Announcement
James Schultz · Jun 14, 2018

InterSystems on DeveloperWeek NYC, June 18-20

Hi Community!

Come join us at DeveloperWeek in NYC on June 18-20!

InterSystems has signed on for a high-level sponsorship and exhibitor space at this year's DeveloperWeek, billed as "New York City's Largest Developer Conference & Expo". This is the first time we have participated in the event, which organizers expect will draw more than 3,000 developers from June 18th to 20th.

"The world is changing rapidly, and our target audience is far more diverse in terms of roles and interests than it used to be... DeveloperWeek NYC is a gathering of people who create applications for a living, and we want developers to see the power and capabilities of InterSystems. We need to know them, and they need to know us, as our software can be a foundation for their success," says Marketing Vice President Jim Rose.

The main feature at InterSystems booth 812 is the new sandbox experience for InterSystems IRIS Data Platform™. Meanwhile, Director of Data Platforms Product Management @Iran.Hutchinson is delivering two presentations on the conference agenda. One, "GraalVM: What it is. Why it's important. Experiments you should try today", will be on the Main Stage on June 19 between 11:00 a.m. and 11:20 a.m. GraalVM is an open source set of projects driven by Oracle Labs, the Institute for Software at Kepler University Linz, Austria, and a community of contributors.

On the following day, Hutchinson will lead a follow-on presentation to Frictionless Systems founder Carter Jernigan's productivity-boosting "Power Up with Flow: Get 'In the Zone' to Get Things Done", which runs from 11:00 a.m. to 11:45 a.m. on Workshop Stage 2. In "Show and Tell Your Tips and Techniques – and Win in Powers of 2!" he leads an open exchange of productivity ideas, tips, and innovations, culminating in prizes awarded in "powers of 2" for the best ideas. If you are attending, it takes place between 11:45 a.m. and 12:30 p.m., also on Workshop Stage 2.

Don't forget to check these useful links:

- All details about DeveloperWeek NYC
- The original agenda

Register now and see you in New York!

Very cool. Thanks! We're very excited to be participating!
Announcement
Michelle Spisak · Apr 30, 2018

(Webinar May 17) Introducing InterSystems Cloud Manager

When you hear people talk about moving their applications to the cloud, are you unsure of what exactly they mean? Do you want a solution for migrating your local, physical servers to a flexible, efficient cloud infrastructure? Join Luca Ravazzolo for Introducing InterSystems Cloud Manager (May 17th, 2:00 p.m. EDT). In this webinar, Luca — Product Manager for InterSystems Cloud Manager — will explain cloud technology and show how you can move your InterSystems IRIS infrastructure to the cloud in an operationally agile fashion. He will also answer your questions about this great new product from InterSystems following the webinar!

Thanks Michelle! I'm happy to answer any questions anybody may have from the webinar, where I presented InterSystems Cloud Manager, and generally about the improvements an organization can achieve in its software factory with the newly available technologies from InterSystems.

This webinar recording has been posted: https://learning.intersystems.com/course/view.php?name=IntroICMwebinar

And now this webinar recording is available on the InterSystems Developers YouTube Channel. Please welcome!
Announcement
Jon Jensen · Jun 5, 2018

InterSystems Global Summit 2018 registration now open!

Hi Community!

Save the date for InterSystems Global Summit 2018! This year Global Summit is Sept. 30 – Oct. 3, 2018 at the JW Marriott Hill Country Resort & Spa in San Antonio, TX. Registration for Global Summit is now open!

**Empowering What Matters**

InterSystems Global Summit 2018 is all about empowering you, because you and the applications you build with our technology MATTER. It is an unparalleled opportunity to connect with your peers and with InterSystems' executives and technical experts.

InterSystems Global Summit is an annual gathering for everyone in the InterSystems community. It is composed of three conferences:

- Solution Developer Conference
- Technology Leadership Conference
- Healthcare Leadership Conference

The super early bird registration rate of $999 is available until August 10, 2018. Register now. See you at InterSystems Global Summit 2018!
Announcement
Evgeny Shvarov · May 1, 2017

Webinar Configuring IIS for Better Performance and Security with InterSystems

Have you ever thought about leveraging IIS (Internet Information Services for Windows) to improve performance and security for your Caché web applications? Are you worried about the complexity of properly setting up IIS? See the webinar Configuring a Web Server presented by @Kyle.Baxter, InterSystems Senior Support Specialist. Learn how to install IIS, set it up to work with the CSP Gateway, and configure the CSP Gateway to talk to Caché. If you have not subscribed to our Developer Community YouTube Channel yet, let's get started right now. Enjoy!
Announcement
Janine Perkins · May 5, 2017

Featured InterSystems Online Course: Using SAML for Security

Take this online course to learn the basics of SAML (Security Assertion Markup Language), the ways in which it can be used within Caché security features, and some use cases that can be applied to HealthShare productions. Audience: this course is for Caché and HealthShare developers who may want to use SAML as part of their security strategy and who want to learn SAML basics and the capabilities of Caché and HealthShare for using SAML. Learn more.
Question
Kishan Ravindran · Jul 1, 2017

How to Enable iKnow Functionality in InterSystems Caché

In my Caché Studio I couldn't find a namespace for iKnow, so how can I check whether my Studio version is compatible with the one I am using now? If I don't have one, can I create a new namespace in Studio? I checked how it can be done using the InterSystems documentation, but I think the license does not apply for my user. Is there any other way I can work on this? Can anyone help me?

Thank you John Murray and Benjamin DeBoe for your answers. Both your documents were helpful.

Here's a way of discovering if your license includes the iKnow feature:

```
USER>w $system.License.GetFeature(11)
1
USER>
```

A return value of 1 indicates that you are licensed for iKnow; if the result is 0, then your license does not include iKnow. See here for documentation about this method, which tells you that 11 is the feature number for iKnow. Regarding namespaces, these are created in the Portal, not in Studio. See this documentation.

Thanks John. Indeed, you'd need a proper license in order to work with iKnow. If the method referred to above returns 0, please contact your sales representative to request a temporary trial license and appropriate assistance for implementing your use case. Also, iKnow doesn't come as a separate namespace. You can create (regular) namespaces as you prefer and use them to store iKnow domain data. You may need to enable your web application for iKnow, which is disabled by default for security reasons, in the same way DeepSee is. See this paragraph here for more details.