Question
Rathinakumar S · Nov 7, 2022

How does InterSystems Caché calculate the license count?

Hi Team, how does InterSystems Caché calculate the license count based on processes? How do we determine the license count required for the product? Thanks, Rathinakumar. License counting depends on how you access Caché/IRIS (i.e. a web interface or some kind of client), and there is quite a wide selection of licenses. For the details, the best option is to contact your local InterSystems sales rep to find your optimal solution.
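If you just want to see what your instance currently reports, the %SYSTEM.License API can be queried from a terminal. A minimal sketch follows; the two method names are from memory, so please verify them against the %SYSTEM.License class reference for your version.
```
Do $SYSTEM.License.ShowCounts()     ; print current license unit usage (method name from memory)
Write $SYSTEM.License.LUConsumed()  ; number of license units consumed right now
```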
Announcement
Raj Singh · Nov 8, 2022

ZPM is now InterSystems Package Manager (IPM)

I'm pleased to announce a milestone in the lifecycle of the ObjectScript package manager, ZPM. The package manager has offered developers the ability to neatly package up ObjectScript code, deployment configuration settings, and version information in a convenient way. Over the last few years, it has evolved greatly into an integral part of many development workflows. It has proven so important that InterSystems has decided to use it for packaging our own components, and that has led us to a decision to move the GitHub repository from the community into our corporate one, and rename it InterSystems Package Manager (IPM). IPM will still be open source. Community members will be able to review the code and submit pull requests. But this change gives us the ability to ensure the security of the software in a way we could not with non-employees being able to make changes to the code base directly. And a heightened level of security and trust is key with software that can install code alongside your data. So please celebrate the life of ZPM with me, welcome the birth of IPM, and give thanks to the contributors -- especially Nikolay Soloviev and @Dmitry.Maslennikov, who have once again shown amazing insight into developer needs, coupled with the skills and dedication to build great software! --- https://github.com/intersystems/ipm This is super exciting and I look forward to seeing what's next! Exciting news! Absolutely! Great work - long time coming and well worth it!!! Good to see this. Minor nit is the Info panel at https://openexchange.intersystems.com/package/ObjectScript-Package-Manager still has some links pointing to the old intersystems-community repo. They forward correctly but deserve to be updated anyway. Will this change the commands from zpm to ipm? Nope, as that would require changes in the language itself, and I'm sure there is no reason for it. Thanks a lot to all the contributors and to the community, which supported us with feedback and pull requests and broadly adopted the tool! Currently there are 250+ packages published on OEX, at least 300+ developers who install packages every month, and the number of installed packages is above 2,000 every month. Thank you! My review on OEX now also shows whether the package supports IPM. All 19 reviews have been updated. Thank you so much for your efforts on this @Robert.Cemper1003 !!
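For anyone wondering what the rename means in day-to-day use: as noted above, the terminal command keeps its name. A minimal sketch (the package name is just an example from Open Exchange):
```
zpm "install webterminal"  ; the zpm command is unchanged; "webterminal" is only an example package
```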
Announcement
Evgeniy Potapov · Nov 10, 2022

InterSystems Adaptive Analytics training in a focus group

Developers, we have prepared a tutorial to get you started with InterSystems Adaptive Analytics powered by AtScale. In it, in addition to setting up AtScale and working with data cubes, we will also touch on methods of working with InterSystems Reports and other analytical systems. Now that the course is ready, we want to run a pilot training course with a small group of volunteers (3-5 people). The course will be held as two-hour classes on three consecutive days, from 11/14/2022 to 11/16/2022. The time range is approximately 2pm to 6pm UTC+4 (Dubai); we will choose the most suitable two hours based on your answers in the form. We invite enthusiasts and volunteers to participate in this educational event. Participation is free. All you need is your time, a computer on which you can complete the practical tasks, and daily feedback from you. To participate, fill out a short form https://forms.gle/v6kj8BoxiW5v1i6W9 and we will contact you with further instructions. This sounds interesting!
Announcement
Anastasia Dyubaylo · Apr 21, 2023

[Video] Connecting to InterSystems Cloud Services with Java

Hi Community, Watch this video to see how to connect to InterSystems Cloud Services from your Java application using the InterSystems JDBC driver: ⏯ Connecting to InterSystems Cloud Services with Java Subscribe to InterSystems Developers YouTube and stay tuned!
Announcement
Anastasia Dyubaylo · Apr 28, 2023

[Video] Connecting to InterSystems Cloud Services with Python

Hey Community, Watch this video to see how to connect to InterSystems Cloud Services from your Python application using the InterSystems DB-API driver interface: ⏯ Connecting to InterSystems Cloud Services with Python Stay tuned and don't forget to subscribe to InterSystems Developers YouTube!
Announcement
Anastasia Dyubaylo · May 5, 2023

[Video] Connecting to InterSystems Cloud Services with .NET

Hey Community, Watch this video to see how to connect to InterSystems Cloud Services from your .NET application using the InterSystems ADO.NET Managed Provider: ⏯ Connecting to InterSystems Cloud Services with .NET Stay tuned for more educational videos on our InterSystems Developers YouTube channel!
Announcement
Anastasia Dyubaylo · May 12, 2023

[Video] Connecting to InterSystems Cloud Services with ODBC

Hi Community, Watch this video to see how to connect to InterSystems Cloud Services from your C++ application, using the InterSystems ODBC Driver: ⏯ Connecting to InterSystems Cloud Services with ODBC Subscribe to our InterSystems Developers YouTube channel to stay updated!
Question
Robson Tenorio · Oct 14, 2022

InterSystems Caché - ARM64 ODBC driver

Hi! Is there any ARM64 ODBC driver available for InterSystems Caché? It would be really useful! There is not. Caché has not been ported to ARM64. IRIS has ARM ODBC drivers if that's an option for your application.
Announcement
Shannon Puebla · Oct 21, 2022

REMOTE InterSystems Object Developer with Docker Experience

We are seeking an InterSystems Object Developer with Docker experience to join our team! You will develop and implement unique web-based applications. This role is remote, full time, and a 1099 position. Required qualifications: 4+ years with InterSystems Object technology, developing applications using industry best practices for software development; 2+ years using Docker. Preferred: healthcare industry experience; the ability to adapt and work with team members of various experience levels; the ability to work in a fast-paced environment while maintaining standards and best practices; strong communicator; VA experience. Only applicants with the required qualifications will be considered. If you are interested in this position, provide your resume, phone number, and times you are available.
Article
Stefan Cronje · Jan 25, 2023

InterSystems IRIS Persistent Class Audit Package

Hi folks, I am announcing a new package I have loaded on the OEX, which I am also planning on entering into the contest this month. In a nutshell, it offers base classes to use on persistent (table) classes in InterSystems IRIS to keep record history. These classes enable the historizing of persistent class records into another persistent class whenever a record is touched. This provides a full history of any record, allows a record to be rolled back to a specific version, and can automatically purge old history records. Do you need it? Have you ever had the scenario where a data fix has gone terribly wrong, and you need to roll back the update? Have you ever tried to find out what or who updated or inserted a specific record? Have you ever needed to find out what has happened to a record over its lifetime? You can get all this now by simply extending from two classes (there is a minimal sketch of this further down). What this article covers is what the package offers. The package contains all the instructions needed, and there are only a few.
The Basics
The table that contains the "current" record has two sets of fields:
Create: contains the details of when the entry was created, and is immutable.
Update: contains the information about the last time the record was updated.
The table that contains the history records has three sets of fields:
Create: copied as-is from the current record when inserted.
Update: copied as-is from the current record when inserted.
Historize: contains details of the historization entry at insertion time, and is immutable.
Each of the sets above contains the following fields:
DateTimeStamp: the system's date and time.
Job: the PID value ($JOB).
SystemUser: the system user performing the action ($USERNAME).
BusinessHost: the interoperability business host that was involved, if available.
ClientIP: the client's IP address that issued the instruction, if available.
CSPSessionID: the CSP session ID that was involved, if available.
Routine: the calling routine. This can really help pinpoint where in the code the change originated.
The "current" record table has a Version property, and this is set when extending from the base class. The history record table has a Version property, which is the version that the "current" record was at that point.
What can it historize?
Serial properties, "normal" properties, class references, arrays of data types, lists of data types, arrays of objects, lists of objects, and relationships where the cardinality is "one". Note that in the relationship case, the history table's property must be an object reference and not a relationship.
What can't it historize?
Relationships where the cardinality is "many".
What about my other triggers?
The triggers activate in order slot number 10. This means you can add your own triggers before or after the execution of these triggers.
Additional Functionality
Record historization can be switched off for a specific table by setting a global. This caters for bulk updates where the history is not required, for example, populating a new field during a deployment. An auto-purge setting can be configured per table in a global. If you have a table that gets a lot of updates and you only need the previous two records, for example, you can set it to keep the last two records in the history and remove the older ones. This happens during the historization of the record.
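To make the "extending from two classes" part concrete, here is a minimal sketch of what such a pair of classes might look like. The base class names AuditPkg.CurrentBase and AuditPkg.HistoryBase are hypothetical placeholders of mine; the real names and any required properties are in the package instructions on OEX.
```
/// Sketch only: "AuditPkg.CurrentBase" is a hypothetical placeholder for the
/// package's base class for "current" record tables (see the OEX instructions).
Class Demo.Patient Extends (%Persistent, AuditPkg.CurrentBase)
{
Property Name As %String;
Property DOB As %Date;
}

/// Sketch only: "AuditPkg.HistoryBase" is a hypothetical placeholder for the
/// package's base class for the history table that receives each touched record.
Class Demo.PatientHistory Extends (%Persistent, AuditPkg.HistoryBase)
{
}
```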
Restoring Previous Versions
The base classes will generate methods in the "current" record class that can be used to restore a specific version, or the previous version, back into the current table. These can be invoked via SQL too, if you need to restore from a selection of rows. Note that the restored record will bump the version number on the "current" table and will not keep its old version number, which is probably a good thing. Congrats on your first entry to our contests on Open Exchange! Good luck :) Hi Stefan, your video is now on InterSystems Developers YouTube: ⏯ IRIS Table Audit Demo
Announcement
Anastasia Dyubaylo · Mar 17, 2023

[Video] Passwordless mode for development with InterSystems IRIS

Hey Community, Tired of entering your login and password during the docker build with your InterSystems IRIS every time? There is a handy way to switch the prompt off (and back on) – use the passwordless zpm module. Watch this video to explore how to use the passwordless ipm module to turn the login/password prompt on and off during the docker build with your InterSystems IRIS: ⏯️ Passwordless mode for development with InterSystems IRIS Run zpm "install passwordless" in the %SYS namespace during your docker build phase, and IRIS will no longer ask for a password. Example application with passwordless.
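A minimal sketch of how this might look in practice, assuming the common pattern of an iris.script that community docker templates execute during the build, and that ZPM/IPM is already in the image:
```
zn "%SYS"                    ; switch to the %SYS namespace
zpm "install passwordless"   ; after this the instance stops prompting for login/password
halt
```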
Article
Rizmaan Marikar · Mar 20, 2023

InterSystems Embedded Python with Pandas - Part 1

Introduction
Data analytics is a crucial aspect of business decision-making in today's fast-paced world. Organizations rely heavily on data analysis to make informed decisions and stay ahead of the competition. In this article, we will explore how data analytics can be performed using Pandas and InterSystems Embedded Python. We will discuss the basics of Pandas, the benefits of using InterSystems Embedded Python, and how they can be used together to perform efficient data analytics.
What's Pandas for?
Pandas is a versatile tool that can be used for a wide range of tasks, to the point where it may be easier to list what it cannot do rather than what it can do. Essentially, pandas serves as a home for your data. It allows you to clean, transform, and analyze your data to gain familiarity with it. For instance, if you have a dataset saved in a CSV file on your computer, pandas can extract the data into a table-like structure called a DataFrame. With this DataFrame, you can perform various tasks such as:
Calculating statistics and answering questions about the data, such as finding the average, median, maximum, or minimum of each column, determining whether there is correlation between columns, or exploring the distribution of data in a specific column.
Cleaning the data by removing missing values or filtering rows and columns based on certain criteria.
Visualizing the data with the help of Matplotlib, which enables you to plot bars, lines, histograms, bubbles, and more.
Storing the cleaned and transformed data back into a CSV, database, or another type of file.
Before delving into modeling or complex visualizations, it's essential to have a solid understanding of your dataset's nature, and pandas provides the best way to achieve this understanding.
Benefits of using InterSystems Embedded Python
InterSystems Embedded Python is a Python runtime environment that is embedded within the InterSystems data platform. It provides a secure and efficient way to execute Python code within the data platform, without having to leave the platform environment. This means that data analysts can perform data analytics tasks without having to switch between different environments, resulting in increased efficiency and productivity.
Combining Pandas and InterSystems Embedded Python
By combining Pandas and InterSystems Embedded Python, data analysts can perform data analytics tasks with ease. InterSystems Embedded Python provides a secure and efficient runtime environment for executing Python code, while Pandas provides a powerful set of data manipulation tools. Together, they offer a comprehensive data analytics solution for organizations.
Installing Pandas
To use Pandas with InterSystems Embedded Python, you'll need to install it as a Python package. Here are the steps:
Open a command prompt in Administrator mode (on Windows).
Navigate to the <installdir>/bin directory.
Run the following command to install Pandas:
irispip install --target <installdir>\mgr\python pandas
This command installs Pandas into the <installdir>/mgr/python directory, which is recommended by InterSystems. Note that the exact command may differ depending on the package you're installing; simply replace pandas with the name of the package you want to install. That's it! Now you can use Pandas with InterSystems Embedded Python. For example:
irispip install --target C:\InterSystems\IRIS\mgr\python pandas
Now that we have Pandas installed, we can start working with the employees dataset.
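Before moving on, a quick sanity check that the install worked, run from an IRIS terminal (a small sketch of mine, not part of the original article; the quoted attribute name is the usual way to reach Python attributes from ObjectScript):
```
Set python = ##class(%SYS.Python).%New()
Set pd = python.Import("pandas")
Write pd."__version__",!   ; prints the installed pandas version
```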
Here are the steps to read the CSV file into a Pandas DataFrame and perform some data cleaning and analysis.
First, let's create a new instance of Python:
Set python = ##class(%SYS.Python).%New()
Import the Python libraries; in this case I will be importing pandas and builtins:
Set pd = python.Import("pandas")
#; To import the built-in functions that are part of the standard Python library
Set builtins = python.Import("builtins")
Importing data into the pandas library
There are several ways to read data into a Pandas DataFrame using InterSystems Embedded Python. Here are three common methods. I am using the following sample file as an example.
Read data from a CSV. Use read_csv() with the path to the CSV file to read a comma-separated values file:
Set df = pd."read_csv"("C:\InterSystems\employees.csv")
Importing text files
Reading text files is similar to CSV files. The only nuance is that you need to specify a separator with the sep argument, as shown below. The separator argument refers to the symbol used to separate the values in each row. Comma (sep = ","), whitespace (sep = "\s"), tab (sep = "\t"), and colon (sep = ":") are the commonly used separators. Here \s represents a single whitespace character.
Set df = pd."read_csv"("employees.txt",{"sep":"\s"})
Importing Excel files
To import an Excel file with a single sheet, the read_excel() function can be used with the file path as input. For example, the code df = pd.read_excel('employees.xlsx') reads an Excel file named "employees.xlsx" and stores its contents in a DataFrame called "df". Other arguments can also be specified, such as the header argument to determine which row becomes the header of the DataFrame. By default, header is set to 0, which means the first row becomes the header or column names. If you want to specify column names, you can pass a list of names to the names argument. If the file contains a row index, you can use the index_col argument to specify it. It's important to note that in a pandas DataFrame or Series, the index is an identifier that points to the location of a row or column. It labels the row or column of a DataFrame and allows you to access a specific row or column using its index. The row index can be a range of values, a time series, a unique identifier (e.g., employee ID), or other types of data. For columns, the index is usually a string denoting the column name.
Set df = pd."read_excel"("employees.xlsx")
Importing Excel files (multiple sheets)
Reading Excel files with multiple sheets is not that different. You just need to specify one additional argument, sheet_name, where you can either pass a string for the sheet name or an integer for the sheet position (note that Python uses 0-indexing, so the first sheet can be accessed with sheet_name = 0).
#; Extracting the second sheet, since Python uses 0-indexing
Set df = pd."read_excel"("employee.xlsx", {"sheet_name":"1"})
Read data from a JSON file:
Set df = pd."read_json"("employees.json")
Let's look at the data in the DataFrame.
How to view data using .head() and .tail()
For this we can use the builtins library which we imported (ZW works too):
do builtins.print(df.head())
Let's list all the columns in the dataset:
Do builtins.print(df.columns)
Let's clean up the data
Convert the "Start Date" column to a datetime object:
Set df."Start Date" = pd."to_datetime"(df."Start Date")
After this, the "Start Date" column holds datetime values instead of strings.
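Since the screenshot of the updated dataset doesn't carry over here, one simple way to confirm the conversion (a small addition of mine, not in the original steps) is to print the column dtypes:
```
Do builtins.print(df.dtypes)   ; "Start Date" should now show as datetime64[ns]
```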
Convert the 'Last Login Time' column to a datetime object:
Set df."Last Login Time" = pd."to_datetime"(df."Last Login Time")
Fill in missing values in the 'Salary' column with the mean salary:
Set meanSal = df."Salary".mean()
Set df."Salary" = df."Salary".fillna(meanSal)
Perform Some Analysis
Calculate the average salary by gender:
Do builtins.print(df.groupby("Gender")."Salary".mean())
Calculate the average bonus percentage by team:
Do builtins.print(df.groupby("Team")."Bonus %".mean())
Calculate the number of employees hired each year:
Do builtins.print(df."Start Date".dt.year."value_counts"()."sort_index"())
Calculate the number of employees by seniority status:
Do builtins.print(df."Senior Management"."value_counts"())
Outputting data in pandas
Just as pandas can import data from various file types, it also allows you to export data into various formats. This is especially useful when data has been transformed using pandas and needs to be saved locally on your machine. Below is how to output pandas DataFrames into various formats.
Outputting a DataFrame into a CSV file: a pandas DataFrame (here we are using df) is saved as a CSV file using the ."to_csv"() method.
do df."to_csv"("C:\Intersystems\employees_out.csv")
Outputting a DataFrame into a JSON file: export the DataFrame object into a JSON file by calling the ."to_json"() method.
do df."to_json"("C:\Intersystems\employees_out.json")
Outputting a DataFrame into an Excel file: call ."to_excel"() from the DataFrame object to save it as a ".xls" or ".xlsx" file.
do df."to_excel"("C:\Intersystems\employees_out.xlsx")
Let's create a basic bar chart that shows the number of employees hired each year. For this I am using matplotlib.pyplot.
//import matplotlib
Set plt = python.Import("matplotlib.pyplot")
//plot the number of employees hired each year as a bar chart
set df2 = df."Start Date".dt.year."value_counts"()."sort_index"().plot.bar()
//export the output to a png
do plt.savefig("C:\Intersystems\barchart.png")
//cleanup
do plt.close()
That's it! With these simple steps, you should be able to read in a CSV file, clean the data, and perform some basic analysis using Pandas in InterSystems Embedded Python.
Video
The video below serves as a comprehensive overview and elaboration of the above tutorial: https://youtu.be/hbRQszxDTWU
Conclusion
The tutorial provided only covers the basics of what pandas can do. With pandas, you can perform a wide range of data analysis, visualization, filtering, and aggregation tasks, making it an invaluable tool in any data workflow. Additionally, when combined with other data science packages, you can build interactive dashboards, develop machine learning models to make predictions, automate data workflows, and more. To further your understanding of pandas, explore the resources listed below and accelerate your learning journey.
Disclaimer
It is important to note that there are various ways of using Pandas with InterSystems. This article is intended for educational purposes only, and it does not guarantee the most optimal approach. As the author, I am continuously learning and exploring the capabilities of Pandas, so there may be alternative methods or techniques that produce better results. Readers should use their discretion and exercise caution when applying the information presented in the article to their own projects.
Great article! If you are looking for an approach without ObjectScript, making use of "irispython", check this code. python code:
```python
import pandas as pd
from sqlalchemy import create_engine, types

engine = create_engine('iris+emb:///')

df = pd.read_csv("/irisdev/app/notebook/Notebooks/date_panda.csv")
# change type of FullDate to date
df['FullDate'] = pd.to_datetime(df['FullDate'])
df.head()

df.to_sql('DateFact', engine, schema="Demo", if_exists='replace', index=True,
          dtype={'DayName': types.VARCHAR(50),
                 'FullDate': types.DATE,
                 'MonthName': types.VARCHAR(50),
                 'MonthYear': types.INTEGER,
                 'Year': types.INTEGER})
```
requirements.txt:
```
pandas
sqlalchemy==1.4.22
sqlalchemy-iris==0.5.0
irissqlcli
```
date_panda.csv:
```
ID,DayName,FullDate,MonthName,MonthYear,Year
1,Monday,1900-01-01,January,190001,1900
2,Tuesday,1900-01-02,January,190001,1900
3,Wednesday,1900-01-03,January,190001,1900
4,Thursday,1900-01-04,January,190001,1900
5,Friday,1900-01-05,January,190001,1900
6,Saturday,1900-01-06,January,190001,1900
7,Sunday,1900-01-07,January,190001,1900
8,Monday,1900-01-08,January,190001,1900
9,Tuesday,1900-01-09,January,190001,1900
```
@Guillaume.Rongier7183 that's awesome, thank you for sharing, will check this out. Hi Rizmaan, your video is available on InterSystems Developers YouTube: ⏯ Pandas with embedded python Thank you!
Announcement
Vadim Aniskin · Sep 15, 2022

Categorizing your ideas on InterSystems Ideas is easy!

Hello Community, In the previous announcement, we introduced our feedback portal – InterSystems Ideas! Now we'd like to tell you more about it, notably about the topics which are covered there. You can submit your ideas in the following categories: 💡 InterSystems Products, where you can post ideas for new development directions for our products: InterSystems IRIS data platform, InterSystems IRIS for Health, InterSystems HealthShare, and InterSystems TrakCare. 💡 InterSystems Services, where you can post ideas on how we can make our services even better than they are now: Developer Community, Open Exchange app gallery, Global Masters gamification platform, Partner Directory, Documentation, Certification, Learning, and the InterSystems Ideas Portal itself. There is also the category "Other" for feedback that is not directly related to either InterSystems Products or Services. After choosing a category, you can also add keywords / tags. Feel free to share your suggestions for categories and keywords worth adding to the portal. We will be glad to hear from you! See you on the InterSystems Ideas portal ✌️
Announcement
Olga Zavrazhnova · Oct 4, 2022

Meet InterSystems at TechCrunch Disrupt 2022

Meet InterSystems at TechCrunch Disrupt 2022 - the biggest event for startups! This year we will host 4 roundtables at TechCrunch as well as a developer meetup in San Francisco on October 19! At TechCrunch we invite you to join our roundtables to discuss the following topics: Roundtable: Breaking Into the Healthcare Monolith: Strategies for Working with Payors and Providers. How do you build a health-tech startup that can achieve high growth? What can startups do to make their technologies more compelling to the biggest players in healthcare: payors and health systems? In this session, we will discuss the pain points of getting into healthcare, as well as strategies to open doors to these organizations for pilots and sustainable revenue. You'll leave this roundtable discussion with a list of best practices from InterSystems' 40+ years in the healthcare industry. The session will run twice: Tuesday 10/18 and Wednesday 10/19 at 10:30 - 11:00 am. Roundtable: What the Heck is Interoperability Anyways? In a world where data is commonly available at a coder's fingertips, why is it so hard to connect to some customers? Doesn't everyone have APIs we can use to get to data? What is interoperability anyways? Can cloud technologies and data fabrics solve our problems? How can our startups be better prepared to enter the data ecosystems in industries like healthcare, financial services, or supply chain? This roundtable will give you an overview of what interoperability is, why these established industries interface the way they do, and strategies to make this process less painful as you develop your products. The session will run twice: Tuesday 10/18 and Wednesday 10/19 at 10:30 - 11:00 am. Speaker: Neal Moawed, Global Head of Industry Research, InterSystems. Who will be there? Let us know!
Question
Tyffany Coimbra · Nov 10, 2022

download InterSystems Caché ODBC Data Source

I need to download InterSystems Caché ODBC Data Source 2018 and I can't. I want to know where I can download it. Have you looked in ftp://ftp.intersystems.com/pub/ ? Try here: ftp://ftp.intersystems.com/pub/cache/odbc/ Hello guys! I can't download from these links. Hi. If you mean the ODBC driver, then it gets installed when you install Caché, so any Caché install file for that version has it. I don't know if you can select to only install the driver and nothing else, as I always want the full lot on my PC. (... just tried, and a "custom" setup allows you to remove everything but the ODBC driver, but it's fiddly.) ODBC drivers are available for download from the InterSystems Components Distribution page here: https://wrc.intersystems.com/wrc/coDistGen.csp Howdy all, Iain is right that you can get the ODBC driver from the WRC site directly if you are a customer of ours, but the new spot where InterSystems hosts drivers for ODBC and so on is here: https://intersystems-community.github.io/iris-driver-distribution/ edit: I realized this was asking for Caché, not IRIS drivers, so my answer doesn't address it. How are you trying to access the ftp link? I tested it and it should be publicly available. Try pasting the link into your file explorer on Windows, for example.