Question
CM Wang · Jun 27, 2018
I am trying to read some binary data from a local file or through a socket. The binary data is like an H.264 stream: I need to read it BYTE by BYTE (even BIT by BIT) and decide the next step based on the data read so far. I checked the documentation, and it seems like most of the samples focus on human-readable I/O, i.e. LINE by LINE. Can I achieve my goal in COS? Thanks.

You have the option to read by character using READ *var. That reads exactly one byte and gives you its decimal representation:

set file="C:\ZZ\myfile.bin"
open file:"RS":0 else  write "file not open",!  quit
for  use file read *byte use 0 write byte,?5,"x\"_$ZHEX(byte),?12,"char:",$C(byte),!

See the docs on READ. You may also use the %File class and call its Read method with a length of 1.

You can use the %Stream.FileBinary and %Stream.GlobalBinary classes.

In addition to the other answers, if you want to operate on bits, you can use these functions:
$zhex converts an integer to hex and vice versa; be careful with the type of the variable: if you pass a string it converts from hex to decimal, and if you pass an integer it converts from decimal to hex.
$factor converts an integer to a $bit string.
$bit (along with a bunch of $bit* functions) operates on bitstrings.
You may also need some encryption functions, and possibly zlib via $system.Util.Compress and $system.Util.Decompress in the %SYSTEM.Util class.

If you are going to work with H.264, I would not recommend trying to implement it in COS; it will not be fast enough. I'm not very familiar with this technology, but I'm sure it is too complicated, and encoding should run on a GPU, which is not possible with Caché. I don't know exactly why you need it, but I would recommend looking at the external tool ffmpeg, which is very useful and the leader for working with video. And I think you can connect ffmpeg.dll and call it from Caché with $zf functions.
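For a language-neutral illustration of the byte-by-byte (and bit-by-bit) reading pattern the question describes, here is a small Python sketch; the file name and helper function are invented for the example:

```python
# Illustrative sketch: read a binary file one byte at a time and
# inspect individual bits, as the question describes.
def dump_bytes(path):
    out = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(1)           # read exactly one byte
            if not chunk:
                break
            byte = chunk[0]
            bits = format(byte, "08b")  # bit-by-bit view, MSB first
            out.append((byte, hex(byte), bits))
    return out

# Example: write two bytes, then dump them.
with open("demo.bin", "wb") as f:
    f.write(bytes([0x41, 0x0A]))

for dec, hx, bits in dump_bytes("demo.bin"):
    print(dec, hx, bits)
```

In a real parser you would branch on the bits read so far instead of just printing them, which is exactly the "decide the next step based on the data read" flow the question asks about.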
Article
Eduard Lebedyuk · Sep 10, 2018
Generally speaking, InterSystems products supported dynamic objects and JSON for a long while, but version 2016.2 came with a completely new implementation of these features, and the corresponding code was moved from the ObjectScript level to the kernel/C level, which made for a substantial performance boost in these areas. This article is about innovations in the new version and the migration process (including the ways of preserving backward compatibility).

Working with JSON

Let me start with an example. The following syntax works now and it’s the biggest innovation in ObjectScript syntax:
Set object = { "property": "val", "property2": 2, "property3": null }
Set array = [ 1, 2, "string", true ]
As you can see, JSON is now an integral part of ObjectScript. So what happens when values are assigned this way? The “object” becomes an instance of the %Library.DynamicObject object, while “array” is an instance of %Library.DynamicArray. They are both dynamic objects.
Dynamic objects
Cache had dynamic objects for a while in the form of the %ZEN.proxyObject class, but now the code has been moved to the kernel for greater performance. All dynamic object classes are inherited from %Library.DynamicAbstractObject, which offers the following functionality:
Getting an object from a JSON string, stream, or file
Output of an object in the JSON format to a string or variable, with automatic detection of the output format depending on context
Writing an object to a file in the JSON format
Writing an object to a global
Reading an object from a global
Transition from %ZEN.proxyObject
So you want to migrate from %ZEN.proxyObject/%Collection.AbstractIterator towards %Library.DynamicAbstractObject? This is a relatively simple task that can be completed in several ways:
If you are not interested in compatibility with versions of Caché earlier than 2016.2, just use Ctrl+H thoughtfully and you’ll be fine. Keep in mind, though, that array indexes now start at zero and you need to add % to the names of system methods.
Use macros that transform the code into the necessary form during compilation, depending on the version of Caché.
Use an abstract class as a wrapper for the corresponding methods.
The use of the first method is obvious, so let’s focus on the remaining two.
Macros
Below is an approximate set of macros that works with either the new or the old dynamic object classes, depending on the availability of %Library.DynamicAbstractObject.
#if $$$comClassDefined("%Library.DynamicAbstractObject")
#define NewDynObj {}
#define NewDynDTList []
#define NewDynObjList $$$NewDynDTList
#define Insert(%obj,%element) do %obj.%Push(%element)
#define DynObjToJSON(%obj) w %obj.%ToJSON()
#define ListToJSON(%obj) $$$DynObjToJSON(%obj)
#define ListSize(%obj) %obj.%Size()
#define ListGet(%obj,%i) %obj.%Get(%i-1)
#else
#define NewDynObj ##class(%ZEN.proxyObject).%New()
#define NewDynDTList ##class(%ListOfDataTypes).%New()
#define NewDynObjList ##class(%ListOfObjects).%New()
#define Insert(%obj,%element) do %obj.Insert(%element)
#define DynObjToJSON(%obj) do %obj.%ToJSON()
#define ListToJSON(%obj) do ##class(%ZEN.Auxiliary.jsonProvider).%ObjectToJSON(%obj)
#define ListSize(%obj) %obj.Count()
#define ListGet(%obj,%i) %obj.GetAt(%i)
#endif
#define IsNewJSON ##Expression($$$comClassDefined("%Library.DynamicAbstractObject"))
This code:
Set obj = $$$NewDynObj
Set obj.prop = "val"
$$$DynObjToJSON(obj)
Set dtList = $$$NewDynDTList
Set a = 1
$$$Insert(dtList,a)
$$$Insert(dtList,"a")
$$$ListToJSON(dtList)
Will compile into the following code for Cache version 2016.2+:
set obj = {}
set obj.prop = "val"
w obj.%ToJSON()
set dtList = []
set a = 1
do dtList.%Push(a)
do dtList.%Push("a")
w dtList.%ToJSON()
For previous versions, it will look like this:
set obj = ##class(%ZEN.proxyObject).%New()
set obj.prop = "val"
do obj.%ToJSON()
set dtList = ##class(%Library.ListOfDataTypes).%New()
set a = 1
do dtList.Insert(a)
do dtList.Insert("a")
do ##class(%ZEN.Auxiliary.jsonProvider).%ObjectToJSON(dtList)
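The conditional-compilation trick above, choosing an implementation based on whether %Library.DynamicAbstractObject exists, has a familiar analogue in other languages: feature-detect at load time and bind one implementation behind a stable name. A minimal Python sketch of the same idea (simplejson here is just an arbitrary stand-in for a "newer" implementation):

```python
# Feature-detect at import time, the way the macros feature-detect at
# compile time, and expose one stable API regardless of which
# implementation is actually present.
import json

try:
    import simplejson as _json   # preferred implementation, if installed
except ImportError:
    _json = json                 # fallback: standard library

def list_to_json(items):
    """Serialize a list using whichever backend was bound at import."""
    return _json.dumps(items)

print(list_to_json([1, "a"]))
```

Calling code only ever sees `list_to_json`, just as the ObjectScript code only ever sees `$$$ListToJSON`, so a later migration touches one place instead of every call site.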
Abstract class
The alternative is to create a class that abstracts the dynamic object being used. For example:
Class Utils.DynamicObject Extends %RegisteredObject
{
/// A property storing a true dynamic object
Property obj;
Method %OnNew() As %Status
{
#if $$$comClassDefined("%Library.DynamicAbstractObject")
Set ..obj = {}
#else
Set ..obj = ##class(%ZEN.proxyObject).%New()
#endif
Quit $$$OK
}
/// Getting dynamic properties
Method %DispatchGetProperty(pProperty As %String) [ Final ]
{
Quit ..obj.%DispatchGetProperty(pProperty)
}
/// Setting dynamic properties
Method %DispatchSetProperty(pProperty As %String, pValue As %String) [ Final ]
{
Do ..obj.%DispatchSetProperty(pProperty,pValue)
}
/// Converting to JSON
Method ToJSON() [ Final ]
{
Do ..obj.%ToJSON()
}
}
Using this class is completely identical to working with a regular one:
Set obj = ##class(Utils.DynamicObject).%New()
Set obj.prop = "val"
Do obj.ToJSON()
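The same wrapper pattern, delegating dynamic property access to an internal object, can be sketched in Python; the class and method names below are invented for the illustration:

```python
import json

class DynamicObject:
    """Illustrative analogue of the Utils.DynamicObject wrapper:
    unknown attribute reads/writes are delegated to an internal dict,
    mirroring %DispatchGetProperty / %DispatchSetProperty."""

    def __init__(self):
        # Bypass our own __setattr__ while creating the backing store.
        object.__setattr__(self, "_obj", {})

    def __getattr__(self, name):          # called only for unknown attributes
        try:
            return self._obj[name]
        except KeyError:
            raise AttributeError(name)

    def __setattr__(self, name, value):   # every attribute write lands here
        self._obj[name] = value

    def to_json(self):
        return json.dumps(self._obj)

obj = DynamicObject()
obj.prop = "val"
print(obj.to_json())
```

As with the ObjectScript class, callers use it exactly like a regular object; only the wrapper knows where the properties actually live.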
What to choose
It’s totally your call. The option with a class looks more conventional, the one with macros is a bit faster thanks to the absence of intermediate calls. For my MDX2JSON project, I chose the latter option with macros. The transition was fast and smooth.
JSON performance
The speed of JSON generation went up dramatically. The MDX2JSON project has JSON generation tests that you can download to see the performance boost with your own eyes!
Conclusion
New dynamic objects and improvements in JSON support will help your applications work faster.
Links
Documentation
Discussion
Sean Connelly · Sep 12, 2018
I know we have nearly 5000 members on the DC site, but not even sure if this is a majority share or a minority share.
Anyone have a good guesstimate?
Sean.

Thanks Evgeny, probably sounds about right. I would hazard a guess then that upwards of 0.1% of developers worldwide use Caché in one shape or form. It's interesting to compare that to the last Stack Overflow survey, https://insights.stackoverflow.com/survey/2018/#technology-databases, especially since some of those at the bottom of the list are developed on top of Hadoop, which opens up some ideas for what could be possible on top of the IRIS Hadoop platform. On a side note, how about an annual DC survey? It would give some fascinating insights.

Hi Sean! I moved the discussion to the Other group. 4,800+ are the registered members (stats). We also have about 20,000 unique DC visitors monthly. Of them, about 3,000 visitors come directly to the site, so we can count them as "real" developers who read/write on DC daily. But this covers only the English-speaking world, and not all of them know about DC. My evaluation is 10,000-15,000 developers worldwide. E.g., the last IT Planet contest in Russia this year gathered 2,400 participants for the InterSystems contest (mostly students). Should we count students as developers? Today maybe not, but tomorrow...
Announcement
Evgeny Shvarov · Feb 14, 2018
Hi, Community!

It is just a small announcement that the Community is growing: we just reached 4,000 registered members! You can track the public DC analytics in these DeepSee dashboards in the Community->Analytics menu:
Article
Bob Binstock · May 16, 2018
InterSystems supports use of the InterSystems IRIS Docker images it provides on Linux only. Rather than executing containers as native processes, as on Linux platforms, Docker for Windows creates a Linux VM running under Hyper-V, the Windows virtualizer, to host containers. These additional layers add complexity that prevents InterSystems from supporting Docker for Windows at this time.
We understand, however, that for testing and other specific purposes, you may want to run InterSystems IRIS-based containers from InterSystems under Docker for Windows. This article describes the differences between Docker for Windows and Docker for Linux that InterSystems is aware of as they apply to working with InterSystems-provided container images. Other, unanticipated issues may arise. When using InterSystems IRIS images and containers on a Windows platform, ensure that you have access to the Docker documentation for convenient reference; see in particular Getting Started with Docker for Windows.
Because handling by a container of external persistent storage under Docker for Windows involves both the Windows and Linux file systems and file handling, the differences noted are largely storage-related.
For general information about running InterSystems IRIS in Docker containers using image provided by InterSystems, see Running InterSystems IRIS in Docker Containers and First Look: InterSystems IRIS in Docker Containers.
Share Disk Drives
On Windows, you must give Docker access to any storage with which it interacts by sharing the disk drive on which it is located. To share one or more drives, follow these steps (you can combine this with the procedure in the previous item):
Right-click the Docker icon in the system tray and select Settings ... .
Choose the Shared Drives tab, then select the drive(s) on which the storage is located and click Apply. If a drive is already selected (the C drive is selected by default), clear the checkbox and click Apply, then select it and click Apply again.
Enter your login credentials when prompted.
Docker automatically restarts after applying the changes; if it does not, right-click the Docker icon in the system tray and select Restart.
Copy External Files Within the Container
When using Docker, it is often convenient to mount a directory in the external file system as a volume inside the container, and use that as the location for all the external files needed by the software inside the container. For example, you might mount a volume and place the InterSystems IRIS license key, iris.key, and a file containing the intended InterSystems IRIS password in the external directory for access by the --key option of the iris-main program and the password change script, respectively (see The iris-main Program and Changing the InterSystems IRIS Password in Running InterSystems IRIS in Containers). Under Docker for Windows, however, file-handling and permissions differences sometimes prevent a file on a mounted external volume from being used properly by a program in the container. You can often overcome permissions difficulties by having the program copy the file within the container and then use the copy.
For example, the iris-main --before option is often used to change the password of the InterSystems IRIS instance in the container, for example:
--before "changePassword.sh /external/password.txt"
If this fails to change the password as intended on Windows, try the following:
--before "cp /external/password.txt /external/password_copied.txt && \
changePassword.sh /external/password_copied.txt"
Use Named Volumes
When numerous dynamic files are involved, any direct mounting of the Windows file system within a container is likely to lead to problems, even if individually mounting and copying all the files were feasible. In the case of InterSystems IRIS, this applies in particular to the durable %SYS feature for persistent storage of instance-specific data (see Durable %SYS for Persistent Instance Data in Running InterSystems IRIS in Containers) and to journal file storage. You can overcome this problem by mounting a named volume, which is a storage volume with a mount point in the file system of the Linux VM hosting the containers on your system. Because the VM's file system is saved to the Windows disk along with the rest of the VM, the contents of such a volume persist even when Docker or your system goes down.
For example, the standard way to enable durable %SYS when running an InterSystems IRIS container is to mount an external volume and use the --env option to set the ISC_DATA_DIRECTORY environment variable to a location on that volume, for example:
docker run ... \
--volume /nethome/pmartinez/iris_external:/external \
--env ISC_DATA_DIRECTORY=/external/durable/
This will not work with Docker for Windows; you must instead create a named volume with the docker volume create command and locate the durable %SYS directory there. Additionally, you must include the top level of the durable %SYS directory, /irissys, in the ISC_DATA_DIRECTORY specification, which is not the case on Linux. On Windows, therefore, your options would look like this:
docker volume create durable
docker run ... \
--volume durable:/durable \
--env ISC_DATA_DIRECTORY=/durable/irissys/
To use this approach for the instance's journal files, create and mount a named volume, as in the durable %SYS example above, and then use any configuration mechanism (the ^JOURNAL routine, the Journal Settings page in the Management Portal, or the iris.cpf file) to set the current and alternate journal directories to locations on the named volume. To separate the current and alternate journal directories, create and mount a named volume for each. (Note that this approach has not been thoroughly tested and that journal files under this scheme may therefore be at risk.)
To discover the Linux file system mount point of a named volume within the VM, you can use the docker volume inspect command, as follows:
docker volume inspect durable_data
[
    {
        "CreatedAt": "2018-05-04T12:11:54Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/durable_data/_data",
        "Name": "durable_data",
        "Options": null,
        "Scope": "local"
    }
]
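If you need this step in a script, the Mountpoint can be pulled out of the inspect output programmatically. Here is a small Python sketch that parses JSON of the shape shown above; the sample string stands in for the live command output:

```python
import json

def mountpoint(inspect_json: str) -> str:
    """Extract the Mountpoint from `docker volume inspect` output,
    which is a JSON array with one object per inspected volume."""
    volumes = json.loads(inspect_json)
    return volumes[0]["Mountpoint"]

# Sample output, as produced by `docker volume inspect durable_data`
sample = '''[
  {
    "CreatedAt": "2018-05-04T12:11:54Z",
    "Driver": "local",
    "Labels": null,
    "Mountpoint": "/var/lib/docker/volumes/durable_data/_data",
    "Name": "durable_data",
    "Options": null,
    "Scope": "local"
  }
]'''

print(mountpoint(sample))
```

In a live script you would feed it the stdout of `subprocess.run(["docker", "volume", "inspect", "durable_data"], capture_output=True)` instead of the sample string.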
Command Comparison
Taking all of the preceding items together, the following presents a comparison between the final docker run command described in Run and Investigate the InterSystems IRIS-based Container in First Look: InterSystems IRIS in Docker Containers, which is intended to be executed on a Linux platform, and the equivalent docker run command using Docker for Windows.
Linux
$ docker run --name iris3 --detach --publish 52773:52773 \
--volume /nethome/pmartinez/iris_external:/external \
--env ISC_DATA_DIRECTORY=/external/durable \
--env ICM_SENTINEL_DIR=/external iris3:test --key /external/iris.key \
--before "changePassword.sh /external/password.txt"
Windows
C:\Users\pmartinez>docker volume create durable
C:\Users\pmartinez>docker volume create journals
C:\Users\pmartinez>docker run --name iris3 --detach --publish 52773:52773 \
--volume durable:/durable \
--volume journals:/journals \
--env ISC_DATA_DIRECTORY=/durable/irissys \
--env ICM_SENTINEL_DIR=/durable iris3:test --key /external/iris.key \
--before "cp /external/password.txt /external/password_copied.txt && \
changePassword.sh /durable/password_copied.txt"
If you have any information to contribute about using InterSystems-provided containers with Docker for Windows, please add it as a comment here, or post your own article!

Hi Bob, in Windows, for the key/password it works if you just define a volume where the files are. Then the call would be simpler/smaller:

docker run --name iris3 --detach --publish 52773:52773 \
--volume C:\pmartinez\iris_external:/external \
--volume durable_data:/durable \
--env ISC_DATA_DIRECTORY=/durable/irissys \
--env ICM_SENTINEL_DIR=/durable iris3:test --key /external/iris.key \
--before "/usr/irissys/dev/Cloud/ICM/changePassword.sh /external/password.txt"

I understand that using a named volume will store the durable %SYS within the Linux VM itself, which avoids issues with the Windows file system regarding database file updates, permissions, and so on. But is there any reason why you chose to mount each file separately instead of this way? In the end we just use these two files (iris.key and password.txt) once, when starting the container.

Some of the examples developed for Windows came from me within the Learning Services team. We map the key and password files in separately because we have to pull different key files for different product training. We also rotate passwords and keys on a regular basis, so we found it was easier to have them living in their own directories on the local host so we can manage them better.

That said, you are correct that you can put them in one folder and only map it once. The docker run commands get a bit complex, so we have moved to mostly using docker-compose and an .ENV file to help us parameterize different settings as we move containers from test (on a local Windows 10 machine) to staging to production (on Linux).

In deference to the wisdom of Salva and Doug, I have removed the section about mounting files individually.
Please note: if you are running into permissions issues, that seems to be a Windows-only problem and can be worked around by creating a derivative image like this:
~~~
FROM intersystems/iris:2018.2.0.490.0
RUN adduser irisusr root && adduser irisusr irisusr
~~~
And use that.
The errors you might expect look like this:
~~~
Sign-on inhibited.
See messages.log for details.
[ERROR] Execvp failure. Executable /usr/irissys/bin//./irisdb. errno=13, file uid=0 gid=1000
perms=r-xr-x---, user uid=1000 gid=0
Call InterSystems Technical Support. : Permission denied
[ERROR] Possible causes:
[ERROR] - InterSystems IRIS was not installed successfully
[ERROR] - Invalid InterSystems IRIS instance name
[ERROR] - Insufficient privilege to start InterSystems IRIS (proc not in InterSystems IRIS group?)
[FATAL] Error starting InterSystems IRIS
~~~
Alternatively you could add those statements to the before parameter, but that seems less elegant.
Thanks,
Fab

Hi Bob, I believe the Docker for Windows command for creating the named volume inside the Linux VM should be

docker volume create durable

rather than

docker create volume durable

David, you are correct: the command is actually docker volume create <name of volume>. You can then do docker volume ls to list your existing volumes, or docker volume prune to delete volumes that are no longer associated with a running container.

We should probably update this article a bit, as the new Docker for Windows no longer supports AUFS, but the Overlay2 driver issues have been fixed. So setting your driver to AUFS isn't needed anymore if you are running on the newest 2.0+ version. I also tend to prefer using docker-compose for some of this so I can map out the volumes, a specific network, container names, etc., all of which help you connect other containers to your IRIS, such as Zeppelin/Spark, a browser-based IDE, or a Java app.

I corrected it, David. Thank you.

The article is considered an InterSystems Data Platform Best Practice.

Finally removed the Storage Driver section, thanks Doug.

Hello Fabian,
this is exactly my problem. Could you go into a bit more detail? I've no idea how and where exactly to use the mentioned commands.
By the way, please keep in mind I'm using Windows ;-)
Thanks in advance,
Stefan Hi Stefan,
I believe Fabian was describing creating a new image, based off of the default InterSystems image, but modifying the user and group. (By the way, even though you are on Docker for Windows, InterSystems IRIS images are based on Ubuntu, and Docker for Windows runs in a Linux VM) The excerpt above would be placed in a new Dockerfile and you would build a new custom image. This approach is described here: https://irisdocs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=ADOCK#ADOCK_iris_creating
However, may I ask, what version of InterSystems IRIS are you using? I have seen these "Sign-on inhibited" errors frequently in the past, but I think they've been mitigated in recent versions. 2019.3 was just released, and should be available in the Docker Store and cloud marketplaces in the next few days.
-Steve

Hey, I am currently facing permission issues with a bind mount. I tried your solution. My Dockerfile:
FROM store/intersystems/iris-community:2020.2.0.204.0
RUN adduser irisusr root && adduser irisusr irisusr
But I am getting an error when building the Dockerfile: "adduser: Only root may add a user or group to the system."
Could you help me what is going wrong?
Announcement
Evgeny Shvarov · Oct 26, 2018
Hi Community!

I'm pleased to announce that InterSystems Developer Community has reached 5,000 registered members!

Thank you, developers, not only for registering but for making this place more and more helpful for everyone who develops and supports solutions on InterSystems Data Platforms all over the world! Big applause to all of us!

Here are some other statistics that make the Developer Community crowded and helpful:
800+ articles (stats)
2,800+ questions and answers (stats)
100 applications on Open Exchange
100+ videos on the Developer Community YouTube channel
600+ InterSystems Global Masters members, a club of InterSystems technology advocates

Thank you for your continuous feedback in the DC Feedback group and the public issue tracker. And special thanks to our noble Developer Community team: Content Manager @Anastasia.Dyubaylo; Global Masters managers @Olga.Zavrazhnova2637 and @Julia.Fedoseeva; and DC moderators @Robert.Cemper1003, @Eduard.Lebedyuk, @Dmitry.Maslennikov, and @John.Murray.

So! Dear developers! We in the DC team will continue to work hard to make the InterSystems Developer Community the best and most convenient place to ask questions, share experience, and discuss best practices on InterSystems Data Platforms. Thank you for your choice and your contribution, and do not hesitate to provide feedback and enhancement requests!

Stay tuned!

Evgeny Shvarov,
Community Manager.

Congratulations to the community! 5000 is a house number. Congratulations to all of us.
Announcement
Anastasia Dyubaylo · Oct 31, 2018
Hi Community!
See all the Key Notes videos from Global Summit 2018 in a dedicated Global Summit 2018 Keynotes playlist on DC YouTube Channel!
1. Our Investment In the Future – presented by Terry Ragon, CEO, Founder, and Owner of InterSystems.
2. Data Fueling the 21st Century – presented by Paul Grabscheid Vice President, Strategic Planning at InterSystems.
3. InterSystems IRIS: The Engine For Next Generation Solutions – presented by Carlos Kühl Nogueira, General Manager, Data Platform Initiatives, InterSystems.
4. Where Vision Meets Reality - Our Partnership With InterSystems – presented by Vish Anantraman, Chief Information Architect, Northwell Health.
5. Turning Data into Results with InterSystems IRIS – presented by Manoel Amorim, President, Facilit.
6. Accelerating What Matters in Healthcare – presented by @Donald.Woodlock, Vice President, HealthShare, InterSystems.
7. Disrupting Healthcare: Clinical Lab 2.0 – presented by Sam Merkouriou, President, Rhodes Group and Steve Ayer, Chief Information Officer, TriCore Reference Laboratories.
8. Creating A High-Performance Organization - Panel Discussion – presented by Tom Keppeler, Director of Communications, InterSystems.
9. Driving High Performance Through Digital Customer Engagement – presented by Ian Bonnet, Managing Director, PricewaterhouseCoopers.
10. What Can a Developer Learn From a Robot to Unlock Creativity? – presented by Gil Weinberg, Ph.D., Founding Director, Georgia Tech Center for Music Technology.
11. Technology Trends on InterSystems IRIS Data Platform – presented by @Jeffrey.Fried, Director of Product Development, Data Platforms, InterSystems and @Joseph.Lichtenberg, Product and Industry Marketing Director, Data Platforms, InterSystems.
12. How Affective Computing is Changing Patient Care – presented by Rosalind Picard, Sc.D., Founder and Director, Affective Computing Research Group, MIT Media Lab.
BIG APPLAUSE TO ALL THE SPEAKERS!
And...
If you want to learn more about Global Summit 2018, follow this link.
Enjoy and stay tuned on Developer Community YouTube Channel!
Announcement
Jeff Fried · Nov 12, 2018
InterSystems IRIS Data Platform™ version 2018.2 is now available as a preview release. This is the first release in our new quarterly continuous-delivery (CD) release stream. (Check out the announcement about InterSystems’ new release cadence for InterSystems IRIS.)

InterSystems IRIS 2018.2 includes important updates and fixes, as well as new features that improve performance, interoperability, and cloud deployment. For example, you'll find in this release:
InterSystems Cloud Manager support for availability zones, async mirroring, and service discovery
Key Management Interoperability Protocol (KMIP) support
Integrated Windows Authentication support for HTTP
SQL performance optimizations, including auto-parallel queries and Tune Table
Java support for Hibernate 5.2 or 5.3, a bulk loader utility with Java, and shared memory support for the Java Gateway

Preview releases like this one give our customers an early start working with new features and functionality. They are supported for evaluation, development, and test purposes, but not for production.

You can download the release on a new preview portal at the WRC download center, and the documentation is here.
Announcement
Anastasia Dyubaylo · Nov 7, 2018
Hey Developers!

A new session recording from Global Summit 2018 is available on the InterSystems Developers YouTube Channel: Using Blockchain with InterSystems IRIS.

This video covers the applicability of blockchain technology to current business challenges and presents a live application using InterSystems IRIS with blockchain integration.

Takeaway: I can integrate blockchain technology with InterSystems IRIS.

Presenters: @Joseph.Lichtenberg and @Evgeny.Shvarov

Big applause for these speakers, thank you guys! Additional materials for the video can be found in this InterSystems Online Learning course. Enjoy and stay tuned with InterSystems Developers YouTube!

Thanks, Anastasia! Also, the Ethereum Interoperability Adapter is available on InterSystems Open Exchange.

Hi Anastasia, the video is very interesting. Is it possible to retrieve the script of the demo? I work on this subject with my team. Regards, Eric

Hi Eric! Thanks for your interest! Contacted you directly.
Announcement
Evgeny Shvarov · Jul 25, 2019
Hi Developers!

We are ready to present you a new release of InterSystems Open Exchange. What's new?
Applications rating;
Open Exchange profile <-> InterSystems Developers profile linkage.
See the details below.

Apps rating

Yes, starting with this release you can show your attitude to different applications. If you like an app, open it and click the star in the top-right corner of the application page.

The feature is available for registered members. If you aren't one yet, sign in with your InterSystems Developers community account.

You can sort the apps by rating: click the sorting selector twice until "Sorted by stars" appears, and you'll see the apps sorted with the most stars on top. And you can filter the apps you starred, a way to see only your favorite apps.

InterSystems Developers Profile Linkage

We've introduced a new field in a member's profile: DC Link. Put the link to your member's page there and it will appear on your Open Exchange profile page, like here.

Also, we fixed a lot of bugs and introduced a set of small features; check the Open Exchange kanban page. Introduce your feature requests, submit your apps, and stay tuned!
Announcement
Anastasia Dyubaylo · Aug 9, 2019
Hi Community!
You're very welcome to watch a new video on InterSystems Developers YouTube, recorded by @Stefan.Wittmann, InterSystems Product Manager:
InterSystems API Manager Introduction
InterSystems API Manager is a tool that allows you to manage your APIs within InterSystems IRIS™ and InterSystems IRIS for Health™. In this video, @Stefan.Wittmann describes what API Manager is and describes several use cases in which the tool would be useful.
And...
Additional info about InterSystems API Manager you can find in this post.
Enjoy and stay tuned!
Announcement
Anastasia Dyubaylo · Aug 29, 2019
Hi Everyone!
New video, recorded by @Benjamin.DeBoe, is already on InterSystems Developers YouTube:
Scaling Fluidly with InterSystems IRIS
In this video, @Benjamin.DeBoe, InterSystems Product Manager, explains how sharding capabilities in InterSystems IRIS provide a more economical approach to fluidly scaling your systems than traditional vertical scaling.
And...
To learn more about InterSystems IRIS and its scalability features, you can browse more content at http://www.learning.intersystems.com.
Enjoy watching the video!
Announcement
Evgeny Shvarov · Aug 2, 2019
Hi Developers!

Happy to share with you the release notes of the InterSystems Developers community site for August 2019. What's new?
Direct messages between registered members;
Accounts merging;
Better navigation for serial posts;
Spanish DC enhancements: translated tags, analytics.
See the details below.

Direct messages

You asked about this a lot, so we've added it. Now members can communicate with each other and respond directly to job offers or job-seeking announcements. We allow the feature only for those members who contribute to the site. Learn more on direct messages.

Accounts merging

An InterSystems Developers account depends on the email: register with another email and you are another DC member. Sometimes this causes numerous alter egos on the site for one person, if a developer changes emails or changes jobs but stays with the Community. We appreciate that you stay with the community for years, and we can make this convenient by moving your content from one account to another. If you want to move your assets from different accounts into one and/or block the unused accounts, please contact our content manager @Irina.Podmazko and Irina will help you.

Serial posts navigation

We love it when you introduce posts with several parts; there are a lot of examples of such: check this one or another one. But the site's support for navigating such posts was poor, until this release! Today you can connect multi-part postings with Next and Prev parts and let your readers easily navigate between episodes of the story. How to do that? Edit the post and add the link to the Next or Prev part (or both), and this will introduce the arrows in the title of the article. Check how it works with these posts: part1, part2. We're looking forward to more multi-episode stories about InterSystems IRIS!

Spanish Tags and Spanish Analytics

We keep making enhancements to the Spanish DC. This release brings fully translated tags and public analytics for the members, postings, and views of the Spanish DC.

As always, we've fixed tons of bugs, made some plans for the next release, and are looking forward to your feature requests and bug reports. Stay tuned!
Article
Sean Connelly · Sep 10, 2019
In this article, we will explore the development of an IRIS client for consuming RESTful API services that have been developed to the OData API standard.

We will explore a number of built-in IRIS libraries for making HTTP requests and reading and writing JSON payloads, and see how we can use them in combination to build a generic client adaptor for OData. We will also explore the new JSON adapter for deserializing JSON into persistent objects.

Working with RESTful APIs

REST is a set of engineering principles that were forged from the work on standardizing the World Wide Web. These principles can be applied to any client-server communication and are often used to describe an HTTP API as being RESTful.

REST covers a number of broad principles that include stateless requests, caching, and uniform API design. It does not cover implementation details, and there are no general API specifications to fill in these gaps.

The side effect of this ambiguity is that RESTful APIs can lack some of the understanding, tools, and libraries that often build up around stricter ecosystems. In particular, developers must construct their own solutions for the discovery and documentation of RESTful APIs.

OData

OData is an OASIS specification for building consistent RESTful APIs. The OASIS community is formed from a range of well-known software companies that include Microsoft, Citrix, IBM, Red Hat, and SAP. OData 1.0 was first introduced back in 2007, and the most recent version, 4.1, was released this year.

The OData specification covers things like metadata, consistent implementations of operations, queries, and exception handling. It also includes additional features such as actions and functions.

Exploring the TripPinWS OData API

For this article we’ll be using the TripPinWS API, which is provided as an example by OData.org.

As with any RESTful API, we would typically expect a base URL for the service.
Visiting this base URL will also return a list of API entities.

https://services.odata.org:443/V4/(S(jndgbgy2tbu1vjtzyoei2w3e))/TripPinServiceRW

We can see that the API includes entities for Photos, People, Airlines, Airports, Me, and a function called GetNearestAirport.

The response also includes a link to the TripPinWS metadata document.

https://services.odata.org/V4/(S(djd3m5kuh00oyluof2chahw0))/TripPinServiceRW/$metadata

The metadata is implemented as an XML document and includes its own XSD document. This opens up the possibility of consuming metadata documents using code generated from the IRIS XML schema wizard. The metadata document might look fairly involved at first glance, but it's just describing the properties of types that are used to construct entity schema definitions.

We can get back a list of People from the API by using the following URL.

https://services.odata.org/V4/(S(4hkhufsw5kohujphemn45ahu))/TripPinServiceRW/People

This returns a list of 8 people, 8 being a hard limit for the number of entities per result. In the real world, we would probably use a much larger limit. It does, however, provide an example of how OData includes additional hypertext links such as @odata.nextLink, which we can use to fetch the next page of People in the search results.

We can also use query string values to narrow down the results list, such as selecting only the top 1 result.

https://services.odata.org/V4/(S(4hkhufsw5kohujphemn45ahu))/TripPinServiceRW/People?$top=1

We can also try filtering requests by FirstName.

https://services.odata.org/V4/(S(4hkhufsw5kohujphemn45ahu))/TripPinServiceRW/People?$filter=FirstName eq 'Russell'

In this instance, we used the eq operator to filter on all FirstNames that equal 'Russell'. Note the importance of wrapping strings in single quotes. OData provides a variety of operators that can be used in combination to build up highly expressive search queries.

IRIS %Net Package

IRIS includes a comprehensive standard library.
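Before switching to IRIS code, it's worth noting that these query options are ordinary URL parameters, so their construction is language-neutral. Here is an illustrative sketch only (Python; odata_url and the shortened base URL are hypothetical, and the real TripPin URL carries a per-session key):

```python
from urllib.parse import urlencode

# Hypothetical, shortened base URL; the real TripPin URL embeds a session key.
BASE = "https://services.odata.org/V4/TripPinServiceRW"

def odata_url(entity, **options):
    """Compose an OData query URL from system query options.

    Keyword names map onto OData's $-prefixed options, e.g. top=1 -> $top=1.
    urlencode percent-encodes the '$' and the quoted string literals for us.
    """
    params = {"$" + name: value for name, value in options.items()}
    query = urlencode(params)
    return BASE + "/" + entity + ("?" + query if query else "")

print(odata_url("People", top=1))
print(odata_url("People", filter="FirstName eq 'Russell'"))
```

The IRIS client built below assembles the same URLs, but through %Net request parameters rather than string concatenation.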
We’ll be using the %Net package, which includes support for protocols such as FTP, email, LDAP, and HTTP.

To use the TripPinWS service we will need to use HTTPS, which requires us to register an SSL/TLS configuration in the IRIS management portal. There are no complicated certificates to install, so it’s just a few steps:

1. Open the IRIS management portal.
2. Click on System Administration > Security > SSL/TLS Configurations.
3. Click the "Create New Configuration" button.
4. Enter the name "odata_org" and hit save.

You can choose any name you’d like, but we’ll be using odata_org for the rest of the article.

We can now use the HttpRequest class to get a list of all people. If the Get() worked, it will return 1 for OK. We can then access the response object and output the result to the terminal:

DC>set req=##class(%Net.HttpRequest).%New()
DC>set req.SSLConfiguration="odata_org"
DC>set sc=req.Get("https://services.odata.org:443/V4/(S(jndgbgy2tbu1vjtzyoei2w3e))/TripPinServiceRW/People")
DC>w sc
1
DC>do req.HttpResponse.OutputToDevice()

Feel free to experiment with the base HttpRequest before moving on. You could try fetching Airlines and Airports, or investigate how errors are reported if you enter an incorrect URL.

Developing a generic OData Client

Let's create a generic OData client that will abstract the HttpRequest class and make it easier to implement various OData query options. We’ll call it DcLib.OData.Client and it will extend %RegisteredObject. We’ll define several class parameters that subclasses can use to define the names of a specific OData service, as well as several properties that encapsulate runtime objects and values such as the HttpRequest object.
To make it easy to instantiate an OData client, we will also override the %OnNew() method (the class's constructor method) and use it to set up the runtime properties.

Class DcLib.OData.Client Extends %RegisteredObject
{

Parameter BaseURL;

Parameter SSLConfiguration;

Parameter EntityName;

Property HttpRequest As %Net.HttpRequest;

Property BaseURL As %String;

Property EntityName As %String;

Property Debug As %Boolean [ InitialExpression = 0 ];

Method %OnNew(pBaseURL As %String = "", pSSLConfiguration As %String = "") As %Status [ Private, ServerOnly = 1 ]
{
    set ..HttpRequest=##class(%Net.HttpRequest).%New()
    set ..BaseURL=$select(pBaseURL'="":pBaseURL,1:..#BaseURL)
    set ..EntityName=..#EntityName
    set sslConfiguration=$select(pSSLConfiguration'="":pSSLConfiguration,1:..#SSLConfiguration)
    if sslConfiguration'="" set ..HttpRequest.SSLConfiguration=sslConfiguration
    quit $$$OK
}

}

We can now define a client class that is specific to the TripPinWS service by extending DcLib.OData.Client and setting the BaseURL and SSL Configuration parameters in one single place.

Class TripPinWS.Client Extends DcLib.OData.Client
{

Parameter BaseURL = "https://services.odata.org:443/V4/(S(jndgbgy2tbu1vjtzyoei2w3e))/TripPinServiceRW";

Parameter SSLConfiguration = "odata_org";

}

With this base client in place, we can now create a class for each entity type that we want to use in the service.
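One detail worth isolating from %OnNew() above is the defaulting pattern: a constructor argument wins, otherwise the class-level parameter supplies the value. Sketched outside ObjectScript (illustrative Python only; the class names are hypothetical):

```python
class ODataClient:
    """Sketch of the article's client pattern: subclasses pin down
    class-level defaults, and the constructor lets callers override them."""
    base_url = ""      # plays the role of the BaseURL class parameter
    entity_name = ""   # plays the role of the EntityName class parameter

    def __init__(self, base_url=None):
        # Mirrors $select(pBaseURL'="":pBaseURL,1:..#BaseURL):
        # use the argument if given, else fall back to the class default.
        self.base_url = base_url or type(self).base_url
        self.entity_name = type(self).entity_name

class TripPinPeople(ODataClient):
    base_url = "https://services.odata.org/V4/TripPinServiceRW"  # illustrative
    entity_name = "People"
```

Each entity class that follows uses the same idea: pin one value down at the class level and inherit everything else.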
By extending the new client class, all we need to do is define the entity name in the EntityName parameter.

Class TripPinWS.People Extends TripPinWS.Client
{

Parameter EntityName = "People";

}

Next, we need to provide some more methods on the base DcLib.OData.Client class that will make it easy to query the entities.

Method Select(pSelect As %String) As DcLib.OData.Client
{
    do ..HttpRequest.SetParam("$select",pSelect)
    return $this
}

Method Filter(pFilter As %String) As DcLib.OData.Client
{
    do ..HttpRequest.SetParam("$filter",pFilter)
    return $this
}

Method Search(pSearch As %String) As DcLib.OData.Client
{
    do ..HttpRequest.SetParam("$search",pSearch)
    return $this
}

Method OrderBy(pOrderBy As %String) As DcLib.OData.Client
{
    do ..HttpRequest.SetParam("$orderby",pOrderBy)
    return $this
}

Method Top(pTop As %String) As DcLib.OData.Client
{
    do ..HttpRequest.SetParam("$top",pTop)
    return $this
}

Method Skip(pSkip As %String) As DcLib.OData.Client
{
    do ..HttpRequest.SetParam("$skip",pSkip)
    return $this
}

Method Fetch(pEntityId As %String = "") As DcLib.OData.ClientResponse
{
    if pEntityId="" return ##class(DcLib.OData.ClientResponse).%New($$$ERROR($$$GeneralError,"Entity ID must be provided"),"")
    set pEntityId="('"_pEntityId_"')"
    if $extract(..BaseURL,*)'="/" set ..BaseURL=..BaseURL_"/"
    set sc=..HttpRequest.Get(..BaseURL_..EntityName_pEntityId,..Debug)
    set response=##class(DcLib.OData.ClientResponse).%New(sc,..HttpRequest.HttpResponse,"one")
    quit response
}

Method FetchCount() As DcLib.OData.ClientResponse
{
    if $extract(..BaseURL,*)'="/" set ..BaseURL=..BaseURL_"/"
    set sc=..HttpRequest.Get(..BaseURL_..EntityName_"/$count")
    set response=##class(DcLib.OData.ClientResponse).%New(sc,..HttpRequest.HttpResponse,"count")
    quit response
}

Method FetchAll() As DcLib.OData.ClientResponse
{
    #dim response As DcLib.OData.ClientResponse
    if $extract(..BaseURL,*)'="/" set ..BaseURL=..BaseURL_"/"
    set sc=..HttpRequest.Get(..BaseURL_..EntityName,..Debug)
    set response=##class(DcLib.OData.ClientResponse).%New(sc,..HttpRequest.HttpResponse,"many")
    if response.IsError() return response
    //if the response has a nextLink then we need to keep going back to fetch more data
    while response.Payload.%IsDefined("@odata.nextLink") {
        //stash the previous value array, push the new values on to it and then
        //set it back to the new response and create a new value iterator
        set previousValueArray=response.Payload.value
        set sc=..HttpRequest.Get(response.Payload."@odata.nextLink",..Debug)
        set response=##class(DcLib.OData.ClientResponse).%New(sc,..HttpRequest.HttpResponse)
        if response.IsError() return response
        while response.Value.%GetNext(.key,.value) {
            do previousValueArray.%Push(value)
        }
        set response.Payload.value=previousValueArray
        set response.Value=response.Payload.value.%GetIterator()
    }
    return response
}

We've added nine new methods. The first six are instance methods for defining query options, and the last three are methods for fetching one, all, or a count of all entities.

Notice that the first six methods are essentially a wrapper for setting parameters on the HTTP request object. To make implementation coding easier, each of these methods returns an instance of this object so that we can chain the methods together. Before we explain the main FetchAll() method, let’s see the Filter() method in action.

set people=##class(TripPinWS.People).%New().Filter("UserName eq 'ronaldmundy'").FetchAll()
while people.Value.%GetNext(.key,.person) { write !,person.FirstName," ",person.LastName }

If we use this method, it returns:

Ronald Mundy

The example code creates an instance of the TripPinWS.People object. This sets the base URL and certificate configuration in its base class. We can then call its Filter() method to define a filter query and then FetchAll() to trigger the HTTP request.

Note that we can directly access the people results as a dynamic object, not as raw JSON data.
This is because we are also going to implement a ClientResponse object that makes exception handling simpler. We also generate dynamic objects depending on the type of result that we get back.First, let's discuss the FetchAll() method. At this stage, our implementation classes have defined the OData URL in its base class configuration, the helper methods are setting additional parameters, and the FetchAll() method needs to build the URL and make a GET request. Just as in our original command-line example, we call the Get() method on the HttpRequest class and create a ClientResponse from its results.The method is complicated because the API only returns eight results at a time. We must handle this in our code and use the previous result's nextLink value to keep fetching the next page of results until there are no more pages. As we fetch each additional page, we store the previous results array and then push each new result on to it.The Fetch(), FetchAll() and FetchCount() methods return an instance of a class called DcLib.OData.ClientResponse. 
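The nextLink accumulation that FetchAll() performs can be sketched on its own, independent of the HTTP plumbing (illustrative Python; fetch_page stands in for the HTTP GET):

```python
def fetch_all(fetch_page, first_url):
    """Follow @odata.nextLink until the server stops returning one,
    accumulating each page's 'value' array along the way.

    fetch_page is any callable mapping a URL to the decoded JSON payload
    (a dict); in the article this role is played by %Net.HttpRequest.Get().
    """
    values = []
    url = first_url
    while url:
        payload = fetch_page(url)
        values.extend(payload.get("value", []))
        # A missing key yields None and ends the loop, mirroring the
        # %IsDefined("@odata.nextLink") test in FetchAll().
        url = payload.get("@odata.nextLink")
    return values
```

With the paging logic clear, back to the DcLib.OData.ClientResponse class itself.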
Let's create that now to handle both exceptions and auto-deserialize valid JSON responses.

Class DcLib.OData.ClientResponse Extends %RegisteredObject
{

Property InternalStatus As %Status [ Private ];

Property HttpResponse As %Net.HttpResponse;

Property Payload As %Library.DynamicObject;

Property Value;

Method %OnNew(pRequestStatus As %Status, pHttpResponse As %Net.HttpResponse, pValueMode As %String = "") As %Status [ Private, ServerOnly = 1 ]
{
    //check for immediate HTTP error
    set ..InternalStatus = pRequestStatus
    set ..HttpResponse = pHttpResponse
    if $$$ISERR(pRequestStatus) {
        if $SYSTEM.Status.GetOneErrorText(pRequestStatus)["<READ>" set ..InternalStatus=$$$ERROR($$$GeneralError,"Could not get a response from HTTP server, server could be uncontactable or server details are incorrect")
        return $$$OK
    }
    //if mode is count, then the response is not JSON, its just a numeric value
    //validate that it is a number and return all ok if true, else let it fall through
    //to pick up any errors that are presented as JSON
    if pValueMode="count" {
        set value=pHttpResponse.Data.Read(32000)
        if value?1.N {
            set ..Value=value
            return $$$OK
        }
    }
    //serialise JSON payload, catch any serialisation errors
    try {
        set ..Payload={}.%FromJSON(pHttpResponse.Data)
    } catch err {
        //check for HTTP status code error first
        if $e(pHttpResponse.StatusCode,1)'="2" {
            set ..InternalStatus = $$$ERROR($$$GeneralError,"Unexpected HTTP Status Code "_pHttpResponse.StatusCode)
            if pHttpResponse.Data.Size>0 return $$$OK
        }
        set ..InternalStatus=err.AsStatus()
        return $$$OK
    }
    //check payload for an OData error
    if ..Payload.%IsDefined("error") {
        do ..HttpResponse.Data.Rewind()
        set error=..HttpResponse.Data.Read(32000)
        set ..InternalStatus=$$$ERROR($$$GeneralError,..Payload.error.message)
        return $$$OK
    }
    //all ok, set the response value to match the required modes (many, one, count)
    if pValueMode="one" {
        set ..Value=..Payload
    } else {
        set iterator=..Payload.value.%GetIterator()
        set ..Value=iterator
    }
    return $$$OK
}

Method IsOK()
{
    return $$$ISOK(..InternalStatus)
}

Method IsError()
{
    return $$$ISERR(..InternalStatus)
}

Method GetStatus()
{
    return ..InternalStatus
}

Method GetStatusText()
{
    return $SYSTEM.Status.GetOneStatusText(..InternalStatus)
}

Method ThrowException()
{
    Throw ##class(%Exception.General).%New("OData Fetch Exception","999",,$SYSTEM.Status.GetOneStatusText(..InternalStatus))
}

Method OutputToDevice()
{
    do ..HttpResponse.OutputToDevice()
}

}

Given an instance of the ClientResponse object, we can first test to see if there was an error. Errors can happen on several levels, so we want to return them in a single, easy-to-use solution.

set response=##class(TripPinWS.People).%New().Filter("UserName eq 'ronaldmundy'").FetchAll()
if response.IsError() write !,response.GetStatusText() quit

The IsOK() and IsError() methods check the object for errors. If an error occurred, we can call GetStatus() or GetStatusText() to access the error, or use ThrowException() to pass the error to an exception handler.

If there is no error, then the ClientResponse will assign the raw payload object to the response Payload property:

set ..Payload={}.%FromJSON(pHttpResponse.Data)

It will then set the response Value property to the main data array within the payload, either as a single instance or as an array iterator to traverse many results.

I've put all of this code together in a single project on GitHub https://github.com/SeanConnelly/IrisOData/blob/master/README.md which will make more sense when reviewed as a whole. All of the following examples are included in the source GitHub project.

Using the OData Client

There is just one more method we should understand on the base Client class: the With() method. If you don't want to create an instance of every entity, you can instead use the With() method with just one single client class.
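Before looking at With(), the layered error checks inside ClientResponse's %OnNew() deserve a standalone sketch: a transport failure, a non-2xx HTTP status, and an OData "error" payload all funnel into a single status the caller can test. Illustrative Python only (interpret_response is a hypothetical name, not part of the article's code):

```python
import json

def interpret_response(status_code, body):
    """Collapse the error layers into one (ok, detail) pair,
    loosely mirroring IsError()/GetStatusText() on ClientResponse."""
    try:
        payload = json.loads(body)
    except ValueError:
        # Not JSON at all: report the HTTP status if it looks wrong.
        if not str(status_code).startswith("2"):
            return False, "Unexpected HTTP Status Code %s" % status_code
        return False, "response was not valid JSON"
    # Valid JSON may still carry an OData error object.
    if isinstance(payload, dict) and "error" in payload:
        return False, payload["error"].get("message", "OData error")
    return True, payload
```

Now, on to the With() method.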
The With() method will establish a new client with the provided entity name:

ClassMethod With(pEntityName As %String) As DcLib.OData.Client
{
    set client=..%New()
    set client.EntityName=pEntityName
    return client
}

We can now use it to fetch all people using the base Client class:

/// Fetch all "People" using the base client class and .With("People")
ClassMethod TestGenericFetchAllUsingWithPeople()
{
    #dim response As DcLib.OData.ClientResponse
    set response=##class(TripPinWS.Client).With("People").FetchAll()
    if response.IsError() write !,response.GetStatusText() quit
    while response.Value.%GetNext(.key,.person) {
        write !,person.FirstName," ",person.LastName
    }
}

Or, using an entity-per-class approach:

/// Fetch all "People" using the People class
ClassMethod TestFetchAllPeople()
{
    #dim people As DcLib.OData.ClientResponse
    set people=##class(TripPinWS.People).%New().FetchAll()
    if people.IsError() write !,people.GetStatusText() quit
    while people.Value.%GetNext(.key,.person) {
        write !,person.FirstName," ",person.LastName
    }
}

As you can see, they’re very similar.
The correct choice depends on how important autocomplete is to you with concrete entities, and whether you want a concrete entity class to add more entity-specific methods.

DC>do ##class(TripPinWS.Tests).TestFetchAllPeople()
Russell Whyte
Scott Ketchum
Ronald Mundy
… more people

Next, let's implement the same for Airlines:

/// Fetch all "Airlines"
ClassMethod TestFetchAllAirlines()
{
    #dim airlines As DcLib.OData.ClientResponse
    set airlines=##class(TripPinWS.Airlines).%New().FetchAll()
    if airlines.IsError() write !,airlines.GetStatusText() quit
    while airlines.Value.%GetNext(.key,.airline) {
        write !,airline.AirlineCode," ",airline.Name
    }
}

And from the command line:

DC>do ##class(TripPinWS.Tests).TestFetchAllAirlines()
AA American Airlines
FM Shanghai Airline
… more airlines

And now airports:

/// Fetch all "Airports"
ClassMethod TestFetchAllAirports()
{
    #dim airports As DcLib.OData.ClientResponse
    set airports=##class(TripPinWS.Airports).%New().FetchAll()
    if airports.IsError() write !,airports.GetStatusText() quit
    while airports.Value.%GetNext(.key,.airport) {
        write !,airport.IataCode," ",airport.Name
    }
}

And from the command line:

DC>do ##class(TripPinWS.Tests).TestFetchAllAirports()
SFO San Francisco International Airport
LAX Los Angeles International Airport
SHA Shanghai Hongqiao International Airport
… more airports

So far we’ve been using the FetchAll() method.
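OData addresses a single entity by appending a key predicate in parentheses, with string keys wrapped in single quotes, as in People('russellwhyte'). A minimal sketch of that URL shape (illustrative Python; entity_url is a hypothetical helper, not part of the article's code):

```python
def entity_url(base, entity, key):
    """Build an OData single-entity URL such as People('russellwhyte').

    String keys are quoted, numeric keys are not, matching the way the
    client's Fetch() method wraps its entity ID in "('...')".
    """
    if not base.endswith("/"):
        base += "/"
    literal = "'%s'" % key if isinstance(key, str) else str(key)
    return "%s%s(%s)" % (base, entity, literal)
```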
We can also use the Fetch() method to fetch a single entity using the entity’s primary key:

/// Fetch single "People" entity using the person's ID
ClassMethod TestFetchPersonWithID()
{
    #dim response As DcLib.OData.ClientResponse
    set response=##class(TripPinWS.People).%New().Fetch("russellwhyte")
    if response.IsError() write !,response.GetStatusText() quit
    //lets use the new formatter to pretty print to the output (latest version of IRIS only)
    set jsonFormatter = ##class(%JSON.Formatter).%New()
    do jsonFormatter.Format(response.Value)
}

In this instance, we are using the new JSON formatter class, which can take a dynamic array or object and output it as formatted JSON.

DC>do ##class(TripPinWS.Tests).TestFetchPersonWithID()
{
  "@odata.context":"http://services.odata.org/V4/(S(jndgbgy2tbu1vjtzyoei2w3e))/TripPinServiceRW/$metadata#People/$entity",
  "@odata.id":"http://services.odata.org/V4/(S(jndgbgy2tbu1vjtzyoei2w3e))/TripPinServiceRW/People('russellwhyte')",
  "@odata.etag":"W/\"08D720E1BB3333CF\"",
  "@odata.editLink":"http://services.odata.org/V4/(S(jndgbgy2tbu1vjtzyoei2w3e))/TripPinServiceRW/People('russellwhyte')",
  "UserName":"russellwhyte",
  "FirstName":"Russell",
  "LastName":"Whyte",
  "Emails":[
    "Russell@example.com",
    "Russell@contoso.com"
  ],
  "AddressInfo":[
    {
      "Address":"187 Suffolk Ln.",
      "City":{
        "CountryRegion":"United States",
        "Name":"Boise",
        "Region":"ID"
      }
    }
  ],
  "Gender":"Male",
  "Concurrency":637014026176639951
}

Persisting OData

In the final few examples, we will demonstrate how the OData JSON could be deserialized into persistent objects using the new JSON adapter class. We will create three classes — Person, Address, and City — which will reflect the Person data structure in the OData metadata.
We will use %JSONIGNOREINVALIDFIELD set to 1 so that additional OData properties such as @odata.context do not throw a deserialization error.

Class TripPinWS.Model.Person Extends (%Persistent, %JSON.Adaptor)
{

Parameter %JSONIGNOREINVALIDFIELD = 1;

Property UserName As %String;

Property FirstName As %String;

Property LastName As %String;

Property Emails As list Of %String;

Property Gender As %String;

Property Concurrency As %Integer;

Relationship AddressInfo As Address [ Cardinality = many, Inverse = Person ];

Index UserNameIndex On UserName [ IdKey, PrimaryKey, Unique ];

}

Class TripPinWS.Model.Address Extends (%Persistent, %JSON.Adaptor)
{

Property Address As %String;

Property City As TripPinWS.Model.City;

Relationship Person As Person [ Cardinality = one, Inverse = AddressInfo ];

}

Class TripPinWS.Model.City Extends (%Persistent, %JSON.Adaptor)
{

Property CountryRegion As %String;

Property Name As %String;

Property Region As %String;

}

Next, we will fetch Russell Whyte from the OData service, create a new instance of the Person model, then call the %JSONImport() method using the response value.
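The effect of %JSONIGNOREINVALIDFIELD can be sketched language-neutrally: keep only the keys the model declares, so extras such as @odata.context are silently dropped instead of raising errors (illustrative Python; the Person dataclass and from_json are hypothetical analogues, not the article's classes):

```python
from dataclasses import dataclass, field, fields

@dataclass
class Person:
    UserName: str = ""
    FirstName: str = ""
    LastName: str = ""
    Emails: list = field(default_factory=list)

def from_json(cls, payload):
    """Populate cls from a JSON dict, discarding any undeclared keys -
    the same forgiving behaviour %JSONIGNOREINVALIDFIELD=1 switches on."""
    known = {f.name for f in fields(cls)}
    return cls(**{k: v for k, v in payload.items() if k in known})
```

Back in ObjectScript, %JSONImport() performs this mapping for us.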
This will populate the Person object, along with the Address and City details.

ClassMethod TestPersonModel()
{
    #dim response As DcLib.OData.ClientResponse
    set response=##class(TripPinWS.People).%New().Fetch("russellwhyte")
    if response.IsError() write !,response.GetStatusText() quit
    set person=##class(TripPinWS.Model.Person).%New()
    set sc=person.%JSONImport(response.Value)
    if $$$ISERR(sc) write !!,$SYSTEM.Status.GetOneErrorText(sc) return
    set sc=person.%Save()
    if $$$ISERR(sc) write !!,$SYSTEM.Status.GetOneErrorText(sc) return
}

We can then run a SQL query to see that the data has been persisted.

SELECT ID, Concurrency, Emails, FirstName, Gender, LastName, UserName
FROM TripPinWS_Model.Person

ID: russellwhyte
Concurrency: 637012191599722031
Emails: Russell@example.com Russell@contoso.com
FirstName: Russell
Gender: Male
LastName: Whyte
UserName: russellwhyte

Final Thoughts

As we’ve seen, it’s easy to consume RESTful OData services using the built-in %Net classes. With a small amount of additional helper code, we can simplify the construction of OData queries, unify error reporting, and automatically deserialize JSON into dynamic objects.

We can then create a new OData client just by providing its base URL and, if required, an HTTPS configuration. We then have the option to use this one class and the .With('entity') method to consume any entity on the service, or create named subclasses for the entities that we are interested in.

We have also demonstrated that it's possible to deserialize JSON responses directly into persistent classes using the new JSON adaptor. In the real world, we might consider denormalizing this data first and ensure that the JSON adapter class works with custom mappings.

Finally, working with OData has been a real breeze. The consistency of service implementation has required much less code than I often experience with bespoke implementations. Whilst I enjoy the freedom of RESTful design, I would certainly consider implementing a standard in my next server-side solution.
Awesome! Thanks Sean,
This helped me understand the OData specification in a quick way.
Paul Excellent post. My app lets you expose IRIS as an OData server, see: https://openexchange.intersystems.com/package/OData-Server-for-IRIS
Announcement
Evgeny Shvarov · Sep 4, 2019
Hi InterSystems Developers!

Here are the release notes for the new Developer Community features added in August 2019. What's new?

- Site performance fix;
- New features for job opportunity announcements;
- A feature to embed DC posts into other sites;
- Minor enhancements.

See the details below.

DC Performance fix

In August we fixed a major performance problem, and we don't think any more tweaks are needed. If you feel that some DC functionality needs performance fixes, please submit an issue.

New features for Job Opportunities on the InterSystems Developer Community

Developers! We want you to know about all the opportunities for a job as InterSystems developers, so with this release we've introduced a few new features to make job opportunity announcements on DC more visible and handy to use.

Every post with a job opportunity now has a button - I'm interested - which, if you click it, sends a Direct Message to the member who submitted the vacancy with the following text: "Hi! I'm interested in your job opportunity "name of job opportunity"(link). Send me more information please".

Job opportunities now have a new icon - a suitcase (for your laptop?).

Job posters! If a job opportunity is closed, you can now click a Close button, which will hide it and let developers know that the opportunity is no longer available. This action moves the opportunity into drafts.

DC articles embedded in other sites

With this release, we introduced a feature to embed a DC post (an article, event, or question) into another site. Just add the

"?iframe"

parameter to an article and this gives you a link ready to be embedded into another site as an iframe.
Minor changes
Again we've fixed a lot of bugs and introduced some nice features, e.g. better support for tables in markdown and new options for sorting members: by post rating, comment rating, and answer rating.
Also, we added an extra link to the app at the bottom of the article if a link to an Open Exchange app is provided.
We also added a few enhancements to translated article management - see the new articles in the Spanish Community.
See the full list of changes with this release and submit your issues and feature requests in a new kanban.
Stay tuned!