Rich Taylor · Dec 28, 2018
Connor, this is true for the Docker0 bridge, which I have set up and noted in the post. The problem is that docker-compose does not use this setting at all. Unless you configure the docker-compose.yml file using one of the methods I mention, you will still get a 172.x.x.x address.
Rich
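For anyone landing here later: one way to get docker-compose onto a specific subnet is to override the project's default network in docker-compose.yml. A minimal sketch; the subnet value below is illustrative, not a recommendation:

```yaml
# docker-compose.yml fragment: pin the project's default network
# to a known subnet instead of an auto-assigned 172.x.x.x range.
# The subnet here is a made-up example; pick one for your environment.
networks:
  default:
    ipam:
      config:
        - subnet: 192.168.55.0/24
```

Services in the compose file then get addresses from that range without any per-service changes.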
Rich Taylor · Oct 8, 2018
You can use the community license for whatever you want. There are limitations on configuration. Once you exceed the capabilities enabled by this license, you would need to move up to some kind of paid license. Using it for learning the product should not come close to the limit, however. Have fun!!
Rich Taylor · May 30, 2018
I see that the ID is not in the export. However, the import does figure out how to match an import to an existing item, since it does not allow you to overwrite those existing records. Otherwise we would see a duplication of records under new IDs.
As for the methods to use, I would have to disagree with this recommendation. A system administrator should not have to write code to maintain the systems under their care. Documentation is important, but that can be accomplished with a literal document. Then, failing an export/import or enterprise management function, those changes would be made manually on all affected systems. Writing code is less clear to everyone and is no less work.
Rich Taylor · May 30, 2018
Let me clarify: this has to do with the ExportTasks and ImportTasks methods of the %SYS.Task class. I need to know the qspec options that have an impact on these processes.
As to question 2: the process is that they are setting up a new server for backup and want to replicate what they have set up on the current server. Exporting and importing what is currently present is the best way. If they are going to write a program, then they could just as well compare each existing task and make the changes manually. There is an ongoing maintenance side to this which would also be better handled with an export and import.
So, back to the original question: is there any way to tell ImportTasks to override the tasks that exist?
Rich Taylor · May 30, 2018
Eduard, I had some additional questions. What are the options for qspec beyond the default 'd'? What if I want to override system-defined tasks because I have adjusted things like schedules? Is there an option (qspec?) to do this?
Rich Taylor · May 10, 2018
Eduard, OK, I had not noticed that, but you are correct. I had tried other methods first, as I noted before, and ran into issues loading onto the new systems. I had obviously skipped the step of verifying the export file when I tried this method. So I gather that you HAVE to pass in a list of IDs to export; leaving it blank does not export all. As I mentioned, the documentation is extremely sparse on this API. I will test this again later.
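For anyone following this thread, here is a rough sketch of the pattern I am describing. The exact parameter lists of ExportTasks and ImportTasks are my assumptions (the class reference I have is sparse), so treat this as pseudocode and verify against your version's documentation:

```objectscript
// Sketch only -- method signatures are assumed, not confirmed.
// Build an explicit list of task IDs; leaving the list empty
// does NOT export everything.
set ids = ""
set id = $Order(^SYS("Task","TaskD",""))
while id '= "" {
    set ids = ids_$ListBuild(id)
    set id = $Order(^SYS("Task","TaskD",id))
}
set sc = ##class(%SYS.Task).ExportTasks("c:\temp\tasks.xml", .ids)
if 'sc do $System.Status.DisplayError(sc)

// On the target system; "d" is the default qspec (display output):
set sc = ##class(%SYS.Task).ImportTasks("c:\temp\tasks.xml", "d")
```

The open question above still stands: whether any qspec flag makes ImportTasks overwrite tasks that already exist.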
Rich Taylor · May 9, 2018
ERROR #6037: Nothing imported.
Not terribly helpful, I'm afraid. I know that when I attempted to use SQL I was getting many "Field Read Only" errors, so this may be related to that.
Rich Taylor · May 9, 2018
I agree, except that the import would not work. There is little documentation on utilizing these tools and I cannot even read the source code.
Rich Taylor · May 9, 2018
OK, here is the procedure that worked for me.
Export:
merge ^TaskList = ^SYS("Task","TaskD")
set sc = $System.OBJ.Export("TaskList.GBL","c:\temp\TaskList.gbl")
Note that the extension GBL is not part of the global name; it is used to indicate that we want to export a global. The file destination is completely up to you.
Import:
set sc = $System.OBJ.Load("c:\temp\TaskList.gbl",,.log)
merge ^SYS("Task","TaskD") = ^TaskList
Rich Taylor · May 9, 2018
I am clicking on the class, not the package, in my build, and getting what looks like package documentation. There was obviously an issue in the build I was using. Regardless, the code for export worked, but I can't get the import to function. Looking at exporting globals now.
Rich Taylor · May 9, 2018
Eduard, thanks for the info. I was looking at that but could not find the global reference in the class definition or in searching the global list. Now I know why: it's not in a global by itself. Let me see if this will work.
Rich Taylor · May 9, 2018
I see this in the latest version of the online docs, which is version 2017.2. Obviously something was amiss in the build I was using. I have tried this, but the import does not work. I get an error that only states nothing was imported. Not terribly helpful.
Rich Taylor · May 9, 2018
Interesting. I am looking at Cache version 2017.1.0.792 and these classes have none of this documentation. I will have to see if this exists on the version the customer is using. This is how the class reference for %SYS.Task looks on that version:
Rich Taylor · Feb 26, 2018
You're welcome. Since you said you are still learning LDAP, I am putting a couple of links to LDAP documentation and articles below. Hope that helps you out.
http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=GCAS_LDAP
http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=EGIN_options_connectivity_adapters
https://community.intersystems.com/post/global-summit-2016-ldap-beyond-simple-schema
Rich Taylor · Feb 26, 2018
First, you can access Ensemble Credentials using the Ens.Config.Credentials class. To be clear, these are NOT user definitions from the Security module. These are defined via the Ensemble Management Portal options under Ensemble -> Configure -> Credentials.
That should work for you. I would still like to better understand what is going on in the application here that drives this. You seem to be indicating that this is a user logging into Ensemble. If you could detail the workflow that is happening and how it relates to Ensemble Services, we might be able to better advise you.
Finally, I want to make you aware that the LDAP interface in InterSystems technologies has a method for using groups to define everything the security model needs. In fact, that is the default method in recent versions.
The best path forward is to get your Sales Engineer (SE) involved in what you are trying to achieve. That person would be best suited to dig into your requirements and advise you. If, for some reason, you cannot contact your SE or don't know who that is, send me a private message. I'd be happy to help out more directly.
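As a quick illustration, here is a minimal ObjectScript sketch of reading one of these entries. The ID "MyAppUser" is a hypothetical example, and the property names reflect my understanding of Ens.Config.Credentials, so check your class reference before relying on them:

```objectscript
// Hypothetical example: open a credentials entry defined under
// Ensemble -> Configure -> Credentials. The ID "MyAppUser" is made up.
set cred = ##class(Ens.Config.Credentials).%OpenId("MyAppUser")
if $IsObject(cred) {
    write "Username: ", cred.Username, !
    // cred.Password is then available to your business host code
}
else {
    write "No credentials entry with that ID", !
}
```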
Rich Taylor · Feb 26, 2018
Ensemble Credentials are normally used to satisfy security for an Ensemble business host. This separates the maintenance of security from the maintenance of the actual interfaces. The application of the security is handled completely by Ensemble in that scenario. This does not appear to be how you are attempting to utilize it. It would help to better understand your use case here. What is the entry path/service that is utilizing delegated authentication?
Rich Taylor · Jan 10, 2018
No, it is not 'necessary'. However, I do like to have an environment that more closely matches what one might need in production. This is both for my own experience and to be able to show InterSystems technology in a manner that might occur for a client.
I do use docker exec, though I choose to go to bash so I have more general access. I actually wrote a simple cmd file to do this and added it to a menu on my toolbar:
@echo off
docker container ls --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}"
echo:
set /P Container=Container ID: 
docker exec -it %Container% bash
Rich Taylor · Jan 9, 2018
Let me add my experience to this comment. I have been wading into the Docker ocean. I am on Windows and really did not want to run a Linux VM to get Docker containers (seemed a bit redundant to me), so Docker for Windows was the way to go. So far this has worked extremely well for me. I am running an Ubuntu container with Ensemble added in. My dockerfile is a simplistic version of the one earlier in these comments. I am having only one issue, related to getting the SSH daemon to run when the container starts. I hope to have all my local instances moved into containers soon.
My feeling is that this will be great for demonstrations, local development, and proofs of concept. I would agree that for any production use, having a straight Linux environment with Docker would be a more robust and stable solution.