Very interesting, @Philipp Bonin!
Translate to Python is a really impressive feature!
Thanks, Robert!
Anyway, I think every operation should expose two classes: a request class and a response class. They act as the interfaces of a given service or operation.
And once I start building a transformation where it is known where the router gets the message (response) from and where it directs it, the data transformation builder could show these message classes on the source and target sides of the transformation. This will save developers a significant amount of time.
If we think about how this could be implemented, what about extending operations with a MessageClasses class that adds two properties, RequestClass and ResponseClass, which the operation developer could override with the proper class types?
This is crucial if we have a lot of third-party Operation and Service builders.
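As a rough sketch of the idea (all class and parameter names below are hypothetical, not an existing Ens API), an operation could advertise its message interface via class parameters that the transformation builder could then read:

```objectscript
/// Hypothetical sketch: an operation declaring its message interface
Class Demo.MyOperation Extends Ens.BusinessOperation
{

/// The request (incoming) message class this operation accepts
Parameter REQUESTCLASS = "Demo.Msg.MyRequest";

/// The response (outgoing) message class this operation returns
Parameter RESPONSECLASS = "Demo.Msg.MyResponse";

}
```

A wizard could then query these parameters to pre-fill the source and target classes of a new transformation.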
The links to the InterSystems IRIS Community Editions are updated.
The application should work on either IRIS Community Edition or IRIS for Health Community Edition. Both can be downloaded as host (Mac, Windows) versions from the Evaluation site, or used as containers pulled from the InterSystems Container Registry or Community Containers: intersystemsdc/iris-community:latest or intersystemsdc/irishealth-community:latest.
Thanks for reporting!
I like your idea of introducing it in a "New Transform" wizard.
Ah, thanks Alex! I should have enabled the Testing option for the Production first.
Yes, this shows a set of available requests, but it doesn't expose response types.
IMO request and response message classes are like the interfaces of a service or a production, so it should be very obvious which ones they are.
Hi Alex! This is a good one about Test Action!
However, it doesn't work for me; maybe it is a UI glitch.
Yes, thank you Robert. I'm wondering why the class of the request message (which is the same as the incoming message) and of the response message (which is the outgoing one) are not properties of the operation class. Every time we build a router, it connects one operation or service with another, so these classes should be suggested automatically in the transformation UI.
Amazing...
Can we do the same against DC pages?
I overlooked your package and introduced my own "one-class" package to handle this.
So the Setup.Init() method is not a loop anymore :)
ClassMethod Init(TgToken As %String, GPTKey As %String) As %Status
{
    set st = $$$OK
    set production = "shvarov.telegramgpt.i14y.TgGptProduction"
    // set the Telegram token on both the inbound service and the outbound operation
    for item = "Telegram.InboundService","Telegram.OutboundOperation" {
        set st = ##class(shvarov.i14y.Settings).SetValue(production, item, "Token", TgToken)
        quit:$$$ISERR(st)
    }
    // propagate a failure from the loop instead of overwriting st below
    return:$$$ISERR(st) st
    set item = "St.OpenAi.BO.Api.Connect"
    set st = ##class(shvarov.i14y.Settings).SetValue(production, item, "ApiKey", GPTKey)
    return st
}

Thank you, @Kurro Lopez! I'll take a look!
The bonus set is updated. Two bonuses added:
Thank you, @Guillaume Rongier! I like it!
But sometimes we can just have a production with "standard" services and operations, so there is no option to override OnInit().
How can I use it as a parameter to an IPM installation?
Should I add an issue?
Could %classes usage help?
Also, could mapping into %All help?
Actually, you can make an invoke with a %Status check in a "before" phase and return a not-OK status with a message. This will prevent the installation.
The only caveat is that this approach works only for "pre-initialised" settings, i.e. those that are present in the production class's XData block. So, if you plan to make changes programmatically, initialise these settings with placeholders, and this method will be able to change them programmatically later.
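For illustration, such a placeholder in the production class's XData block could look like this (the production, item, and class names here are made up; the Setting element syntax follows the standard production definition format):

```objectscript
XData ProductionDefinition
{
<Production Name="Demo.Production">
  <Item Name="Telegram.InboundService" ClassName="Demo.InboundService">
    <!-- placeholder value, intended to be replaced programmatically later -->
    <Setting Target="Host" Name="Token">PLACEHOLDER</Setting>
  </Item>
</Production>
}
```

Because the setting exists in the XData definition, a programmatic SetValue call can later overwrite the placeholder with the real token.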
Introduced a module that does what @Sergey Mikhailenko suggested.
So install it:
USER>zpm "install production-settings"
and call:
do ##class(shvarov.i14y.Settings).SetValue("ProductionName","ServiceOrOperationName","Setting",Value)

Hi @Marcel Lee! We didn't do this before. I sent you an email.
Looking forward!
For now it ended up the following way:

USER>d ##class(shvarov.telegramgpt.Setup).Init($system.Util.GetEnviron("TG_BOT_TOKEN"),$system.Util.GetEnviron("OPENAPI_KEY"))

This is how we can set up the production after docker build and start in the following project. Thank you, @Guillaume Rongier!
Thanks @Alex Woodhead!
Already implemented in this manner. Now the module can be installed as is:
USER>zpm "install telegram-gpt -D TgToken=your_telegram_token -D GPTKey=your_ChatGPT_key"

Thank you, @Sergey Mikhailenko!
This is exactly the thing I was looking for.
Implemented in a new version of the ChatGPT Telegram bot.
The new version can also be installed as:
USER>zpm "install telegram-gpt -D TgToken=your_telegram_token -D GPTKey=your_ChatGPT_key"
so you can pass the Telegram Token and ChatGPT API keys as production parameters.
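For context, such -D parameters are typically declared as defaults in the package's module.xml and referenced elsewhere in the manifest via ${...} substitution. A sketch (the names mirror the example above; the actual manifest of the telegram-gpt package may differ):

```objectscript
<!-- declare overridable defaults; zpm "install ... -D TgToken=..." replaces them -->
<Default Name="TgToken" Value=""/>
<Default Name="GPTKey" Value=""/>

<!-- pass the values to a setup method during installation -->
<Invoke Class="shvarov.telegramgpt.Setup" Method="Init">
  <Arg>${TgToken}</Arg>
  <Arg>${GPTKey}</Arg>
</Invoke>
```

This is what lets the same package be installed unattended with different credentials.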
Thank you, @Kurro Lopez! And thanks for introducing the chatGPT package to the community!
It is your choice, but I personally recommend using IRIS docker images for development.
It is very convenient, and this approach eliminates a gazillion problems you can run into when developing with a desktop IRIS installation.
Nothing happens until you intentionally load classes from a new branch and other resources into an IRIS namespace.
But I highly recommend rebuilding the IRIS docker image and reloading classes from the new branch when switching branches; otherwise you could end up with a mix of two (or even more) branches in IRIS, which can lead to unexpected behaviour.
Great article, @Muhammad Waseem !
I'd add a docker command to start a terminal.
First, this one launches IRIS and creates a fresh DEV namespace along with the user 'demo' and password 'demo':
docker run --rm --name iris-demo -d -p 9091:1972 -p 9092:52773 -e IRIS_PASSWORD=demo -e IRIS_USERNAME=demo -e IRIS_NAMESPACE=DEV intersystemsdc/iris-community
Then, to launch a terminal in the DEV namespace:
docker exec -it iris-demo iris session iris -U DEV
DEV>
@Michael Davidovich , this is a good idea!
If you could provide a repo with a legacy application inviting dockerization, I believe a lot of DC members could help.
Here is another video that updates the class and commits changes to the repo.
Well, if you have a git repo with ObjectScript files cloned and VSCode connected to IRIS, all your files are imported into IRIS and can be executed there.
When you change the ObjectScript files in VSCode and compile, they are imported into IRIS automatically.
VSCode and git maintain the versioning of changes for you.
I understand the argumentation; it makes sense. Just curious: how do you debug those unit tests that fail?