
Article
· 3 hr ago 2m read

Linking tables programmatically

InterSystems FAQ rubric

In InterSystems IRIS, you can create linked tables with commands, instead of using System Explorer > SQL > Wizard > Linked Tables in the Management Portal:

To create a linked table, use the CreateLinkedTable method of the %SYSTEM.SQL.Schema class. See the class reference for details.

To run it, proceed as follows:

set sc = $SYSTEM.SQL.Schema.CreateLinkedTable("<dsn>","<Schema>","<Table>","<primaryKeys>","<localClass>","<localTable>","")

/// 1st argument: dsn - SQL Gateway connection name
/// 2nd argument: Schema - source schema name
/// 3rd argument: Table - source table name
/// 4th argument: primaryKeys - primary key
/// 5th argument: localClass - name of the linked class (for example, User.LinkedClass)
/// 6th argument: localTable - name of the linked SQL table (SqlTableName)
/// 7th argument: columnMap - linked field information

If you run it this way, the linked table is created with the ReadOnly attribute. If you want to remove the ReadOnly attribute, you must specify it in the seventh argument, columnMap.

set columnMap("external field name") = $lb("new class property name","new sql field name","read-only(1/0)")
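As a sketch (the DSN, table, and class names below are hypothetical), you might build a columnMap entry per external column, marking each one writable, and pass the array by reference as the seventh argument:

```objectscript
// Hypothetical example: link MySchema.Person over a "MyDSN" gateway connection.
// Each columnMap entry follows the layout above:
//   $lb(new class property name, new sql field name, read-only flag)
set columnMap("ID") = $lb("ID","ID",0)
set columnMap("Name") = $lb("Name","Name",0)
// Pass columnMap by reference (note the leading dot)
set sc = $SYSTEM.SQL.Schema.CreateLinkedTable("MyDSN","MySchema","Person","ID","User.LinkedPerson","LinkedPerson",.columnMap)
if 'sc { do $SYSTEM.Status.DisplayError(sc) }
```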

In this example, a columnMap is built that sets ReadOnly to 0 for every field (column), and a linked table is created. The primaryKey is set to inherit the primaryKey of the linked table. Usage is as follows:

do ##class(ISC.LinkUtils).LinkTable("<dsn>","<Schema>","<Table>","<localClass>")

/// 1st argument: dsn - SQL Gateway connection name
/// 2nd argument: Schema - source schema name of the link
/// 3rd argument: Table - source table name of the link
/// 4th argument: localClass - name of the link's target class (for example, User.LinkedClass)

You can also see the example used here: https://github.com/Intersystems-jp/CreateLinkedTable

Announcement
· 9 hr ago

Join our community's 10th-anniversary video!

This year our InterSystems Developer Community turns 10, and we invite you to celebrate with us!

We are putting together a special community video featuring greetings and memories from Developer Community members around the world.

Want to take part? It's easy:

▶️ Record a short video (1-2 minutes) in which you:

  • Share a memorable moment or highlight from your time in the Developer Community
  • Offer your congratulations on the 10th anniversary 🎊

We will combine everyone's clips into one big celebration video for all to enjoy! 🎬✨

👉 Click here to record your video

It only takes a few minutes: just follow the on-screen prompts, no setup required. Once you are done, we receive your video automatically.

Don't miss this chance to be part of the official 10th-anniversary celebration! 🥂

Article
· 9 hr ago 2m read

Webinar replay | Building a real-time data hub with IDFS: from multi-source integration to intelligent analytics

On October 17 at 14:00 we held an online webinar titled "Building a real-time data hub with IDFS: from multi-source integration to intelligent analytics," where InterSystems sales engineer @Jeff Liu
introduced InterSystems Data Fabric Studio (IDFS).

The replay is now available; 👉 click here to watch! InterSystems Data Fabric Studio (IDFS) offers a new way to deliver the right data to the right consumers at the right time, in a secure and controlled environment. IDFS is a fully cloud-managed solution designed to make it easy to implement and maintain a smart data fabric, connecting and transforming disparate data into a single, unified, actionable source of information. This self-service solution lets data analysts, data stewards, and data engineers access and process the data business stakeholders need, without depending on developers.

The session shows how to achieve seamless integration of heterogeneous data systems through automated multi-source data pipelines (defining data-source connections, field extraction, and cleansing rules) and business-calendar-driven real-time scheduling (running data tasks automatically on a recurring schedule).

In this session you will see the following classic scenarios:

  • Data engineer's view: define data-transformation logic with the visual Recipes tool and load data into analytics tables automatically, without writing code;
  • Analyst practice: build a production-efficiency BI cube on the consolidated, standardized dataset and drive dynamic dashboards in Power BI;
  • Compliance management: use the built-in snapshot scheduling feature to automatically generate audit-ready historical data archives, combined with hierarchical access control (administrator / engineer / analyst roles) to keep data secure and traceable.

Whether you are a data engineer, an architect, or an AI application developer, this webinar offers hands-on IDFS experience, architecture design ideas, and a look at upcoming trends. IDFS helps you easily deploy data-centric AI applications that bridge data and application silos!

We look forward to further interaction with you.

1. Comments and questions

If you have questions during the session, or would like to discuss further, click the "Ask a question" button at the top of the screen to submit your question; we will collect the questions after the session and reply to you by email.

2. Survey with prizes

During the session, click "Survey" in the upper-right corner of the screen and complete the questionnaire for a chance to win a custom gift.

Come join us ٩(๑>◡<๑)۶ 👉 Click here to watch

Article
· 13 hr ago 6m read

IKO Plus: Multi-Cluster IrisClusters Propagated with Karmada

Kamino for IrisClusters

If you are in the business of building robust High Availability, Disaster Recovery, or stamping out multiple environments rapidly and consistently, Karmada may just be the engine powering your cloning facility.



I lost the clone war with KubeAdmiral but won the pod race with Karmada, and I would like to pay it forward with what I figured out. This is a multi-cluster solution that I would consider day-zero provisioning with day-one management of Kubernetes objects, IrisCluster included.


To keep in line with Cloud Native Computing Foundation standards, the Star Wars analogy is required, so here it goes.




Boba Fett's genetics were sent to and prepared on Kamino, which produced numerous clones that were subsequently deployed across the galaxy. Some went to Endor, some went to Tatooine, others got deployed on the Death Star, and some even defected, bought a Ford F-150, and moved to Montana. But in every case, the clone essentially evolved on its own while retaining the base genetic footprint. That is roughly the idea behind Karmada: deploying resources to meet HA/DR, stretched, or purpose-built environments from a single declaration of what the IrisCluster should be.

If you are looking for a more plausible use case for understanding Karmada, or have generally fallen out of favor with Disney's acquisition of the Star Wars franchise, here is one to take to a meeting: multi-cluster secrets management.

So if you are a Kubehead (tm) who tends to do this on the regular to sync secrets across clusters...

kubectl --context source get secret shhhh-secret -o yaml | kubectl --context target apply -f -

You may want to look at this from just that angle as a simple yet powerful backdrop.

Goal

Let's provision a pair of clusters, configure Karmada on one of them, and join the second one to the Karmada control plane using the PUSH model. Once the second cluster is joined, we are going to provision an IrisCluster on the Karmada control plane and propagate it to the member cluster.

Then, let's add a third cluster to Karmada and deploy the same IrisCluster to that one.



Clusters

Create a pair of clusters in Kind, one named Fett and the other Clone; this will also install the Cilium CNI:

 

cat <<EOF | kind create cluster --name ikofett --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
networking:
  disableDefaultCNI: true
EOF

cat <<EOF | kind create cluster --name ikoclone --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
networking:
  disableDefaultCNI: true
EOF

kind get kubeconfig --name ikoclone > ikoclone.kubeconfig
kind get kubeconfig --name ikofett > ikofett.kubeconfig

KUBECONFIGS=("ikoclone.kubeconfig" "ikofett.kubeconfig")

for cfg in "${KUBECONFIGS[@]}"; do
  echo ">>> Running against kubeconfig: $cfg"
  cilium install --version v1.18.0 --kubeconfig "$cfg"
  cilium status --wait --kubeconfig "$cfg"
  echo ">>> Finished $cfg"
  echo
done


You should now have a pair of clusters and a pair of kubeconfigs.


IKO

IrisCluster is a CRD, and IKO is the operator that reconciles it; it is important to ensure that IKO, along with the CRDs, exists on all clusters.
 

KUBECONFIGS=("ikoclone.kubeconfig" "ikofett.kubeconfig")

for cfg in "${KUBECONFIGS[@]}"; do
  echo ">>> Running against kubeconfig: $cfg"
  helm install iko iris-operator/ -f iris-operator/values.yaml --kubeconfig "$cfg"  
  echo ">>> Finished $cfg"
  echo
done

Now, from a previous post we just so happen to have a stretched cluster lying around, so we will use that toward the end.


Karmada

Installing Karmada via Helm chart is the easiest route with a fresh Kind cluster, so let's do that on `ikofett`:

helm repo add karmada-charts https://raw.githubusercontent.com/karmada-io/karmada/master/charts
helm repo update
helm --namespace karmada-system upgrade -i karmada karmada-charts/karmada --version=1.15 --create-namespace --kubeconfig ikofett.kubeconfig


❗ The next step is pretty important and a cornerstone to understanding cluster interaction.

We now need to create the kubeconfig used to interact with the Karmada cluster API.

You should now have three kubeconfigs... you will be interacting with the newly generated one often.
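One way to generate that third kubeconfig is to pull it out of the host cluster; this sketch assumes the Helm chart's default `karmada-kubeconfig` secret in the `karmada-system` namespace:

```shell
# Sketch: extract the Karmada API server kubeconfig from the secret the
# Helm chart creates (secret name and key assumed from chart defaults).
kubectl --kubeconfig ikofett.kubeconfig -n karmada-system \
  get secret karmada-kubeconfig -o jsonpath='{.data.kubeconfig}' \
  | base64 -d > ikokamino.kubeconfig
```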

Here is an illustration of cluster API interaction at this stage of the distraction:

 
❗Important Networking Using Kind

Now that we have the Karmada control plane set up, let's install just the IKO CRDs into it. Instead of installing the entire operator, copy the CRDs from a member cluster into the control plane.

kubectl --kubeconfig ikofett.kubeconfig get crd irisclusters.intersystems.com -o yaml > ikocrds.yaml
kubectl create -f ikocrds.yaml --kubeconfig ikokamino.kubeconfig


We should have Karmada ready for business; let's propagate an IrisCluster.

Join

Now we have to let Karmada know about, and be able to "talk" to, the member cluster; in our case `ikoclone`, which we created in Kind.

sween @ fhirwatch-pop-os ~/Desktop/IKOPLUS/karmada
└─ $ ▶ kubectl karmada --kubeconfig ikokamino.kubeconfig  join ikoclone --cluster-kubeconfig=ikoclone.kubeconfig
cluster(ikoclone) is joined successfully



Propagate

Now we are going to do the two-step to deploy and propagate our IrisCluster, interacting only with the Karmada API.

Step one

Deploy the IrisCluster to Karmada. This does NOT deploy an actual workload; it simply stores the definition, much like a "template" for an IrisCluster or, if we are still tracking with the Star Wars analogy, the DNA of the clone.

# full IKO documentation:
# https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=PAGE_deployment_iko
apiVersion: intersystems.com/v1alpha1
kind: IrisCluster
metadata:
  name: federated-iris
spec:
  topology:
    data:
      image: containers.intersystems.com/intersystems/iris-community:2025.1
  serviceTemplate:
    spec:
      type: LoadBalancer
      externalTrafficPolicy: Local

 

You can query irisclusters in the Karmada control plane, but you will notice it is basically just a stub.
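For example, using the Karmada kubeconfig generated earlier:

```shell
# The definition is stored on the control plane, but no pods back it here.
kubectl --kubeconfig ikokamino.kubeconfig get irisclusters
```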



Step two

Deploy a propagation policy to send it.

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: iriscluster-propagate
  namespace: default
spec:
  resourceSelectors:
  - apiVersion: intersystems.com/v1alpha1
    kind: IrisCluster
  placement:
    clusterAffinity:
      clusterNames:
      - ikoclone    #### our clone or member cluster

On the clone you should see a spun-up IrisCluster!
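A quick check on the member cluster might look like:

```shell
# Verify the propagated IrisCluster and its workloads on the member cluster.
kubectl --kubeconfig ikoclone.kubeconfig get iriscluster federated-iris
kubectl --kubeconfig ikoclone.kubeconfig get pods
```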

Join Another and Propagate Another

For brevity, refer back to that post for the additional cluster's kubeconfig.

Now, in one fell swoop, let's join the stretched cluster out on Google Cloud Platform from a previous post with a similar join.

 
The second cluster is now available for business out in Google Cloud Platform as a Stretched Cluster.


To get an IrisCluster out there, we just have to edit our propagation policy to include the new target, `k8s`.
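The updated policy might look like this sketch; the `k8s` name is assumed here and must match whatever name the stretched cluster was registered under at join time:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: iriscluster-propagate
  namespace: default
spec:
  resourceSelectors:
  - apiVersion: intersystems.com/v1alpha1
    kind: IrisCluster
  placement:
    clusterAffinity:
      clusterNames:
      - ikoclone    #### existing member cluster
      - k8s         #### newly joined stretched cluster
```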

And there you have it: another clone of the IrisCluster out on GCP.

This scratches the surface of Karmada; check the docs for full flexibility with your clones:
 

  • overrides on resources per joined cluster
  • networking integration with submariner
  • governance guidance
  • PULL configuration, a lot like ArgoCD (PUSH was above)
  • ...
  • ...
