In this article, I try to identify several areas of healthcare where we can build useful features using Python and machine learning.

Every hospital is constantly trying to improve its quality of service and efficiency with the help of technology.

The healthcare sector is one of the largest and most varied service areas, and Python is one of the best technologies for doing machine learning.

People arrive at a hospital with feelings and concerns; if technology can help understand those feelings, there is a chance to provide better service.


Kidney disease can be detected from parameters well known to the medical community. To help the medical community and computerized systems, especially AI, the scientist Akshay Singh published a very useful dataset for training ML algorithms to detect/predict kidney disease. It is available on Kaggle, the largest and best-known data repository for ML, at https://www.kaggle.com/datasets/akshayksingh/kidney-disease-dataset.
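
As a quick illustration, below is a minimal sketch of training a classifier on this dataset with pandas and scikit-learn. The file name (kidney_disease.csv), the target column name ("classification"), and the crude preprocessing are assumptions; verify them against the CSV actually downloaded from Kaggle.

    # Minimal sketch: train a classifier on the Kaggle kidney disease dataset.
    # File name, target column and preprocessing are assumptions.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("kidney_disease.csv")

    # One-hot encode categorical columns and naively fill missing values.
    X = pd.get_dummies(df.drop(columns=["classification"])).fillna(0)
    y = df["classification"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    print("Test accuracy:", model.score(X_test, y_test))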


Diabetes can be detected from parameters well known to the medical community. To help the medical community and computerized systems, especially AI, the National Institute of Diabetes and Digestive and Kidney Diseases published a very useful dataset for training ML algorithms to detect/predict diabetes. It is available on Kaggle, the largest and best-known data repository for ML, at https://www.kaggle.com/datasets/mathchi/diabetes-data-set.
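
Again for illustration only, a minimal baseline could look like the sketch below; the file name (diabetes.csv) and the "Outcome" target column follow the usual layout of this Kaggle dataset, but should be verified.

    # Minimal sketch: cross-validated logistic regression baseline on the
    # diabetes dataset. File name and target column are assumptions.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    df = pd.read_csv("diabetes.csv")
    X, y = df.drop(columns=["Outcome"]), df["Outcome"]

    clf = LogisticRegression(max_iter=1000)
    print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())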


Challenges of real-time AI/ML computations

We will start with examples that we encountered in our Data Science practice at InterSystems:

  • A “high-load” customer portal is integrated with an online recommendation system. The plan is to reconfigure promo campaigns at the level of the entire retail network (we assume that a “segment-tactic” matrix will be used instead of a “flat” promo campaign master). What will happen to the recommender mechanisms? What will happen to data feeds and updates into the recommender mechanisms (the volume of input data having increased 25,000 times)? What will happen to recommendation rule generation (the recommendation rule filtering threshold needing to be reduced 1,000 times due to a thousandfold increase in the volume and “assortment” of the generated rules)?
  • An equipment health monitoring system uses “manual” data sample feeds. Now it is connected to a SCADA system that transmits thousands of process parameter readings every second. What will happen to the monitoring system (will it be able to handle equipment health monitoring on a second-by-second basis)? What will happen once the input data receives a new block of several hundred columns from sensors recently added to the SCADA system (will it be necessary to shut down the monitoring system to integrate the new sensor data into the analysis, and for how long)?
  • A complex of AI/ML mechanisms (recommendation, monitoring, forecasting) depends on each other’s results. How many man-hours will it take every month to adapt those AI/ML mechanisms to changes in the input data? What is the overall “delay” in supporting business decision making by the AI/ML mechanisms (the refresh frequency of supporting information compared with the feed frequency of new input data)?


Fixing the terminology

A robot does not have to be huge, humanoid, or even material (in disagreement with Wikipedia, although the latter softens its initial definition a paragraph later and admits a virtual form of a robot). From an algorithmic viewpoint, a robot is an automaton for the autonomous (algorithmic) execution of concrete tasks. A light detector that triggers street lights at night is a robot. Email software that separates e-mails into “external” and “internal” is also a robot. Artificial intelligence (in an applied, narrow sense; Wikipedia again interprets it differently) consists of algorithms for extracting dependencies from data. It will not execute any tasks on its own; for that, one would need to implement it as concrete analytic processes (input data, plus models, plus output data, plus process control). The analytic process acting as an “artificial intelligence carrier” can be launched by a human or by a robot. It can be stopped by either of the two, and managed by either as well.


What is Distributed Artificial Intelligence (DAI)?

Attempts to find a “bullet-proof” definition have not produced a result: it seems the term is slightly “ahead of its time”. Still, we can analyze the term itself semantically and derive that distributed artificial intelligence is the same AI (see our attempt to suggest an “applied” definition), only partitioned across several computers that are not clustered together (neither data-wise, nor via applications, nor by providing access to particular computers in principle). That is, ideally, distributed artificial intelligence should be arranged so that none of the computers participating in that “distribution” has direct access to the data or applications of another computer: the only alternative is the transmission of data samples and executable scripts via “transparent” messaging. Any deviation from that ideal leads to “partially distributed artificial intelligence” – an example being distributed data with a central application server, or its inverse. One way or the other, we obtain as a result a set of “federated” models (i.e., models each trained on their own data sources, or each trained by their own algorithms, or “both at once”).
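
To make the “set of federated models” more concrete, here is a small, purely illustrative Python sketch (not the mechanism discussed in this series): each “node” fits its own model on its own data partition, and only the fitted models are shared for a combined prediction. The synthetic dataset and the majority-vote aggregation are assumptions made for the sake of the example.

    # Illustrative sketch of a "set of federated models": each node trains only
    # on its own data partition, and only the fitted models (not the raw data)
    # are shared for a combined prediction. Data partitions here are synthetic.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=3000, n_features=10, random_state=0)
    partitions = np.array_split(np.arange(len(X)), 3)  # three "nodes"

    # Each node fits its own model locally.
    local_models = [
        LogisticRegression(max_iter=1000).fit(X[idx], y[idx]) for idx in partitions
    ]

    # A central coordinator combines predictions by majority vote,
    # without ever seeing the nodes' raw training data.
    def federated_predict(models, X_new):
        votes = np.stack([m.predict(X_new) for m in models])
        return (votes.mean(axis=0) > 0.5).astype(int)

    print(federated_predict(local_models, X[:5]))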


Artificial intelligence has solved countless human challenges – and medical coding might be next.
As organizations prepare for ICD-11, medical coding is about to become more complicated. Healthcare organizations in the United States already manage 140,000+ codes in ICD-10. With ICD-11, that number will rise.
Some propose artificial intelligence as a solution. AI could aid computer-based medical coding systems, identifying errors, enhancing patient care, and optimizing revenue cycles, among other benefits.


Keywords:  IRIS, IntegratedML, Machine Learning, Covid-19, Kaggle 

Purpose

Recently I noticed a Kaggle dataset for predicting whether a Covid-19 patient will be admitted to the ICU. It is a spreadsheet of 1,925 encounter records with 231 columns of vital signs and observations, where the last column, "ICU", is 1 for Yes or 0 for No. The task is to predict whether a patient will be admitted to the ICU based on the known data.
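
A minimal baseline for this task might look like the sketch below, assuming the spreadsheet has been exported to a CSV file (the file name icu_data.csv is a placeholder) and missing vitals are naively imputed; the real dataset needs more careful preprocessing.

    # Minimal sketch: predict ICU admission from the exported spreadsheet.
    # File name, naive imputation and one-hot encoding are assumptions.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("icu_data.csv")
    X = pd.get_dummies(df.drop(columns=["ICU"])).fillna(0)
    y = df["ICU"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42
    )
    clf = GradientBoostingClassifier().fit(X_train, y_train)
    print("Test accuracy:", clf.score(X_test, y_test))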


This is the third post of a series explaining how to create an end-to-end Machine Learning system.

Training a Machine Learning Model

When you work with machine learning, it is common to hear this word: training. Do you know what training means in an ML pipeline?
Training can refer to the whole development process of a machine learning model, OR to the specific step in that process that takes the training data and produces a machine learning model.
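
In the narrower sense, training is the single step that consumes training data and produces a fitted model, as in this small scikit-learn sketch (the dataset and model choice are just stand-ins):

    # The fit() call below is the "training" step in the narrow sense.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = DecisionTreeClassifier()
    model.fit(X_train, y_train)   # training: data in, fitted model out
    print("Held-out accuracy:", model.score(X_test, y_test))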


This is my introduction to a series of posts explaining how to create an end-to-end Machine Learning system.

Starting with one problem

Our IRIS Development Community has several posts that are untagged or wrongly tagged. As the number of posts grows, the organization
of each tag and the experience of any community member browsing the subjects tend to degrade.

First solutions in mind

We can think of some usual solutions for this scenario, such as:


A few months ago, I read this interesting article from MIT Technology Review, explaining how the COVID-19 pandemic is creating challenges for IT teams worldwide regarding their machine learning (ML) systems.

That article inspired me to think about how to deal with performance issues after an ML model has been deployed.


Keywords:  PyODBC, unixODBC, IRIS, IntegratedML, Jupyter Notebook, Python 3

 

Purpose

A few months ago I posted a brief note on "Python JDBC connection into IRIS", and since then I have referred to it more often than my own scratchpad hidden deep in my PC. Hence, here comes another 5-minute note on how to make a "Python ODBC connection into IRIS".
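
As a preview, a connection along these lines could look like the sketch below; the driver name, host, port, namespace, and credentials are assumptions and must match your own odbcinst.ini/odbc.ini and IRIS instance settings.

    # Sketch of a Python ODBC connection into IRIS via pyodbc.
    # Driver name, host, port, namespace and credentials are assumptions.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={InterSystems IRIS ODBC35};"
        "SERVER=localhost;PORT=1972;"
        "DATABASE=USER;"            # IRIS namespace
        "UID=_SYSTEM;PWD=SYS",
        autocommit=True,
    )
    cursor = conn.cursor()
    cursor.execute("SELECT 1 AS ping")
    print(cursor.fetchone())
    conn.close()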


 

Keywords: Jupyter Notebook, TensorFlow GPU, Keras, Deep Learning, MLP, and HealthShare

 

1. Purpose and Objectives

In the previous "Part I" we set up a deep learning demo environment. In this "Part II" we will test what we can do with it.

Many people of my generation started with the classic MLP (Multi-Layer Perceptron) model. It is intuitive, and hence conceptually easier to start with.
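
For orientation, a classic MLP in Keras can be as small as the sketch below; the layer sizes, the 784-dimensional input, and the 10-class softmax output are placeholder assumptions, not the exact model used later in this article.

    # A small MLP sketch in Keras; shapes and layer sizes are placeholders.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()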

Niyaz Khafizov · Jul 27, 2018
Load a ML model into InterSystems IRIS

Hi all. Today we are going to upload an ML model into IRIS Manager and test it.

Note: I have done the following on Ubuntu 18.04, Apache Zeppelin 0.8.0, Python 3.6.5.

Introduction

These days, many different data mining tools enable you to develop predictive models and analyze your data with unprecedented ease. InterSystems IRIS Data Platform provides a stable foundation for your big data and fast data applications, along with interoperability with modern data mining tools.
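
As general background (not the specific upload mechanism described in this article), moving a trained model between environments typically starts with serializing it, for example with joblib:

    # Generic illustration: persist a fitted model so it can be loaded elsewhere.
    # Nothing here is IRIS- or Zeppelin-specific.
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    import joblib

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    joblib.dump(model, "model.joblib")       # serialize the fitted model
    restored = joblib.load("model.joblib")   # later, in the target environment
    print(restored.predict(X[:3]))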
