Fixing the terminology

A robot is not expected to be huge or humanoid, or even material (in disagreement with Wikipedia, although the latter softens its initial definition a paragraph later and admits a virtual form of a robot). From an algorithmic viewpoint, a robot is an automaton: an automaton for the autonomous (algorithmic) execution of concrete tasks. A light detector that triggers street lights at night is a robot. Email software that separates messages into “external” and “internal” is also a robot. Artificial intelligence (in an applied and narrow sense; Wikipedia interprets it differently again) is a set of algorithms for extracting dependencies from data. It will not execute any tasks on its own; for that, one would need to implement it as concrete analytic processes (input data, plus models, plus output data, plus process control). The analytic process acting as the “artificial intelligence carrier” can be launched by a human or by a robot, stopped by either of the two, and managed by either as well.
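To make the email example concrete, here is a minimal sketch (not from the post) of such an automaton in Python: a fixed “model” that maps input data to output data, wrapped in a process that can be launched by a human or by another program. The company domain and all names are illustrative assumptions.

```python
# Minimal sketch (not from the post): an email-separating "robot" as a tiny automaton.
# The company domain and all names here are illustrative assumptions.

INTERNAL_DOMAIN = "example.com"  # assumed company domain

def classify(sender: str) -> str:
    # The "model": a fixed dependency between input (sender address) and output (label).
    return "internal" if sender.lower().endswith("@" + INTERNAL_DOMAIN) else "external"

def run_robot(inbox):
    # The "analytic process": input data -> model -> output data, under process control.
    return {sender: classify(sender) for sender in inbox}

if __name__ == "__main__":
    print(run_robot(["alice@example.com", "bob@partner.org"]))
    # {'alice@example.com': 'internal', 'bob@partner.org': 'external'}
```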

This is the third post of a series explaining how to create an end-to-end Machine Learning system.

Training a Machine Learning Model

When you work with machine learning, it is common to hear this word: training. Do you know what training means in an ML pipeline? Training can refer to the entire development process of a machine learning model, OR to the specific step in that process that consumes training data and produces a machine learning model.
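As an aside, here is a minimal sketch of “training” in the narrow sense, i.e. the step that takes training data in and produces a fitted model. scikit-learn, the Iris dataset, and logistic regression are assumptions for illustration, not necessarily what this series uses.

```python
# Minimal sketch of "training" in the narrow sense: training data in, fitted model out.
# scikit-learn, the Iris dataset, and logistic regression are illustrative assumptions.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)                      # the "training" step
print("test accuracy:", model.score(X_test, y_test))
```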

This is the second post of a series explaining how to create an end-to-end Machine Learning system.

Exploring Data

InterSystems IRIS already has what we need to explore the data: an SQL engine! For people who are used to exploring data in CSV or text files, this can help accelerate this step. Basically, we explore all the data to understand how the tables intersect (joins), which helps to create a dataset ready to be used by a machine learning algorithm.
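For illustration, a minimal sketch of this exploration step from Python over ODBC is shown below. The DSN, credentials, table names, and join key are hypothetical placeholders, not the article's actual schema.

```python
# Minimal sketch of exploring IRIS data over SQL from Python via ODBC.
# The DSN, credentials, table names, and join key are hypothetical placeholders.
import pyodbc

conn = pyodbc.connect("DSN=IRIS_DSN;UID=_SYSTEM;PWD=SYS")  # assumed DSN and credentials
cursor = conn.cursor()

# Inspect how two (hypothetical) tables intersect before flattening them
# into a single dataset for a machine learning algorithm.
cursor.execute("""
    SELECT p.ID, p.Age, v.VisitDate, v.Diagnosis
    FROM Demo.Patient p
    JOIN Demo.Visit v ON v.PatientID = p.ID
""")
for row in cursor.fetchall()[:10]:   # peek at the first few joined rows
    print(row)

conn.close()
```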

Hi all. We are going to find duplicates in a dataset using Apache Spark Machine Learning algorithms.

Note: I have done the following on Ubuntu 18.04, Python 3.6.5, Zeppelin 0.8.0, Spark 2.1.1

Introduction

In previous articles we have done the following:

Apache Spark has rapidly become one of the most exciting technologies for big data analytics and machine learning. Spark is a general data processing engine created for use in clustered computing environments. Its heart is the Resilient Distributed Dataset (RDD), which represents a distributed, fault-tolerant collection of data that can be operated on in parallel across the nodes of a cluster. Spark is implemented in a combination of Java and Scala, and so it comes as a library that can run on any JVM.
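A minimal PySpark sketch of the RDD idea (a collection split into partitions and processed in parallel across the cluster) might look like the following; the local master is assumed here purely for illustration.

```python
# Minimal sketch of the RDD idea: a collection partitioned across the cluster
# and operated on in parallel. The local master is an assumption for illustration.
from pyspark import SparkContext

sc = SparkContext(master="local[*]", appName="rdd-sketch")

numbers = sc.parallelize(range(1, 11), numSlices=4)   # distribute the data over 4 partitions
squares = numbers.map(lambda x: x * x)                # transformation, evaluated lazily
total = squares.reduce(lambda a, b: a + b)            # action, runs in parallel across partitions

print(total)   # 385
sc.stop()
```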

Hello!

My group and I are currently doing a research project on natural language processing, and iKnow plays a big role in this project. I am aware that the algorithms iKnow uses aren't public, and I respect that.

My question is: are there any public documents or research papers that explain, at least in part, the algorithms iKnow uses and the motivations for using them?
