Abdullah Al Zakwani
Imagine you are in an autonomous car that depends on intelligent traffic signals for direction. At 200 km/h you realize that the learning mechanism behind those signals has been driven by false positives, and that the decisions your AI is actually taking are random.

Humans have always developed with a fear of what they create. From the invention of the wheel to the modern computer, we have feared that machines would take all our jobs, and we consistently fail to recognize that these technologies free up our time for the things that matter more: our health and our comfort. The current boom in AI has fallen into the same pattern: positive as a platform for opportunity, and negative in the fear that “THEY WILL TAKE OVER THE WORLD”, and not merely take our jobs.

Fear aside, caution needs to be taken not over making robots intelligent enough to “take over the world”, but, more importantly, over the knowledge and capabilities of the people who make them. False positives and blindly enforced learning rates in AI algorithms must be minimized, while the right (perhaps the best) algorithms, built by genuinely capable data scientists who can read and understand the concepts they implement, are promoted. There is a need for a project like the Kristel Clear Engine, whose support will focus on the emergent issue of false positives, Emotional Intelligence, and the relation between the two.