Artificial Intelligence (AI) – What's the big fuss about?

Historically, intelligence is a quality that has almost always been attributed only to humans. Some scientists have advanced arguments for animal intelligence, but the consensus has always been that human intelligence is far superior to any other form of intelligence. For believers, the only intelligence greater than human intelligence is God's intelligence. For non-believers, human intelligence was the ultimate. All this was until the development of what has now become known as Artificial Intelligence (AI). For the first time in human history, people of all persuasions and paradigms have to grapple with the possibility of co-existing with machines whose intelligence equals or exceeds that of humans. Some even go as far as to say that we will reach the point where machines have Artificial Superintelligence, which would inevitably mean most people would be unemployable.

What is Artificial Intelligence?

In order to understand AI, one needs to first understand the historical superiority of human intelligence. Mainly, three elements have given human intelligence superiority over animals and machines: learning, reasoning and self-correction. Therefore, when a machine or program can solve problems, complete a task, learn or exhibit other cognitive functions that humans can, we refer to it as having artificial intelligence.

The term 'artificial intelligence' was coined in 1956 at a science conference at Dartmouth College in New Hampshire, USA. At the time, AI was simply defined as the programming of machines to simulate humans in completing tasks. Over time, this definition has evolved and changed, but the core of it remains the same: the idea that machines may be built and programmed to have human-like cognitive abilities, decision-making and task execution.

Simply put, Artificial Intelligence is the sum of technological advancements that bridge the gap between human beings and machines as it relates to intelligence. The more cognitive skills a machine or program has, the more artificially intelligent it is considered.

A quick study of AI and its history shows three clear phases of progression:

  1. Applied AI / Artificial Narrow Intelligence (ANI)
  2. Deep AI / Artificial General Intelligence (AGI)
  3. Artificial Superintelligence
  1. Artificial Narrow Intelligence (ANI)
    AI that is designed to perform a single task, pulling data and instructions from a specific data-set, is called ANI. It simulates human behavior based on a constrained range of pre-defined instructions. As a result, ANI systems don't perform outside of the single task that they are designed to perform.

    We are currently living in the era of ANI. Though most experts agree that it is still developing and maturing, we have undoubtedly begun to enjoy some of the conveniences that come with ANI.
    Examples of ANI include voice recognition, facial recognition, voice assistants, self-driving cars and Internet search.
  2. Artificial General Intelligence (AGI)
    A machine with Artificial General Intelligence wholly simulates human intelligence in that it has the ability to learn, store data, analyze the data and apply its intelligence to make decisions and to solve problems. In any given situation, AGI comprehends, thinks, and executes in a way that is no different from humans.
    Unlike ANI, which is largely limited to being task-oriented, AGI has the capacity to be both task-oriented and solutions-oriented. Even when faced with an unfamiliar task, AGI possesses the capacity to find solutions for that task without the explicit instruction of human developers. This kind of problem-solving capacity was previously unimaginable for computers.
    The general consensus is that AGI is at least a few decades away, but the progress and advancements towards it continue to captivate technologists and scientists alike.
  3. Artificial Superintelligence
    Many believe that once programs possess the ability to learn limitlessly, Artificial General Intelligence (AGI) will become merely a segue to Artificial Superintelligence, i.e. machines will become vastly superior to humans in reasoning and decision-making abilities.
    At the moment, humans continue to have the upper hand because they are the creators/programmers of AI machines, i.e. humans have to build and instruct/code machines. But the idea of superintelligent machines would mean that the machines themselves would be capable of building and instructing other, more advanced machines, which in turn could produce even more capable machines. Eventually, the human race loses the status of intelligence standard-bearer.
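The "single task, pre-defined instructions" character of ANI described above can be illustrated with a toy sketch. The keyword list and rule below are hypothetical examples invented for illustration, not a real product; the point is simply that such a system does exactly one thing and nothing else.

```python
# Toy illustration of Artificial Narrow Intelligence (ANI).
# The keywords and the two-match rule are hypothetical, hard-coded choices:
# the "model" performs exactly one pre-defined task (flagging spam-like
# messages) and has no capacity to do anything outside that task.

SPAM_KEYWORDS = {"winner", "free", "prize", "urgent"}  # fixed, pre-defined data-set

def is_spam(message: str) -> bool:
    """Single-task classifier: checks a message against a constrained,
    pre-defined keyword list and flags it if two or more keywords match."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return len(words & SPAM_KEYWORDS) >= 2

print(is_spam("You are a winner, claim your free prize!"))  # True
print(is_spam("Meeting moved to 3pm tomorrow"))             # False
# Ask it to translate a sentence or plan a route and it simply cannot:
# unlike AGI, it has no way to generalize beyond its one task.
```

Asked anything outside its keyword-matching task, this program is helpless, which is exactly the limitation that separates ANI from AGI.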

The Upside

AI executes numerous, high-volume, computerized tasks reliably and without fatigue, and doesn't have to keep to 'working hours'. AI systems are significantly quicker than any human being, which enables companies to improve overall productivity, efficiency and quality.

Health. AI-driven systems and applications assist doctors in making data-driven decisions, thereby reducing incorrect analysis and diagnosis. Robotic surgery continues to make significant advances, while artificial limbs enable people with disabilities to enjoy new levels of freedom and capability.

Education. AI-enabled algorithms are able to analyze a learner’s knowledge and interests and provide more personalized recommendations, learning approaches and training plans.
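At its simplest, the personalized-recommendation idea described above amounts to matching a learner's interests against what each course offers. The course names and interest tags below are hypothetical; a real system would learn these signals from data rather than hard-code them.

```python
# Minimal sketch of interest-based course recommendation.
# Courses and tags are invented for illustration only.

def recommend(learner_interests, courses, top_n=2):
    """Rank courses by how many of their tags overlap the learner's interests."""
    scored = [
        (len(set(tags) & set(learner_interests)), name)
        for name, tags in courses.items()
    ]
    scored.sort(reverse=True)  # highest overlap first
    return [name for score, name in scored[:top_n] if score > 0]

courses = {
    "Intro to Python":      ["programming", "python"],
    "Data Analysis Basics": ["data", "statistics", "python"],
    "Art History":          ["art", "history"],
}
print(recommend(["python", "data"], courses))
# -> ['Data Analysis Basics', 'Intro to Python']
```

Real educational platforms use far richer signals (quiz results, time on task, past progress), but the core step of scoring content against a learner profile is the same.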

Data Value Maximized. Data generated from self-learning can become intellectual property. The answers to most problems are in the data; one just has to apply AI to extract them more speedily and accurately, thereby creating a competitive advantage for any business.

The Downside

Human intelligence may be further diminished by the inevitable 'dependence lock-in' that comes with AI. By 'making life easier' for humans, machine intelligence has already begun its frontal assault on human intelligence.

For example, one of the great tragedies of the 21st century is that people no longer use their minds (or memories) to perform the simple task of remembering phone numbers; their devices do it for them.

As another example, many people don't see the need to remember directions when there's a lady in the car telling them at every stop whether to turn left or right. They completely trust the GPS to make decisions about how to get where they are going.

This increased dependence on machine-driven decisions is gradually eroding our ability to think for ourselves, take action independently and interact effectively with each other.

What are the implications for Africa?

As artificial intelligence becomes a reality, Africa needs to expedite infrastructure projects to catch up to the rest of the world. Electricity and broadband access are essential. They can no longer be considered nice-to-haves or the icing on the cake. No – they are the baking powder and the flour. There is no cake without them.
