Artificial Intelligence (A.I.) is a broad branch of computer science focused on building intelligent machines that can perform tasks typically requiring human intelligence.

WHAT ARE THE FOUR TYPES OF ARTIFICIAL INTELLIGENCE?

  • Reactive Machines
  • Limited Memory
  • Theory of Mind
  • Self-Awareness

Reactive Machines


A machine learning model that takes a human face as input and draws a box around it to identify it as a face is a simple reactive machine. The model does not store input data and does not conduct training; it simply reacts to whatever is in front of it.
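
As a concrete illustration, here is a minimal sketch of such a stateless detector in Python, using OpenCV’s bundled pre-trained Haar cascade. The image paths are placeholders, and this is one possible implementation rather than the specific model the text describes.

    # Minimal sketch of a reactive machine: each call is stateless.
    # Nothing from previous inputs is stored, and no training happens here.
    # Assumes the opencv-python package; "photo.jpg" is a placeholder path.
    import cv2

    def draw_face_boxes(image_path: str, output_path: str) -> int:
        # Load OpenCV's pre-trained frontal-face Haar cascade.
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
        )
        image = cv2.imread(image_path)
        if image is None:
            raise FileNotFoundError(image_path)
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        # Detect faces in this single image; the detector keeps no memory of it.
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        for (x, y, w, h) in faces:
            cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.imwrite(output_path, image)
        return len(faces)

    print(draw_face_boxes("photo.jpg", "photo_boxed.jpg"), "face(s) found")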

Limited Memory

Limited memory types refer to an A.I.’s ability to retain previous data and predictions, using that data to make new, more accurate predictions. With limited memory, the machine learning architecture becomes somewhat more complex.

Three main types of machine learning models achieve this kind of limited memory (a minimal sketch of the first follows the list):

  • Reinforcement learning
  • Long Short-Term Memory networks (LSTMs)
  • Evolutionary Generative Adversarial Networks (E-GAN)
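
As one minimal illustration of the limited-memory idea, the sketch below implements tabular Q-learning, the simplest reinforcement-learning setup: the agent retains a table of value estimates built from past experience and uses it to choose better actions over time. The five-state corridor environment and all constants are toy assumptions invented for this example.

    # Sketch of "limited memory": a Q-learning agent retains value estimates
    # learned from past transitions and uses them to act more accurately.
    # The 5-state corridor environment is a toy invented for this example.
    import random

    N_STATES, GOAL = 5, 4               # states 0..4; reward only at state 4
    ALPHA, GAMMA, EPS = 0.5, 0.9, 0.3   # learning rate, discount, exploration
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # the agent's memory: q[state][action]

    def step(state, action):
        # action 0 = move left, 1 = move right; the episode ends at the goal
        nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
        return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

    for episode in range(200):
        s, done = 0, False
        while not done:
            a = random.randrange(2) if random.random() < EPS else q[s].index(max(q[s]))
            s2, r, done = step(s, a)
            # Update the stored estimate from this experience -- the retained
            # data that makes later predictions more accurate.
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2

    print("Learned Q-values:", [[round(v, 2) for v in row] for row in q])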

Theory of Mind

Nowadays, machine learning models do a lot to direct humans toward a task. If you angrily yell at Google Maps to point you in a different direction, it won’t offer emotional support and say, “This is the fastest route. Whom can I call to let them know you’ll be late?” Instead, Google Maps continues to return the same traffic reports and estimated arrival times.

Self-Awareness

Self-awareness is the ability to see oneself clearly and objectively through reflection and introspection.

While it may not be possible to achieve complete objectivity about oneself (this is a dispute that has continued to rage throughout the history of philosophy), there are certainly degrees of self-awareness. It exists on a spectrum.

While everyone has a fundamental idea of self-awareness, we don’t know exactly where it comes from, what its precursors are, or why some of us seem to have more or less of it than others.

Benefits and Risks of Artificial Intelligence

Artificial intelligence today is properly known as narrow A.I. (or weak A.I.) because it is designed to perform a narrow task (e.g., only facial recognition, only internet searches, or only driving a car). However, the long-term goal of many researchers is to create general A.I. (AGI, or strong A.I.). While narrow A.I. may outperform humans at its specific task, like playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

In the short term, the goal of maintaining A.I.’s beneficial impact on society motivates research in many areas, from economics and law to technical topics such as verification, validity, security, and control. While your laptop crashing or being hacked may be nothing more than a minor annoyance, it becomes all the more important that an A.I. system does what you want it to do if it controls your car or your electrical network.

In the long term, an important question is what will happen if the quest for strong A.I. succeeds and an A.I. system becomes better than humans at all cognitive tasks. As I.J. Good pointed out in 1965, designing smarter A.I. systems is itself a cognitive task. Such a system could undergo recursive self-improvement, triggering an intelligence explosion that leaves human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, so the creation of strong A.I. might be the most significant event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the A.I.’s goals with ours before it becomes superintelligent.
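
The feedback loop in that argument can be made concrete with a toy calculation. The sketch below assumes, purely for illustration, that each generation’s self-improvement grows with its current capability; every number is arbitrary and carries no predictive weight.

    # Toy illustration (not a prediction): if each A.I. generation improves
    # itself in proportion to its own capability, growth compounds faster
    # than exponentially -- the loop I.J. Good called an intelligence
    # explosion. GAIN and the starting capability are arbitrary assumptions.
    capability = 1.0
    GAIN = 0.1  # hypothetical fraction of capability converted into improvement
    for generation in range(1, 21):
        capability += GAIN * capability ** 2  # smarter designers, bigger gains
        print(f"generation {generation:2d}: capability {capability:.3g}")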

Most researchers agree that a superintelligent A.I. is unlikely to show human emotions such as love or hate, and there is no reason to expect the A.I. to be intentionally benevolent or malevolent. Instead, when considering how A.I. could become a risk, experts consider two scenarios the most likely:

  1. The A.I. is programmed to do something devastating: Autonomous weapons are artificial intelligence systems programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an A.I. arms race could inadvertently lead to an A.I. war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons are designed to be extremely difficult to “turn off,” so humans could plausibly lose control of such a situation. This risk grows as A.I. intelligence and autonomy levels increase.

  2. The A.I. is programmed to do something beneficial but develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the A.I.’s goals with ours, which is strikingly difficult. If you ask an obedient, intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters. Suppose a superintelligent system is tasked with an ambitious geoengineering project. In that case, it might wreak havoc with our ecosystem as a side effect and view human attempts to stop it as a threat to be met.
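
That failure mode can be shown in miniature: an optimizer given only “minimize travel time” picks a plan no human would want, while an objective that also encodes the human’s implicit constraints does not. The candidate plans and penalty values below are invented for illustration.

    # Toy sketch of goal misspecification: the naive objective is exactly
    # what we asked for, yet it selects a destructive plan. The plans and
    # numbers are invented for this illustration.
    plans = [
        # (name, minutes, lawful and safe?)
        ("obey traffic laws",        45, True),
        ("speed through red lights", 20, False),
    ]

    def naive_objective(plan):
        _, minutes, _ = plan
        return minutes                            # literally "as fast as possible"

    def aligned_objective(plan):
        _, minutes, safe = plan
        return minutes + (0 if safe else 10_000)  # large penalty for unsafe plans

    print("naive pick:  ", min(plans, key=naive_objective)[0])    # the reckless plan
    print("aligned pick:", min(plans, key=aligned_objective)[0])  # the safe plan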
