What is artificial intelligence?

Artificial intelligence (AI) is a broad branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence, without human intervention.

Artificial intelligence is used in information technology, customer service, advertising, operations management and more. It is a simulation of natural intelligence in machines programmed to learn and mimic human actions. Such machines are capable of learning based on experience and performing human tasks. As technologies like AI continue to grow, they will have a big impact on our quality of life.

There are a few simple explanations for artificial intelligence:

  • An intelligent entity created by humans.
  • The ability to perform tasks intelligently without explicit instructions.
  • The ability to think and act rationally and humanely.

The history of artificial intelligence

The seeds of modern artificial intelligence were planted by classical philosophers who tried to describe the process of human thinking as a mechanical manipulation of symbols.

This work culminated with the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it have inspired a handful of scientists to start seriously discussing the possibility of building an electronic brain.

Can machines think?

In the first half of the 20th century, science fiction introduced the world to the concept of artificially intelligent robots. It began with the “heartless” Tin Man from the Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis.

By the 1950s, we had a generation of scientists, mathematicians, and philosophers in whose minds the concept of artificial intelligence (or AI) had become culturally assimilated. One such person was Alan Turing, a young British polymath who explored the mathematical possibility of artificial intelligence.

Turing suggested that people use available information as well as common sense to solve problems and make decisions, so why can’t machines do the same?
This was the logical framework of his 1950 paper, Computing Machinery and Intelligence, in which he discussed how to build intelligent machines and how to test their intelligence.

Before 1949, computers did not have a key prerequisite for intelligence: they could not store commands, but only execute them. In other words, computers could be told what to do, but they could not remember what they did. Second, computing was extremely expensive. In the early 1950s, the cost of renting a computer reached up to $200,000 per month. Only prestigious universities and large technology companies could afford to work on computers.

DSRPAI Conference - Beginning of AI

A couple of years later, the proof of concept came from Allen Newell, Cliff Shaw, and Herbert Simon with their Logic Theorist. The Logic Theorist was a program designed to mimic human problem-solving skills and was funded by the Research and Development Corporation (RAND). It is considered by many to be the first artificial intelligence program, and it was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted in 1956 by John McCarthy and Marvin Minsky.

At this historic conference, McCarthy, together with top scientists from various fields, coined the term “Artificial Intelligence”. The significance of this event cannot be overstated, as it catalyzed the next twenty years of AI research.

Further path of AI development

Machine learning and deep learning

From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved, and people became better at knowing which algorithm to apply to their problem. Early demonstrations, such as Newell and Simon's "General Problem Solver" and Joseph Weizenbaum's ELIZA, showed promise toward the goals of problem solving and the interpretation of spoken language, respectively.

These successes, along with the advocacy of leading researchers, convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions. The government was particularly interested in a machine that could transcribe and translate spoken language, as well as process large volumes of data. Optimism was high and expectations even higher.

In 1970, Marvin Minsky told Life magazine, "Between three and eight years from now, we will have a machine with the intelligence of the average man." However, while there was basic evidence in principle, there was still a long way to go before the ultimate goals of natural language processing, abstract thinking, and self-recognition could be achieved.

The 1980s were marked by two things: an expanded algorithmic toolkit and increased funding. John Hopfield and David Rumelhart popularized “deep learning” techniques that allowed computers to learn from experience.
On the other hand, Edward Feigenbaum introduced expert systems, which mimicked the decision-making process of a human expert.

During the 1990s and 2000s, many significant goals of artificial intelligence were achieved. In 1997, the reigning world chess champion, Garry Kasparov, was defeated by IBM's Deep Blue, a chess-playing computer.

In the same year, speech recognition software developed by Dragon Systems was implemented on Windows. This was another big step forward in the effort to interpret spoken language. There seemed to be no problem the machines couldn’t handle. Even human emotions were within reach, as evidenced by Kismet, a robot developed by Cynthia Breazeal that could recognize and display emotions.


The purpose of AI

The purpose of artificial intelligence is to augment human capabilities and help us make advanced decisions with far-reaching consequences. From a philosophical perspective, artificial intelligence can help people live more meaningful lives free of hard work, and help manage the complex network of interconnected individuals, companies, states, and nations so that it functions in a way that benefits all of humanity.

Currently, artificial intelligence is seen as the latest in the long line of tools and techniques we have created over the past thousand years to simplify human effort and help us make better decisions. It has also been touted as our ultimate invention: a creation that will itself invent revolutionary tools and services, exponentially changing the way we live.

AI technologies

  • Machine learning - Machine learning is a method of data analysis that automates the building of analytical models. It is a branch of artificial intelligence based on the idea that systems can learn from data, recognize patterns, and make decisions with minimal human intervention. Although artificial intelligence (AI) is the broader science of mimicking human abilities, machine learning is a specific subset of AI that trains a machine how to learn (a minimal code sketch follows this list).
  • Deep learning - Deep learning is a type of machine learning that enables a computer to perform human-like tasks such as speech recognition, image recognition, or prediction. Instead of organizing data to run through predefined equations, deep learning sets basic parameters about the data and trains the computer to learn on its own by recognizing patterns across many layers of processing.
  • Natural language processing (NLP) - Natural language processing is a branch of artificial intelligence that helps computers understand, interpret, and manipulate human language. NLP helps computers communicate with people in their own language, allowing computers to read text, hear speech, interpret it, gauge sentiment, and identify the important parts.
  • Computer vision - Computer vision is an area of artificial intelligence that enables computers to interpret and understand the visual world. Using digital images from cameras and videos together with deep-learning models, machines can accurately identify and classify objects and then respond to what they “see”. From face recognition to analyzing a live football match, computer vision surpasses human visual abilities in many areas.
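
To make the machine learning idea above concrete, here is a minimal sketch that is not from the original article. It assumes Python with the scikit-learn library installed; the feature names, toy data, and labels are invented purely for illustration. The model is never given explicit rules: it infers a pattern from labeled examples and then applies it to data it has not seen.

  from sklearn.tree import DecisionTreeClassifier

  # Toy training data: [hours of daily app use, purchases per month] -> user type.
  X_train = [[0.5, 0], [1.0, 1], [2.0, 3], [5.0, 9], [6.5, 12], [7.0, 15]]
  y_train = ["casual", "casual", "casual", "frequent", "frequent", "frequent"]

  model = DecisionTreeClassifier()
  model.fit(X_train, y_train)            # the "learning from data" step

  # The trained model now classifies a user it has never seen before.
  print(model.predict([[4.5, 8]]))       # expected to print ['frequent']

A deep learning model would follow the same train-then-predict pattern, but with a multi-layer neural network in place of the decision tree.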


Types of artificial intelligence

Different artificial intelligence systems are built for different purposes, and so they differ from one another. Based on its capabilities and functionality, AI is commonly classified into three types. Here is a brief introduction to the types of artificial intelligence:

  • Artificial Narrow Intelligence (ANI)
  • Artificial General Intelligence (AGI)
  • Artificial Super Intelligence (ASI)

Artificial Narrow Intelligence (ANI)

This is the most common form of AI on the market. Such artificial intelligence systems are designed to solve a single problem and can perform a single task really well. By definition, they have narrow capabilities, such as recommending products to an e-commerce user or forecasting the weather (a toy sketch of such a single-purpose system follows below). They are able to approach human performance in very specific contexts, and even surpass it in many cases, but only in tightly controlled environments with a limited set of parameters.
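
As a rough illustration that is not from the original article, the following Python sketch implements a deliberately simple single-purpose recommender: it can suggest products that are often bought together, and it can do nothing else. The product names and the purchase matrix are invented toy data; real e-commerce recommenders are far more sophisticated.

  import numpy as np

  products = ["laptop", "mouse", "keyboard", "monitor", "desk lamp"]
  # Rows = users, columns = products, 1 = purchased (invented toy data).
  purchases = np.array([
      [1, 1, 1, 0, 0],
      [1, 0, 1, 1, 0],
      [0, 1, 0, 0, 1],
      [1, 1, 0, 1, 0],
  ])

  def recommend(product_name, top_n=2):
      """Suggest the products most often bought together with the given one."""
      idx = products.index(product_name)
      # Count how often each product was bought by users who also bought this one.
      scores = purchases.T @ purchases[:, idx]
      scores[idx] = -1                      # never recommend the item itself
      best = np.argsort(scores)[::-1][:top_n]
      return [products[i] for i in best]

  print(recommend("laptop"))               # products most often co-purchased with a laptop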

Artificial General Intelligence (AGI)

This is still a theoretical concept. It is defined as AI with human-level cognitive function across a wide range of domains, such as language processing, image processing, computational reasoning, and so on.
We are still far from building such a system. It would need to consist of thousands of artificial narrow intelligence systems working in tandem and communicating with each other to mimic human reasoning. Even the most advanced computer systems and infrastructures, such as Fujitsu's K or IBM's Watson, took 40 minutes to simulate a single second of neural activity. This speaks to the immense complexity and interconnectedness of the human brain, and to the magnitude of the challenge of building this type of AI with our current resources.

Artificial Super Intelligence (ASI)

ASI is considered the logical next step after AGI. An artificial superintelligence (ASI) system would surpass all human capabilities. That would include decision making and rational action, and even extends to things like creating better art and building emotional relationships.

Once we achieve artificial general intelligence, AI systems could quickly improve their capabilities and advance into areas we may not have even dreamed of. Although the gap between AGI and ASI might be relatively small (some say it would be mere nanoseconds, because artificial intelligence would learn so quickly), the long journey towards AGI itself makes it a concept that still lies far in the future.

Strong vs weak artificial intelligence

The use of artificial intelligence is global and many people are familiar with the once rare technology. It is common in video games, smartphones, cars, etc. Extensive research has divided AI into two types: strong and weak AI. The terms do not imply that strong artificial intelligence works better or is “stronger” than weak artificial intelligence.
The terms were coined by John Searle to differentiate the performance levels of different types of AI machines.

Here are some basic differences between weak and strong artificial intelligence.

Weak AI

The popular iPhone assistant Siri and Amazon’s Alexa could be called AI, but they are generally weak AI programs. This categorization is rooted in the difference between supervised and unsupervised programming: voice-activated assistants usually have a programmed response. What they do is sense or ‘scan’ for things that are similar to what they already know and classify them accordingly.

This is a human-like property, but that’s where the similarities basically end, because weak AIs are simply simulations. If you ask Siri to turn on the air conditioner, it picks up keywords like “on” and “air conditioner” and responds by turning on the air conditioner (a toy sketch of this kind of keyword matching follows below).
However, it only responds to what it is programmed for. It does not understand or derive meaning from what you have said.
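
Here is a minimal sketch of that behavior, not from the original article, assuming plain Python; the phrases and reply strings are invented. The function reacts only to the exact keywords it was programmed for and derives no meaning from anything else.

  def weak_assistant(request: str) -> str:
      """React to a handful of hard-coded keyword patterns, nothing more."""
      request = request.lower()
      if "turn on" in request and "air conditioner" in request:
          return "Turning on the air conditioner."
      if "turn off" in request and "air conditioner" in request:
          return "Turning off the air conditioner."
      return "Sorry, I don't understand that."   # anything unscripted fails

  print(weak_assistant("Please turn on the air conditioner"))
  print(weak_assistant("It's boiling in here"))  # obvious to a human, opaque to the program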

Strong AI

Portrayed in many films, strong artificial intelligence functions more like a human brain. Such systems do not simply classify; they use clustering and association to process data. This means there is no pre-programmed response to your keywords or requests, as there is with weak AI, and the results of their processing are generally unpredictable (a small clustering sketch follows below).
For example, when you talk to a person, you can only guess what their response will be.
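
Clustering itself is an ordinary machine learning technique rather than strong AI, but a minimal sketch (not from the original article) can show the unsupervised, non-scripted processing this paragraph contrasts with keyword matching. It assumes Python with scikit-learn installed, and the numbers are invented toy data: no labels or responses are programmed in, and the algorithm discovers the groups on its own.

  from sklearn.cluster import KMeans

  # Six unlabeled samples with two measurements each (invented toy data).
  samples = [[1.0, 1.2], [0.8, 1.0], [1.1, 0.9],
             [6.0, 6.2], [5.8, 5.9], [6.3, 6.1]]

  kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
  labels = kmeans.fit_predict(samples)   # the grouping is discovered, not programmed
  print(labels)                          # e.g. [0 0 0 1 1 1]; cluster ids are arbitrary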

A popular example of strong AI is the kind found in games. It is more independent than weak AI and can learn and adapt to different situations. Another example is poker AI that can learn to adapt to and outsmart human opponents.

Although weak AI is the more common version, strong AI is also a crucial part of the AI revolution. Scientists often describe it as the ‘true representation of human intelligence in machines’.

Advantages and disadvantages of artificial intelligence

There is no doubt that technology has improved human life. It has taken over various areas, from music recommendations, ticket booking, and mobile banking to fraud prevention and more. Still, there is a thin line between progress and destruction. There are always two sides to the coin, and that’s the case with AI as well.

Advantages of AI

  • Human error reduction - Decisions made by an AI system at each step are driven by previously collected data and a defined set of algorithms. When properly programmed, errors can be reduced to zero.
  • Zero Risk - Whether it's defusing a bomb, going into space, or exploring the deepest parts of the ocean, machines with metal bodies are resilient by nature and can survive hostile environments.
  • Availability 24/7 - Many studies show that people are only productive for about three to four hours a day, but AI can work indefinitely without breaks. AI systems think much faster than humans and can perform multiple tasks at the same time with accurate results; they can even handle tedious, repetitive jobs.
  • Digital Assistants - Almost all large organizations today use digital assistants to interact with their customers, which significantly reduces the need for human resources. You can talk to a chatbot and ask it exactly what you need.
  • New Inventions - AI has driven new inventions in almost every domain, helping to solve complex problems.
  • Impartial Decisions - Human beings are driven by feelings, whether we like it or not. AI, on the other hand, is devoid of emotion and is very practical and rational in its approach.


Disadvantages of AI

  • High Cost - The ability to create a machine that can simulate human intelligence is no small feat. It takes a lot of time and resources. AI also has to run on the latest hardware and software to stay up to date and meet the latest requirements, which makes it quite expensive.
  • Unemployment - As AI replaces most repetitive tasks and other work with robots, human involvement is needed less and less, which will cause a major problem for employment standards.
  • Lack of emotion - There is no doubt that machines are much better when it comes to working efficiently, but they cannot replace the human connection that makes up a team. Machines cannot develop a bond with people, which is an essential attribute when it comes to team management.
  • Lack of creativity - Machines can only perform the tasks they are designed or programmed for; anything beyond that tends to fail or produce irrelevant results.


The future of artificial intelligence

There is almost no industry in which modern artificial intelligence - more precisely, "narrow artificial intelligence", which performs objective functions using data-trained models and often falls into the categories of deep learning or machine learning - is not present. This is especially true in recent years, as data collection and analysis have increased significantly thanks to robust IoT connectivity, the proliferation of connected devices, and ever-faster computing.

Some sectors are at the beginning of their AI journey, while others are veteran travelers. Both have a long way to go. Either way, the impact artificial intelligence is having on our lives today is hard to ignore:

  • Transportation: Although it could take a decade or more to perfect them, autonomous cars will one day drive us from place to place.
  • Production: AI-powered robots work alongside humans to perform a limited range of tasks such as assembly and sorting, while predictive-analysis sensors keep equipment running continuously.
  • Healthcare: In the area of healthcare, diseases are diagnosed faster and more accurately, drug discovery is accelerated, virtual nurse assistants monitor patients, and big data analysis helps create a more personalized patient experience.
  • Education: Textbooks are digitized with the help of artificial intelligence, early-stage virtual tutors assist human instructors, and facial analysis gauges students' emotions to determine who is struggling or bored, so the experience can be better adapted to their individual needs.
  • Media: Journalism also uses AI and will continue to benefit from it. Bloomberg uses Cyborg technology to help make quick sense of complex financial reports, and the Associated Press employs Automated Insights' natural language abilities to produce 3,700 earnings reports a year - nearly four times as many as in the recent past.
  • Customer service: Google is working on an AI assistant that can place human-sounding calls to make appointments at, say, your neighborhood hair salon. In addition to words, the system understands context.

With companies spending about $20 billion a year on artificial intelligence products and services, technology giants like Google, Apple, Microsoft, and Amazon spending billions more to create those products and services, and universities making AI a more prominent part of their curricula (MIT alone is investing $1 billion in a new college dedicated exclusively to computing, with a focus on AI), great things are bound to happen. Some of these developments are well on their way to being fully realized; some are only theoretical and may remain so.

Conclusion

As people who have always been fascinated by technological change and fiction, we currently live in the midst of the greatest progress in our history. Artificial intelligence has become the next big thing in the field of technology. Organizations around the world are devising revolutionary innovations in artificial intelligence and machine learning.

Artificial intelligence not only affects the future of every industry and every human being but has also acted as a major driver of new technologies such as big data, robotics and IoT. Given the growth rate, it will continue to act as a technological innovator in the foreseeable future. As these technologies continue to grow, they will have more and more impact on the social framework and quality of life.

Klara Markotić
Content Creator at MachineDesk with a particular interest in marketing and social media.
marketing@machine-desk.com