Artificial intelligence (AI) is a broad branch of computer science concerned with building smart machines that can perform tasks which typically require human intelligence, without human intervention.
Artificial intelligence is used in information technology, customer service, advertising, operations management, and more. It simulates natural intelligence in machines that are programmed to learn from experience and mimic human actions. As technologies like AI continue to mature, they will have a growing impact on our quality of life.
To understand what artificial intelligence is, it helps to trace how the idea developed:
The seeds of modern artificial intelligence were planted by classical philosophers who tried to describe the process of human thinking as a mechanical manipulation of symbols.
This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. The device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.
In the first half of the 20th century, science fiction introduced the world to the concept of artificially intelligent robots. It began with the “heartless” Tin Man from The Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis.
By the 1950s, we had a generation of scientists, mathematicians, and philosophers to whom the concept of artificial intelligence (or AI) was culturally familiar. One such person was Alan Turing, a young British polymath who explored the mathematical possibility of artificial intelligence.
Turing suggested that people use available information as well as common sense to solve problems and make decisions, so why can’t machines do the same? This was the logical framework of his 1950 paper, Computing Machinery and Intelligence, in which he discussed how to build intelligent machines and how to test their intelligence.
Before 1949, computers lacked a key prerequisite for intelligence: they could not store commands, only execute them. In other words, computers could be told what to do, but they could not remember what they did. Computing was also extremely expensive: in the early 1950s, the cost of renting a computer ran up to $200,000 per month. Only prestigious universities and large technology companies could afford to work on computers.
A few years later, the proof of concept came from Allen Newell, Cliff Shaw, and Herbert Simon with their Logic Theorist, a program designed to mimic human problem-solving skills and funded by the RAND Corporation. It is considered by many to be the first artificial intelligence program, and it was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted in 1956 by John McCarthy and Marvin Minsky.
At this historic conference, McCarthy brought together top scientists from various fields and coined the term “Artificial Intelligence”. The significance of the event can hardly be overstated, as it catalyzed the next twenty years of AI research.
From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved, and people got better at knowing which algorithm to apply to which problem. Early demonstrations, such as Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA, showed promise toward the goals of problem solving and the interpretation of spoken language, respectively.
These successes, along with the advocacy of leading researchers, convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions. The government was particularly interested in a machine that could transcribe and translate spoken language, as well as process high-bandwidth data. Optimism was high and expectations even higher.
In 1970, Marvin Minsky told Life magazine, "Between three and eight years from now, we will have a machine with the intelligence of the average man." However, while the basic proof of principle was there, there was still a long way to go before the ultimate goals of natural language processing, abstract thinking, and self-recognition could be achieved.
The 1980s were marked by two things: an expanding algorithmic toolkit and increasing resources. John Hopfield and David Rumelhart popularized “deep learning” techniques that allowed computers to learn from experience. Meanwhile, Edward Feigenbaum introduced expert systems that mimicked the decision-making process of a human expert.
During the 1990s and 2000s, many of the landmark goals of artificial intelligence were achieved. In 1997, reigning world chess champion Garry Kasparov was defeated by IBM's Deep Blue, a chess-playing computer program.
In the same year, speech recognition software developed by Dragon Systems was implemented on Windows. This was another big step forward in the effort to interpret spoken language. It seemed there was no problem machines couldn’t handle. Even human emotion was within reach, as evidenced by Kismet, a robot developed by Cynthia Breazeal that could recognize and display emotions.
The purpose of artificial intelligence is to augment human capabilities and help us make advanced decisions with far-reaching consequences. From a philosophical perspective, artificial intelligence can help people live more meaningful lives free of hard labor, and help manage the complex web of interconnected individuals, companies, states, and nations so that it functions in a way that benefits all of humanity.
Currently, artificial intelligence is used for much the same purpose as the various tools and techniques we have created over the past thousand years: to simplify human effort and help us make better decisions. Artificial intelligence has also been touted as our ultimate invention, a creation that will itself invent revolutionary tools and services that exponentially change the way we live.
Different kinds of artificial intelligence are built for different purposes and thus differ from one another. AI is commonly classified either by capability, into narrow, general, and super AI, or by functionality. Here is a brief introduction to the types of artificial intelligence based on capability:
Artificial narrow intelligence (ANI) is the most common form of AI on the market. Such systems are designed to solve a single problem and can perform a single task really well. By definition, they have narrow capabilities, such as recommending products to an e-commerce user or forecasting the weather. They can approach human performance in very specific contexts, and in many cases even surpass it, but only in tightly controlled environments with a limited set of parameters.
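To make the “single task” point concrete, here is a minimal sketch of a product recommender of the kind a narrow AI system might use. The data, product names, and co-purchase scoring are invented for illustration; real e-commerce recommenders are far more sophisticated, but just as narrow: this program can recommend products and do nothing else.

```python
# Illustrative narrow-AI sketch: recommend products from co-purchase patterns.
# All data here is hypothetical.
import numpy as np

# Rows: users, columns: products (1 = purchased).
purchases = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
])
product_names = ["laptop", "mouse", "keyboard", "monitor"]

def recommend(user: int, top_k: int = 1) -> list[str]:
    """Score unpurchased products by how often they co-occur with owned ones."""
    owned = purchases[user]
    # Co-occurrence matrix times the user's ownership vector gives a score
    # for every product based on what tends to be bought together.
    scores = purchases.T @ purchases @ owned
    scores[owned == 1] = -1  # never re-recommend products the user already owns
    best = np.argsort(scores)[::-1][:top_k]
    return [product_names[i] for i in best]

print(recommend(user=0))  # likely ['keyboard']: co-purchased with laptop and mouse
```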
Artificial general intelligence (AGI) is still a theoretical concept. It is defined as AI with human-level cognitive function across a wide range of domains such as language processing, image processing, computational functioning, reasoning, and so on. We are still far from building such a system, which would need to contain thousands of artificial narrow intelligence systems working in tandem and communicating with each other to mimic human reasoning. Even the most advanced computer systems and infrastructures, such as Fujitsu’s K or IBM’s Watson, have taken 40 minutes to simulate a single second of neural activity. That speaks both to the immense complexity and interconnectedness of the human brain and to the magnitude of the challenge of building this type of AI with our current resources.
Artificial superintelligence (ASI) is considered the logical progression from AGI. An ASI system could surpass all human capabilities: decision making, rational reasoning, and even things like creating better art and building emotional relationships.
Once we achieve artificial general intelligence, AI systems could rapidly improve their capabilities and advance into areas we may not even have dreamed of. And although the gap between AGI and ASI might be relatively narrow (some say mere nanoseconds, because that is how fast an artificial intelligence could learn), the long road toward AGI itself makes this seem like a concept that lies far in the future.
The use of artificial intelligence is now global, and many people are familiar with this once rare technology; it is common in video games, smartphones, cars, and more. Research has divided AI into two types: strong and weak AI. The terms do not imply that strong artificial intelligence works better or is “stronger” than weak artificial intelligence; they were devised by John Searle to differentiate the performance of different types of AI machines.
Here are some basic differences between weak and strong artificial intelligence.
The popular Siri on the iPhone and Amazon’s Alexa could be called AI, but they are generally weak AI programs. This categorization is rooted in the difference between supervised and unsupervised programming: voice-activated assistants usually have a programmed response. What they do is sense, or ‘scan’, for things similar to what they already know and classify them accordingly.
This is a human-like property, but that is where the similarity essentially ends, because weak AIs are merely simulations. If you ask Siri to turn on the air conditioner, it picks out keywords like “on” and “air conditioner” and responds by turning on the air conditioner. However, it only responds to what it is programmed for; it does not understand or derive meaning from what you have said. A caricature of this keyword matching is sketched below.
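The following sketch is hypothetical and not how Siri or Alexa actually work; it only shows what “a programmed response” means: scan for known keywords, return a scripted reply, and fail on anything outside the script.

```python
# Hypothetical weak-AI assistant: keywords map to pre-programmed responses.
COMMANDS = {
    ("turn on", "air conditioner"): "Turning the air conditioner on.",
    ("turn off", "air conditioner"): "Turning the air conditioner off.",
    ("what", "weather"): "Here is today's weather forecast.",
}

def respond(utterance: str) -> str:
    """Scan the utterance for known keywords and return the scripted reply."""
    text = utterance.lower()
    for keywords, reply in COMMANDS.items():
        if all(keyword in text for keyword in keywords):
            return reply
    # No programmed response matches: the system cannot derive meaning.
    return "Sorry, I don't understand."

print(respond("Please turn on the air conditioner"))  # matches a scripted reply
print(respond("I feel hot in here"))  # falls through: no real understanding
```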
Strong artificial intelligence, as presented in many films, functions more like a human brain. Such systems do not classify; they use clustering and aggregation to process data. This means there is no programmed response to your keywords or requests, as there is with weak AI, and the results of their functioning are largely unpredictable, just as, when you talk to a person, you can only guess what their response will be.
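As a deliberately modest illustration of the unsupervised approach mentioned above, here is a standard clustering sketch using scikit-learn's KMeans. No strong AI exists today, and this is ordinary unsupervised learning, not a strong-AI system; it only illustrates grouping unlabeled data without any pre-programmed responses.

```python
# Clustering: the algorithm discovers groups on its own, with no labels
# and no scripted rules. The data below is made up for illustration.
import numpy as np
from sklearn.cluster import KMeans

# Unlabeled observations (e.g., feature vectors from sensor readings).
X = np.array([
    [1.0, 1.1], [0.9, 1.0], [1.2, 0.8],   # one natural group
    [8.0, 8.2], [7.9, 8.1], [8.3, 7.8],   # another natural group
])

model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(model.labels_)           # e.g., [0 0 0 1 1 1]: groups found, not programmed
print(model.cluster_centers_)  # centroids the model inferred from the data
```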
A popular example of strong AI is found in games, where it is more independent than weak AI and can learn and adapt to different situations. Another example is poker AI that can learn to adapt to and outsmart the skills of human opponents.
Although weak AI is the more common version, strong AI was also a crucial part of the AI revolution. Scientists often describe it as the ‘true representation of human intelligence in machines’.
There is no doubt that technology has improved human life. It has taken over various areas, from music recommendations and directions to mobile banking and fraud prevention. But there is a thin line between progress and destruction. Every coin has two sides, and that is the case with AI as well.
There is almost no industry that modern artificial intelligence has not touched; more precisely, "narrow artificial intelligence", which performs objective functions using data-trained models and often falls into the categories of deep learning or machine learning. This is especially true of recent years, as data collection and analysis have scaled up significantly thanks to robust IoT connectivity, the proliferation of connected devices, and ever-faster computing.
Some sectors are at the beginning of their AI journey; others are veteran travelers. Both have a long way to go. Either way, the impact artificial intelligence has on our lives today is hard to ignore.
As people who have always been fascinated by technological change and science fiction, we are living through the greatest period of progress in our history. Artificial intelligence has become the next big thing in the field of technology, and organizations around the world are devising revolutionary innovations in artificial intelligence and machine learning.
Artificial intelligence not only affects the future of every industry and every human being but has also acted as a major driver of new technologies such as big data, robotics and IoT. Given the growth rate, it will continue to act as a technological innovator in the foreseeable future. As these technologies continue to grow, they will have more and more impact on the social framework and quality of life.