When talking about Artificial Intelligence (AI), most people associate the term with science fiction and with technologies that seem practically inconceivable. The reality, however, is far from what films and books have portrayed: this technology has been studied for almost a century and is now part of daily life across many sectors and industries.
Knowing how Artificial Intelligence arose and learning more about its context is a very useful way to understand what it is, how it works, the needs that motivated it, and how it has become an essential pillar in the daily processes of many people, businesses, and companies. To delve into all of the above, today we want to talk about the background of Artificial Intelligence and take a tour of its history up to the present.
History of artificial intelligence
– First pillars
Although many do not know it, the first pillars of AI date back to the 1930s, by the hand of Alan Turing. Amid the scientific research carried out during World War II and the effort to decipher the secret codes that the Nazi army sent through its Enigma machine, Turing began to work with algorithms and helped create a system intelligent enough to evaluate and predict the messages being sent.
In addition, Turing believed that "if a machine behaves intelligently in all aspects, then it is intelligent." To demonstrate the capabilities of machines, he devised the famous Turing Test, a method for verifying whether a technology can give answers so similar to a human being's that they become indistinguishable, and he ended up becoming known as the father of Artificial Intelligence.
– Modern Artificial Intelligence
While it is true that Alan Turing laid very strong foundations for the study of AI, it was not until 1956, with the participation of John McCarthy, Marvin Minsky, and Claude Shannon, that the term was coined and shared publicly with the scientific community at the Dartmouth Summer Research Project on Artificial Intelligence conference. Before this, technologies capable of making complex decisions had gone by many other names, and there was no consensus on their capabilities and functions. After the conference, it was possible to establish that AIs:
- Can have beliefs or intentions as part of their mental attitudes.
- Have the ability to learn.
- Can solve multiple problems at different levels of understanding.
- Are able to analyze complex situations and give them logical meaning.
- Know their limitations.
- Can be original, perceive things, and model their reality.
- Use languages and symbols.
Although other milestones took place in the 1990s that were relevant to the scientific world, such as IBM's Deep Blue machine beating world champion Garry Kasparov at chess, it was not until the 21st century that real changes began to show in people's daily lives. To understand these transformations, it is important to keep the following dates in mind:
– 2011: IBM puts its Watson cognitive computer to the test on a television game show. Because this system was capable of accumulating information, learning as it worked, and interacting in human language, it managed to take on two of the show's best contestants and win.
– 2012 – 2014: this period is notable because virtual assistants, built on machine learning, appear on the scene. The best known, of course, are Apple's Siri and Google's Google Now. These programs are released to the public and revolutionize the way humans interact with technology.
– 2018 – 2019: AI begins to be used to optimize business processes in different industries. This, in turn, brings great results for the productive sector, strengthens data-driven solutions, and establishes technology as a means of carrying out everyday tasks.