AI History

The field of artificial intelligence (AI) has a far longer history than you might expect. The term "artificial intelligence" itself was coined in 1956 by John McCarthy at a conference held at Dartmouth College.

An AI certification will give you an advantage over other job seekers in this field. With advances such as facial recognition, AI in healthcare, and chatbots, now is the moment to build a career in artificial intelligence. Virtual assistants are already helping us save time and energy in our daily lives, and Tesla's self-driving cars have shown us a glimpse of what the future may hold. AI can also help us predict and lessen the hazards of climate change so that we can act before it is too late. This is merely the beginning; there is much more to come. By one estimate, AI is expected to create 133 million new jobs by 2022.

Intelligent robots and artificial beings first appeared in ancient Greek mythology. Aristotle's development of syllogism, and with it deductive reasoning, was a key turning point in the study of human intelligence. Despite these long and deep roots, however, AI as we know it today has existed for less than a century.

Let’s have a look at some of the most significant events in the history of AI:

1943 – Warren McCulloch and Walter Pitts published "A Logical Calculus of the Ideas Immanent in Nervous Activity," often regarded as the first work on artificial intelligence. They proposed a model based on artificial neurons.

1949 – In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb proposed the idea that learning alters the strength of connections between neurons.

1950 – Alan Turing published "Computing Machinery and Intelligence," in which he proposed a test to assess whether a machine can exhibit human-like intelligent behavior. This is now famously known as the Turing Test.

1951 – Harvard graduates Marvin Minsky and Dean Edmonds built SNARC, the first neural network computer.

1956 – Allen Newell and Herbert A. Simon created the Logic Theorist, widely considered the first artificial intelligence program. It proved 38 of 52 well-known mathematical theorems and discovered new, more elegant proofs for some of them.

That same year, the American computer scientist John McCarthy coined the term "Artificial Intelligence" at the Dartmouth Conference.

In the years that followed, interest in AI grew at a rapid pace.

1959 – While working at IBM, Arthur Samuel coined the term "machine learning."

1963 – John McCarthy established an AI lab at Stanford University.

1966 – Joseph Weizenbaum created ELIZA, the first ever chatbot.

1972 – WABOT-1, the first humanoid robot, was developed in Japan.

The period from 1974 to 1980 is known as the first AI winter. As government funding dried up and interest in AI waned, many scientists were unable to pursue their research fully.

1980 – AI was back: Digital Equipment Corporation deployed R1, the first commercial expert system, marking the end of the first AI winter.

That same year, Stanford University hosted the first national conference of the American Association for Artificial Intelligence (AAAI).

From 1987 to 1993, many investors and governments stopped funding AI research as computing technology advanced and cheaper alternatives emerged, leading to the second AI winter.

1997 – A machine beat a human! For the first time, a computer defeated a reigning world chess champion: IBM's Deep Blue beat Garry Kasparov.

2002 – AI entered people's homes with the launch of Roomba, the first mass-market robotic vacuum cleaner.

2005 – The U.S. military began using robots such as Boston Dynamics' "BigDog" and iRobot's "PackBot."

2006 – The use of AI in business began to increase noticeably.

2008 – Google made breakthroughs in speech recognition and introduced the feature in its iPhone app.

2011 – Watson, an IBM computer, won Jeopardy!, a game show in which contestants had to answer complex questions and riddles. Watson showed that it could understand natural language and solve complicated problems quickly.

2012 – Andrew Ng, founder of the Google Brain Deep Learning project, fed 10 million images taken from YouTube videos into a neural network trained with deep learning techniques. When the network learned to recognize a cat without ever being told what a cat was, it heralded the beginning of a new era in deep learning.

2014 – A self-driving car built by Google passed a road test.

2014 – Amazon released Alexa.

2016 – Hanson Robotics developed Sophia, a humanoid robot that can recognize faces, hold conversations, and show emotion; she later became known as the first "robot citizen."

2020 – During the SARS-CoV-2 pandemic, Baidu made its LinearFold AI algorithm available to teams of scientists and doctors working on a vaccine. The algorithm could predict the secondary structure of the virus's RNA sequence in just 27 seconds, faster than any previous method.

Artificial intelligence is advancing rapidly in every industry, every day. AI is no longer science fiction; it is already here.

Everyday use of AI

The following is a short list of possible uses for AI in your daily life:

Online shopping: Online retailers use AI to give customers more relevant recommendations based on their previous searches and purchases.

Digital personal assistants: Smartphones use AI to power digital personal assistants, which can answer questions and help users organize their daily activities with minimal effort.

Machine translation: AI-based language translation software provides translation, subtitling, and language identification that help users understand foreign languages.

Cybersecurity: AI systems can help identify and fight cyberattacks by recognizing patterns and tracing attacks back to their source.

Tracking Covid-19: AI has been used to spot outbreaks of Covid-19 and to process healthcare claims.
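As a toy illustration of the recommendation idea in the list above, items frequently bought together can simply be counted (the data and function here are hypothetical; real retail systems use far more sophisticated models):

```python
from collections import Counter

# Hypothetical purchase histories: each list is one customer's past purchases.
histories = [
    ["laptop", "mouse", "keyboard"],
    ["laptop", "mouse", "monitor"],
    ["phone", "charger"],
    ["laptop", "keyboard"],
]

def recommend(item, histories, top_n=2):
    """Recommend the items most often bought together with `item`."""
    co_occurrences = Counter()
    for basket in histories:
        if item in basket:
            # Count every other item that appears alongside `item`.
            co_occurrences.update(x for x in basket if x != item)
    return [name for name, _ in co_occurrences.most_common(top_n)]

print(recommend("laptop", histories))  # ['mouse', 'keyboard']
```

A customer who buys a laptop is shown the accessories that other laptop buyers chose most often; this co-occurrence counting is the simplest form of the "based on previous purchases" logic mentioned above.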


