Margaret Mead, the famous American anthropologist born in 1901, said in one of her lectures that a healed femur was the first sign of civilization. The fact that someone suffered a potentially fatal injury and was cared for until they healed marks the beginning of human civilization. Our ability to reason is almost as old as our natural instinct to care for our fellow humans. In fact, for centuries, it was the ability to reason that set humans apart from other animals and machines. But now, with AI reasoning, that distinction has been broken.
What is reasoning in AI?
Reasoning is the cognitive process of drawing conclusions and predicting outcomes by integrating available knowledge, facts, and beliefs. It is concerned with deriving accurate information or knowledge about the current state of the world. The concept is essential in AI because it allows machines to draw conclusions in a way loosely modeled on the human brain, and gives them the ability to act more like humans.
In the development of AI, the ability to reason is crucial. In this context, reasoning means using prior knowledge to make inferences, formulate hypotheses, or develop strategies to address a problem. To model how the human brain thinks and how it draws conclusions about particular topics, we need a formal account of reasoning, which is why it is so central to AI.
Types of reasoning in AI
AI divides reasoning into the following categories:
- Deductive reasoning: Draws a new conclusion from known information that is logically linked to it. It is a form of valid reasoning: if the premises are true, the conclusion must also be true.
Example: If a = b and b = c, then a = c. To illustrate: every number that ends in 0 or 5 is divisible by 5. Since 35 ends in 5, it must be divisible by 5.
- Inductive reasoning: Generalizes from limited, specific facts or observations to a general statement or conclusion.
Example: I have only seen white cats. Therefore, most cats are likely to be white.
- Abductive reasoning: Starts with one or more observations and seeks the most likely explanation for them.
Example: When I went out this morning, the grass was completely covered in dew. Presumably, it rained last night.
- Common-sense reasoning: An informal form of reasoning acquired through life experiences.
Example: Touching a stove: Common sense tells you not to touch a hot stove to avoid getting burned.
- Monotonic reasoning: The conclusion remains valid even when new facts are added to the knowledge base.
Example: The sun rises in the east and sets in the west.
- Non-monotonic reasoning: Adding new information can invalidate conclusions drawn earlier.
Example: Consider a pot of water on a lit stove. We conclude that it will come to a boil. If we then learn that the flame has been turned off, we must withdraw that conclusion and instead predict that the water will gradually cool down.
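The contrast between monotonic (deductive) and non-monotonic reasoning above can be sketched in a few lines of code. This is a minimal illustration, not a production inference engine; the rules, fact tuples, and the "Tweety the penguin" example are illustrative assumptions.

```python
def divisible_by_5(n: int) -> bool:
    # Deductive (monotonic) rule: any number ending in 0 or 5 is
    # divisible by 5. No new fact can ever overturn this conclusion.
    return str(abs(n))[-1] in ("0", "5")

def can_fly(animal: str, facts: set) -> bool:
    # Non-monotonic default rule: birds fly, unless we later learn
    # an exception (here, that the animal is a penguin).
    if ("penguin", animal) in facts:
        return False  # the new fact retracts the earlier conclusion
    return ("bird", animal) in facts

facts = {("bird", "tweety")}
print(divisible_by_5(35))        # True, and stays true whatever we learn
print(can_fly("tweety", facts))  # True under the default rule
facts.add(("penguin", "tweety")) # new knowledge arrives
print(can_fly("tweety", facts))  # False once the exception is known
```

The deductive rule is monotonic: adding facts can only extend what is provable. The default rule is non-monotonic: a single new fact forces us to retract a conclusion we had already drawn.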
How do machines think?
Because the processes by which AI systems reach their conclusions are often opaque, there is skepticism about the reliability of machine learning. In a new study, researchers presented a technique for quickly analyzing the behavior of AI software by assessing how well its reasoning aligns with human reasoning.
Understanding how machine-learning systems reach their findings, and how reliable those findings are, is crucial as they are increasingly applied in the real world. For example, AI software may correctly identify a skin lesion as malignant, yet do so by focusing on a background spot with no connection to the clinical image.
Can we trust AI reasoning?
The Harvard Machine Learning Foundations Group is holding a seminar called “Teaching Language Models to Reason,” one of the many workshops the group organizes.
The researchers said their method is made up of four parts:
- Chain of thought, which means generating intermediate reasoning steps before arriving at a final answer.
- Self-consistency, which means sampling several reasoning paths and choosing the most common answer.
- Least-to-most, which means breaking a problem into smaller subproblems and solving each one in turn.
- Instruction fine-tuning, which means training a model on instruction-following data so it can tackle new problems without task-specific training.
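The self-consistency step above can be sketched without any model at all: sample several chain-of-thought completions, extract the final answer from each, and keep the majority answer. The sampled chains below are mocked strings standing in for real model outputs, and the `Answer:` suffix format is an assumption for illustration.

```python
from collections import Counter

def final_answer(chain_of_thought: str) -> str:
    # Assume each sampled chain ends with "Answer: <value>".
    return chain_of_thought.rsplit("Answer:", 1)[-1].strip()

# Mocked chain-of-thought samples; in practice these would come from
# sampling a language model several times at nonzero temperature.
samples = [
    "15 + 27 = 42. Answer: 42",
    "15 plus 27: 15 + 20 = 35, then 35 + 7 = 42. Answer: 42",
    "15 + 27 = 41. Answer: 41",  # one faulty chain is outvoted
]

answers = [final_answer(s) for s in samples]
majority, count = Counter(answers).most_common(1)[0]
print(majority)  # → 42
```

The point of the majority vote is that different reasoning paths that all arrive at the correct answer reinforce each other, while isolated errors are filtered out.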
The researchers highlighted that Large Language Models (LLMs) can reason for themselves and could help people. They also said: “Larger models will make our world more efficient.”
Likewise, according to UCLA research, the AI model GPT-3 can reason about as well as college students. The UCLA team tested GPT-3 on reasoning problems of the kind that appear on IQ tests and standardized exams such as the SAT. The model was asked to solve SAT-style analogy questions and to predict the next figure in a complex arrangement of shapes, and it performed exceptionally well on both.
Can AI reason like humans?
Have you ever wondered why some sites ask you to click a box that says, “I’m not a robot”? What’s really stopping a robot from clicking that checkbox? It’s actually not the click itself, but the way the mouse moves across the page. Bots move in a straight line toward the box, while humans take a more fluid path. Likewise, reasoning in AI is different from how humans reason.
Humans tend to put themselves in other people’s shoes, so we instinctively attribute human traits to AI. But Artificial General Intelligence (AGI) may not be as human-like as we imagine. In AI, it is assumed that any given instance of reasoning can be captured as a mathematical combination that produces the required result. In practice this is often difficult, because the decisions individuals make are unique. As a result, using AI in computers and robotics to solve complex problems with several alternative answers is very challenging.
Some experts say we are seeing the beginnings of “true” or “strong” AI, while others say it won’t arrive for a long time. Some even argue that the tests used to measure how human-like an AI is are flawed, because they only probe specific types of intelligence.