
Can advanced AI solve visual puzzles and perform abstract reasoning?

Artificial Intelligence has learned to master language, generate art, and even beat grandmasters at chess. But can it crack the code of abstract reasoning—those tricky visual puzzles that leave humans scratching their heads?

Researchers at the USC Viterbi School of Engineering's Information Sciences Institute (ISI) are putting AI's cognitive abilities to the test, pushing multimodal large language models (MLLMs) to solve visual problems once reserved for human IQ tests. The result? A glimpse into how far AI has come—and where it still stumbles.

USC Viterbi ISI research assistants Kian Ahrabian and Zhivar Sourati recently investigated whether MLLMs can perform nonverbal abstract reasoning—tasks that require both visual perception and logical reasoning—and presented their findings at the Conference on Language Modeling (COLM 2024) in Philadelphia, PA, October 7–9, 2024. The work is also available on the arXiv preprint server.

Jay Pujara, research associate professor of computer science at the USC Viterbi School of Engineering and an author on the paper, said, "Every day we're bombarded with new headlines about what AI can (and can't) do, which are often very surprising. We still have such a limited understanding of what new AI models can do, and until we understand these limitations we can't make AI better, safer, and more useful. This paper helps fill in a missing piece of the story of where AI struggles."

The challenge: Can AI see and think?
