
How do we know if ChatGPT can recognize a face?

For the past year, ChatGPT has been able to analyze images as well as text, a feature of its latest version, GPT-4V(ision).

For instance, if you upload a photograph of the contents of your fridge, ChatGPT can describe what's in the photo and then recommend meal ideas based on those ingredients, along with suitable recipes. Or you can photograph a hand-drawn sketch of how you'd like your new website to look, and ChatGPT will provide the HTML code to build it.

You can also upload a still frame from partway through a film. ChatGPT can identify the film and summarize the plot only up to that point. The list of applications is virtually endless.

As a researcher interested in face perception, I'm particularly curious about how ChatGPT handles face images—matching two different images of the same person, for example. But how can we judge just how good the chatbot is at recognizing faces? To explore how well people perform with faces, psychologists have come up with numerous tests that assess different abilities, so I decided to try ChatGPT on some of these.

First, I tried it on the "reading the mind in the eyes" test. In this task, only the eye region of each photographed face is shown, along with four descriptive words for what the person might be thinking or feeling (one of which is the correct answer).
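For readers who would rather run this kind of trial programmatically than through the chat interface, the sketch below shows how a single eye-region image and four answer options might be sent to a vision-capable GPT-4 model using OpenAI's Python library. It is only an illustration, not the procedure used here: the file name, prompt wording, answer words, and model name are placeholders.

    # Minimal sketch (not the article's actual procedure) of one
    # "reading the mind in the eyes" style trial sent to a vision-capable
    # model via OpenAI's Python library. File name, prompt wording, and
    # the four answer options are hypothetical placeholders.
    import base64
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Encode the cropped eye-region photograph so it can be embedded
    # directly in the request as a data URL.
    with open("eyes_trial_01.jpg", "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    options = ["playful", "comforting", "irritated", "bored"]  # hypothetical

    response = client.chat.completions.create(
        model="gpt-4o",  # any vision-capable model
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": (
                            "Only the eye region of a face is shown. Which of "
                            "these four words best describes what the person is "
                            "thinking or feeling? "
                            + ", ".join(options)
                            + ". Answer with one word."
                        ),
                    },
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"},
                    },
                ],
            }
        ],
    )

    print(response.choices[0].message.content)

Scoring such a test would simply mean repeating this for every item and comparing the model's one-word answers against the answer key.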
