zlacker

[return to "Gemini 3 Pro: the frontier of vision AI"]
1. djoldm+1H[view] [source] 2025-12-05 19:18:33
>>xnx+(OP)
Interesting "ScreenSpot Pro" results:

    72.7% Gemini 3 Pro
    11.4% Gemini 2.5 Pro
    49.9% Claude Opus 4.5
    3.5% GPT-5.1
ScreenSpot-Pro: GUI Grounding for Professional High-Resolution Computer Use

https://arxiv.org/abs/2504.07981

2. simonw+SR[view] [source] 2025-12-05 20:12:34
>>djoldm+1H
I was surprised at how poorly GPT-5 did in comparison to Opus 4.1 and Gemini 2.5 on a pretty simple OCR task a few months ago - I should run that again against the latest models and see how they do. https://simonwillison.net/2025/Aug/29/the-perils-of-vibe-cod...
3. daemon+gw1[view] [source] 2025-12-06 00:16:35
>>simonw+SR
Agreed, GPT-5 and even 5.1 are noticeably bad at OCR. OCRArena backs this up: https://www.ocrarena.ai/leaderboard (I'd personally rank 5.1 even lower than it sits there).

According to the calculator on the pricing page (it's inside a toggle at the bottom of the FAQs), GPT-5 is resizing images to have a minor dimension of at most 768: https://openai.com/api/pricing/ That's ~half the resolution I would normally use for OCR, so if that's happening even via the API then I guess it makes sense it performs so poorly.
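To put numbers on that cap: here's a minimal sketch of what an aspect-ratio-preserving resize with the shorter side limited to 768px would do to a typical document scan. The `capped_resize` helper is hypothetical — the pricing calculator only implies the 768px limit, not the exact resize algorithm.

```python
def capped_resize(width: int, height: int, cap: int = 768) -> tuple[int, int]:
    """Scale so the shorter side is at most `cap`, preserving aspect ratio.

    Assumption: a simple proportional downscale; the actual API behavior
    isn't documented beyond the minor-dimension limit.
    """
    minor = min(width, height)
    if minor <= cap:
        return width, height  # already within the cap, no resize
    scale = cap / minor
    return round(width * scale), round(height * scale)

# A 300 DPI US Letter scan (2550x3300) lands at 768x994 --
# effectively a ~90 DPI scan, well below what OCR usually wants.
print(capped_resize(2550, 3300))
```

So a dense page of text gets fed to the model at roughly a third of its scanned resolution, which would go a long way toward explaining the OCR scores.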

4. datadr+Gh3[view] [source] 2025-12-06 19:27:07
>>daemon+gw1
And GPT-4 was pretty decent at OCR, so that's weird?