- cross-posted to:
- technology@lemmy.world
Anyone who has been surfing the web for a while is probably used to clicking through a CAPTCHA grid of street images, identifying everyday objects to prove that they’re a human and not an automated bot. Now, though, new research claims that locally run bots using specially trained image-recognition models can match human-level performance in this style of CAPTCHA, achieving a 100 percent success rate despite being decidedly not human.
ETH Zurich PhD student Andreas Plesner and his colleagues’ new research, available as a pre-print paper, focuses on Google’s reCAPTCHA v2, which challenges users to identify which street images in a grid contain items like bicycles, crosswalks, mountains, stairs, or traffic lights. Google began phasing that system out years ago in favor of an “invisible” reCAPTCHA v3 that analyzes user interactions rather than offering an explicit challenge.
Despite this, the older reCAPTCHA v2 is still used by millions of websites. And even sites that use the updated reCAPTCHA v3 will sometimes use reCAPTCHA v2 as a fallback when the updated system gives a user a low “human” confidence rating.
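The fallback behavior described above can be sketched as a small decision function. This is an illustrative sketch, not Google’s actual code: the JSON fields (`success`, `score`) match what reCAPTCHA’s `siteverify` endpoint returns for a v3 token, but the function name and the 0.5 threshold here are assumptions (the threshold is chosen by each site operator).

```python
def needs_v2_fallback(siteverify_json: dict, threshold: float = 0.5) -> bool:
    """Decide whether to fall back to an explicit reCAPTCHA v2 challenge.

    `siteverify_json` is the parsed JSON a server gets back from Google's
    /recaptcha/api/siteverify endpoint for a v3 token. v3 returns a `score`
    between 0.0 (likely bot) and 1.0 (likely human); the cutoff is up to
    the site operator (0.5 here is an arbitrary illustrative choice).
    """
    if not siteverify_json.get("success", False):
        return True  # token invalid or expired: challenge the user
    # Low confidence that the user is human -> show the v2 image grid.
    return siteverify_json.get("score", 0.0) < threshold


# Simulated responses (no real API calls):
print(needs_v2_fallback({"success": True, "score": 0.9}))  # → False (confident human)
print(needs_v2_fallback({"success": True, "score": 0.1}))  # → True (fall back to v2)
```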
Aren’t these Captchas designed to get training data for AI models anyway?
“System does what it was designed to do” doesn’t feel that surprising…
> Aren’t these Captchas designed to get training data for AI models anyway?
Yes and no. The captchas are just meant to be hard for computers to solve but easy for humans. People saw that and thought, “if we’re making people do this, we might as well have them do something useful.” It’s not meant to be malevolent: the purpose is still stopping bots, and training models is a side effect.
No, you’re wrong, the Traffic Light examples ARE specifically to gather data to train models. Being a good Captcha was just a byproduct of that. If people just wanted a good captcha they wouldn’t need hundreds of millions of photos of street lights and bicycles.
> No, you’re wrong, the Traffic Light examples ARE specifically to gather data to train models.
No, you’re wrong, because the sites that embed those captchas on their pages are not doing that to do good.
> If people just wanted a good captcha they wouldn’t need hundreds of millions of photos of street lights and bicycles.
Yes, they are getting something productive out of human labor that would be done anyway. Trust me, as both a web developer and a web scraper: some kind of captcha is necessary for many free services to be useful and economically viable. The core of a good captcha is just making it marginally more expensive for the scraper/bot than it is for you.
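One concrete way to get that cost asymmetry (my example, not something the commenter describes) is a proof-of-work check: the client must find a nonce whose hash has a required number of leading zeros, which costs an automated client CPU time on every request but is negligible for a single human page load. A minimal sketch:

```python
import hashlib


def solve_pow(challenge: str, difficulty: int = 4) -> int:
    """Brute-force a nonce so that sha256(challenge + nonce) starts with
    `difficulty` hex zeros. Expected cost grows ~16x per extra zero, so
    the server can tune how expensive each request is for a bot."""
    nonce = 0
    prefix = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1


def verify_pow(challenge: str, nonce: int, difficulty: int = 4) -> bool:
    """Server-side check: a single hash, essentially free."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)


# Low difficulty so the demo finishes quickly; real deployments tune this.
nonce = solve_pow("example-session-token", difficulty=3)
print(verify_pow("example-session-token", nonce, difficulty=3))  # → True
```

The asymmetry is the whole point: the solver loops over thousands of hashes, while the verifier computes exactly one.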
The sites don’t create the captcha; you yourself just said it was embedded there.
They embed them for a reason… and the captchas wouldn’t exist if they weren’t embedded anywhere.