Comments


nmxt t1_jac7nak wrote

The site tracks the movement of the mouse cursor when you are clicking that button, and there are characteristic ways in which humans move the mouse that can be analyzed and recognized.
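Google doesn't publish what it actually checks, but here's a rough sketch (with made-up paths) of one signal such a system could use: how much the cursor's path deviates from a perfectly straight line.

```python
import math

def straightness_ratio(points):
    """Ratio of actual path length to straight-line distance.

    A ratio near 1.0 means the cursor moved in an almost perfect
    line, which is typical of naive bots; human paths wander and
    overshoot, giving noticeably higher ratios.
    """
    path = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    direct = math.dist(points[0], points[-1])
    return path / direct if direct else float("inf")

# A bot gliding in a straight line vs. a wavering human-like path.
bot_path = [(0, 0), (50, 50), (100, 100)]
human_path = [(0, 0), (30, 42), (55, 38), (80, 70), (100, 100)]

print(straightness_ratio(bot_path))    # exactly 1.0
print(straightness_ratio(human_path))  # > 1.0
```

Real detectors would combine many signals like this (speed, acceleration, timing) rather than rely on a single cutoff.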

18

Ben-Z-S t1_jac7qlj wrote

Typically a bot can spam forms more easily than a human, as it can see the underlying code and bypass things. It can also do it very quickly. Even just moving your mouse might be enough to determine if you're "real", as it's actually quite difficult to simulate realistic movement.
If a bot was making multiple accounts or, say, voting on something, it doesn't have to navigate the page; it knows what a submit button is and can directly select the object.
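To illustrate that point: a naive bot never renders the page or moves a mouse at all. It reads the form's field names from the HTML once, then fires requests straight at the endpoint. A minimal sketch (the URL and field names are hypothetical):

```python
from urllib import parse, request

# Build a POST aimed directly at a form's submit endpoint,
# skipping the page, the button, and any mouse movement entirely.
# (https://example.com/submit is made up for illustration; the
# request is constructed but deliberately never sent here.)
fields = {"username": "bot123", "vote": "up"}
req = request.Request(
    "https://example.com/submit",
    data=parse.urlencode(fields).encode(),
    method="POST",
)

print(req.get_method())  # POST
print(req.data)          # b'username=bot123&vote=up'
```

This is exactly why behavioral signals (mouse paths, timing) matter: the request itself can look identical to a human's.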

1

Leonarth5 t1_jac7x1r wrote

Clicking on them is very easy for a bot, which is why they don't just check that you have clicked them.

Your mouse movement is recorded and analyzed for unnatural behavior both before and after the click.

14

bulksalty t1_jac83ik wrote

Google owns the company that provides those, and Google knows a lot about you (you may have a Google account, or its cookies may be tracking your browser ID). Google checks its information about whether you're likely to be a real person, and if it thinks that's very likely it does nothing.

When it's not sure, you get to identify something they need to train their AI or mark on Google Maps, like vehicles, traffic control devices, fire hydrants, hills, etc.

20

Gnonthgol t1_jac85lj wrote

That is proprietary and therefore kept secret. There are a number of different checks they do in the backend: things like getting the exact browser, operating system, configuration, and even the size of the browser window. They also collect timings and movement of the mouse cursor, and cookies for social media sites to associate you with your online persona. All of this information is sent back to the operator of the robot check, most commonly Google, where they presumably look for known fingerprints of robots. If they find something suspicious they will send you additional challenges that are harder than just clicking a button.
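A toy sketch of the fingerprinting idea: collapse a handful of client attributes into one stable identifier. Real systems use far more signals and fuzzier matching; this just shows how the same configuration always maps to the same ID.

```python
import hashlib

def fingerprint(attrs: dict) -> str:
    """Hash a set of client attributes into a short, stable ID."""
    # Sort keys so the same attributes always serialize identically.
    canonical = "|".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical client attributes of the kind described above.
client = {
    "user_agent": "Mozilla/5.0 ... Firefox/110.0",
    "os": "Linux x86_64",
    "window": "1280x958",
    "timezone": "UTC+1",
}
print(fingerprint(client))  # same input -> same 16-hex-char ID
```

Ten thousand "different users" who all share one fingerprint is itself a strong hint that a bot farm is behind them.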

2

Spirited-Mountain-65 t1_jac8p4a wrote

Captchas are also used to train AI image recognition. They're often blurry, hard-to-recognize images that AI can't (well, couldn't) solve itself.

AI has gotten so good that it can solve them without being detected.

3

UnitSignificant2866 t1_jac9rfz wrote

I was told that clicking the "not a robot" box gives permission for some of your recent browsing history to be released. I'm not sure what algorithms it uses, but your history and any other data in the package is something a robot can't fake yet.

−1

NameUnavail t1_jac9swt wrote

The reCaptcha service is owned by Google, and it feeds a whole bunch of data (exactly what, Google won't tell us) into one big machine learning algorithm that spits out a score for how likely it thinks you are a bot; depending on that score, the site then deals with your request. Checking the box is mostly unnecessary, and the newest versions of the captcha don't even have it anymore; they just run silently in the background without you even knowing.
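On the site's side, handling that score can be as simple as a couple of thresholds. reCAPTCHA v3 actually reports 1.0 for "very likely human" down to 0.0 for "very likely bot"; the cutoffs below are illustrative, since each site picks its own.

```python
def handle_score(score: float) -> str:
    """Map a human-likelihood score to an action (thresholds made up)."""
    if score >= 0.7:
        return "allow"      # confident enough: let the request through
    if score >= 0.3:
        return "challenge"  # unsure: show an image puzzle or similar
    return "block"          # very likely a bot: reject the request

print(handle_score(0.9))  # allow
print(handle_score(0.5))  # challenge
print(handle_score(0.1))  # block
```

This is why two people can have completely different experiences on the same site: one sails through invisibly, the other gets puzzle after puzzle.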

1

CliffExcellent123 t1_jacasxk wrote

>When it's not sure you get to identify something they need to train their AI or mark on google maps, like vehicles traffic control devices, fire hydrants, hills etc.

Which is why those tend to show up more when you're in Incognito mode or using a VPN.

7

SoppingBread t1_jacfvmr wrote

Images are used for supervised AI training. "Select all crosswalks" is used to help their AI identify crosswalks, likely as quality assurance for some driving algorithm. They detect you as human because the mouse takes a path that is recognized as human (it doesn't jump directly to the image position, it has arc, it has varied response time, etc.), not because you made the right selection.

So congrats, you work for Google.

2

TheDefected t1_jachpvk wrote

There was an older one you don't see anymore where they showed you blurred text. That came from character recognition on scans of old newspapers that Google was adding to its database and couldn't make out.

9

Real-Rude-Dude t1_jaci4ur wrote

>Even just moving your mouse might be enough to determine if you're "real" as it's actually quite difficult to simulate realistic movement.

A good way to show this is to open Paint and then try to draw a straight line or a circle. It won't be perfect. Now think about the equation or commands it would take to program a bot to draw that not-quite-perfect line or circle.
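That "not-quite-perfect" quality is exactly what a bot author would have to fake. A minimal sketch: sample a line with and without hand-tremor-like noise and measure how far it strays from the ideal.

```python
import random

def draw_line(n_points, jitter=0.0, seed=42):
    """Sample points along the segment (0,0)->(100,0), optionally
    perturbed by random hand-tremor-like vertical noise."""
    rng = random.Random(seed)  # fixed seed so the example is repeatable
    return [
        (100 * i / (n_points - 1), rng.uniform(-jitter, jitter))
        for i in range(n_points)
    ]

def max_deviation(points):
    """Largest distance from the ideal line y = 0."""
    return max(abs(y) for _, y in points)

print(max_deviation(draw_line(20)))            # 0.0: perfectly robotic
print(max_deviation(draw_line(20, jitter=3)))  # > 0: human-ish wobble
```

Of course, uniform random jitter is itself detectable; real human tremor has its own statistical signature, which is what makes convincing simulation so hard.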

3

Ithalan t1_jaconoj wrote

Maybe, but as things are now, you only need to increase the amount of effort it takes to send spam through your web/account registration form a little bit for it to be unprofitable for the spammer and make them try somewhere else.

If all websites worldwide were to one day have a captcha on every webform that could be used for spam, then spammers might start to have an incentive to try and beat it. But right now it's easier and cheaper to just keep trying sites until you find one that is not secured, then send your spam through that until the site owner discovers it and stops you.

2

tonysansan t1_jacqn01 wrote

This is a CAPTCHA. Its purpose is to make parts of a site harder to script than other targets. It's not trying to stop all possible bots so much as the least sophisticated (and therefore most numerous) ones. Like much of cybersecurity, the point is to make things harder so that hackers look for easier targets instead. It's a cat-and-mouse game between the bots and the CAPTCHAs, and companies like Google that make the latter have the resources to stay a step ahead.

1

Leonarth5 t1_jacya0r wrote

I had never thought about that. There'd still be some nuance to how you tapped on it, since you don't touch at just one point and you can touch with different pressure and for different lengths of time, but if it thinks that's not enough then you'll just get one of those image classification popups to complete.

6

blipsman t1_jad04wu wrote

The reCaptcha is actually tracking mouse movements prior to clicking the button to see if they're human-like or bot-like.

1

Slypenslyde t1_jadc3l2 wrote

This is very hard to answer definitively because what information they track is secret. If they explained their algorithm for human detection, people would update their bots to look more like humans.

What we can glean from some discussions about it and some common sense is that a lot more is going on than just whether you click the right images or the check box. Sometimes you don't even have to click trains or crosswalks or non-civilian targets.

The code already knows a lot about the person you claim to be and the things you usually do. It's already made some guesses based on your IP, the information your browser gives up, the time of day, and what site you're trying to visit. All of that alone is probably enough to verify that you are the right person, but it's not enough to verify you aren't running a program working on your behalf to do things in a fashion the website owner doesn't want.

So it also tracks how your mouse moves to the checkbox when it's time to click. Bots can sometimes move in a very "not natural" way so it looks at the mouse movements to decide if a bot's involved. Maybe you did touch input: that still gives a lot of data about the "tap gesture" like the size of the tap, how long the finger stayed down, the shape of the tap, etc. Bots don't simulate that very well, or when they have to generate multiple taps they tend to create recognizable patterns.

All of that is real squirrely. Sometimes you have to go through multiple rounds of "click the picture". That's probably when something about your input looks "not human enough" so the system wants to see more. Eventually you make it confident enough it's dealing with a human it lets you through.

(Let us also not forget Google sells products based on its image recognition AIs: a side goal of this program has always been presenting images their AI has trouble classifying to humans who can help train it to be better.)

The thing is this is kind of like locks on a house's front door. A person who's spent a few months practicing with lockpicks can get inside silently in less than 30 seconds. But even among criminals the number of people who invest that much is a small percentage, and the people who do generally look for bigger scores than the average household contains. So a simple deadbolt is enough to keep out a large number of criminals, but for the ones not deterred it means they tend to try noisier or more violent forms of entry, which is less likely to go undetected.

That's what bot detection does. Many bots just aren't sophisticated enough to pass the gate. The ones that are get slowed down by dealing with the process. Part of the goal of regulating bots is making sure bot traffic doesn't overwhelm sites and APIs, and slowing down bots is one way to accomplish it.

So no, it's not a perfect shield against bots and can sometimes even reject legitimate humans. But the purpose is to make it harder to use bots and to make the bots that work less efficient. It's good at doing that.

3

MaxGuide t1_jadpst9 wrote

How long did you hold your finger on the screen?

How accurate was your click position?

Was the contact at several points, like a finger would make, or a single tiny selected pixel?

All of those are lumped into human-like misses or bot-driven commands.
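Those signals can be turned into a crude classifier. The thresholds below are made up for illustration; real detectors combine far more signals statistically rather than using hard cutoffs.

```python
def looks_human_tap(duration_ms, contact_points, pos_jitter_px):
    """Guess whether a tap came from a fingertip or a script.

    duration_ms:    how long the touch was held
    contact_points: how many pixels registered contact (fingertips
                    press an area, scripted events hit one point)
    pos_jitter_px:  spread of the contact position
    """
    quick_robotic = duration_ms < 20           # humans hold ~50-200 ms
    single_pixel = contact_points <= 1 and pos_jitter_px == 0
    return not (quick_robotic or single_pixel)

# A fingertip: ~120 ms hold, a patch of contact, slight wobble.
print(looks_human_tap(120, contact_points=14, pos_jitter_px=2))  # True
# A scripted event: instant, one exact pixel, zero jitter.
print(looks_human_tap(1, contact_points=1, pos_jitter_px=0))     # False
```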

3

explainlikeimfive-ModTeam t1_jae09fy wrote

Your submission has been removed for the following reason(s):

Top level comments (i.e. comments that are direct replies to the main thread) are reserved for explanations to the OP or follow up on topic questions.

Joke only comments, while allowed elsewhere in the thread, may not exist at the top level.


If you would like this removal reviewed, please read the detailed rules first. If you believe it was removed erroneously, explain why using this form and we will review your submission.

1

ShankThatSnitch t1_jae1bhk wrote

This is not the answer. I believe it does use that as one metric, but what you don't realize is that when you click that box, you are giving Google consent to scan your history and analyze it for human-like patterns. If your history is repeated attempts at sites trying to download this or that, or whatever a bot might be set to do, then it would block you. But if you browsed Reddit, then hopped on Amazon, googled the definition of cunnilingus, then watched YouTube, the algorithm would know you are human.

1

Bensemus t1_jae63jy wrote

It's not just mouse movement. It has a ton of data on how you are interacting with the browser and uses all of it to determine if you are a bot or not. If it's not confident you are a human you get to answer a captcha. Some just ask you to answer a captcha every time.

3

Flair_Helper t1_jaepvx7 wrote

Please read this entire message

Your submission has been removed for the following reason(s):

  • ELI5 requires that you search the ELI5 subreddit for your topic before posting. Users will often either find a thread that meets their needs or find that their question might qualify for an exception to rule 7. Please see this wiki entry for more details (Rule 7).

If you would like this removal reviewed, please read the detailed rules first. If you believe this submission was removed erroneously, please use this form and we will review your submission.

1