navillusr t1_j4rhitt wrote

The distinctions you’re drawing, pixels vs. Selenium output and browser vs. OS, are far less significant than the complexity of the tasks (step-by-step actions vs. entire processes). What they’ve achieved is strictly harder for humans than what you are testing. We can argue about whether perception or planning is harder for current technology (computer vision is far more developed than AI planning right now), but I think you need to reconsider the formulation of your tasks. It seems like they are designed to be easy enough for modern methods to solve.

On another note, most interesting tasks can’t be completed with just an x,y mouse location output. Why did you decide to restrict the benchmark to such a limited set of tasks?


mrconter1 OP t1_j4rsus2 wrote

Really appreciate your feedback.

> The distinctions you’re drawing, pixels vs. Selenium output and browser vs. OS, are far less significant than the complexity of the tasks (step-by-step actions vs. entire processes). What they’ve achieved is strictly harder for humans than what you are testing. We can argue about whether perception or planning is harder for current technology (computer vision is far more developed than AI planning right now), but I think you need to reconsider the formulation of your tasks. It seems like they are designed to be easy enough for modern methods to solve.

I'm not sure about this. Being able to predict the next click on a large, diversified benchmark of screenshots is extremely difficult for a computer today. It would need to be able to:

  • Choose the next chess move if I am in a chess application
  • Recognize the color palette icon on the keyboard if I ask it to change the color of the keyboard
  • Recognize the Gmail icon if I say "send an email"
  • Change keyboard mode if I ask it to write an exclamation mark
  • Press the "2" key if I ask it to type the number of consuls that traditionally held office at the same time in ancient Rome

That's way outside what current models can do, but humans could do it easily. This benchmark would be extremely simple and intuitive for humans to complete (even with far-fetched goals), yet there is no model today capable of even knowing where to click to add a new line when given a screenshot and the instruction "Add line".
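To make the task format concrete, here's a minimal sketch of what a single benchmark item and its scoring could look like (my own illustration, not code from the repo; every name and number is hypothetical):

```python
# Hypothetical sketch of one benchmark item and its scoring; nothing here
# comes from the actual repo.
from dataclasses import dataclass

@dataclass
class ClickTask:
    screenshot_path: str                   # full-screen image the model sees
    instruction: str                       # natural-language goal, e.g. "Add line"
    target_box: tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max) of the correct click region

def is_correct(task: ClickTask, x: int, y: int) -> bool:
    """A predicted click counts as correct if it lands inside the target region."""
    x_min, y_min, x_max, y_max = task.target_box
    return x_min <= x <= x_max and y_min <= y <= y_max

# Example item in the spirit of the list above:
task = ClickTask(
    screenshot_path="screenshots/keyboard_symbols.png",
    instruction="Write an exclamation mark",
    target_box=(312, 1045, 370, 1102),  # wherever the keyboard-mode key happens to be
)
```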

> On another note, most interesting tasks can’t be completed with just an x,y mouse location output. Why did you decide to restrict the benchmark to such a limited set of tasks?

I wrote about this in the README. There's no deep reason; it's just easier to explain the idea to people this way. I think the most powerful variant of this idea would take a series of frames (video context) plus an instruction and output one of the following actions (a rough sketch follows the list):

  • Click
  • Press (X seconds)
  • Move from P1 to P2 (X seconds)
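As a rough, hypothetical sketch of that action space (placeholder names, nothing implemented):

```python
# Hypothetical representation of the three action types above.
from dataclasses import dataclass
from typing import Union

@dataclass
class Click:
    x: int
    y: int

@dataclass
class Press:
    x: int
    y: int
    seconds: float  # how long the press is held

@dataclass
class Move:
    x1: int
    y1: int
    x2: int
    y2: int
    seconds: float  # duration of the movement from P1 to P2

Action = Union[Click, Press, Move]

# A model for this variant would map (frames, instruction) -> Action, e.g.
# predict(frames, "Scroll down one page") might return Move(540, 1600, 540, 400, 0.5).
```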

The benchmark is simple enough to understand and explain that you can start to envision what such a model would be able to do, or, more interestingly, what it would not be able to do.

If you have any more feedback or thoughts, please reply. I wish more people were interested, but either the idea sucked or I need to create something interactive for people.


blose1 t1_j4td3lq wrote

>Recognize the Gmail icon if I say "send an email"

This is not testing intelligence; this is testing whether a human was trained on computer usage, knows what e-mail is, and has used Gmail before.

Someone from a tribe in Africa would fail your test even though he is human and intelligent. Train him on this task like you would train a current-gen multimodal system and he will pass your benchmark. Train an LLM in combination with an image model and an RL model on instruction following, using the inputs you described, and now it understands what it sees and can follow what you want it to do.
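Roughly the kind of pipeline I mean, as a hand-wavy sketch (the encoder, LLM, and coordinate head here are placeholders, not a tested recipe):

```python
# Hand-wavy sketch: a vision encoder feeds image tokens to an LLM, and a small
# head predicts where to click. The module interfaces are assumed, not real APIs.
import torch.nn as nn

class ScreenClicker(nn.Module):
    def __init__(self, vision_encoder: nn.Module, language_model: nn.Module, hidden: int = 4096):
        super().__init__()
        self.vision_encoder = vision_encoder    # e.g. a ViT that embeds the screenshot
        self.language_model = language_model    # an LLM that consumes image + instruction tokens
        self.click_head = nn.Linear(hidden, 2)  # predicts a normalized (x, y) click position

    def forward(self, screenshot, instruction_tokens):
        image_tokens = self.vision_encoder(screenshot)
        features = self.language_model(image_tokens, instruction_tokens)
        return self.click_head(features[:, -1])  # (x, y) from the last token's state

# Train it with supervised instruction following on (screenshot, instruction, click)
# triples, then optionally fine-tune with RL on task success.
```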


mrconter1 OP t1_j4tuaal wrote

> This is not testing intelligence; this is testing whether a human was trained on computer usage, knows what e-mail is, and has used Gmail before.

I don't think it's binary. I think intelligence is a large part of it.

> Someone from a tribe in Africa would fail your test even though he is human and intelligent.

Could you train a bird to pass all the tasks on this benchmark? No, because it's not as intelligent as a human.

> Train him on this task like you would train a current-gen multimodal system and he will pass your benchmark. Train an LLM in combination with an image model and an RL model on instruction following, using the inputs you described, and now it understands what it sees and can follow what you want it to do.

Are you saying that solving this benchmark is an easy problem? How long do you think it will take until we have a model that can casually handle all the instructions I gave in the previous comment?
