Submitted by Balance- t3_120guce in MachineLearning

GPT-4 is a multimodal model: it accepts image and text inputs and emits text outputs. And I just realised: you can layer this over any application, or even combinations of them. You could make a screenshot tool in which you can ask questions.

This makes literally any current software with a GUI machine-interpretable. A multimodal language model could look at the exact same interface that you are looking at. And thus you don't need advanced integrations anymore.

Of course, a custom integration will almost always be better, since you have better access to the underlying data and commands, but the fact that it can immediately work on any program is just insane.
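
A rough sketch of what I mean, assuming the image API eventually opens up; the `ask_gpt4_about_image` call below is a placeholder, not a real endpoint:

```python
# Minimal "screenshot + question" tool sketch. The GPT-4 call is a placeholder,
# since the image input API isn't publicly available yet.
import base64
import io

from PIL import ImageGrab  # pip install pillow


def grab_screen_as_base64() -> str:
    """Capture the current screen and return it as a base64-encoded PNG."""
    screenshot = ImageGrab.grab()
    buffer = io.BytesIO()
    screenshot.save(buffer, format="PNG")
    return base64.b64encode(buffer.getvalue()).decode("ascii")


def ask_gpt4_about_image(image_b64: str, question: str) -> str:
    """Placeholder for a multimodal GPT-4 call: send the image plus a question,
    get text back. Swap in the real API call once image input is released."""
    raise NotImplementedError("image input API not public yet")


if __name__ == "__main__":
    print(ask_gpt4_about_image(
        grab_screen_as_base64(),
        "What application is on screen, and what should I click to save my work?"))
```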

Just a thought I wanted to share, curious what everybody thinks.

432

Comments


BinarySplit t1_jdh9zu6 wrote

GPT-4 is potentially missing a vital feature to take this one step further: Visual Grounding - the ability to say where inside an image a specific element is, e.g. if the model wants to click a button, what X,Y position on the screen does that translate to?

Other MLLMs have it though, e.g. One-For-All. I guess it's only a matter of time before we can get MLLMs to provide a layer of automation over desktop applications...
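
The glue code is trivial once you have grounding; something like this, where `ground_element` is the hypothetical MLLM call and pyautogui does the clicking:

```python
# Sketch: turn a grounded bounding box into a mouse click.
# ground_element() is hypothetical - it stands in for an MLLM that returns
# pixel coordinates for a described UI element.
import pyautogui  # pip install pyautogui


def ground_element(screenshot_path: str, description: str) -> tuple[int, int, int, int]:
    """Hypothetical visual-grounding call: returns (left, top, right, bottom) in pixels."""
    raise NotImplementedError


def click_element(description: str) -> None:
    pyautogui.screenshot("screen.png")
    left, top, right, bottom = ground_element("screen.png", description)
    pyautogui.click((left + right) // 2, (top + bottom) // 2)  # click the box centre


click_element("the blue 'Submit' button")
```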

204

ThirdMover t1_jdhvx8i wrote

>GPT-4 is potentially missing a vital feature to take this one step further: Visual Grounding - the ability to say where inside an image a specific element is, e.g. if the model wants to click a button, what X,Y position on the screen does that translate to?

You could just ask it to move a cursor around until it's on the specified element. I'd be shocked if GPT-4 couldn't do that.
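
Something like this rough loop, with `ask_gpt4` as a stand-in for the (not yet public) image API:

```python
# Sketch of the "nudge the cursor until it's on the target" loop.
# ask_gpt4() is a placeholder for a multimodal call that looks at a screenshot
# and answers with a direction or "done".
import pyautogui


def ask_gpt4(image, prompt: str) -> str:
    raise NotImplementedError  # placeholder for the (not yet public) image API


def move_cursor_to(target: str, step: int = 50, max_iters: int = 40) -> None:
    for _ in range(max_iters):
        screen = pyautogui.screenshot()
        answer = ask_gpt4(screen, f"Is the mouse cursor on {target}? "
                                  "Answer 'done' or one of: up/down/left/right.")
        direction = answer.strip().lower()
        if direction == "done":
            return
        dx, dy = {"up": (0, -step), "down": (0, step),
                  "left": (-step, 0), "right": (step, 0)}.get(direction, (0, 0))
        pyautogui.moveRel(dx, dy)  # nudge and look again on the next iteration
```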

45

MjrK t1_jdiflsw wrote

I'm confident that someone can fine-tune an end-to-end vision transformer that can extract user interface elements from photos and enumerate interaction options.

Seems like such an obviously useful tool, and ViT-22B should be able to handle it, or many other computer vision tools on Hugging Face... I would've assumed some grad student somewhere is already hacking away at that.

But then also, compute costs are a b****, although generating a training data set should be somewhat easy.

Free research paper idea, I guess.

20

modcowboy t1_jdkz6of wrote

It would probably be easier for the LLM to interact with the website directly through the inspect tool than via machine-vision training.
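
Something like this, just as an illustration (BeautifulSoup to flatten the page into a text list of interactive elements; the URL is a placeholder):

```python
# Sketch: flatten a page into a text list of interactive elements for an LLM,
# instead of feeding it pixels. The URL is a placeholder.
import requests
from bs4 import BeautifulSoup  # pip install beautifulsoup4

html = requests.get("https://example.com").text
soup = BeautifulSoup(html, "html.parser")

elements = []
for tag in soup.find_all(["a", "button", "input", "select", "textarea"]):
    label = tag.get_text(strip=True) or tag.get("aria-label") or tag.get("placeholder") or ""
    elements.append(f"<{tag.name}> {label}".strip())

# This compact listing is what would go into the prompt.
print("\n".join(elements))
```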

3

MjrK t1_jdm4ola wrote

For many (perhaps, these days, most) use cases, absolutely! The advantage of vision in some other cases might be interacting more directly with the browser itself, as well as with other applications, and multi-tasking... perhaps similar to the way we use PCs and mobile devices to accomplish more complex tasks.

2

plocco-tocco t1_jdj9is4 wrote

It would be quite expensive to do, though. You have to do inference very fast with multiple images of your screen; I don't know if it is even feasible.

9

ThirdMover t1_jdjf69i wrote

I am not sure. Exactly how does inference scale with the complexity of the input? The output would be very short, just enough tokens for the "move cursor to" command.

1

plocco-tocco t1_jdjx7qz wrote

The complexity of the input wouldn't change in this case, since it's just a screen grab of the display. It's just that you'd need to do inference at a certain frame rate to be able to track the cursor, which isn't that cheap with GPT-4. Now, I'm not sure what the latency or cost would be; I'd need access to the API to answer that.
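
Back of the envelope, with made-up numbers just to show the shape of the problem (none of these are real GPT-4 figures):

```python
# Rough latency/cost estimate for per-frame screen inference.
# Every number below is an assumption for illustration, not a published GPT-4 figure.
seconds_per_image = 30     # the "~30 s to comprehend an image" figure quoted elsewhere in the thread
frames_per_second = 2      # assumed polling rate needed to track a cursor
cost_per_image_usd = 0.01  # placeholder per-image price

duration = 5                           # seconds of screen activity
frames = frames_per_second * duration  # 10 frames

print(f"model time: {frames * seconds_per_image} s")          # 300 s of inference for 5 s of activity
print(f"estimated cost: ${frames * cost_per_image_usd:.2f}")  # $0.10 at the assumed price
```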

1

MassiveIndependence8 t1_jdl9oq9 wrote

You're actually suggesting putting every single frame into GPT-4? It'll cost you a fortune after 5 seconds of running it. Plus the latency is super high; it might take you an hour to process 5 seconds' worth of images.

1

ThirdMover t1_jdlabwm wrote

What do you mean by "frame"? How many images do you think GPT-4 would need to get a cursor where it needs to go? I'd estimate four or five should be plenty.

1

SkinnyJoshPeck t1_jdhis65 wrote

i imagine you could interpolate, given access to more info about the image post-GPT analysis. i.e. i’d like to think it has some boundary defined for the objects it identifies in the image as part of metadata or something in the API.

9

Single_Blueberry t1_jdhtc58 wrote

What would keep us from just telling it the screen resolution and origin and asking for coordinates?

Or asking for coordinates in fractional image dimensions.
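
The conversion back to pixels is trivial either way; a minimal sketch (the screen size here is just an example):

```python
# Convert model-reported fractional image coordinates into screen pixels.
def to_pixels(frac_x: float, frac_y: float, width: int = 1920, height: int = 1080) -> tuple[int, int]:
    return round(frac_x * width), round(frac_y * height)


print(to_pixels(0.25, 0.9))  # -> (480, 972) on a 1920x1080 screen
```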

7

MassiveIndependence8 t1_jdl9s3u wrote

The problem is that it can’t do math and spatial reasoning that well

1

Single_Blueberry t1_jdnyc2d wrote

Hmm I don't know. It's pretty bad at getting dead-on accurate results, but in many cases the relative error of the result is pretty low.

1

acutelychronicpanic t1_jdhksvy wrote

Let it move a "mouse" and loop the next screen at some time interval. Probably not the best way to do it, but that seems to be how humans do it.

3

__ingeniare__ t1_jdhxcds wrote

I would think image segmentation for UI to identify clickable elements and the like is a very solvable task

3

DisasterEquivalent t1_jdk10wf wrote

I mean, most apps have accessibility tags for all objects you can interact with (it's standard in UIKit). The accessibility tags have hooks you can use for automation, so you should be able to just have it find the correct element there without much searching.

2

eliminating_coasts t1_jdhkkw3 wrote

You could in principle send it four images that align at a corner where the cursor is, if it can work out how the images fit together.
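
A quick sketch of what I mean with PIL and pyautogui (the cropping is the whole trick):

```python
# Sketch: crop a screenshot into four quadrants that all meet at the cursor,
# so the shared corner of the four images is the cursor position.
from PIL import ImageGrab  # pip install pillow
import pyautogui

cursor_x, cursor_y = pyautogui.position()
screen = ImageGrab.grab()
width, height = screen.size

quadrants = [
    screen.crop((0, 0, cursor_x, cursor_y)),           # top-left
    screen.crop((cursor_x, 0, width, cursor_y)),       # top-right
    screen.crop((0, cursor_y, cursor_x, height)),      # bottom-left
    screen.crop((cursor_x, cursor_y, width, height)),  # bottom-right
]
for i, quadrant in enumerate(quadrants):
    quadrant.save(f"quadrant_{i}.png")
```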

1

Runthescript t1_jdknxkl wrote

Are you trying to break captcha? Cause this is definitely how we break captcha

0

Suspicious-Box- t1_jdzj7wr wrote

Just needs training for that. It's amazing, but what could it do with camera vision into the world and a robot body? Would it need specific training, or could it brute-force its way to moving a limb? The model would need to be able to improve itself in real time, though.

0

morebikesthanbrains t1_jdii4y7 wrote

But what about the black-box approach? Just feed it enough data, train it, and it should figure out what to do?

−1

dankaiv t1_jdhn2at wrote

... and computer interfaces (i.e. GUIs) have an extremely low noise-to-signal ratio compared to image data from the real world. I believe AI will soon be better at using computers than most humans.

63

thePaddyMK t1_jdlqyng wrote

I think so, too. IMO this will open up new approaches to software development. There has already been work using RL to find bugs in games, like climbing walls that you shouldn't be able to. With a multimodal model there might be interesting new ways to debug and develop UIs.

6

ginger_beer_m t1_jdi9e5j wrote

Carry this to its conclusion. Maybe not GPT-4, but a future LLM could interpret what's on the screen and drive the interaction with the computer itself. This would potentially displace millions of humans out of jobs as they get automated by the model.

50

nixed9 t1_jdifhni wrote

This is quite literally what we hope for/deeply fear at /r/singularity. It's going to be able to interact with computer systems itself. Give it read/write memory access and access to its own API, or the ability to simply visually process the screen output... and then... what?

Several years ago, as recently as 2017 or so, this seemed extremely far-fetched, and the "estimate" of a technological singularity by 2045 seemed wildly optimistic.

Right now it seems more likely than not to happen by 2030.

46

rePAN6517 t1_jdkinrg wrote

> This is quite literally what we hope for/deeply fear at /r/singularity

That sub is a cesspool of unthinking starry-eyed singularity fanbois that worship it like a religion.

12

ExcidianGuard t1_jdkrsnj wrote

Apocalyptic cults have been around for a long time, this one just has more basis in reality than usual

13

fiftyfourseventeen t1_jdlm1n7 wrote

Lmao it seems everyone used chatGPT for a grand total of 20 minutes and threw their hands up saying "this is the end!". I have always wondered how the public would react once this tech finally became good enough for the public to notice, can't say this was too far from what I envisioned. "What if it's conscious and we don't even know it!" Cmon give me a break

0

nixed9 t1_jdnm1qx wrote

> Sparks of Artificial General Intelligence: Early experiments with GPT-4

https://arxiv.org/pdf/2303.12712.pdf

3

fiftyfourseventeen t1_jdnqlqc wrote

That's really cool, but I mean, it's published by Microsoft, which is working with OpenAI, and it's a commercial closed-source product. It's in their best interest to brag about its capabilities as much as possible.

There are maybe sparks of AGI, but there are a lot of problems that are going to be very difficult to solve that people have been trying to solve for decades.

0

frequenttimetraveler t1_jdjhhz8 wrote

It will also render ChatGPT plugins obsolete. The chat will replace them by simply using the browser.

2

harharveryfunny t1_jdhkn99 wrote

> GPT-4 with image input can interpret any computer screen

Not necessarily - it depends how they've implemented it. If it's just dense object and text detection, then that's all you're going to get.

For the model to be able to actually "see" the image they would need to feed it into the model at the level of neural net representation, not post-detection object description.

For example, if you wanted the model to gauge whether two photos of someone not in its training set are the same person, then it'd need face embeddings to do that (to gauge distance). They could special-case all sorts of cases like this in addition to object detection, but you could always find something they missed.
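
i.e. the actual comparison is just a distance in embedding space; rough sketch, with the face-embedding model left as a stand-in:

```python
# Sketch: same-person check as a cosine similarity between two face embeddings.
# get_face_embedding() is a stand-in for whatever face model produces the vectors;
# object-recognition labels alone can't give you this.
import numpy as np


def get_face_embedding(image_path: str) -> np.ndarray:
    raise NotImplementedError  # placeholder face-embedding model


def same_person(path_a: str, path_b: str, threshold: float = 0.7) -> bool:
    a, b = get_face_embedding(path_a), get_face_embedding(path_b)
    cosine = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cosine > threshold
```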

The back-of-a-napkin hand-drawn website sketch demo is promising, but could have been done via object detection.

In the announcement of GPT-4, OpenAI said they're working with another company on the image/vision tech, and gave a link to an assistive vision company... for that type of use maybe dense labelling is enough.

30

TikiTDO t1_jdi8ims wrote

The embeddings are still just a representation of information. They are extremely dense, effectively continuous representations, true, but in theory you could represent that information using other formats. It would just take far more space and require more processing.

Obviously having the visual system provide data that the model can use directly is going to be far more effective, but nothing about dense object detection and description is fundamentally incompatible with whatever level of detail you could extract into an embedding vector. I'm not saying it would be a smart or effective solution, but it could be done.

In fact, going another level, LLMs aren't restricted to working with just words. You could train an LLM to receive a serialized embedding as text input, and then train it to interpret those. After all, it's effectively just a list of numbers. I'm not sure why you'd do that if you could just feed it in directly, but maybe it's more convenient not to have to train it on different types of inputs or something.

3

harharveryfunny t1_jdic1s3 wrote

>Obviously having the visual system provide data that the model can use directly is going to be far more effective, but nothing about dense object detection and description is going to be fundamentally incompatible with any level of detail you could extract into an embedding vectror. I'm not saying it would be a smart or effective solution, but it could be done.

I can't see how that could work for something like my face example. You could individually detect facial features, subclassified into hundreds of different eye/mouth/hair/etc/etc variants, and still fail to capture the subtle differences that differentiate one individual from another.

4

TikiTDO t1_jdiirji wrote

For a computer words are just bits of information. If you wanted a system that used text to communicate this info, it would just assign some values to particular words, and you'd probably end up with ultra long strings of descriptions relating things to each other using god knows what terminology. It probably wouldn't really make sense to you if you were reading it because it would just be a text-encoded representation of an embedding vector describing finer relations that would only make sense to AIs.

5

harharveryfunny t1_jdj5mom wrote

>it would just be a text-encoded representation of an embedding vector

Once you've decided to input image embeddings into the model, you may as well enter them directly, not converted into text.

In any case, embeddings, whether represented as text or not, are not the same as object recognition labels.

3

TikiTDO t1_jdj6dum wrote

I'm not saying it's a good solution, I'm just saying if you want to hack it together for whatever reason, I see no reason why it couldn't work. It's sort of like the idea of building a computer using the game of life. It's probably not something you'd want to run your code on... But you could.

2

harharveryfunny t1_jdj9if0 wrote

I'm not sure what your point is.

I started by pointing out that there are some use cases (giving face comparison as an example) where you need access to the neural representation of the image (e.g. embeddings), not just object recognition labels.

You seem to want to argue and say that text labels are all you need, but now you've come full circle back to agree with me and say that the model needs that neural representation (embeddings)!

As I said, embeddings are not the same as object labels. An embedding is a point in n-dimensional space. A label is an object name like "cat" or "nose". Encoding an embedding as text (simple enough - just a vector of numbers) doesn't turn it into an object label.

5

TikiTDO t1_jdjibnv wrote

My point was that you could pass all the information contained in an embedding as a text prompt into a model, rather than using it directly as an input vector, and an LLM could probably figure out how to use it, even if the way you chose to deliver those embeddings was doing a numpy.savetxt and then sending the resulting string in as a prompt. I also pointed out that you could, if you really wanted to, write a network to convert an embedding into some sort of semantically meaningful word soup that stores the same amount of information. It's basically a pointless bit of trivia which illustrates a fun idea.
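
Literally something like this sketch (the embedding here is random, standing in for a real image embedding):

```python
# Sketch: serialise an embedding to plain text and drop it into a prompt.
# Whether a given LLM can actually make use of it is exactly the open question.
import io

import numpy as np

embedding = np.random.rand(512).astype(np.float32)  # stand-in for a real image embedding

buffer = io.StringIO()
np.savetxt(buffer, embedding[None, :], fmt="%.4f")  # one long line of numbers

prompt = ("The following numbers are an image embedding:\n"
          f"{buffer.getvalue()}\n"
          "Describe what this embedding most likely depicts.")
print(prompt[:200])
```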

I'm not particularly interested in arguing whatever you think I want to argue. I made a pedantic aside that technically you can represent the same information in different formats, including representing an embedding as text, and that a transformer-based architecture would be able to find patterns in it all the same. I don't see anything to argue here; it's just a "you could also do it this way, isn't that neat." It's sort of the nature of a public forum: you made a post that made me think something, so I hit reply and wrote down my thoughts, nothing more.

2

reditum t1_jdhfxfa wrote

Check ACT-1 and WebGPT

22

WokeAssBaller t1_jdixm43 wrote

So WebGPT doesn’t quite do this, it uses a JavaScript library to simplify web pages to basic text

11

reditum t1_jdja3tw wrote

Oh well, that’s what I get for not reading the paper.

5

byteuser t1_jdiirr7 wrote

Are they still doing development on ACT-1? The last update seems to be from September last year.

4

reditum t1_jdij2tu wrote

I honestly don’t know. I also think their approach wasn’t great either. Maybe (hopefully) they ditched it for something better.

5

shitasspetfuckers t1_je1v7pf wrote

Can you please clarify what specifically about their approach wasn't great?

1

reditum t1_je29q9y wrote

From a comment on Hacker News: they made a Chrome extension, gathered all the training data from it, and it runs super slowly as well.

1

dlrace t1_jdh8ra8 wrote

The new plugins can be/are created by just documenting the API and feeding it to GPT-4, aren't they? No actual coding. So it seems at least plausible that the other approach would be, as you say, to let it interpret the UI visually.

16

loopuleasa t1_jdhrwkv wrote

GPT4 is not publicly multimodal though

9

farmingvillein t1_jdhua51 wrote

Hmm, what do you mean by "publicly"? OpenAI has publicly stated that GPT-4 is multi-modal, and that they simply haven't exposed the image API yet.

The image API isn't publicly available yet, but it is clearly coming.

9

loopuleasa t1_jdhuit0 wrote

I'm talking about consumer access to the image API.

It's tricky, as the system is already swamped with text.

They mentioned an image takes the model 30 seconds to "comprehend"...

13

MysteryInc152 t1_jdj8x5e wrote

> They mentioned an image takes the model 30 seconds to "comprehend"...

Wait, really? Can you link a source or something? There's no reason a native implementation should take that long.

Now I'm wondering if they're just doing something like this: https://github.com/microsoft/MM-REACT

3

yashdes t1_jdij1tl wrote

These models are very sparse, meaning very few of the actual calculations actually affect the output. My guess is that trimming the model is how they got GPT-3.5-turbo, and I wouldn't be surprised if GPT-4-turbo is coming.

0

farmingvillein t1_jdj9w98 wrote

> these models are very sparse

Hmm, do you have any sources for this assertion?

It isn't entirely unreasonable, but 1) GPU speed-ups for sparsity aren't that high (unless OpenAI is doing something crazy secret/special... possible?), so this isn't actually that big of an upswing (unless we're including MoE?), and 2) OpenAI hasn't released architecture details (beyond the original GPT-3 paper, which did not indicate that the model was "very" sparse).

1

SatoshiNotMe t1_jdkd8l5 wrote

I'm curious about this as well. I see it's multimodal, but how do I use it with images? The ChatGPT Plus interface clearly does not handle images. Does the API handle images?

1

farmingvillein t1_jdkdjye wrote

> I see it’s multimodal but how do I use it with images?

You unfortunately can't right now--the image handling is not publicly available, although supposedly the model is capable.

1

BullockHouse t1_jdidje0 wrote

I'm curious whether it could be instructed to play Minecraft in a keyboard-only mode, simply by connecting a sequence of images to keystroke outputs.
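
Roughly a frames-in, keystrokes-out loop; the `ask_gpt4` call is a placeholder for the not-yet-public image API:

```python
# Sketch of a frames-in, keystrokes-out loop. ask_gpt4() is a placeholder for
# the (not yet public) image API; pyautogui sends the chosen key to the game.
import time

import pyautogui

ALLOWED_KEYS = {"w", "a", "s", "d", "space", "e"}


def ask_gpt4(frame, prompt: str) -> str:
    raise NotImplementedError  # placeholder multimodal call


while True:
    frame = pyautogui.screenshot()
    key = ask_gpt4(frame, f"You are playing Minecraft. Reply with exactly one key from {sorted(ALLOWED_KEYS)}.")
    if key.strip() in ALLOWED_KEYS:
        pyautogui.press(key.strip())
    time.sleep(1)  # crude pacing; real play would need much tighter timing
```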

3

BullockHouse t1_jdil2ok wrote

I'm familiar! I'm curious though if it can generalize well enough to play semi-competently without specialized training. Has implications for multi-modal models and robotics.

2

Art10001 t1_jdihrod wrote

Probably. And if not, certainly someday.

1

CollectionLeather292 t1_jdj0jsl wrote

How do I try it out? I can't find a way to add an image input to the chat...

3

H0lzm1ch3l t1_jdjntxm wrote

Yes but why let the AI use a GUI when we can just give it an API …

3

MassiveIndependence8 t1_jdla8px wrote

Not all APIs are public, and LLMs aren't fine-tuned to process APIs.

2

signed7 t1_jdoy969 wrote

> LLMs aren't fine-tuned to process APIs

GPT-4 isn't. If plugins become a success, I reckon GPT-5 will be.

1

Spziokles t1_jdjc4wn wrote

So when playing League of Legends, it could tell you which enemy champion disappeared from their lane, and in how many seconds you should retreat to stay safe?

Curious how this will impact e-sports and whether it will be treated like doping in some form.

2

alexmin93 t1_jdmocbw wrote

The problem is that LLMs aren't capable of making decisions. While GPT-4 can chat almost like a sentient being, it's not sentient at all. It's not able to comprehend the limitations of its knowledge and capabilities. It's extremely hard to make it call an API to ask for more context. There's no way it will be good at using a computer like a user. It can predict what happens if you do something, but it won't be able to take the action itself. It's mostly a dataset limitation: it's relatively easy to train language models, since there's an almost infinite amount of text on the Internet. But are there any condition-action kinds of datasets? You'd need to observe human behavior for millennia (or install tracker software on thousands of workstations and observe users' behavior for years).

2

mycall t1_jdi3cko wrote

Can it detect objects in a photo? Maybe drive an RC car with it? :)

1

wind_dude t1_jdikwc4 wrote

I'm also curious about this, I reached out for developer access to try and test this on web screenshots for information extraction.

1

itsnotlupus t1_jdj2xpr wrote

Meh. We see a few demos and all of the demos work all of the time, but that could easily be an optical illusion.

Yes, GPT-4 is probably hooked to subsystems that can parse an image, be it some revision of CLIP or whatever else, and yes it's going to work well enough some of the time, maybe even most of the time.

But maybe wait until actual non-corpo people have their hands on it and can assess how well it actually works, how often it fails, and whether anyone can actually trust it to do those things consistently.

1

frequenttimetraveler t1_jdjhb4b wrote

Automatic tech support will be huge. Print screen, then 'computer, fix this problem'.

1

simmol t1_jdjsvuh wrote

Wouldn't it be more like the tech support is constantly monitoring your computer screen so you don't even have to print screen?

2

SeymourBits t1_jdlwrgi wrote

This is the most accurate comment I've come across. The entire system is only as good and granular as the CLIP text description that's passed into GPT-4 which then has to "imagine" the described image, often with varying degrees of hallucinations. I've used it and can confirm it is currently not possible to operate anything close to a GUI with the current approach.

1

shitasspetfuckers t1_jed7vuu wrote

Can you please clarify what specifically you have tried, and what was the outcome?

1

simmol t1_jdjq815 wrote

I think for this to be truly effective, the LLM would need to take in huge amounts of computer-screen images in its training set, and I am not sure that was done for the GPT-4 pre-trained model. But once this is done for all the computer-screen image combinations one can think of, it would probably be akin to a self-driving-car-style algorithm, where you can navigate accordingly based on the images.

But this type of multi-modality would be most useful if you have a person actually sitting in front of the computer, working side by side with the AI, right? Because if you want to eliminate the human from the loop, I am not sure this is an efficient way of training the LLM, since these kinds of computer-screen images are what help a human navigate the computer, and they aren't necessarily optimal for the LLM.

1

MyPetGoat t1_jdk8icb wrote

You'd need the model to be running all the time, observing what you're doing on the computer. Could be done.

1

simmol t1_jdkd4pf wrote

Seems quite inefficient, though. Can't GPT just access the HTML or other code associated with the website, and interact with it via text as opposed to images?

1

Puzzleheaded_Acadia1 t1_jdjx3oq wrote

I have a question: can I fine-tune gpt-neo-x (125M parameters) on a chat dataset to get decent, human-like answers? Because when I run it, it gives me random characters.

1

MyPetGoat t1_jdk8b7t wrote

How big is the training set? I’ve found small ones can generate gibberish
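
For what it's worth, the standard recipe with Hugging Face transformers looks roughly like this; just a sketch, and the checkpoint name and data file name are assumptions about your setup:

```python
# Sketch of a standard causal-LM fine-tune on prompt/response pairs.
# The checkpoint name and the JSONL file name are assumptions, not from the thread.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "EleutherAI/gpt-neo-125M"  # assuming you mean the 125M GPT-Neo checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-Neo has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Each line of chat_pairs.jsonl: {"prompt": "...", "response": "..."}
data = load_dataset("json", data_files="chat_pairs.jsonl")["train"]


def to_features(example):
    text = (f"User: {example['prompt']}\n"
            f"Assistant: {example['response']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=512)


tokenized = data.map(to_features, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt-neo-chat", num_train_epochs=3,
                           per_device_train_batch_size=4, learning_rate=5e-5),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```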

2

skaag t1_jdli80o wrote

I haven't seen a way in OpenAI's GPT-4 UI to submit an image. How do you do it?

1

emissaryo t1_je4jvzt wrote

Now I'm even more concerned about privacy. Governments will use it for surveillance and the more modalities we add, the more surveillance there will be.

1

banmeyoucoward t1_jdhg7kt wrote

I'd bet that screen recordings + mouse clicks + keyboard inputs made their way into the training data too.

0

nmkd t1_jdhmgpm wrote

Nope, it's multimodal in terms of understanding language and images. It wasn't trained on mouse movement because that's neither language nor imagery.

4

Deep-Station-1746 t1_jdhhbbg wrote

Nope. Ability to input something doesn't mean being able to use it reliably. For example, take this post - your eyes have an ability to input all the info on the screen, but as a contribution, this post is pretty worthless. And, you are a lot smarter than GPT-4, I think.

Edit: spelling

−19

3_Thumbs_Up t1_jdhp6zj wrote

Unnecessarily insulting people on the internet makes you seem really smart. OP, unlike you, at least contributed something of value.

11

Balance- OP t1_jdhjm0f wrote

It doesn’t have to use it yet for actions on its own, but it could be very useful context when prompting questions.

1

ObiWanCanShowMe t1_jdhil2v wrote

We are smarter locally, meaning within our own experience and capabilities; we are not "smarter" in the grand scheme.

0