
Te_Quiero_Puta t1_j5trn5e wrote

First, Dall•E, then ChatGPT, and now our WiFi's gonna 3D model us... Shit's getting weird, fast.


iboughtarock OP t1_j5ttqoi wrote

Don't forget about Riffusion. It's basically like DALL·E, but it was trained on images of spectrograms so it can generate new music.


grammar_nazi_zombie t1_j5tvcyc wrote

Apparently just linking the subreddit is too short of a comment, so I need to fill it with garbage useless text, because that’s way better than a simple, succinct comment.



esprit-de-lescalier t1_j5uv5cd wrote

It’s almost as if we are at the knee of an exponential curve just about to go vertical


Test19s t1_j5w82i1 wrote

I just hope we don't get knee-capped by scarcities.


ogcuddlezombie t1_j5uqwhe wrote

After recently finishing my yearly reread of Neuromancer, AIs have been in the headlines constantly.

Shit's about to get really weird, really fast


iboughtarock OP t1_j5ti13h wrote

Simple Wi-Fi routers can be used to detect and perceive the poses and positions of humans and map their bodies clearly in 3D, a new report has found.

With the help of AI neural networks and deep learning, researchers at Carnegie Mellon University were also able to create full-body images of subjects.

This proof-of-concept would be a breakthrough for healthcare, security, gaming (VR), and a host of other industries. It would also overcome issues affecting regular cameras, such as poor lighting or simple obstacles like furniture blocking a camera lens, while also eclipsing traditional RGB sensors, LiDAR, and radar technology. It would also cost far less and consume less energy than those technologies, researchers noted.

However, this discovery comes with a host of potential privacy issues. If the technology does make it to the mainstream, one’s movements and poses could be monitored — even through walls — without prior knowledge or consent.

Article Link


tlighta t1_j5upmr1 wrote

This really shows the potential for AI to see a bunch of things that we currently can't see.


wogolfatthefool t1_j5v6hid wrote

Can you imagine if HoloLens was still a thing, and being able to see all the radio waves in your area in real time because of AI integrated into it?


Winjin t1_j5vqpcl wrote

I think you wouldn't like it. It's probably like standing in the middle of a very busy 12-lane road where every second car is blasting music from open windows.

We have radio waves all around us.


thomascardin t1_j5x67be wrote

My dude let's start a company that makes this into an AR reality, since Apple is about to release the glasses that can make it happen.


myrddin4242 t1_j609guu wrote

Radio waves are a range of frequencies of light. As with visible frequencies, what you would see is the emitters lighting up, plus whatever reflective-to-radio surfaces are currently in view.


wogolfatthefool t1_j60bfq2 wrote

Yes, BUT through AI or some programming you wouldn't actually have to view it as such. It would appear in a visual form you could tolerate.


myrddin4242 t1_j60c9wo wrote

Sure, just pointing out that, since radio is just a different color, you wouldn’t see waves that are passing you by, anymore than you would see a beam of red light in the air.


greenappletree t1_j5tqtoy wrote

That's incredible. Were they able to distinguish different people, or was this a controlled room with a single individual standing still? I would imagine it gets exponentially more complicated with multiple people moving around.


Altruistic_Rate6053 t1_j5vgg6x wrote

I remember seeing a patent like this a couple years ago. Like most things these days we will soon get some new perks at the cost of companies being able to mine this positional data with no limit


Nanohaystack t1_j5tizx0 wrote

Echolocation has been a thing for a while; it's just that the normal radio background made it impractical to develop deterministic echolocation techniques for heavily trafficked applications, though attempts were made even in the early 2000s. This is essentially the same thing we saw in the film The Dark Knight in 2008. The use of machine learning to process such massive amounts of data is what enabled this application of a well-known technology.


Sweeth_Tooth99 t1_j5tr7e3 wrote

Would you need to modify the firmware of the router to be able to do that with it?


seamustheseagull t1_j5tvgv9 wrote

I'm going to guess that the "routers" part of the headline is theoretical at best.

I didn't read the article, but if I had to guess, this was probably accomplished in a lab environment using multiple custom-built wireless access points and a load of number-crunching, behind-the-scenes infrastructure to develop the 3D images.

This means that in theory, using a top-tier wireless mesh system with a special configuration of antennae, the correct firmware and a specific layout of the access points, they could be used to relay information to a central system which could crunch this data to produce 3D layouts.

There is zero chance this is coming to your $50 Netgear home router next week.


Sweeth_Tooth99 t1_j5tw3io wrote

Thought maybe a hacker with the right software could remotely use a router to image whatever is near it.


seamustheseagull t1_j5u8dwb wrote

Not at this stage. A malicious firmware in the future, perhaps, but the hacker would still need 3 devices (I read the article :D) in the room, all with compromised firmware.

If this application proves to be useful, then they will likely continue building on it to allow partial imaging with two or even one device, as well as mapping of other objects besides people, and through walls and other objects which are permeable from a WiFi POV.

But what they've done on this pass is fundamentally a form of reverse triangulation; using the data from each of three waypoints to discover data points within their boundaries that can't be seen.

Think of it like 3 people each standing on top of a hill, looking at an object in front of them. They all relay information to a 4th person about what they can and can't see. The 4th person can then use this information (after a lo-o-o-ot of calculations and line drawing) to draw a reasonably accurate 3D rendering of the object.

Actually, from a WiFi perspective it's like there's a big object made of clear fluid between them, so they're telling the fourth person not only what they can see, but how clearly they can see it. Hence the need for insane numbers of calculations that probably weren't even reasonably possible a decade ago.
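The hilltop analogy can be sketched as a toy trilateration problem. To be clear, this is my own illustration, not the paper's method — the real system works from WiFi channel data, not explicit distance reports — but it shows how a fourth party can recover a hidden position purely from what three observers relay:

```python
import numpy as np

# Toy trilateration: three observers each report only their distance to a
# hidden point, and a fourth party recovers the point's position from
# those reports alone.

observers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
hidden = np.array([3.0, 4.0])   # ground truth, unknown to the solver
dists = np.linalg.norm(observers - hidden, axis=1)

# Subtracting the first distance equation from the others linearizes the
# problem into A @ p = b for the unknown point p.
A = 2.0 * (observers[1:] - observers[0])
b = (dists[0] ** 2 - dists[1:] ** 2
     + np.sum(observers[1:] ** 2, axis=1) - np.sum(observers[0] ** 2))

p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(p)  # [3. 4.]
```

With noisy distance reports the least-squares solve still gives a best-fit position, which is roughly why more vantage points (and more calculations) buy you a better reconstruction.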


SsooooOriginal t1_j5vmpci wrote

Router, phone, pc, laptop, tablet, game console, IoT devices.

Practically any place with a wifi router is going to have two or more devices connected to it.

Most people, myself included, don't have much more than a glimpse of a clue as to how we can secure our own networks. Fuck.


myrddin4242 t1_j609sli wrote

Without multiple routers, the image would lack depth and perspective.


tuscanspeed t1_j5u7pmu wrote

> I didn't read the article,
>There is zero chance this is coming to your $50 Netgear home router next week.

Well..maybe you should.

>Researchers used three WiFi transmitters, such as those on a $50 TP-Link Archer A7 AC1750 WiFi router, positioned it in a room with several people, and successfully came up with wireframe images of those detected in the room.


LaserHammerXI t1_j5x48p2 wrote

Maybe you should read the paper. They use off-the-shelf routers and train traffic data against two cam feeds. It's virtually free. No fancy hardware to capture, and no heavy compute necessary.


Nanohaystack t1_j5ukavy wrote

You'd have to fiddle with the firmware in any case to get such capabilities, even if you weren't using the router itself for computing. If you reeeeeaaaalllyyyy optimized a machine-learned model that's fitted precisely to the conditions of a particular room, then it could be possible. There are wifi routers out there on the more expensive side with beefy CPUs that have like 1 gig of memory and can take a few hundred MB worth of firmware. Even stuff you can find off the shelf in a BestBuy now, like the Asus AX1800, can carry 128MB of flash, which is sufficient for a rudimentary machine learning setup, though with its 256MB of RAM and 4-core 1.5GHz ARM Cortex it would be rather slow at training a model and would definitely need external storage for swap space.

If I were approaching such a task today, I'd use two or three access points as "sensors," with a jerry-rigged radio driver streaming raw data straight to a dedicated machine learning setup. I've met tech wiz guys who are in the business of optimizing trained neural networks and they do some very impressive stuff, but even then, I'd be surprised if a run-of-the-mill home router CPU wouldn't burst into flames under all this load.


[deleted] t1_j5txnjf wrote



PM_ME_NUNUDES t1_j5v40mo wrote

It's probably gradient-descent-based inversion. You take measurements of the signal in the room with no people in it as your baseline, then introduce one person and observe the difference in signal responses, and build a 3D forward model that accurately reconstructs the observed data resulting from the perturbation of the signal by the human.
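A toy version of that idea, entirely my own sketch (the falloff forward model and all numbers are invented for illustration, not taken from the paper): measure the perturbation relative to the empty-room baseline, then descend on a position estimate until the forward model reproduces what the receivers observed.

```python
import numpy as np

# Toy gradient-descent inversion: three receivers record how a person
# perturbs the signal relative to an empty-room baseline; we descend on a
# position estimate until a forward model reproduces that perturbation.

receivers = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 10.0]])

def forward(pos):
    """Predicted per-receiver perturbation for a person at `pos`
    (assumed smooth falloff with squared distance)."""
    d2 = np.sum((receivers - pos) ** 2, axis=1)
    return np.exp(-d2 / 50.0)

true_pos = np.array([4.0, 3.0])
observed = forward(true_pos)   # with-person signal minus baseline

est = np.array([6.0, 5.0])     # initial guess
lr = 5.0
for _ in range(5000):
    pred = forward(est)
    resid = pred - observed
    # Analytic gradient of the loss 0.5 * sum(resid**2) w.r.t. est
    grad = np.sum((resid * pred * (2.0 / 50.0))[:, None] * (receivers - est), axis=0)
    est = est - lr * grad

print(est)  # should land close to the true position [4.0, 3.0]
```

The real problem is vastly higher-dimensional (a full body pose rather than one point, and raw channel data rather than a clean scalar per receiver), which is presumably where the deep learning comes in.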


VRrob t1_j5uwtq5 wrote

CIA has been using similar tech for decades to spy on people.


jalalao t1_j5v0hag wrote

Reminds me of the dark knight technology where they were using cellphones to spy on people in a similar way


bbrosen t1_j5wkqi9 wrote

with gps, bluetooth, wifi and lidar, cameras and microphones on most cell phones now, it is no wonder...


ayub_mja t1_j5vjlhq wrote

Does anybody remember a tv show call Continuum had a character named Alec Sadler? Saw this tech on the tv show years ago


bbrosen t1_j5wkunz wrote

this is why apps and software on cell phones can be a data mine for foreign countries who are our enemies


Avaruusmurkku t1_j5xpmay wrote

Great. Now we have to worry about the fucking Wifi spying on you. Modern era is privacy hell.


BigCantankerous t1_j5y8dqo wrote

Does this only work over 2.4 GHz? That's the frequency that interacts with water / human tissue. If it doesn't work over 5 GHz, does turning off 2.4 GHz close this door?

Installing multiple 5 GHz access points to cover a house only prevents it from being done over your network. It doesn't prevent you from getting scanned by all of your neighbors' routers/access points in surrounding houses.

Cyber operators have been carrying their phones in metal fabric bags/cases for over a decade to prevent the devices from sending/receiving signal or being tracked while in the case. Disabling sensors (dev options) and turning off location on Android devices would prevent it from your own device, but AFAIK it's impossible on iPhone. Turning off an iPhone just turns it into an AirTag for all other Apple devices.


heyman0 t1_j62855f wrote

well it looks like those conspiracy theorists were onto something.


gizmosticles t1_j5w3kmy wrote

Are we like 100 percent sure that being around routers 24/7 is like totally safe and nothing to worry about?