Uristqwerty t1_j9gu3tn wrote

> Unused RAM is wasted RAM

Your actual OS is well aware of that fact, and will use spare RAM to make everything faster, rather than letting one narcissistic program hog the extra. It'll cache files from disk so that commonly-used data loads even faster than an SSD could serve it. It'll zero out freed memory pre-emptively so that when a program demands a block of fresh RAM, the OS can hand some over immediately. Probably other background optimizations too.
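
If you want to see this in action, here's a quick sketch (Linux-only, assuming `/proc/meminfo`; field names vary slightly between kernel versions) showing the gap between "free" RAM and what's actually available once the reclaimable cache is counted:

```python
# Rough sketch: /proc/meminfo exposes how much RAM is truly idle versus how
# much the OS is spending on a file cache it can reclaim at any moment.
def meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key.strip()] = value.strip()
    return info

mi = meminfo()
print("MemFree (truly idle):", mi["MemFree"])
print("MemAvailable (free + reclaimable cache):", mi["MemAvailable"])
print("Cached (files kept warm in RAM):", mi["Cached"])
```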

8

Uristqwerty t1_j8rsa3r wrote

When it comes to consumer behaviour, it's a very close parallel: people flock to the cheaper product, actively saying "I don't care about the supply chain! Give me my cheap phone/AI art", while others keep trying to draw attention to the unethical practices behind it. Maybe the harm feels less tangible when it's spread out over orders of magnitude more people, or when you're so accustomed to abusive ToS conditions giving away your rights, but it's still there.

0

Uristqwerty t1_j8og1o5 wrote

The dataset used to train the model needs to be sourced ethically, just like the supply chain used by a physical manufacturer needs to be audited to ensure no supplier is using slave labour in a country too remote to attract much attention over the issue. In this case, I'd say the companies need to either dilute their datasets further, using few enough samples from any given person that the AI can't replicate the appearance of a specific person or the style of a specific artist except by improbable coincidence or extreme genericity, or else get consent from each person who (or whose work) appears in the training data.
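
As a rough sketch of what "diluting" could mean mechanically (the cap value and the `(creator, work)` pairing are made up for illustration):

```python
from collections import defaultdict

def dilute_dataset(samples, max_per_creator=5):
    """Keep at most a handful of works from any single creator, so the model
    sees broad variety rather than enough of one style to reproduce it."""
    counts = defaultdict(int)
    kept = []
    for creator, work in samples:
        if counts[creator] < max_per_creator:
            counts[creator] += 1
            kept.append((creator, work))
    return kept

# e.g. dilute_dataset([("alice", "img1"), ("alice", "img2"), ("bob", "img3")])
# keeps at most five "alice" entries no matter how many were scraped.
```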

Though this is about deepfakes, which I think involve users applying additional training material of the specific target, so that the AI over-fits to that one output. If the original AI was ethically/respectfully produced, then the people responsible for the additional rounds of training ought to be the ones at fault, at least as much as the prompt-writer (assuming they're not the same individual!). For that, the only good solution I can think of is legislation.
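
To make that over-fitting mechanism concrete, here's a toy sketch (a tiny PyTorch model, nothing to do with actual deepfake tooling): train long enough on a handful of fixed samples and the model memorizes them almost exactly.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
target = torch.randn(4, 16)  # stand-in for a few "images" of one target
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 16))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Far too many passes over far too little data: classic over-fitting.
for step in range(2000):
    loss = nn.functional.mse_loss(model(target), target)
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.6f}")  # near zero: the samples are memorized
```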

−1

Uristqwerty t1_j8j9tl9 wrote

The worst doctor leaving school will continue to learn throughout the rest of their career, shaping what they review to cover their known weaknesses. This AI, by contrast, is at its current peak: it has already finished learning everything it can from its dataset.

2

Uristqwerty t1_j534934 wrote

Better to leave the recycling programs in place, though. If the political will exists to upgrade what happens behind the scenes, improvement could take only a few short years. The public, on the other hand? Habits can transcend generations, so having everyone keep sorting their recyclables from their trash is valuable just to keep that opportunity open.

5

Uristqwerty t1_j13sro5 wrote

Developers' key value is their mindset for analyzing problems: the ability to spot vagueness, contradictions, and mistakes in the given task, go back to the client, and talk through the edge cases and issues. AI might replace code monkeys who never even attempted to improve themselves, but as with every no-/low-code solution, management will quickly find that a) it's harder than it looks, since they don't have the mindset to clearly communicate the task in language the tool understands (including domain-specific business jargon the AI won't have trained on, or references to concepts that only exist in that company's internal email discussions), and b) a dedicated programmer is efficient enough at the work that it's cheaper to delegate it than for a manager to do it themselves, so you might as well hand it to the specialist anyway and free up time for managing other aspects of the business.

Thing is, developers are constantly creating new languages and libraries in an attempt to more concisely write their intentions in a way the computer can understand. Dropping back to human grammar loses a ton of specificity, and introduces a new sort of linguistic boilerplate.
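
A hypothetical example of that lost specificity: the English request "give me the newest users" sounds complete, but the code version is forced to pin down everything the sentence left vague.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class User:
    name: str
    created_at: datetime

def newest_users(users: list[User], limit: int = 10) -> list[User]:
    # "Newest" = most recent created_at; ties broken by name so the result
    # is deterministic; "give me" = return at most `limit` of them.
    return sorted(users, key=lambda u: (u.created_at, u.name), reverse=True)[:limit]
```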

2

Uristqwerty t1_iug996y wrote

Pretty much everyone has a phone, right? And pretty much every phone has a TPM that can store cryptographic keys and self-destruct rather than ever let them leak, right? So you need two keys. First, a proof-of-age key that's the same for everyone, perhaps generated fresh each month by the government, where simply having access to the key says you're over the threshold and nothing more. Second, a unique-to-you key, generated by your phone, used only once a month on a fixed date to fetch the latest proof-of-age key; setting that one up may require visiting a government office in person once to verify your identity. With that, everyone over 18 in a nation looks alike to the websites asking for proof. To ensure the government can't sneakily swap out the proof key for targeted individuals, each month's public half would be published for all users and websites alike to see. Perhaps have the TPM verify a fingerprint or face match before unlocking the proof key.
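
A minimal sketch of the signing/verification shape of that scheme, assuming the Python `cryptography` package (TPM storage, the monthly fetch, and key distribution are all hand-waved away):

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Government: generate this month's shared proof-of-age keypair and publish
# the public half for every user and website to see.
monthly_key = Ed25519PrivateKey.generate()
published_public = monthly_key.public_key()

# Device: the private half would live in the phone's TPM after the monthly
# fetch; a website sends a random challenge and the phone signs it.
challenge = os.urandom(32)  # chosen fresh by the website per request
signature = monthly_key.sign(challenge)

# Website: a valid signature proves the device holds this month's key, i.e.
# its owner passed the age check, and every adult in the country produces
# an indistinguishable proof.
published_public.verify(signature, challenge)  # raises InvalidSignature if forged
print("over the threshold; nothing else revealed")
```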

And if that's a scheme that a cryptography amateur can come up with in minutes, based on a high-level understanding of TPMs and SSL certificates, imagine what someone who properly understands M-of-N secret sharing, zero-knowledge proofs, and all sorts of other clever mathematical tools could do, given months to refine their design and peers to identify and help correct flaws all along the way!

2

Uristqwerty t1_iufytzi wrote

In theory, the mathematicians working on cryptography systems (no relation to cryptocurrencies; cryptography is a vast field, and the rest of it is very useful for everyday life) could invent a scheme where you can prove your age without leaking any metadata to either the website asking or the government that verified your date of birth and identity at some point in the distant past.

In practice, most implementations will be utter shit and leak details everywhere. If someone does propose a good solution, the public won't have the expertise, or even the willingness to read the specification and think critically about it, to tell the difference, and will rally against good and bad solutions alike. Except the bad solutions will be pushed forward more fervently by the people poised to abuse them, so any reasonable one is all but guaranteed to be shot down.

1