nipsen

nipsen t1_jdi8t1x wrote

Yes and no, so to speak. My dearly bought Thinkbook has it. So does any Asus in any price range, and most HPs made in recent years. Literally anything in a slim form factor will have this solution, with the keyboard plastic-welded to the top chassis.

Ironically, a lot of the actual consumer-grade shit, like the Lenovo Yoga, etc., inherited its design from the old elephant-euthanasia brick devices, and so technically has a detachable keyboard module. But that module is glued and fused to what doubles as the back panel for the mainboard - which is also the solution on many of the older, IBM-ish Lenovos. The keyboard module itself is not difficult to produce or replace on these devices, and is just attached with a standard ribbon cable. But actually replacing it requires some form of OEM-specific voodoo.

I am told by entirely reliable industry insiders that this is not done to make sure laptops have to be specified per region, to keep these companies' artificial regional offices alive. At all. Nor is it that the keyboard is one of the few remaining things that can be glued to the laptop, so that when it breaks the whole thing has to be replaced - which would make it a great way to make sure random consumers also buy warranty "deals". I'm told none of these things are involved. At all.

1

nipsen t1_jdekv15 wrote

Oh, good. Finally usb is fast enough to not have to worry about internal motherboard contacts. Now we can actually just have a small enclosure and plug an external card into any laptop-sleeve with usb! Perfect!

But wait, dear nerds, hearken to this! - A company will now allow you to buy a specially fitted module, for absurd amounts of money, that you can /slot into the back of only a single type of laptop/ via USB4! Not bad, eh!!! Eh!!!

(Also, here's some flak about mxm egpu solutions, because it's not Mac enough)

Seriously, though - the keyboard modules are fantastic. That'd save me a day of pulling fused plastic pips off the plate on the back of a laptop's motherboard just to get a replacement keyboard in. Why not sell the laptop on that? "No need to replace your entire laptop if the spacebar breaks".

39

nipsen t1_j8hp1ym wrote

a) RISC-V is a general, abstract and formulaic scheme for how computing elements work together. There's nothing that stops Intel from offering their compute elements as part of a RISC-V design. It would have very obvious usage scenarios, and abysmal performance. But there is nothing stopping Intel from doing that.

b) There are parts of Intel that certainly had ambitions of not being married to the cisc-designs from the 90s forever. But those parts of the company mysteriously suffer layoffs, or else are shut down altogether. Projects they are involved in - by sheer chance, I'm sure - end up modifying the prototypes to include monolithic designs with "secret" cisc-optimisation on closed fpga-solutions.

c) Although Intel was promoting a "silicon pre-production stage" of RISC-V chips, that project is now cancelled. They are not producing any RISC-V chips -- no one is producing RISC-V chips. There will be chips based on the schema, for certain, but they will not be the kind of chip that has the makeup of a protected, instruction-set-bound, specific fpga. In other words: nothing stops Intel from marketing their bullshit offering as "RISC-V", even though it might not offer much in terms of performance, or really use the overall schema at all. That's what they have been gearing up towards, and that's what failed. That's why they now have nothing in it. It's literally not compatible with their "business model".

d) The RISC-V International foundation has - by sheer chance, I'm sure - relocated to Switzerland, specifically to escape very specific concerns about US trade regulations and potential lawsuits.

e) The contribution to this foundation from Intel was 1bn dollars. It's a vanishingly small sum in the scheme of things.

Lastly: is RISC-V really a competitor to ARM? I hear tons of people say that, and I certainly read it in industry-insider-infested American (spiritually or otherwise) publications. But is it really the case?

What is the case is that ARM offers a very specific type of solution, where their basic functions can be enhanced by adding various instruction sets. The m1 at Appul is probably a well-known enough example, where adding instruction sets to the hardware layer - both programmable to a certain extent and specified beforehand - is part of the design. A lot of ARM's customers do not use this part of the design at all, though. And there has been a very specific push from Qualcomm, among others, to gear ARM into having higher core speeds and better out-of-order, single-instruction performance.

ARM's reaction to that has been to produce what the customers want. But there is a very obvious problem here: as these chips are geared more and more towards areas where the design just does not have any actual strengths, they will be immediately gobbled up by Intel's x86 offerings, or failing that, AMD's. So as an alternative RISC-based schema takes shape -- a screaming necessity if you know anything useful about programming, I could add -- ARM will be able to compete with general RISC designs on specific applications, while the codebase that both ARM and RISC-V need to have any point whatsoever gets developed.

As opposed to being supplanted by an attempt to get x86 into the mobile sphere, and into everything else, like Intel has been attempting for decades now - and where they have actually succeeded to a certain degree, thanks to the power of marketing, lawsuits, and a throwaway budget that dwarfs the GDP of a medium-sized European country.

So no - ARM is not a direct competitor to RISC-V, or vice versa. The road back to RISC will happen, and Intel will not be part of it. At least not in the way the company does business now, or the way it has done business in the past. Intel will disappear as the company it is now if it ever becomes involved in making general contributions to RISC-V schema-type chip clusters. And that's just not going to change, regardless of how many billions of dahllars go into marketing.

You will claim differently until the end of time, I'm sure. But your opinion, as shocking as it may seem, does not, in fact, alter reality.

0

nipsen t1_j8eyvsi wrote

I don't know. No one does, after many, many years. I mean, other than screwing over the competition with legal wrangling.

The joke is that Intel has quite literally stalled, or outright managed to crush, several attempts to put x86 instruction-set emulators and cisc-implementations on various RISC computers, now that instruction-set-level storage is no longer prohibitively expensive on a computation unit. The actual legal details of this ongoing feud are so sordid and ridiculous at the same time that in several cases even judges with no technical background have decided the arguments don't hold up. But at the moment, if you wanted to do cisc-type optimisation of an x86 emulation engine - whether with programmable instruction sets or not - it runs afoul of Intel's definition of a PC. So do chip constructions that simply store instruction sets on general computation cores.

So there is, in a sense, still a requirement that an abstraction of a RISC implementation cannot actually use x86 instruction sets at all. Which is why it is such a big deal that Google throws its weight behind a general RISC-V abstraction layer, in an attempt to make this a full ecosystem. I'm sure Intel will stick to the existing market forever in the same way. And surely there will be endless amounts of lawsuits coming the instant someone figures out how to emulate x86 VMs with any speed on RISC-V architectures. At this point I wouldn't even be surprised if Intel claims that any architecture technically capable of executing an emulated x86 instruction set in hardware infringes on this utility of the x86 instruction set that Intel has defined as a PC.

Anyway - at some point Intel will be gone, and this idiocy will end. But judging by how it's being done now, it won't end until the company is bankrupt.

2

nipsen t1_j8ewzay wrote

None of the terms you wrote there make any sense. And the rest is at best just false.

But if it helps you support something that doesn't suck the air out of the global integrated-circuit market, with the great power of your opinion on the Internet -- sure, buddy. I'm sure it'll be great for the Internet of Things...

Seriously, though -- what in the world do you mean with any of that?

0

nipsen t1_j2dn1d3 wrote

It's a little bit less magical than what people are suggesting here... You don't actually pick up sound as quickly, so to speak, as a microphone does. So there's enough time to invert the sound wave and play it back before the vibrations that produce the sound you hear reach your ear.

An alternative way to think about it: you delay the incoming sound slightly and then play it back as perfectly out of sync (phase-inverted) as you can. The real question is the response - how quickly you can generate the wave accurately.

The trick is that you should be producing a sound wave that matches what is actually heard behind the plugs, for example. And you really don't want to play back a really, really loud sound, or ramp the wave up too quickly based on some extrapolation, etc. It's typically not perfect, so you get noise. You can also mask it all, and increase the response, so to speak, by having a noise floor.

But yeah, if you play back some fairly low-volume sound where the noise is not physically noticeable, and you allow for some noise at the bottom here -- an exactly out-of-sync wave is going to cancel the sound out, in the sense that your eardrum is not going to vibrate and make you hear the sound.
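If you want to see the idea in numbers rather than words, here's a toy numpy sketch of the inversion trick. The drone frequency, levels and latency are all made up for illustration - this is nowhere near a real ANC pipeline:

```python
import numpy as np

fs = 48_000                                  # sample rate (Hz)
t = np.arange(fs) / fs                       # one second of samples
noise = 0.1 * np.sin(2 * np.pi * 120 * t)    # a low-volume 120 Hz drone

# The "perfectly out of sync" wave is just the inverted signal.
anti = -noise
print("ideal residual RMS:", np.sqrt(np.mean((noise + anti) ** 2)))

# In reality the anti-noise is generated slightly late, so the
# cancellation is imperfect and you get residual noise, not silence.
delay = 5                                    # ~0.1 ms of latency (made up)
late_anti = np.concatenate([np.zeros(delay), anti[:-delay]])
print("late residual RMS: ", np.sqrt(np.mean((noise + late_anti) ** 2)))
```

The later the anti-noise arrives, the worse the cancellation - and the error grows with frequency, which is part of why this trick handles engine drone so much better than voices.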

2

nipsen t1_iy7r9qz wrote

Np. But sorry if I sound like I'm .. really mad at someone, or something... But there's just so much bs floating around.

I guess I should have added something about where the amplifier even comes in. Sometimes you might just have a power source and some digital transfer standard. I think most of the time this is what you have now: a laptop with usb-c, or a phone with usb-c. You might have a similar setup with hdmi. So what you really require in that case is a) a very small dac that produces something reasonable (what's needed here is a 1-cent chip), and b) a very small amplification of that converted signal.
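To make those two stages concrete, here's a toy numpy sketch of (a) and (b). The gain and noise-floor numbers are invented for illustration - no real dac or amp works on eight samples like this:

```python
import numpy as np

# A handful of 16-bit PCM samples standing in for the digital source.
pcm = np.array([0, 8192, 16384, 23170, 32767, 23170, 16384, 0],
               dtype=np.int16)

# (a) The dac stage: map integer samples onto a -1.0..1.0 analog-style range.
analog = pcm.astype(np.float64) / 32768.0

# (b) The amplification stage: a small gain, plus a tiny noise floor.
gain = 1.2                                    # made-up, modest amplification
noise_floor = 1e-4 * np.random.randn(len(analog))
out = np.clip(gain * analog + noise_floor, -1.0, 1.0)

print(out.round(4))
```

Note what happens to the loudest sample: it hits the clip line at 1.0, and that flat top is exactly the kind of distortion you want the amplification stage to avoid.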

A good amplifier will do both of those from the digital source, and will have noise filters in that process, very often along with some equaliser voodoo. This stage is typically where the actual differences in sound "feel" come from. For example, when you listen to something from Hegel, you are not just getting the raw signal (if there is such a thing); you are getting something that they feel mimics the feel of a concert hall. And that comes for the most part in the conversion stage (where you map out the entirely non-analog signal), in the noise filters after it, and finally in the ranges where the amplifier works best against such-and-such speakers.

So imagine a different scenario, where you have a leveled output from an analog source (like a cassette player, or a cd-player with its own conversion stage), and now your amplifier is supposed to magick this signal into something that can produce richness and beauty on a huge rig. This isn't trivial at all, and sometimes it isn't really possible. Now you're suddenly talking about a sound-feel from the amplification that sometimes very clearly and obviously favours a high-impedance speaker setup, where the sound coming out is "cleaner" and "crisper" than what you would get if noise were being amplified, if there weren't noise filters, if you didn't sacrifice some of the input to get a good spectrum out, etc., etc. This is the realm a lot of the really knowledgeable people who know sound come from, and in that realm you can hear the difference between a good and a bad amp. And it is not just subjective: there are very specific things being done to the noise and signal in that amplification process that cause signatures and "feels" that may be good or bad, or whatever.

But since we have a digital source now, and can skip a lot of these issues, it is first of all possible to get really high-definition audio output without garble, right? There's less and less need to level recordings; people have dacs that do that. That's huge. Should be, at least. Because not only could you get the actual sound of the source at much higher definition, you can level it against whatever your target is, right there.

So what we are really looking for is just a dac that does minimal things to the audio input, and then an amplifier that just amplifies that analog signal a tiny amount without causing too much distortion. Like...

https://www.adv-sound.com/products/accessport-lite

And you suddenly have an analog signal from a phone that is going to be objectively a billion times better than what a 10k-Euro amplification theater system would have been fed just 15 years ago. And on top of that, you have amplification drawing on the usb power source, getting you potential power peaks that can comfortably handle low volume on semi-high-impedance speakers.
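For scale - assuming a plain usb 2.0 port at 5 V / 500 mA, which is the spec default rather than what any given port actually delivers:

```python
# How much headroom a usb power budget gives, compared with what
# low-volume listening on semi-high-impedance headphones needs.
usb_mw = 5.0 * 0.5 * 1000     # 2.5 W available from the port, in mW
needed_mw = 1.0               # ~1 mW is already plenty for quiet listening

print(f"usb budget: {usb_mw:.0f} mW, headroom: {usb_mw / needed_mw:.0f}x")
```

So the power source is not the bottleneck in this setup; the dac and amp stages are.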

Can you do better than this 29-dollar thing for an amp? Yes. Can you do worse? Yes, absolutely; there are cheaper ways to do the same thing. Can you do /significantly/ better than this 29-dollar thing, though, in the context I mentioned - the amplification of a record player or a cd-player with its own dac, when outputting to a headset? In the sense of... could you capture more of the sound-picture by switching to a gigantic amp? That is actually questionable. XD

0

nipsen t1_ixyjg4v wrote

Short and semi-wrong simplification: a small amp, like a phone-amp, can produce power peaks that are sort of technically sufficient to blast your ears off, even on a pretty high-impedance driver (like on the hd600). But sustaining a more complex sound-picture, and still creating the dynamics (i.e., the differences in volume that are, presumably, in the recording of anything that isn't an electronic metronome), needs a longer and higher power threshold (which at normal listening volumes might not be very high, but is still significantly higher than a phone-amp's).

So... when you test the physics of it, you could get results on a very light amplifier - and they would even correspond to your listening experience - where the amplification required to produce an accurate enough tone for a short amount of time is extremely light. I've done some testing similar to what produces the power curves and frequency responses you see in some of these tests - and for short enough intervals at reasonable volumes, a massive power threshold isn't necessary. At all. A 4W laptop amplifier competes with a Chi-fi star from Marantz that happily drives two large front speakers at 2x100W until the glasses ring in the cupboard.
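A rough way to see why a short test tone flatters a light amp: compare the peak-to-average ratio (the "crest factor") of a steady sine against a more dynamic, music-like signal. The signals below are synthetic stand-ins, not real recordings:

```python
import numpy as np

fs = 48_000
t = np.arange(fs) / fs

sine = np.sin(2 * np.pi * 440 * t)            # a steady test tone
# Crude "music": the same tone under a spiky, random volume envelope.
music_like = sine * (0.2 + 0.8 * np.random.rand(fs) ** 4)

def crest_db(x):
    rms = np.sqrt(np.mean(x ** 2))
    return 20 * np.log10(np.max(np.abs(x)) / rms)

print(f"sine crest factor:       {crest_db(sine):.1f} dB")        # ~3 dB
print(f"music-like crest factor: {crest_db(music_like):.1f} dB")  # ~10 dB
```

The amp has to deliver the peaks, but a steady-tone power curve only shows the average - which is how a light amplifier can measure fine and still fall apart on dynamic material.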

But if something more complex happens in the music (and the source you're playing back has a higher density in the actual recording - not something that has been "upscaled" to 320kbps mp3 or the like - so that you're even going to be able to hear anything other than the boom in the lower tones, and various other things that are simply not reproduced in the actual speaker driver) -- then the tones/music will change once you have a sufficient power threshold to draw on.

It still doesn't need to be very high, which is why a decent enough carryable amp is enough (and why Chi-fi is a thing - the level required for more than decent playback is not actually that high. The design-need for incredibly high-ohm speakers also isn't really there, because people don't have power-grid noise or similar things that really cause noise - we're not at that level of recording or monitor sensitivity on your typical home product, which plays back, at the very most, 320kbps compressed audio sent through any number of noise filters on the way). The idea that a cd-recording - compressed down to cd-format in the first place, then washed through the cd-player's horrible variable codec, then run through a noise filter before it's amplified, and then reduced slightly again through the amp - needs a million-Watt amplifier from Klipsch or Hegel to sound good on a pair of normal-sized speakers in a living room is beyond ludicrous.

But basically, on the first step, you're looking for a power output that can make the headphones work consistently. If you have to turn the volume very high up to get much of anything, you can hear the effect very easily when you play something other than a test tone: the difference between loud and quiet is basically not there, or you're missing the reproduction of some frequencies when other things are happening as well. I.e., you get the test tone, but while playing two at once, there's a variation in the output.

The second step is getting something with a high enough threshold that the complex parts of the recording (some people notice their blip-boop music not having a great enough bass-crunch while the keyboard is tapping an atonal horror that would make Schoenberg flutter in delight) stay the same when you add another element. On classical recordings you could have the same effect because of noise around where the recording was made, for example. So that - if the recording wasn't leveled through a press and then cut out in an a4-shaped cutout - the noise from the chairs, and various echo-bits and resonance, would actually sound fantastic on a good setup (as much as the recording equipment allowed). But played back on a kitchen-player (the gold standard for "modern" digital formats), the same recording would sound inconsistent, unpleasant to listen to, dry and empty.

The power threshold for reproducing something with dynamics on a 300-ohm headset is not very high. But it is higher than a phone-amp's, which typically reaches the peaks its power supply can handle, or is designed to handle, at around 20 ohm. The "gold standard" for mobile phone headsets, as with bluetooth headsets, is then basically to produce headphones in the 16-ohm range.
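The arithmetic behind the impedance point is just P = V²/Z. Assuming a phone output of roughly 1 V RMS - a typical ballpark, not a measurement of any particular phone:

```python
# Power available into different headphone impedances from a fixed
# output voltage: P = V^2 / Z. The 1 V RMS figure is an assumption.
V_RMS = 1.0  # volts

for z_ohm in (16, 32, 300):
    p_mw = (V_RMS ** 2) / z_ohm * 1000.0
    print(f"{z_ohm:>3} ohm: {p_mw:6.1f} mW")   # 62.5, 31.2, 3.3 mW
```

From the same voltage, a 300-ohm headset gets roughly a twentieth of the power a 16-ohm earbud gets - which is exactly why the 16-ohm range became the phone "gold standard".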

That doesn't mean they suck; it's just that higher-impedance speakers would sound like trash on that system (in certain situations). To make things worse, though, some of these large headsets also work, and are designed to work, at lower power - so you don't get muted or garbled tones on an under-amplified system (which again brings with it an endless amount of design-for-most-common-usage drawbacks). It's just that you're not getting out what the headset can produce.

Which, admittedly, very often isn't an actually high level. For example, if you can have a sustained output without drops through, say, 5 seconds on a very low-power amp (such as a phone-amp - they do exist), you can actually get away with that at low volumes on fairly high-ohm speakers. Because what you were looking for was not peak power on all frequencies in some astonishingly modern music recording that is unfit for human consumption -- but a lot less than that.

Or put a different way: the difference between "hi-fi" and "crap" was actually the amplifier's ability to output the desired, still very low, power for 1 second instead of having a drop-off after 0.3 seconds. That retains all the dynamics at a humanly listenable volume, reproduces all the detail, etc., even though the amp is not going to melt your ears (at least not literally). Note that you also get these power drop-offs on more expensive amplifiers, just in a different range.
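As a toy model of that drop-off (the hold times and the sag amount are invented - real amps sag for all sorts of reasons):

```python
import numpy as np

def avg_capability(hold_s, sag=0.5, window_s=1.0, fs=1000):
    """Average output capability over `window_s`, for an amp that holds
    full output for `hold_s` seconds and then sags to `sag` of it."""
    t = np.arange(int(window_s * fs)) / fs
    envelope = np.where(t < hold_s, 1.0, sag)
    return envelope.mean()

for hold in (0.3, 1.0):
    print(f"holds {hold:.1f}s -> {avg_capability(hold):.2f} of full output "
          f"over a 1-second passage")
```

Same rated power, very different behaviour over a one-second musical phrase - which is the "hi-fi vs crap" difference described above.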

9