Submitted by ArchariosMaster t3_127czbb in explainlikeimfive
spikecurtis t1_jeds1fs wrote
The difference is largely historical.
Data storage devices are byte-addressable, in that the smallest unit of data you can read or write is one byte. We long ago standardized on 8 bits per byte, but in early computing there were different byte sizes.
Early data transmission devices (what we now call network hardware) measured their speed in bits per second. The byte wasn’t standardized as 8 bits yet, and different ends of the transmission might have different byte sizes.
Modern networking gear basically always sends data in some whole number of 8-bit bytes (in the context of networking they are sometimes called octets to be absolutely clear they are 8-bit units), but the bits per second terminology persists.
You wouldn’t want to be the manufacturer who unilaterally shifts to bytes per second when your peers are still marketing bits per second, for fear of people thinking your stuff is slower than it is!
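To see why that marketing fear is real: the same link speed looks 8× smaller when quoted in bytes. Here's a tiny sketch of the conversion (the helper name is just for illustration):

```python
def bits_to_bytes_per_sec(bits_per_sec: float) -> float:
    """Convert a bits-per-second rate into bytes per second (1 byte = 8 bits)."""
    return bits_per_sec / 8

# A "100 Mbit/s" link actually moves only 12.5 megabytes each second.
link_bits = 100 * 1_000_000          # 100 Mbit/s in bits per second
link_bytes = bits_to_bytes_per_sec(link_bits)
print(f"100 Mbit/s = {link_bytes / 1_000_000} MB/s")  # prints "100 Mbit/s = 12.5 MB/s"
```

So "100" on the box beats "12.5" on the box, even though they describe the identical link.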