Comments


phiwong t1_jedorky wrote

Partly tradition, partly the hardware and partly the software.

In computing devices, data is moved around, operated on, and stored in bytes of 8 bits (although modern ones work just as well on larger multiples, like 16-bit double bytes). So the byte became the default unit for the fundamental data element, and the size of "stuff" is conveniently expressed as a number of bytes.

For transmission, however, most modern communications and networks operate over a single serial line. Parallel communication methods exist, but they tend to be local. Things like USB, Wi-Fi, and the internet use hardware that sends data as a stream of bits, so the speed of transmission is conveniently measured in bits per second.
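As a rough sketch of what that means in practice (the 100 Mbps link and 500 MB file below are just made-up example numbers):

```python
# Rough arithmetic for turning an advertised link speed (bits per second)
# into the bytes-per-second figure a download dialog shows.
# The 100 Mbps link and 500 MB file are made-up example numbers.

BITS_PER_BYTE = 8

link_speed_bps = 100_000_000    # 100 megabits per second (the advertised number)
file_size_bytes = 500_000_000   # 500 megabytes (how storage is reported)

link_speed_Bps = link_speed_bps / BITS_PER_BYTE       # 12.5 million bytes per second
download_seconds = file_size_bytes / link_speed_Bps   # ignores protocol overhead

print(f"{link_speed_Bps / 1e6:.1f} MB/s, about {download_seconds:.0f} s to download")
# -> 12.5 MB/s, about 40 s to download
```

That factor of 8 is why a "100 meg" connection tops out around 12.5 MB/s in a download dialog.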

2

yalloc t1_jedpw6q wrote

Marketing for the most part. A bigger number sounds better for speeds, so things are advertised with the bigger number. They don’t expect grandma to know the difference between a bit and a byte, and grandma of course is gonna take the speed that’s 8 times bigger.

2

spikecurtis t1_jeds1fs wrote

The difference is largely historical.

Data storage devices are byte-addressable, in that the smallest unit of data you can read or write is one byte. We long ago standardized on 8 bits per byte, but in early computing there were different byte sizes.
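A tiny sketch of what byte-addressable means in practice (the file here is just a throwaway example): you can seek to any byte offset and read or write a single byte, but nothing smaller.

```python
# Minimal sketch of byte addressability: seek to an arbitrary byte offset
# and read or write exactly one byte. There is no way to ask the OS for
# a unit smaller than a byte. "example.bin" is just a throwaway file.

with open("example.bin", "wb") as f:      # create a small throwaway file
    f.write(bytes(2048))

with open("example.bin", "r+b") as f:
    f.seek(1234)              # jump straight to byte offset 1234
    one_byte = f.read(1)      # smallest possible read: one byte (8 bits)
    f.seek(1234)
    f.write(b"\x7f")          # smallest possible write: one byte
```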

Early data transmission devices (what we now call network hardware) measured their speed in bits per second. The byte wasn’t standardized as 8 bits yet, and different ends of the transmission might have different byte sizes.

Modern networking gear basically always sends data in some whole number of 8-bit bytes (in the context of networking they are sometimes called octets to be absolutely clear they are 8-bit units), but the bits per second terminology persists.

You wouldn’t want to be the manufacturer who unilaterally shifts to bytes per second when your peers are still marketing bits per second, for fear of people thinking your stuff is slower than it is!

5

satans_toast t1_jedw2yf wrote

Computer networks send data one bit at a time. Each bit is a state change, either electrical or a pulse of light. It happens faster and faster as technology improves, but it’s still just one bit at a time. Hence, bits per second.

It doesn’t matter whether it’s a file, a digitized real-time data stream, or a bunch of random characters: it’s sent one bit at a time.
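A toy sketch of what "one bit at a time" looks like, flattening a couple of bytes into the stream of 0s and 1s that actually crosses the wire (the message is just a made-up example):

```python
# Toy illustration of serial transmission: flatten a few bytes into the
# stream of individual bits that goes over the wire one state change at a time.
# The message below is just a made-up example.

message = b"Hi"                          # two bytes = 16 bits

for byte in message:
    for i in range(7, -1, -1):           # most significant bit first
        print((byte >> i) & 1, end="")   # one state change per bit on the wire
print()
# -> 0100100001101001   ('H' = 0x48, 'i' = 0x69)
```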

Also, when you’re transmitting a file over a network, it’s not just the file that’s being sent. Overhead bits are added to the stream: source & destination addresses, fragmentation flags, error-checking bits, and a slew of other bits used for controlling the traffic somehow.
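As a rough illustration of that overhead, here's the arithmetic for one full-size TCP/IPv4 segment on Ethernet (these are the standard minimum header sizes; real traffic usually adds options, VLAN tags, and other framing on top):

```python
# Rough overhead arithmetic for one full-size TCP/IPv4 segment on Ethernet.
# Header sizes are the standard minimums, so this is a lower bound on overhead.

eth_header = 14 + 4     # Ethernet header + frame check sequence (bytes)
ip_header = 20          # IPv4 header, no options
tcp_header = 20         # TCP header, no options
payload = 1460          # typical maximum file data per frame

total_on_wire = eth_header + ip_header + tcp_header + payload
efficiency = payload / total_on_wire

print(f"{total_on_wire} bytes on the wire carry {payload} bytes of file data "
      f"({efficiency:.1%} efficient)")
# -> 1518 bytes on the wire carry 1460 bytes of file data (96.2% efficient)
```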

1

spikecurtis t1_jefl68l wrote

Some modulation schemes send multiple bits at a time. For example, in phase-shift keying, the phase of an oscillating wave encodes the information. If you do this with 4 phases, 90 degrees apart, you send 2 bits at a time; with 8 phases, 3 bits; and so on.
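A tiny sketch of that idea for 4-phase (QPSK) keying, where each pair of bits picks one of four phases (the specific bit-to-phase mapping below is just an illustrative choice, not any particular standard's):

```python
# Sketch of 4-phase (QPSK) keying: each symbol carries log2(4) = 2 bits.
# The bit-pair -> phase mapping is an illustrative choice; real standards
# define their own mappings.

from math import log2

PHASES = {            # two bits per symbol, phases 90 degrees apart
    (0, 0): 0,
    (0, 1): 90,
    (1, 0): 180,
    (1, 1): 270,
}

bits = [1, 0, 0, 1, 1, 1]     # example bit stream
symbols = [PHASES[tuple(bits[i:i + 2])] for i in range(0, len(bits), 2)]

print(symbols)                                 # -> [180, 90, 270]
print(log2(len(PHASES)), "bits per symbol")    # -> 2.0 bits per symbol
```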

1