Comments


WexfordHo t1_iuddhwd wrote

There are a lot of advances which went into that, but the single biggest one that made the shift possible was the birth and refinement of the transistor. Before that you had to use vacuum tubes or something similar, which are individually large, take up a lot of space, emit heat, and can only be miniaturized so far. Transistors started off large, but now we fit billions of them on a single chip.

12

mih4u t1_iudg4u9 wrote

Like others already stated, it's mainly due to the miniaturization of the hardware.

A computer, at its lowest layer, consists of logic gates. These are 'blocks' of switches that can, for example, compare input signals and compute simple logic like AND, OR, etc. Those switches need to be switchable by other logic to make a "computer" (see the sketch below).
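A minimal Python sketch of that idea (not how real hardware is built, just the logic): each gate is a tiny yes/no function, and wiring a couple of them together already gives you arithmetic, here a half adder that adds two one-bit numbers.

```python
# Each "gate" is just a tiny yes/no (True/False) decision, like a switch.
def AND(a, b):
    return a and b

def OR(a, b):   # shown only to illustrate another basic gate
    return a or b

def XOR(a, b):
    return a != b

# Wiring gates together already does arithmetic:
# a half adder turns two one-bit inputs into a sum bit and a carry bit.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)

for a in (False, True):
    for b in (False, True):
        s, carry = half_adder(a, b)
        print(f"{int(a)} + {int(b)} -> carry {int(carry)}, sum {int(s)}")
```

Real chips do the same thing, just with billions of transistor switches instead of Python functions.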

In the beginning those were, for example, electrical relays. The first purely electronic switch was, as far as I know, the vacuum tube. Those things were quite big.

Then came the transistor, which could be switched on by a small voltage.

Then came the monolithic integrated circuit, which allowed complex combinations of those transistors to be packed very densely onto one chip.

So this process shrank a component you could once hold in your hand down to microscopic transistors, built at roughly a 5 nm process on a silicon wafer, which also considerably reduced the energy required for computations.

6

LargeGasValve t1_iudiqw0 wrote

Because the first computers used vacuum tubes, which are a few centimeters in size. We've since invented transistors, a much better alternative that can do the same job but is only nanometers in size, and this means we can fit billions of them into a space smaller than a single vacuum tube.

1

Ok_Pizza4090 t1_iudsapx wrote

The electronic logic elements that they are made of got smaller and smaller. First they were electric relays (about the size of a ping pong ball), then vacuum tubes, then transistors. The transistors consist of materials that conduct electricity under certain conditions. The transistors became smaller and smaller. A single silicon chip can now contain many millions of transistors, each of which has the function of one electric relay. The limit is the (three-dimensional) geometry of the transistors on the chip and the (photographic/deposition) process used to make them.

1

Farnsworthson t1_iudtih3 wrote

There are also big differences in how much processing power you expect of a computer. As other people have said, the changes in tech that led to the microchip increased the amount of computing power that can be packed into a given volume by an incredible factor - but it's still also true that, to a degree, the more you have, the more uses you find for it - and the more you want, the bigger it gets. Some computers are still the size of a room.

IBM, for example, still produces mainframe computers for commercial use that have WAY more concurrent processing power than anything you're likely to have on your desk; the latest one, the z16, is the size of one or more large filing cupboards. As for supercomputers - the current record holder, the [Hewlett Packard Enterprise Frontier](https://en.wikipedia.org/wiki/Frontier_(supercomputer)), apparently occupies 680 m^2 (7,300 ft^2).

1

Baby-Lee t1_iufyh0p wrote

In the spirit of ELI5: digital computing, at its most fundamental, is about breaking complex tasks into a bunch of yes/no questions. For example, representing the concept of '4' digitally is akin to saying yes four times [to the question 'one more?'] and then saying no . . . This runs from the basic level, where ASCII codes give a unique 'number' to every conceivable character, to unique values for every color in the palette, to every command you can imagine.

While we might, in our brains, have a larger concept of, say, 10,000, or pi, or 'scroll left,' to the computer it's all a rapidly iterated and refreshed set of yes/no questions - something like the sketch below.
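A tiny Python sketch of that point (the particular values are just illustrative): to the machine, a number like 10,000 or a character like 'A' is nothing but a pattern of yes/no answers, i.e. bits.

```python
# A number or a character is stored as nothing but a pattern of yes/no answers (bits).
n = 10_000
print(bin(n))               # 0b10011100010000 -> a pattern of tiny on/off "switches"
print(n.bit_length())       # 14: how many yes/no answers it takes to hold 10,000

c = "A"
print(ord(c))                   # 65: the ASCII number behind the character
print(format(ord(c), "08b"))    # 01000001: that number as 8 yes/no answers
```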

The computer keeps track of the values for these questions with a whole bunch of spots where there is no voltage for a no and a certain voltage for a yes. They're like on/off switches for your bedroom lights, only they are turned on and off with electrical signals instead of fingers.

You can imagine that, with tasks of any complexity or sophistication, the number of switches quickly becomes huge and cumbersome. As others have mentioned, they used to be relays or tubes that were nearly as large as actual light switches.

Then they discovered transistors, which are made of semiconductors. The properties of these semiconductors made it possible to switch using just an electrical current: a tiny piece of semiconductor can hold that charge value [i.e., none or some, yes or no] until another current comes along and changes it.

Now all those switches can be replaced with tiny little blobs of that semiconductor material, specially arranged to serve as tiny switches, and they can be packed really close together. Again, as others have said, billions and billions in the same space that used to hold one switch or vacuum tube.

Here, the practical limits are set by our ability to fabricate. These switches, tiny as they are, still need to be structured in an orderly manner, so we've refined our ability to make smaller and smaller versions of the 'blob' that does a transistor's work of keeping track of the on/off, yes/no values. You can't just 'make the tiniest transistors' and stick them on a motherboard; you have to make them with the right connections and arrange them so they keep useful track of those billions of values in a meaningful way to accomplish a goal.

So the work since the advent of the transistor has not so much been computing in a different way as it has been making the fabrication process smaller, more precise, and more efficient.

1

strongr_togethr t1_iude922 wrote

Because electronics companies and people have been pushing the miniaturization of electronics ever since they learned how. As a result, we get greater efficiency and power with a smaller form factor. The trade-off is that the R&D can take longer and cost more.

With most tech and electronics, though, bigger isn't always better, because with smaller form factors you can make room for other things that will increase efficiency and power. You see it now with smartphones, which are literally mini pocket computers, among other things.

0

TheMoland t1_iudfor4 wrote

Basically, heat was the problem, so they figured out that making things smaller (microchips) would lessen the heat problem, making them more efficient, and more compact as a bonus.

−1