Comments


jahwls t1_iwfo1lq wrote

101/500 pretty good.

351

BLACK-C4T t1_iwgierp wrote

Will be more, it just doesn't make sense to upgrade everytime something better comes along

129

ShavedPapaya t1_iwgoku8 wrote

Don’t tell r/pcmasterrace that

117

Bobert_Manderson t1_iwgptsc wrote

Hey, I’ll dig myself into debt however I want. American dream.

55

ShavedPapaya t1_iwh1i0n wrote

If that debt isn’t medical, then you’re an amateur.

29

Bobert_Manderson t1_iwhg14n wrote

Joke’s on you, I put my life savings into GameStop, bought $20k worth of guns, an $85k lifted truck, and a $500k house while making $50k a year. All I need to do is have a mild medical emergency and I might beat the American Dream speedrun record.

21

Lucius-Halthier t1_iwhdbjn wrote

I’ve already alerted the council and they became as hot in the face and angry as a 4090

5

Mowensworld t1_iwfp3qy wrote

At the moment EPYC is just too good and the new chips are looking even better, so I don't see this changing any time soon. Considering AMD was literally almost down and out a decade ago, I can't wait to see what Intel fires back with or what other architectures have in store.

129

rtb001 t1_iwgntj1 wrote

It is super impressive that Intel is a much bigger company that until recently only did CPUs, and nVidia is a much bigger company that mostly does GPUs, while AMD does BOTH yet has survived all this time.

64

frostnxn t1_iwgo5kz wrote

Yes, but AMD exclusively supplies the console chips, which helped them stay afloat for sure.

49

rtb001 t1_iwgov0f wrote

Also I think in hindsight, AMD spinning off GlobalFoundries was a really good move. Maybe at the time it was because AMD didn't have the money to keep and maintain their own fab, so the fab had to be spun off into a contract manufacturer. However, in later years we would see that not having their own fab meant AMD could be agile about the design of their next-gen PC and server chips. So long as TSMC or Samsung could make it, AMD could design it. But Intel was forced to only make chip designs that could be produced at good yield in their own fabs.

34

sultry_eyes t1_iy5dvg4 wrote

This is because of the emerging markets:

NAND flash

Mobile phones

and tablets/phablets

The tablet is somewhat like a phone and a laptop but not quite either.

Intel and NVIDIA were already in their own respective markets: CPU and GPU.

AMD sat in between, doing both CPU and GPU, and IBM no longer made great console chips. See the Sony Cell processor (poor-performing and difficult to program) and the Xbox 360's red ring of death issues.

There suddenly needed to be a fab that could fill the gap for the emerging mobile phone sector. Intel failed and failed HARD in this market. They could not pivot to mobile phones.

Samsung and TSMC however did not fail. And NAND Flash is necessary in order for mobile phones to store the amount of data that they store.

This new market heavily funded both Samsung and TSMC, to the point where TSMC is able to encroach on Intel's big data center customers. Before this, those customers mostly went with Intel, as Intel was the most reliable option compared to 2010s AMD. Back then you would be laughed out of the room if you even mentioned going with an AMD system.

AMD had a very tiny laptop (mobile) segment.

Desktops, Servers, and Laptops were all Intel. And that made sense for them to stick to just that and not pivot into the new and emerging mobile phone market/segments.

And yeah hindsight is 20/20 and all that. Now it is Samsung and TSMC with heavy mobile segment growth. And because they are capital rich, they are encroaching into Intel's territory faster than Intel can pivot to theirs.

Intel Foundry won't fire up until 2025. And even then, we will see how many customers they can win back. (Just Qualcomm and Apple pretty much).

I can see Apple wanting to diversify their suppliers away from TSMC. Apple's lineup covers most of what Intel and TSMC can sell: smartphone, watch, iPad/tablet, laptop, and desktop chips.

Qualcomm mostly just sells lots of mobile phone CPUs/GPUs, so they may go with Intel if the price is right.

I don't see anyone dethroning Samsung from their NAND flash memory business. They are pretty good at that. And there is demand for that type of storage.

HDD manufacturers appear content with pumping out 10TB+ drives forever. No change and no one clamoring for big changes there.

1

Halvus_I t1_iwh5igk wrote

Steam Deck too. Switch is Nvidia though.

9

mule_roany_mare t1_iwgp1l4 wrote

I’m honestly surprised Intel didn’t try to launch their GPUs with a console.

There’s no better environment to prove your hardware while devs optimize to it.

The whole DX12-versus-older-APIs issue would have been a non-issue & it would have given them another year or two to work things out.

6

SpicyMintCake t1_iwgtz6s wrote

A lot harder to convince Sony or Microsoft to leave the established AMD platform for a new and untested at scale platform. Especially when consoles are thin margin items, any hardware issue is going to cut deep.

16

frostnxn t1_iwgu9su wrote

Also Intel did not have the patent for GPUs, which expired in 2020 I believe.

1

mule_roany_mare t1_iwguwll wrote

..Intel has been making GPUs for a few decades. Just not discrete GPUs

7

thad137 t1_iwh0fpk wrote

The patent for what exactly? There are any number of GPU manufacturers. I don't believe they all share a common patent.

1

Justhe3guy t1_iwgvetu wrote

They do work on very thin margins there though, so they don’t earn massively from consoles. Still worthwhile.

1

DatTF2 t1_iwgos0d wrote

Part of the reason why Intel had so much more market share, at least in the late '90s and early '00s, is that Intel was bribing companies like Dell to only use Intel processors. Most computers you went to buy in a store only had Intel processors, and that's why Intel dominated the home computing space. While I try not to fanboy and have used both Intel and AMD systems, I am really glad for AMD.

15

WormRabbit t1_iwgyixd wrote

Their compiler also produced very inefficient code for AMD chips. Not because they didn't implement the optimizations, but because it detected your CPU model at runtime and used suboptimal code paths on non-Intel chips.
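A toy sketch of the general idea, not Intel's actual dispatcher: the controversial part was branching on the CPU vendor string instead of the feature flags the CPU actually reports. Everything here is illustrative, and it assumes Linux, where the vendor string is exposed in /proc/cpuinfo.

```
def cpu_vendor() -> str:
    """Return the CPU vendor_id string, e.g. 'GenuineIntel' or 'AuthenticAMD' (Linux only)."""
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("vendor_id"):
                    return line.split(":", 1)[1].strip()
    except OSError:
        pass
    return "unknown"

def sum_squares_fast(xs):
    # Stand-in for a hand-vectorized (SSE/AVX) code path.
    return sum(x * x for x in xs)

def sum_squares_generic(xs):
    # Stand-in for a plain scalar fallback path.
    total = 0
    for x in xs:
        total += x * x
    return total

# Dispatching on the vendor string rather than on CPUID feature flags is the
# problem: a perfectly capable AMD CPU gets the slow path simply because it
# doesn't report 'GenuineIntel'.
sum_squares = sum_squares_fast if cpu_vendor() == "GenuineIntel" else sum_squares_generic
print(cpu_vendor(), sum_squares(range(10)))
```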

10

pterofactyl t1_iwin5v6 wrote

That’s not a bribe, that’s literally just how business deals work. It’s a bribe when the money is used to influence the decision of a person when money should not be an influence.

0

qualverse t1_iwiymmx wrote

A regular business deal would be Intel saying "we'll give you a 30% discount if you buy a million Intel processors".

A bribe would be Intel saying "we'll give you a 30% discount if you don't buy any AMD processors" which is what they actually did.

4

pterofactyl t1_iwjbz5l wrote

Ok so again… that’s a business deal. Do you understand that me paying you to exclusively use my product is completely legal and not even immoral unless it causes harm to a person? If a company bribes a doctor to use only their brand of medicine, that’s immoral. If a company pays a sports team to only use their products and avoid all others, that’s literally the basis of sports sponsorships. Intel presented the best case for Dell to only use their chips. Is your workplace bribing you by paying you a set fee with the understanding that you only work for them and no one else? Come on man

0

Earthborn92 t1_iwpvteo wrote

Read about Antitrust law.

3

pterofactyl t1_iwq1ls0 wrote

https://www.investopedia.com/ask/answers/09/antitrust-law.asp

I think you should. Antitrust laws prevent buyers from blocking suppliers from supplying other businesses, but if a supplier pays to be your exclusive supplier, that is not an antitrust violation.

Is Nike in violation because they pay teams to use only their shoes and clothes? Literally think about this. Are restaurants in violation for agreeing to stock only Pepsi products?

0

Dry-Purchase-3022 t1_iwj0qll wrote

AMD doesn’t make their own chips, which makes beating Intel much easier. The fact that Intel is even close to AMD while having a significantly worse manufacturing line is a testament to how great their designs are.

2

Mowensworld t1_iwiv895 wrote

AMD originally only made CPUs. They bought ATI, who at the time was Nvidia's main competitor, for 5 billion dollars. That was only back in 2006.

1

Coincedence t1_iwjtwk7 wrote

With upcoming platforms, AMD is shaping up to be a powerhouse. The majority of the performance for a fraction of the price compared to the corresponding Nvidia part is very tempting. Not to mention 3D V-Cache coming soon to further dominate the gaming CPU market.

1

damattdanman t1_iwga7yk wrote

What do they get these super computers to do? Like what calculations are they running for this kind of power to make sense?

70

emize t1_iwgbm0r wrote

While not exciting, weather prediction and analysis is a big one.

Astrophysics is another popular one.

Anything where you need to do calculations that have large numbers of variables.

127

atxweirdo t1_iwhowxd wrote

Bioinformatics and ML have taken off in recent years. Not to mention data analytics for research projects. I used to work for a supercomputer center, and lots of interesting projects went through our queues.

22

paypaytr t1_iwj6zbm wrote

For ML this is useless though. They don't need supercomputers but rather a cluster of efficient GPUs.

−3

DeadFIL t1_iwjflz1 wrote

All modern supercomputers are just massive clusters of nodes, and this list includes GPU-based supercomputers. Check out #4 on the list: Leonardo, which is basically just a cluster of ~3,500 Nvidia A100-based nodes.

7

My_reddit_account_v3 t1_iwjmbs9 wrote

Ok, but why would supercomputers suck? Are they not equipped with arrays of GPUs as well?

1

DeadFIL t1_iwjpc1o wrote

Supercomputers cost a lot of money and are generally funded for specific reasons. They're usually not very general-purpose, but rather built to be as good as possible at one class of task. Some will have a lot of CPUs, some will have a lot of GPUs, some will have a lot of both, and some will have completely different types of units custom-built for a specific task.

It all depends on the supercomputer, but some aren't designed to excel at ML algorithms. Any of them will do wayyyy better than your home computer due to their sheer processing power, but many will be relatively inefficient at it.

3

My_reddit_account_v3 t1_iwjshyb wrote

Right. I guess what you are saying is you prefer to control the composition of the array of CPUs/GPUs, rather than rely on a “static” supercomputer, right?

2

QuentinUK t1_iwgdsbh wrote

Oak Ridge National Laboratory: materials, nuclear science, neutron science, energy, high-performance computing, systems biology and national security.

66

damattdanman t1_iwgegbh wrote

I get the rest. But national security?

12

nuclear_splines t1_iwgik6l wrote

Goes with the rest - precise simulations of nuclear material are often highly classified. Sometimes also things like “simulating the spread of a bioweapon attack, using AT&T's cell tower data to get high-precision info about population density across an entire city.”

23

Ok-disaster2022 t1_iwiccqm wrote

Well, there are numerous nuclear modeling codes, but one of the biggest and most validated is MCNP. The team in charge of it has accepted bug-fix reports from researchers around the world regardless of whether they're allowed to have access to the files and data or not, export control be damned. Hell, the most important part is the cross-section libraries (which cut out above 2 MeV), and you can access those on a public website.

I'm sure there are top secret codes, but it costs millions to build and validate codes and keep them up to date, and there's no profit in nuclear. In aerospace the modeling software is proprietary, but that's because it's how those companies make billion-dollar airplane deals.

2

nuclear_splines t1_iwid0jw wrote

Yeah, I wasn’t thinking of the code being proprietary, but the data. One of my friends is a nuclear engineer, and as an undergraduate student she had to pass a background check before the DoE would mail her a DVD containing high-accuracy data on measurements of nuclear material, because that’s not shared publicly. Not my background, so I don’t know precisely what the measurements were, but I imagine data on weapons grade materials is protected more thoroughly than the reactor tech she was working with.

2

Defoler t1_iwge3xy wrote

Huge financial models.
Nuclear models.
Environment models.

Things that have millions of millions of data points that you need to calculate each turn

21

blyatseeker t1_iwhpceq wrote

Each turn? Are they playing one match of civilization?

3

Defoler t1_iwhsogi wrote

Civ 7 with 1000 random PC faction players on an extra-ultra-max size map and barbarians on maximum.
That is still a bit tight for a supercomputer to run, but they are doing their best.

5

johnp299 t1_iwhl0th wrote

Mostly porn deepfakes and HPC benchmarks.

2

Ok-disaster2022 t1_iwibhfx wrote

For some models, instead of attempting to derive an exact formulation, you take random numbers, assign them to certain properties of a given particle, and use other random numbers to make that particle act. Do this billions of times and you can build a pretty reliable, detailed model of weather patterns or nuclear reactors or whatever.

These supercomputers will rarely be used all at once for a single calculation. Instead, the different research groups may be given certain amounts of compute resources according to a set schedule. A big deal at DOE supercomputing centers is making sure there isn't idle time. It costs millions to power and cool the systems, and letting them run idle is pretty costly. Same can be said for universities and such.
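A toy illustration of that idea, estimating π with random darts instead of simulating actual particle physics (purely illustrative, nothing like a real reactor code):

```
import random

def estimate_pi(samples: int) -> float:
    """Monte Carlo estimate of pi: throw random darts at the unit square and
    count how many land inside the quarter circle of radius 1."""
    hits = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / samples

# More samples -> a better estimate, which is why these codes eat compute:
for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>10,} samples -> pi ~ {estimate_pi(n):.5f}")
```

Real transport codes follow the same recipe, but each "dart" is a particle history whose scattering and absorption are decided by random draws against measured cross-section data.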

2

My_reddit_account_v3 t1_iwjmtbc wrote

My former employer would run simulations for new models of their products (ex: identify design flaws in aerodynamics). Every ounce of power reduced the lead time to get all our results for a given model / design iteration. I don’t understand anything that was actually going on there, but I know that our lead times highly depended on the “super power” 😅

2

supermoderators t1_iwfle2n wrote

Which is the fastest of all the fastest supercomputers?

34

wsippel t1_iwfy16v wrote

Frontier, the first supercomputer to exceed 1 exaFLOPS, almost three times as fast as number two. Powered by EPYC CPUs and AMD Instinct compute accelerators.

Here's the current list: https://www.top500.org/lists/top500/2022/11/

111

Contortionietzsche t1_iwg1263 wrote

21,000 kilowatts of power. That's a lot, right? I read a story recently about a company that bought a Sun Enterprise 10000 server and an executive shut it down when they got the electricity bill.

66

nexus1011 t1_iwg1krr wrote

Look at the 2nd one on the list.

29,000 kW, almost 30,000 kW of power!

32

Zeraleen t1_iwg64o3 wrote

30k kW, that is almost 30MW. wow!

26

Gnash_ t1_iwgf457 wrote

My Factorio factory only consumes 9 MW and I had to build 3 nuclear reactors, just to keep the power up at night. That’s one big supercomputer

20

calvin4224 t1_iwgfzlw wrote

IRL a nuclear generator outputs around 1 GW (1,000 MW). But 30 MW is still about six land-based wind turbines running at full load. It's a lot!

20

Ok-disaster2022 t1_iwid396 wrote

Physics-wise you can run a GW reactor at 30 W and it will essentially last forever from a fuel standpoint; just the turbines and such would have to be re-engineered for that lower output.

But there are smaller reactors. I believe for example the Ford class supercarriers run on 4x250w reactors.

3

calvin4224 t1_iwror1m wrote

I don't think that's how nuclear fission works.

Also, 4x250 Watts will run your kettle but not a ship :P

1

PhantomTroupe-2 t1_iwgjgwp wrote

So is the guy above a lying little shit or?

−33

Gnash_ t1_iwgk22o wrote

Factorio is a video game. Did you really think I went out and built 3 reactors all by myself?

Also, was the uncalled-for insult really necessary?

33

MattLogi t1_iwga7wo wrote

What’s its power draw? Isn’t something like 30,000 kWh only like $3,000 a month? Which sure isn’t cheap, but if you’re buying these supercomputers, I feel like $3,000 is a drop in the bucket for them.

Edit: yup, made a huge mistake in the calculation. Much, much larger number.

6

Catlover419-20 t1_iwgfacd wrote

Nono, 30,000 kW means 30,000 kWh for every hour of operation. For one month of 24/7 operation at 30 days you'd need 21,600,000 kWh (21,600 MWh), or €2,741,040 at 12.69 ct/kWh. So about $2.75M if I'm correct.
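The arithmetic, spelled out (the 30,000 kW draw and the 12.69 ct/kWh electricity price are just the assumptions from this comment, not official figures):

```
power_kw = 30_000            # assumed sustained draw of the #2 system, in kW
hours = 24 * 30              # one month of 24/7 operation
price_eur_per_kwh = 0.1269   # assumed price: 12.69 ct/kWh

energy_kwh = power_kw * hours
cost_eur = energy_kwh * price_eur_per_kwh
print(f"{energy_kwh:,.0f} kWh -> {cost_eur:,.0f} EUR per month")
# 21,600,000 kWh -> 2,741,040 EUR per month
```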

23

MattLogi t1_iwguqiu wrote

Yeah, I messed up! I was thinking W, since I do the calculation a lot with my computers at home, so I always divide by 1000 to get kWh. Like you said, this is a 30,000 kW draw! Oof, yeah, that's a big bill.

3

Contortionietzsche t1_iwgark3 wrote

True. Frontier is for the US Department of Energy, right? The company that bought the E10K probably wasn't. AFAIK the E10K requires a 100 amp power line, and back in those days (late '90s) I don't think performance per watt was a thing they worried about. Could be wrong though.

5

Dodgy_Past t1_iwgrgw5 wrote

I was selling sun servers back then and customers never considered power consumption.

2

Diabotek t1_iwgf8be wrote

Lol, not even close. 30,000 kW * 720 hours * price per kWh.

4

MattLogi t1_iwguf3a wrote

Oooo yeah, I made a major mistake in the calculation. I'm so used to calculating W with home computers and dividing by 1000 to get my kWh… this is a 30,000 kW draw! Ooooof! Yeah, that's a huge bill. Makes a lot more sense now lol

1

Diabotek t1_iwh5jxx wrote

Yeah, 30,000 kW is an insanely massive number. The amount of energy required to run that for an hour could power my stack for 7 years.

1

The-Protomolecule t1_iwgismm wrote

It’s easy to power when you’re Oak Ridge and have your own nuclear power plant.

5

fllr t1_iwg8ywq wrote

An exaflop in a singular computer… that’s absolutely insane :O

2

neoplastic_pleonasm t1_iwgv80c wrote

It's a cluster. I forget if they've published the official number yet but I want to say it was something like 256 racks of servers. I turned down a job there last year.

6

AznSzmeCk t1_iwhhzkx wrote

94 cabinets, 9,408 nodes; each node is a Trento EPYC processor and 4 AMD MI250X GPUs. Source is HPCwire, but I'm also an ASIC engineer for one of the chips :D

3

fllr t1_iwgvbq2 wrote

Ah… so i guess that makes it sound a little more regular now

1

diacewrb t1_iwg8ks6 wrote

If you include distributed computing then Folding@Home is probably the fastest in the world with 2.43 exaflops of power since 2020.

https://en.wikipedia.org/wiki/Folding@home

13

IAmTaka_VG t1_iwgz2bx wrote

I think they basically ran out of simulations because so many people signed up no?

1

Ripcord t1_iwgempu wrote

It's right at the top of the goddamn article

3

MurderDoneRight t1_iwgkq9m wrote

If you want your mind blown you should look into quantum computers! They're insane! They can create time crystals: crystals that change state without adding or losing energy, creating true perpetual motion! And with time crystals we might be able to create even faster quantum computers by using them as quantum memory.

And even though I have no idea what any of it means, I am excited because this is real-life sci-fi stuff! There's a great mini-series called DEVS where they use a quantum computer, and it's nuts that they exist in real life.

And you might say "yeah yeah, everyone has said there's new tech on the horizon that will change the world, but it always takes way longer for anything close to be developed", but check this out: the IDEA of time crystals was thought up just 10 years ago. Since then they have not just been proven to exist, we can actually create them. Take a deep dive into everything quantum computers are doing; it's just speeding up exponentially every day!

−3

bigtallsob t1_iwgosoo wrote

Keep in mind that anything that appears to be "true perpetual motion" at first glance always has a catch that prevents it from being actual perpetual motion.

9

SAI_Peregrinus t1_iwhcom7 wrote

Perpetual motion is fine; perpetual motion you can extract energy from isn't. An object in a stable orbit with no drag (hypothetical truly empty space) around another object would never stop or slow down.

A time crystal is a harmonic oscillator that neither loses nor gains energy while oscillating. It's "perpetual motion" in the "orbits forever" sense, not the "free energy" sense. Also has nothing to do with quantum computers.

3

pterofactyl t1_iwinoyy wrote

Well no because for that “no drag” space to exist, it would need to be in an imaginary world, so perpetual motion does not exist either way.

1

MurderDoneRight t1_iwh3wgv wrote

True, a perpetual motion machine is impossible according to the laws of physics. But time crystals are not a machine; they're an entirely new kind of exotic matter, on par with supersolids, superfluids, and Bose-Einstein condensates!

1

bigtallsob t1_iwh8ebm wrote

Yeah, but you are dealing with quantum funkiness. There's always a catch, like with quantum entanglement, and how despite one's state affecting the other regardless of distance, you can't use it for faster than light communication, since the act of observing the state changes the state.

1

MurderDoneRight t1_iwhacjs wrote

Yeah, like I mentioned in my first comment, I don't really know anything, so you may be right too. 😉

But I don't know, there are a lot of cool discoveries being made right now anyway. I did read up on quantum entanglement too because of this year's Nobel prize in physics, which went to work using it to show that the universe is not locally "real". How crazy is that?

1

SAI_Peregrinus t1_iwhc0ph wrote

Time crystals have no direct relation to quantum computers.

Quantum computers currently are very limited, but may be able to eventually compute Fourier Transforms in an amount of time that's a polynomial function of the input size (aka polynomial time), even for large inputs. That would be really cool! There are a few other problems they can solve for which there's no known classical polynomial time algorithm, but the Quantum Fourier Transform (QFT) is the big one. AFAIK nobody has yet managed to even factor the number 21 with a quantum computer, so they're a tad impractical still. Also there's no proof that classical computers can't do everything quantum computers can do just as efficiently (i.e. that BQP ≠ P), but it is strongly suspected.
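For reference, the quantum Fourier transform on n qubits (so N = 2^n basis states) is the unitary map

```
\mathrm{QFT}\colon\; |x\rangle \;\longmapsto\; \frac{1}{\sqrt{N}} \sum_{k=0}^{N-1} e^{2\pi i \, x k / N}\, |k\rangle, \qquad N = 2^n .
```

A quantum circuit realizes this with O(n²) gates, versus O(N log N) operations for the classical FFT on a length-N vector. The catch is that you can't simply read the transformed amplitudes out, which is part of why the advantage only shows up in specific algorithms like Shor's factoring or phase estimation.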

Quantum annealers like D-wave's do exist now, but solve a more limited set of problems, and can't compute the QFT. It's not certain whether they're even any faster than classical computers.

I've made several enormous simplifications above.

2

mule_roany_mare t1_iwgpo4d wrote

Devs was an imperfect show, but good enough to be measured against one.

It deserved a bigger audience & should get a watch.

1

TheDevilsAdvokaat t1_iwgnkic wrote

https://blogs.nvidia.com/blog/2021/06/28/top500-ai-cloud-native/#:~:text=NVIDIA%20technologies%20power%20342%20systems,centers%20are%20increasingly%20adopting%20AI.

NVIDIA technologies power 342 systems on the TOP500 list released at the ISC High Performance event today, including 70 percent of all new systems and eight of the top 10. (June 28 2021)

Not a fanboy of either, just posted this for the sake of comparison.

16

12358 t1_iwggel6 wrote

And BTW, they run Arch, right?

5

dasda_23 t1_iwgr56q wrote

But can they run doom?

2

nascarhero t1_iwh1jxw wrote

Downvote for sponsored content

2

tobsn t1_iwh32gt wrote

just until the RTX 50 series is released, and then it's going to be a lot more!

/s

1

dhalem t1_iwhhoun wrote

I wonder how one of Google’s data centers compares.

1

The_Zobe t1_iwhjjfp wrote

We want percents! We want percents!

1

csbc801 t1_iwj6awg wrote

So my computer got their only dud?

1

lohvei0r t1_iwmauak wrote

4 supercomputers!

1

Sarah_Rainbow t1_iwgfy31 wrote

Serious question, what is the need for supercomputers when you have access to cloud computing in all its glory?

0

dddd0 t1_iwghfua wrote

Interconnect

Supercomputer nodes are usually connected using 100-200 Gbit/s fabrics with latencies in the microsecond range. That's pretty expensive and requires a lot of power too, but it allows you to treat a supercomputer much more like one Really Big Computer (previous generations of supercomputers were indeed SSI - Single System Image - systems) instead of A Bunch Of Servers. Simulations like Really Big Computers much better than A Bunch Of Servers. On an ELI5 level, something like a weather simulation will divide the world into many regions and each node of the supercomputer handles one region. Interactions between regions are handled through the interconnect, so it's really important for performance.
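A minimal sketch of that "one node per region" picture, faked inside a single process as a toy 1D diffusion model. The names and sizes are made up; on a real machine exchange_halos() would be an MPI halo exchange over the interconnect, which is exactly the step where latency bites.

```
import numpy as np

NODES, CELLS_PER_NODE, ALPHA = 4, 16, 0.1
# Each "node" owns its cells plus one ghost cell on each side (a periodic ring).
local = [np.random.rand(CELLS_PER_NODE + 2) for _ in range(NODES)]

def exchange_halos(parts):
    # Refresh ghost cells from the neighbouring regions (the communication phase).
    for i, part in enumerate(parts):
        part[0] = parts[(i - 1) % NODES][-2]   # left ghost <- left neighbour's last real cell
        part[-1] = parts[(i + 1) % NODES][1]   # right ghost <- right neighbour's first real cell

def step(parts):
    exchange_halos(parts)
    for part in parts:                          # the compute phase, independent per node
        interior = part[1:-1]
        part[1:-1] = interior + ALPHA * (part[:-2] - 2 * interior + part[2:])

for _ in range(100):
    step(local)
print("mean value:", np.mean([p[1:-1].mean() for p in local]))
```

Every step needs a round of neighbour communication before any node can continue, so a microsecond-latency fabric versus ordinary data-center networking changes the whole character of the machine.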

18

LaconicLacedaemonian t1_iwgt8rr wrote

I maintain a 20k-node cluster of computers that pretends to be a single computer. The reason to do it that way is that if we 10x our size we can just 10x the hardware, and individual machines that die simply get replaced.

3

_ytrohs t1_iwgziml wrote

And cost... and hypervisor overhead, etc.

1

krokotak47 t1_iwghwko wrote

So cloud computing literally happens in the sky and we don't need hardware for it?

10

Sarah_Rainbow t1_iwgikys wrote

Why else would I buy a telescope?!??

I mean, with the cloud you can have your computing power distributed over a larger geographic area, plus the hardware cost is lower and setting it up is relatively simple. I've heard stories from the physics department at UofT where students preferred to use AWS over the other available options (supercomputers in Canada) to run their models and stuff.

0

Ericchen1248 t1_iwgnt5q wrote

While I don’t know the costs for them. I would wager the students chose to use AWS not because it was cheaper but because registering/queueing for super computer time is a pain/can take a while.

2

krokotak47 t1_iwgq1bd wrote

I believe it all comes down to cost. I've seen some calculations on Reddit that were like 30k USD for the compute needed on Amazon (absolutely no idea what the task was, something with GPUs). So that's obviously not possible for many people. What's the price for supercomputer time compared to that? I imagine it may be free for students? In my university you can gain access to serious hardware (I'm talking powerful servers, not comparable to a supercomputer) by just asking around. What is it like at bigger universities?

1

Pizza_Low t1_iwhkwyo wrote

Cloud is great for when you want to rent someone else’s computing space. It can be cheaper than building a data center and maintaining the hardware and software, and it can expand and contract dynamically.

For example, a ton of servers can be brought online for something like Netflix streaming the Super Bowl. They might suddenly need 3 times the servers they normally need; cloud is good for that sudden expansion, but tends to be more costly for regular use.

Supercomputers are great for doing lots of calculations very quickly. For example, you want to simulate the airflow of individual air molecules over a new airplane wing design, or some other kind of complex mathematical modeling in science or finance.

1

izza123 t1_iwgkio8 wrote

Nvidia on suicide watch

−1

_HiWay t1_iwguxle wrote

Not at all. With their acquisition of Mellanox and smart NICs (BlueField-2 and beyond), they are accelerating things right at the edge of the interconnect. It will vastly improve performance once scalability and software have been figured out at supercomputer scale.

2

mikegwald t1_iwgrklq wrote

AMD is still a thing?

−4

AlltheCopics t1_iwg11ga wrote

Intel the good guys

−21

JJagaimo t1_iwgbvil wrote

Neither AMD nor Intel are the "good guys." Both are corporations that, while we may support one or the other for whatever reason, we should not treat as if they were an individual we personally know, or as if they were infallible.

20

imetators t1_iwgr6fj wrote

If you knew that these corporations are not actually competitors but more like teammates in market rigging, the statement about 'good guys' becomes much funnier.

2

[deleted] t1_iwfsmap wrote

[deleted]

−63

Substantial_Boiler t1_iwfvem8 wrote

Supercomputers aren't really meant to be impressive tech demos, at the end of the day they're meant for actual real-world applications

51

Avieshek OP t1_iwg3qod wrote

Then quantum computers would simply become the next supercomputers, since "supercomputer" is just a commercial term for a machine built from multiple stacks. You do realise that, right?

What we are using now can be termed classical computers, and if tomorrow's iPhone is a quantum computer in everyone's hands, then there's no reason a supercomputer at a university would still be a classical computer.

19

12358 t1_iwggr2p wrote

Quantum computers are not a more powerful version of a supercomputer; they do different kinds of calculations, and solve problems differently, so they are used to solve different kinds of problems. They are not a replacement for supercomputers.

7

Avieshek OP t1_iwgh5i4 wrote

As said, quantum and classical are different breeds of computer with no parallel between them. Please refrain from twisting this into your own version; nothing was said about "quantum being more powerful than a supercomputer". I simply stated what a supercomputer itself is, so comparing it with quantum is dumb.

1

themikker t1_iwfutq9 wrote

Quantum computers can still be fast.

You just won't be able to know where they are.

17

SAI_Peregrinus t1_iwhczon wrote

They still can't find the prime factors of the number 21 with a quantum computer. They're promising, not impressive (yet).

1

iiitme t1_iwfvjfl wrote

What’s with the downvotes? This isn’t a serious comment.

−24