Comments

AquaRegia t1_j23slck wrote

There are often shortcuts that can be used to make the problem easier. For example, the fact that:

>(x^(a))^(b) = x^(ab)

Using this knowledge, you can break down the exponent in your example by finding its factors:

>67 * 1087 = 72829

1087 is still a bit large, so we can use another trick:

>x^(a) * x^(b) = x^(a+b)

And for example break it down to:

>500 + 587 = 1087

Using all this we can then perform the calculation:

>(2838393^(500) * 2838393^(587))^(67)

Which is a lot less daunting. You could probably use other rules to break it down even further, but this is the general concept.
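
To make this concrete, here's a minimal sketch in Python (using its built-in arbitrary-precision integers), just verifying that the factored form gives the same result as the direct computation:

```python
# Checking the breakdown above with Python's built-in big integers.
x = 2838393

# 72829 = 67 * 1087, and 1087 = 500 + 587
direct = x ** 72829
factored = (x ** 500 * x ** 587) ** 67

print(direct == factored)  # True
```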

6

mfb- t1_j23u03n wrote

There is a much nicer way to break down exponents:

  • Calculate 2838393^2
  • Calculate (2838393^(2))^2 = 2838393^4 by squaring the previous result
  • Calculate ((2838393^(2))^(2))^2 = 2838393^8 by squaring the previous result
  • ... this produces all powers of 2 we need (2, 4, 8, 16, 32, ... 1024)

x^(1087) = x^(1024) * x^(32) * x^(16) * x^(8) * x^(4) * x^(2) * x, and all of these factors were calculated previously (here for x = 2838393).

You only need to square numbers, and at the end multiply a few of the results together. The computer stores numbers in binary anyway (so 1087 is 1024+32+16+8+4+2+1 for a computer), so no extra steps are needed to find the right factors.

(actually, for OP's original problem, a computer would directly use 72829 = 65536 + 4096 + ...)
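
A short sketch of this square-and-multiply idea in Python (the function name is just illustrative):

```python
def power_by_squaring(x: int, n: int) -> int:
    """Compute x**n using roughly 2*log2(n) multiplications instead of n - 1."""
    result = 1
    while n > 0:
        if n & 1:          # this binary digit of the exponent is 1
            result *= x    # include the matching power-of-two factor
        x *= x             # square: x, x^2, x^4, x^8, ...
        n >>= 1            # move to the next binary digit
    return result

print(power_by_squaring(2838393, 1087) == 2838393 ** 1087)  # True
```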

8

Schnutzel t1_j23vg43 wrote

Also, by using floating-point numbers you can make the calculations a lot faster by sacrificing precision. Instead of storing 2838393^(1024) (which is a huge number), you only store the most significant digits and how many digits it has.
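
A rough sketch of that idea in Python, tracking only the digit count and the leading digits through the base-10 logarithm (the printed values are approximate by design):

```python
import math

# log10(2838393 ** 72829), without ever building the ~470,000-digit integer
log10_result = 72829 * math.log10(2838393)

digits = math.floor(log10_result) + 1  # 469971 decimal digits
mantissa = 10 ** (log10_result % 1)    # the most significant digits

print(f"{mantissa:.4f}e+{digits - 1}")
```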

2

mfb- t1_j241eji wrote

It depends on the application. 2838393^(1024) has about 6,600 digits, which is no problem for modern computers to calculate exactly in well under a second.
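
For what it's worth, this is easy to check in Python:

```python
import time

start = time.perf_counter()
n = 2838393 ** 1024  # computed exactly as a big integer
elapsed = time.perf_counter() - start

print(len(str(n)), elapsed)  # 6608 digits, in a small fraction of a second
```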

1

Schnutzel t1_j24265s wrote

True, but when the exponent is 76,000 and you start multiplying numbers with tens of thousands of digits, the cost starts to add up.

1

Scary-Competition838 t1_j23rehy wrote

Computers do math operations, so a “big number,” particularly when converted to binary, is not much more demanding than a small one: the electrons just need to flow through a circuit that tells them what to do, and taking twice or ten times as long usually isn’t noticeable on a human scale. To me, it’s equally or more interesting that the human brain is so efficient at some kinds of calculation, particularly abstract ones once developed, yet it only ever treats numbers as meaningful in proportion to the size they represent, rather than the information encoded in them, which can be much less.

3

nmxt t1_j23qrja wrote

Modern computers can perform billions of operations per second thanks to how tiny their components are. Computing isn’t a problem for computers; it’s what they do.

2

AquaRegia t1_j23vf57 wrote

Computing 2838393^(72829) naively takes 72,828 multiplications and produces a number with 469,971 digits. A 64-bit processor can't directly work with numbers larger than about 20 digits, which means it has to perform multiple operations for a single multiplication.

All in all, it's not enough that modern computers are really fast; if one just tried to brute-force the calculation, it would still take a lot of time.
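
To illustrate what "multiple operations for a single multiplication" means, here's a toy sketch with base-10 "limbs" standing in for 64-bit words (a hypothetical helper, for illustration only):

```python
def multiply_limbs(a: list[int], b: list[int], base: int = 10) -> list[int]:
    """Schoolbook multiplication of little-endian digit lists."""
    out = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        carry = 0
        for j, db in enumerate(b):
            total = out[i + j] + da * db + carry
            out[i + j] = total % base  # digit that stays in this position
            carry = total // base      # overflow carried into the next limb
        out[i + len(b)] += carry
    return out

# 123 * 456 = 56088, with digits stored least-significant first
print(multiply_limbs([3, 2, 1], [6, 5, 4]))  # [8, 8, 0, 6, 5, 0]
```

A real big-integer library does the same thing with 64-bit words instead of decimal digits, so each huge multiplication becomes many word-sized ones.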

1

nmxt t1_j23weu9 wrote

Well, it’s not 72,828 multiplications; it’s something like thirty. If 2838393 is N, you first calculate N^(2), then N^(4) as the previous result squared, then N^(8) as N^(4) squared, etc., and then just multiply some of those power-of-two results to combine them into the required final power. For example, to calculate 3^(9) you don’t need to make 9 multiplications: you calculate 3^(2), 3^(4) and 3^(8) (three multiplications), and then multiply 3^(8) by 3 to get 3^(9) (a fourth multiplication).
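
Traced out in Python, the 3^(9) example looks like this:

```python
n = 3
n2 = n * n    # 3^2  (1st multiplication)
n4 = n2 * n2  # 3^4  (2nd)
n8 = n4 * n4  # 3^8  (3rd)
n9 = n8 * n   # 3^9  (4th)

print(n9 == 3 ** 9)  # True
```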

2

_Weyland_ t1_j23uqs1 wrote

First, computers don't really care about large numbers. For us, doing math on small numbers is easier, while working with big numbers means keeping things in your head or writing them down, which slows the whole process. For a computer it's all equally easy, as long as the number fits into a specific portion of memory, i.e. does not exceed some very large limit.
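
A minimal illustration of that limit, reusing the thread's numbers and assuming the usual 64-bit word size:

```python
MASK = 2**64 - 1  # the largest value one 64-bit word can hold (about 1.8e19)

a = 2838393 ** 2  # 13 digits: still fits in a single word
b = 2838393 ** 4  # 26 digits: too big for one word

print(a <= MASK)  # True
print(b <= MASK)  # False, so software has to split it across several words
```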

Second, modern software is very advanced. Back in the day we had much weaker computers, so we had to use all sorts of math tricks to get solutions that were both fast and precise. Nowadays the pressure to come up with cool new math is not as high, but the known tricks still get used to make computations faster.

1

Seaworthiness-Any t1_j23wik4 wrote

It's easy when you can do logarithms (and multiplications and exponentials) to high precision, which is quite easy in itself. I think this is how Alpha does it.

This was not as obvious a few years ago (well, more like 10 or 15 now); most people simply used hardware floating-point arithmetic, which limits precision. Once you implement floating-point numbers in software, which started to make sense at some point, you can simply choose a higher precision.
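
Python's standard decimal module is one example of software floating point where you can simply dial up the precision:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # 50 significant digits; hardware doubles give ~16

# log10 of 2838393^72829, computed to the chosen precision
log10_x = Decimal(2838393).log10()
print(72829 * log10_x)
```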

1