WhyCloseTheCurtain t1_ixpgm96 wrote

Are we asking the right question? What could we build for a billion or half a billion dollars? If you can take a zero or two out of the cost of development, you can drastically increase the pace of deployment.

0

BrooklynVariety t1_ixr60xa wrote

Astronomer here.

We DO deploy LOTS of smaller missions at a much faster rate and for significantly lower costs; you just don't hear about them as much.

Care to guess which missions still have the highest oversubscription rates? It's the flagship, billion-dollar missions like Hubble, Chandra, and JWST. Not because we prefer fancy expensive missions, but because they can do things cheaper observatories can't.

7

WhyCloseTheCurtain t1_ixtw2j9 wrote

I see your point. In computing, back in the days of the mainframe, it was all about making a faster computer or a bigger, more reliable disk. Then someone figured out that you can cluster cheap computers and still get a lot of computing power, and that you can create redundant arrays of inexpensive disks (RAID) to get more capacity, better reliability, and even higher performance.

What would RAID for telescopes look like? How could you surpass JWST with dozens of telescopes that cost a few million each?

1

BrooklynVariety t1_iy483t8 wrote

I think I understand your question.

> How could you surpass JWST with dozens of telescopes that cost a few million each?

Unfortunately, it does not work quite like that. While optical interferometry is a thing, there is no universe where it would be more practical or economical to build a space-based IR interferometer that matches the capabilities of JWST. People talk about interferometry as if you just add telescopes together and magically get a more powerful telescope, but the truth is that images from interferometric telescopes (especially optical ones) are incredibly difficult to produce and come with huge limitations. So the use case here, while incredibly powerful, is actually quite specific, and it is not a viable way to replace large telescopes in most situations.
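
To put rough numbers on the resolution-versus-sensitivity trade-off, here is a back-of-the-envelope sketch (the telescope sizes, count, and wavelength are my own illustrative picks, not any real mission design): single-aperture resolution goes as 1.22 λ/D, an interferometer's resolution is set by its longest baseline, but the light you actually collect, and hence your sensitivity, is set by the glass you fly.

```python
import math

def diffraction_limit_arcsec(wavelength_m, aperture_m):
    """Rayleigh criterion, theta ~ 1.22 * lambda / D, converted to arcseconds."""
    theta_rad = 1.22 * wavelength_m / aperture_m
    return math.degrees(theta_rad) * 3600.0

wavelength = 2e-6      # 2 microns, roughly where JWST's NIRCam works
jwst_diameter = 6.5    # m
jwst_area = 25.4       # m^2, JWST's approximate effective collecting area

small_diameter = 0.5   # m, a hypothetical "cheap" telescope
n_small = 36           # a few dozen of them
swarm_area = n_small * math.pi * (small_diameter / 2) ** 2

print(f"JWST resolution:           {diffraction_limit_arcsec(wavelength, jwst_diameter):.3f} arcsec")
print(f"Single 0.5 m resolution:   {diffraction_limit_arcsec(wavelength, small_diameter):.3f} arcsec")
# Treating a 100 m baseline like a filled aperture (a crude stand-in for an interferometer):
print(f"100 m baseline resolution: {diffraction_limit_arcsec(wavelength, 100.0):.4f} arcsec")
# ...but sensitivity follows collecting area, which is where the swarm loses badly:
print(f"Swarm collecting area: {swarm_area:.1f} m^2 vs JWST {jwst_area:.1f} m^2")
```

Even setting aside the (severe) practical problem of combining beams coherently in space, a swarm like this gathers a fraction of JWST's light, and an interferometer only recovers structure on the spatial scales its baselines happen to sample.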

This is not to say that smaller, less expensive missions aren't essential. Advances on the detector side of the equation mean that we are able to produce small missions that are still very sensitive. Kepler, GAIA, and Spitzer all came in under a billion dollars. These have contributed vital science, although the first two serve very specific use cases, while the last had a short lifespan and limited angular resolution (for a workhorse telescope).

On the X-ray side, a mission like NuSTAR ran under $200 million, yet it has incredible timing properties, can detect X-rays over a huge energy range, and can look at very bright objects without degrading the quality of the data (a huge issue for X-ray telescopes). Missions like Swift (>$200 million) are vital for transient X-ray sources (such as a nearby accreting black hole that has gone into outburst), allowing you to quickly get a rough idea of what the object is doing before deciding to point a much more expensive telescope (like Chandra or XMM-Newton) at it.

As useful as these smaller missions are, however, you cannot evade the laws of physics. Optical systems are difficult and expensive to produce, require a lot of testing and calibration, are incredibly heavy, and have inflexible spatial requirements (such as the location of the focal plane). I don't really see these specific aspects becoming cheaper in the near to mid-term, and the most sensitive and powerful missions push these aspects to the limit; there is no way around it, really.
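
To illustrate why collecting area is the part of physics you can't evade, here is a toy photon-counting estimate (the source brightness is a made-up number, purely for illustration): for a source-photon-limited observation, signal-to-noise grows as the square root of the counts, so the exposure time needed to reach a given signal-to-noise scales inversely with mirror area.

```python
def time_to_snr(target_snr, photon_rate_per_m2, area_m2):
    """Exposure time (s) to reach target_snr, assuming pure Poisson noise on the source counts."""
    # SNR = sqrt(N), with N = rate * area * t  =>  t = SNR^2 / (rate * area)
    return target_snr ** 2 / (photon_rate_per_m2 * area_m2)

rate = 0.002  # photons / s / m^2 from a faint source (illustrative value)
for name, area in [("JWST-sized (~25 m^2)", 25.0), ("0.5 m telescope (~0.2 m^2)", 0.2)]:
    hours = time_to_snr(10.0, rate, area) / 3600.0
    print(f"{name:28s} needs ~{hours:5.1f} h to reach SNR = 10")
```

The ratio is just the ratio of the areas, roughly a factor of 125 here, and that is before real-world noise sources (background, detector read noise) make the small telescope's job even harder.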

I do think that we could be more efficient about this. We lament, for instance, that once Chandra eventually ceases to work, no X-ray telescope will even come close to its resolving capabilities, and we do not have anything (serious) planned to replace it. We spent so much money on facilities and R&D to build its optical system (which consists of the most perfect mirrors ever created by humanity), yet any replacement would essentially start from scratch. Personally, I think we should plan to build a series of the same flagship telescope, updating only the electronic hardware for each iteration. This has worked efficiently for other missions, though I am sure there are large downsides to this approach.

2

grchelp2018 t1_ixv09vz wrote

This only works for computation that can be easily parallelised. We still build supercomputers that cost hundreds of millions of dollars.

When it comes to things like JWST, a large amount of the cost is in the R&D and in making sure that it works reliably in space. It wouldn't cost the same amount to build another one now, though it would still be expensive. For chips, you have high demand to amortise the costs.

1

ThickTarget t1_ixpw4br wrote

The decadal survey did prioritise two probe missions, which will have a cost of about a billion dollars each. But realistically it would be one per decade, with the first launching around 2035. It's a limitation of the decadal system that proposals compete in terms of scientific priorities, not cost or efficiency, so the smaller concepts struggle to compete. There are no fixed-cap opportunities like there are for the smaller Explorer missions. ESA has a better system in this regard: they have comparable M-class missions, where the best concept that fits within the cost cap wins. Euclid, Plato and Ariel are all ESA M-class astronomy missions expected to launch this decade.

6