ManosChristofakis t1_jb9b8fc wrote

  1. AI alignment. If a large-scale attack is launched that tries to interfere with US nukes, you can bet your ass that AI will disappear from everyday life overnight. Obviously we don't have to get to such extreme cases for AI to be regulated, or to never leave the lab in the first place.

  2. Human alignment. If AI progresses so fast that everyone loses their jobs, businesses won't have any customers and will all go bankrupt, including the AI-making businesses themselves.

  3. Lack of training data, obviously.

  4. Hardware limits. If our hardware has reached, or is close to reaching, its efficiency limit, then providing more computational capacity means adding more hardware, which is less efficient in its use and makes computational power grow linearly instead of exponentially. In that case, cost could also rise as fast as, or faster than, computational power.
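
A tiny sketch of the difference this makes, with made-up numbers: if per-chip efficiency keeps doubling you get exponential growth, but once it plateaus you can only add more chips, which is linear.

```python
# Hypothetical illustration only: compare compute growth when per-chip
# efficiency keeps doubling (Moore's-law-style) versus when efficiency
# has plateaued and capacity only grows by buying more hardware.

def compute_with_doubling(years, base=1.0):
    # Efficiency doubles every 2 years -> exponential growth.
    return base * 2 ** (years / 2)

def compute_with_added_hardware(years, base=1.0, chips_per_year=1.0):
    # Efficiency is flat; you just stack more chips -> linear growth.
    return base + chips_per_year * years

for years in (2, 6, 10):
    exp = compute_with_doubling(years)
    lin = compute_with_added_hardware(years)
    print(f"year {years}: exponential = {exp:.0f}x, linear = {lin:.0f}x")
# At year 10: 32x from doubling vs only 11x from stacking hardware,
# and the linear path also pays for 10x the chips, power, and space.
```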

  5. Limits of current architectures, like hallucination. I also read a paper showing that LLMs tailor their output to match the prompt given by the user: they will reply like a neuroscientist to a neuroscientist, or like a philosopher to a philosopher. This may limit many of their uses in places like healthcare, because user biases and people not knowing what they are talking about can lead the AI to wrong conclusions. There may be other limitations that I, or scientists themselves, aren't aware of yet.

  6. Costs. Obviously it takes a lot to buy and maintain the infrastructure: cloud, GPUs, electricity, and training are all significant costs right now with current LLMs, which have billions of parameters and deal only with text, but these costs are still doable. Now imagine trying to create a multimodal AI that does the job of an engineer. It would require years or decades of training (because you can't compress decades of real-world experience into days the way you can speed up training on a computer), it might require hundreds of trillions (if not quadrillions) of parameters, and it would probably have to process information in real time, which would likely be very expensive. You would also have to pay for and maintain its robot body and the accompanying infrastructure.

     There are probably limits even with current LLMs. Current LLMs bill you per token of the model's reply as well as its context. The best current LLMs have contexts of thousands of words, and right now every few replies probably cost pennies (or less). But if you try to build an LLM that carries a context of millions of words (for example a personal assistant or a robot friend), the cost of every single reply, let alone continual replies, would be prohibitive. And that's assuming these things are even possible.
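
A back-of-the-envelope sketch of why huge contexts get expensive under per-token billing. The rate and token counts below are hypothetical placeholders, not any real provider's prices; the point is only that cost scales with the whole context on every reply.

```python
# Hypothetical illustration: per-reply cost when a provider bills for
# the full context plus the reply. The $/1K-token rate is made up.

def reply_cost(context_tokens, reply_tokens, usd_per_1k_tokens=0.002):
    # Every reply re-pays for the entire context, so a giant
    # "memory" makes each individual reply expensive.
    return (context_tokens + reply_tokens) / 1000 * usd_per_1k_tokens

# A short chat: a few thousand tokens of context -> under a cent.
print(f"${reply_cost(4_000, 500):.4f}")       # $0.0090

# A "robot friend" remembering ~2 million tokens -> dollars per reply,
# for every single reply, forever.
print(f"${reply_cost(2_000_000, 500):.2f}")   # $4.00
```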