thedabking123 t1_jcvmwoq wrote

It happens in every career and every field; I'm 37 and have been a PM of an AI-driven product for a while now.

I'm starting to care more about corporate leadership, and about using the money I've earned to enjoy life, than about the technical bits I've been cycling through for a while.

Some ideas that may apply to you:

  1. You could focus on people management and direct teams on larger projects.
  2. You could try to find companies with new, interesting problems to solve (MLOps for LLMs at firms like Jasper, OpenAI, or Cohere, or perhaps HCI loops between humans and UI-embedded LLMs like at adept.ai?).
  3. You could find a truly deep R&D job in a crazy new area and go at something novel. (I personally would love to spend 2-3 yrs exploring neurosymbolic computing or quantum computing once I get tired of corporate politics; haven't decided yet.)
  4. etc.

No one job is a constant stream of discovery, joy, and focus. Everything gets old after a while, so be prepared to refresh your career again in 3-5 yrs.

19

thedabking123 t1_jc93iop wrote

That's not the way the system works.

You're using symbolic logic; its "thinking" is more like an intuition: a vastly more accurate intuition than ours, but limited nonetheless.

And the kicker? It's an intuition about which words, characters, etc. you are expecting to see. It doesn't really reason things out; it doesn't hold concepts of objects, numbers, mathematical operators, etc.

It intuits an answer, having seen a billion similar equations in the past, and guesses at which characters on the keyboard you're expecting to see based on pattern matching.
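
To make that concrete, here's a toy sketch of greedy next-token decoding; the scoring function is a random stand-in for a trained network (not any real model or library), so all it can do is pick "likely-looking" characters:

```python
# Toy illustration of greedy next-token decoding. The scoring function below is
# a random stand-in for a trained network: the point is that the "answer" to an
# arithmetic prompt is whichever characters score as most likely to come next,
# not the result of actually evaluating the expression.
import math
import random

VOCAB = list("0123456789 ")

def next_token_scores(prompt: str) -> dict:
    # Stand-in for a neural net forward pass; a real LLM's scores come from
    # pattern matching over its training data, not from doing the math.
    rng = random.Random(hash(prompt) % (2**32))
    return {tok: rng.random() for tok in VOCAB}

def softmax(scores: dict) -> dict:
    m = max(scores.values())
    exps = {t: math.exp(s - m) for t, s in scores.items()}
    z = sum(exps.values())
    return {t: e / z for t, e in exps.items()}

def greedy_decode(prompt: str, max_tokens: int = 4) -> str:
    out = prompt
    for _ in range(max_tokens):
        probs = softmax(next_token_scores(out))
        out += max(probs, key=probs.get)  # emit the most "expected" character
    return out

print(greedy_decode("127 + 391 = "))  # plausible-looking digits, not real arithmetic
```

Swap a real model's logits in for `next_token_scores` and the loop is the same: the mechanism never evaluates `127 + 391`, it only predicts what text usually follows it.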

21

thedabking123 t1_j9664jt wrote

Sometimes this can unlock other knowledge that has practical applications.

Einstein’s equations were considered useless… until they later helped unlock ultra-precise GPS and other practical applications.

5

thedabking123 t1_j46pulo wrote

Reply to comment by L43 in [D] Bitter lesson 2.0? by Tea_Pearce

The one thing that could blow all this up is requirements for explainability, which could push the industry toward lower-cost (but maybe lower-performance) methods like neurosymbolic computing, whose predictions are much more understandable and explainable (see the toy sketch at the end of this comment).

I can see something to do with self-driving cars (or LegalTech, or HealthTech) resulting in a terrible prediction with real consequences. That would then drive a public backlash against unexplainable models, and maybe laws against them too.

Lastly, this would make deep learning models and LLMs less attractive if they fall under new regulatory regimes.
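
To make that contrast concrete, here's a minimal, purely illustrative sketch (made-up rules and thresholds, not any real system) of a symbolic, rule-based predictor where every prediction comes with the rule that produced it:

```python
# Toy contrast for explainability: a symbolic, rule-based predictor can return
# the exact rule that fired, so each prediction carries a human-readable
# justification. The rules and thresholds below are invented for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]
    label: str

RULES = [
    Rule("pedestrian_close", lambda s: s["pedestrian_distance_m"] < 5.0, "BRAKE"),
    Rule("red_light", lambda s: s["light"] == "red", "BRAKE"),
    Rule("clear_road", lambda s: s["pedestrian_distance_m"] > 20.0 and s["light"] == "green", "PROCEED"),
]

def predict_with_explanation(scene: dict) -> tuple:
    # Unlike a deep net's opaque score, the output names its reason directly.
    for rule in RULES:
        if rule.condition(scene):
            return rule.label, f"rule '{rule.name}' fired"
    return "SLOW", "no rule matched (conservative fallback)"

label, why = predict_with_explanation({"pedestrian_distance_m": 3.2, "light": "green"})
print(label, "--", why)  # BRAKE -- rule 'pedestrian_close' fired
```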

5

thedabking123 t1_j0i2bon wrote

I think it depends on how quickly they deal with "hallucinations": situations where the responses don't make sense.

As these reduce over time, more and more use cases will become realistic.

  • Fault-tolerant use cases are already here today, like inspiring copy for emails or reports. These use cases don't need exact responses, since a human goes the last mile and the cost of a mistake is very low.
  • Very fault-intolerant use cases, like recommending diagnoses in healthcare, will likely need extremely low levels of hallucination, because the cost of a mistake here is extremely high (see the rough expected-cost sketch below).
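
A back-of-the-envelope sketch of why those two categories tolerate very different hallucination rates (every number is invented purely for illustration):

```python
# Back-of-the-envelope expected cost of model mistakes; all numbers are made up
# purely to illustrate why tolerable hallucination rates differ by use case.

def expected_cost(hallucination_rate: float, cost_per_mistake: float, uses_per_year: int) -> float:
    # Expected annual cost = error rate x cost of one error x usage volume.
    return hallucination_rate * cost_per_mistake * uses_per_year

# Fault tolerant: email copy. A bad suggestion costs roughly a minute of editing.
print(expected_cost(0.05, cost_per_mistake=1, uses_per_year=10_000))          # 500.0

# Fault intolerant: diagnostic suggestions. One bad call is enormously costly.
print(expected_cost(0.05, cost_per_mistake=100_000, uses_per_year=10_000))    # 50000000.0
print(expected_cost(0.0001, cost_per_mistake=100_000, uses_per_year=10_000))  # 100.0
```

The lever that changes between the two bullets is `cost_per_mistake`; when it's huge, only a tiny hallucination rate keeps the expected damage acceptable.
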
11

thedabking123 t1_ir6b31z wrote

Okay, there's a lot to unpack here, and I'm assuming you're new to the field given the way you framed your question.

Are you asking about use cases for AI in government? If so, there are many thousands of ways it could be used, from surveillance of cities for crime to tax-fraud detection.

Are you asking about frameworks for determining whether a particular use case is ethical, fair, transparent, etc.? There are many emerging frameworks around explainability, interpretability, fairness, and so on (a tiny example of one such check is sketched at the end of this comment).

Are you asking how effective those frameworks for fairness, transparency, etc. are at evaluating each use case? Well, that's a much deeper question, and it's still something everyone is exploring.
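
For a concrete flavour of what such frameworks check, here's a minimal sketch of one common (and admittedly simplistic) fairness metric, demographic parity difference; the scenario and numbers are made up, and this isn't any particular toolkit's API:

```python
# One tiny example of a fairness check many frameworks include in some form:
# demographic parity difference, i.e. how much the positive-prediction rate
# differs between two groups. The data and scenario below are made up.

def positive_rate(predictions: list) -> float:
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_a: list, preds_b: list) -> float:
    # 0.0 means both groups receive positive predictions at the same rate.
    return abs(positive_rate(preds_a) - positive_rate(preds_b))

# e.g. a benefits-eligibility model's approvals (1) / denials (0) per group
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # 62.5% approved
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 25.0% approved

print(demographic_parity_difference(group_a, group_b))  # 0.375 -> a gap worth investigating
```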

4