thedabking123 t1_jcfupuc wrote
Reply to comment by Hydreigon92 in In your experience, are AI Ethics teams valuable/effective? [D] by namey-name-name
thank god that only applies to giant platforms... Our firm would crumble in the face of that.
thedabking123 t1_jcbbado wrote
Reply to comment by Jackdaw99 in Exam results for recently released GPT 4 compared to GPT 3.5 by balancetheuniverse
lol, it may well make the same mistake if enough people on the internet make it... OpenAI trains the model on huge amounts of web text.
thedabking123 t1_jc93iop wrote
Reply to comment by Jackdaw99 in Exam results for recently released GPT 4 compared to GPT 3.5 by balancetheuniverse
that's not the way the system works.
You're using symbolic logic; its "thinking" is more like intuition: a vastly more accurate intuition than ours, but limited nonetheless.
And the kicker? It's an intuition about which words, characters, etc. you're expecting to see. It doesn't really reason things out, and it doesn't hold concepts of objects, numbers, mathematical operators, etc.
It intuits an answer, having seen a billion similar equations in the past, and guesses at which characters you're expecting to see based on pattern matching.
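A toy illustration of that pattern-matching intuition (the corpus and code here are made up for illustration, and are vastly simpler than anything GPT actually does): a bigram model that predicts the next token purely from how often it has seen each continuation.

```python
from collections import Counter, defaultdict

# Tiny stand-in for "a billion similar equations" (illustrative only)
corpus = "2 + 2 = 4 . 3 + 3 = 6 . 2 + 2 = 4 . 2 + 2 = 5 .".split()

# Count which token follows which -- pure frequency, no arithmetic
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Guess the most frequently seen continuation -- pattern matching, not logic."""
    return following[token].most_common(1)[0][0]

# It "intuits" 4 after "=" because 4 followed "=" most often in the data,
# not because it understands addition.
print(predict_next("="))  # -> 4
```

Note that if the corpus had contained mostly wrong equations, the model would confidently predict the wrong answer, which is exactly the point about popular mistakes above.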
thedabking123 t1_j9664jt wrote
Reply to comment by Rich_Acanthisitta_70 in Physicists nail down the most precise value yet of the electron magnetic moment. A newly measured value of an electron’s magnetic moment — a property of its spin and charge — is twice as precise as the one physicists have used for the past 14 years. by MistWeaver80
Sometimes this can unlock other knowledge that has practical applications.
Einstein’s equations were considered useless… until, decades later, they helped unlock ultra-precise GPS and other practical applications.
thedabking123 t1_j46pulo wrote
Reply to comment by L43 in [D] Bitter lesson 2.0? by Tea_Pearce
The one thing that could blow all this up is a requirement for explainability, which could push the industry toward lower-cost (but maybe lower-performance) methods like neurosymbolic computing, whose predictions are much more understandable and explainable.
I can see something to do with self-driving cars (or LegalTech, or HealthTech) resulting in a terrible prediction with real consequences. That would drive public backlash against unexplainable models, and maybe laws against them too.
That, in turn, would make deep learning models and LLMs less attractive if they fell under new regulatory regimes.
thedabking123 t1_j0i2bon wrote
Reply to [D] What kind of effects ChatGPT or future developments may have on job market? by ureepamuree
I think it depends on how quickly they deal with "hallucinations" or situations where the responses don't make sense.
As these reduce over time, more and more use cases will become realistic.
- Fault-tolerant use cases are already viable today, like drafting copy for emails or reports. These don't need exact responses: a human goes the last mile, and the cost of a mistake is very low.
- Fault-intolerant use cases, like recommending diagnoses in healthcare, will likely need extremely low hallucination rates. The cost of a mistake here is extremely high.
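One hedged sketch of how that split might be operationalized (the thresholds, use-case names, and confidence scores below are invented for illustration, not from any real product): gate the model's output on a confidence score, with a far stricter bar for high-stakes use cases.

```python
# Hypothetical per-use-case confidence bars (illustrative numbers only)
THRESHOLDS = {
    "email_copy": 0.30,         # fault tolerant: a human edits the draft anyway
    "medical_diagnosis": 0.99,  # fault intolerant: mistakes are very costly
}

def should_surface(use_case: str, model_confidence: float) -> bool:
    """Only show the model's answer when confidence clears the use case's bar."""
    return model_confidence >= THRESHOLDS[use_case]

# A middling answer is fine for drafting an email, not for a diagnosis
print(should_surface("email_copy", 0.6))         # -> True
print(should_surface("medical_diagnosis", 0.6))  # -> False
```

In practice the hard part is getting a calibrated confidence score out of an LLM at all, which is part of why the fault-intolerant use cases lag.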
thedabking123 t1_ir6b31z wrote
Okay, there's a lot to unpack here, and I'm assuming you're new to the field given how you phrased your question.
Are you asking about use cases for AI in government? If so, there are thousands of ways it could be used, from surveillance of cities for crime to tax fraud detection.
Are you asking about frameworks for determining whether a particular use case is ethical, fair, transparent, etc.? There are many emerging frameworks around explainability, interpretability, fairness, and so on.
Are you asking how effective those frameworks are at evaluating each use case? That's a much deeper question, and still something everyone is exploring.
thedabking123 t1_jcvmwoq wrote
Reply to [D] For those who have worked 5+ years in the field, what are you up to now? by NoSeaweed8543
It happens in every career in every field; I'm 37 and have been the PM of an AI-driven product for a while now.
I'm starting to care more about corporate leadership, and about using the money I've earned to enjoy life, than about the technical bits I've been cycling around on for a while.
Some ideas that may apply to you:
No one job is a constant stream of discovery, joy, and focus. Everything gets old after a while, so be prepared to refresh your career again in 3-5 years.