Submitted by nature_and_carnatic t3_zyp602 in MachineLearning
Mental-Swordfish7129 t1_j28a9fj wrote
I have an AI model I've been working on for some time now which I believe may be much more interpretable than many recent developments. It uses Bayesian model evidence as a core quantity, so it has already "prepared" evidence — an explanation of sorts — for why it "believes" what it "believes". This has made for an interesting development process, since I can watch its reasoning evolve. I could elaborate if you're interested.
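(Not the commenter's actual model, which isn't shown here — just a minimal sketch of the general idea that Bayesian model evidence doubles as an explanation. It compares the marginal likelihood of a hypothetical "fair coin" hypothesis against a flexible Beta-prior hypothesis; the log-evidence gap is the quantitative "reason" for preferring one belief over the other.)

```python
import math

def log_evidence_fixed(heads, tails, p):
    # Evidence of a hypothesis that fixes P(heads) = p:
    # just the log-likelihood of the data under that p.
    return heads * math.log(p) + tails * math.log(1 - p)

def log_evidence_beta(heads, tails, a=1.0, b=1.0):
    # Evidence of a hypothesis with a Beta(a, b) prior over P(heads):
    # the likelihood integrated over the prior, which has a
    # closed form via the Beta function (written with lgamma).
    return (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
            + math.lgamma(a + heads) + math.lgamma(b + tails)
            - math.lgamma(a + b + heads + tails))

heads, tails = 9, 1  # observed flips
fair = log_evidence_fixed(heads, tails, 0.5)
flexible = log_evidence_beta(heads, tails)
# The gap in log evidence is the model's "prepared" justification
# for believing the coin is biased rather than fair.
print(f"fair: {fair:.3f}  flexible: {flexible:.3f}  gap: {flexible - fair:.3f}")
```

A model built around quantities like these can always report the evidence behind each belief, rather than needing a post-hoc interpretability pass.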