AURELIS is about respecting the total human being. This includes conscious and non-conscious processing.
This high-end view on Reinforcement Learning (R.L.) applies to Organic and Artificial Intelligence. Especially in the latter, we must be careful with R.L. now and forever, arguably more than with any other kind of A.I.

Reinforcement in a nutshell

You (the learner) perform action X toward goal Y and get feedback Z. Next time you Read the full article…
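The nutshell above (perform action X toward goal Y, receive feedback Z, adjust next time) can be sketched as a minimal reinforcement update. Everything concrete below — the two actions, the reward probabilities, the learning rate, the exploration rate — is an illustrative assumption, not something from the article.

```python
import random

# Hypothetical environment: two actions; action "b" reaches goal Y more often.
REWARD_PROB = {"a": 0.2, "b": 0.8}  # assumed for illustration

def feedback(action: str) -> float:
    """Feedback Z: 1.0 if the action moved us toward goal Y, else 0.0."""
    return 1.0 if random.random() < REWARD_PROB[action] else 0.0

def train(steps: int = 2000, alpha: float = 0.1, epsilon: float = 0.1) -> dict:
    """Learn value estimates via the classic incremental update
    Q(X) <- Q(X) + alpha * (Z - Q(X)): feedback shifts future choices."""
    q = {"a": 0.0, "b": 0.0}
    for _ in range(steps):
        # Explore occasionally; otherwise repeat the currently best action.
        if random.random() < epsilon:
            action = random.choice(list(q))
        else:
            action = max(q, key=q.get)
        z = feedback(action)
        q[action] += alpha * (z - q[action])  # reinforcement step
    return q

random.seed(0)
values = train()
```

After training, the learner's value estimates favor the action that earned more positive feedback — the "next time you…" of the excerpt made explicit as a learned preference.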
Health-related expectations can profoundly affect one’s health in the short and long term. Therefore, (re)appraisals can have profound effects on health or disease ― or not?

Two kinds of belief + a continuum

Two kinds, and how to change them: For a surface-level belief, one can look for conceptually valid arguments ― for example, a Read the full article…
Time and again, it strikes me as naive how engineers (sorry, engineers) think and talk about human-A.I. value alignment as if human values were unproblematic by themselves. Even more so, as if the alignment problem could be solved by thinking about it in a mathematical, engineering way. Just find the correct code or something of the kind? No Read the full article…