-
I was searching the net and found this StackOverflow question, which sort-of answers my question, which amounts to: "how can you handle failure in a plan triggered by the addition of a belief?" The answer there, provided by Jomi, says pretty simply:
So my follow-up question is: is there any option for handling failure in such plans, or is this just not possible? And if failure is a possibility, is the canonical pattern to have the `+belief` plan simply add an achievement goal, and then rely on the normal mechanism for handling goal failures? A follow-on question: is it generally the case that most plans should be triggered by achievement goals? Is that the more idiomatic way to proceed? Thanks again!
-
Hi, that is a good point to discuss. AFAIK, there is no "official" proposal for that. The problem, however, is known and has motivated some publications (http://dx.doi.org/10.1007/978-3-030-25693-7_3). Personally, I prefer to program in a goal-oriented style, so belief events quickly trigger goals. Among the advantages:

- failure handling
- all actions are explained by (can be assigned to) a goal

Jomi
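The pattern described above can be sketched in a few lines of Jason/AgentSpeak. This is only an illustrative sketch — the belief, goal, and action names are made up, not taken from the thread. The idea is that the `+belief` plan does nothing but post an achievement goal, so the standard goal-deletion (`-!goal`) failure mechanism applies:

```
// Belief-addition plan only delegates: it immediately posts an achievement goal.
+temperature(T) : T > 30
   <- !cool_down(T).

// The actual work happens in the plan for the goal...
+!cool_down(T)
   <- turn_on(fan).

// ...so failure can be caught with an ordinary goal-deletion plan,
// e.g. to log, wait, and retry.
-!cool_down(T)
   <- .print("could not cool down, retrying");
      .wait(1000);
      !cool_down(T).
```

If `turn_on(fan)` fails, the `-!cool_down(T)` plan is selected, which would not be possible had the action been placed directly in the `+temperature(T)` plan.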
-
Indeed an interesting discussion. I agree with Jomi that agents should
normally react to external events by adopting new goals.