
[HumanLanguageInterpreter] Allow HLI chaining #2904

Closed
GiviMAD opened this issue Apr 9, 2022 · 6 comments
Labels
enhancement (An enhancement or new feature of the Core), PR pending

Comments

@GiviMAD
Member

GiviMAD commented Apr 9, 2022

I have an open PR on the add-ons repository (openhab/openhab-addons#12260) for a customizable HLI, and I would like to be able to use it in combination with other interpreters.

It is not quite clear to me how this can be done. I was thinking of adding a new method to the HLI interface that returns a boolean (maybe called 'understand' or something similar), so the core can check whether a certain HLI can process a specific text while keeping this logic inside each interpreter. Then we would just need to implement a way of sorting the interpreters.
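A minimal sketch of what the proposed boolean check could look like. The names (`ChainableInterpreter`, `understand`, `EchoInterpreter`) are illustrative only and not part of the actual openHAB `HumanLanguageInterpreter` interface:

```java
import java.util.Locale;

// Hypothetical sketch of the proposal: a boolean check that lets the core
// ask an interpreter whether it can handle a given text before delegating.
interface ChainableInterpreter {
    /** Returns true if this interpreter is able to process the given text. */
    boolean understand(Locale locale, String text);

    /** Interprets the text; only called once understand() returned true. */
    String interpret(Locale locale, String text);
}

// Trivial implementation, just to illustrate the contract.
class EchoInterpreter implements ChainableInterpreter {
    @Override
    public boolean understand(Locale locale, String text) {
        return text != null && !text.isBlank();
    }

    @Override
    public String interpret(Locale locale, String text) {
        return "echo: " + text;
    }
}
```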

WDYT?

Also, I would like to point out that there is a getGrammar method on the HLI interface. I think the intention was to export the HLI capabilities in a common format so the core can analyze them, but I see this as a difficult feature to implement in all the interpreters, so I think it could be removed if we add the new one.

I can work on this if anybody is interested, but I would like to get some feedback from the maintainers and other contributors first.

Regards!

@lolodomo
Contributor

lolodomo commented Apr 9, 2022

We currently have this method:

    /**
     * Interprets a human language text fragment of a given {@link Locale}
     *
     * @param locale language of the text (given by a {@link Locale})
     * @param text the text to interpret
     * @return a human language response
     */
    String interpret(Locale locale, String text) throws InterpretationException;

Why not simply consider throwing InterpretationException when the HLI does not understand the text? We should check whether this is the solution currently implemented by existing HLIs.
We should of course also check that this exception is correctly handled everywhere this method is called. This is done correctly in the dialog processor, for example.

Then your idea is to have an ordered list of HLIs to consider until one of them is able to handle a text. This could be done by replacing the HLI parameter in the dialog processor with an ordered list of HLIs. All "callers" would then be updated to replace the HLI ID string parameter with a comma-separated list of HLI IDs.
Some of the rule actions which allow the user to provide an HLI ID should also be updated in the same way, to give the user the opportunity to provide a list of HLIs.
This change would be backward compatible.
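The chaining loop could be sketched roughly like this. The class, interface, and exception below are simplified stand-ins, not the actual openHAB core types: an InterpretationException is treated as "not understood" and the next interpreter in the ordered list is tried.

```java
import java.util.List;
import java.util.Locale;

// Hypothetical sketch of the chaining idea: try each HLI in order until
// one understands the text; rethrow the last failure if none does.
public class HliChain {

    public static class InterpretationException extends Exception {
        public InterpretationException(String message) { super(message); }
    }

    public interface Interpreter {
        String interpret(Locale locale, String text) throws InterpretationException;
    }

    public static String interpret(List<Interpreter> orderedHlis, Locale locale, String text)
            throws InterpretationException {
        InterpretationException last = null;
        for (Interpreter hli : orderedHlis) {
            try {
                // The first interpreter that understands the text wins.
                return hli.interpret(locale, text);
            } catch (InterpretationException e) {
                last = e; // not understood here, fall through to the next HLI
            }
        }
        // No interpreter in the list understood the text.
        throw last != null ? last : new InterpretationException("No interpreter configured");
    }
}
```

Under this sketch, a comma-separated HLI ID parameter would simply be split and resolved into the ordered list; a single ID keeps working unchanged, which is what makes the change backward compatible.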

And regarding your comment about getGrammar, I agree with you.

WDYT ?

@lolodomo
Contributor

lolodomo commented Apr 9, 2022

Regarding the first point, it looks like the system built-in HLI is well implemented, meaning it throws an exception if the text cannot be understood:
/~https://github.com/openhab/openhab-core/blob/main/bundles/org.openhab.core.voice/src/main/java/org/openhab/core/voice/text/AbstractRuleBasedInterpreter.java#L110

The rule-based HLI just sends the text as a command to a specific item. There is no check whether the text is understood (that is left to the user rule), so no exception is thrown by this HLI. But this is a very specific HLI.

    public String interpret(Locale locale, String text) throws InterpretationException {

And I can confirm that InterpretationException is correctly handled everywhere.

For your new HLI, the correct implementation would be to throw an InterpretationException when the text is not understood.

So the first point is clear for me.
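As an illustrative sketch (not the actual add-on code) of the convention described above, an HLI would signal "text not understood" by throwing an InterpretationException rather than returning an error string; the caller (e.g. the dialog processor) then turns that exception into the "sorry" answer. The phrase table and class name here are hypothetical:

```java
import java.util.Locale;
import java.util.Map;

// Hypothetical interpreter: answers a fixed set of phrases and throws
// InterpretationException for anything it does not understand.
public class KeywordInterpreter {

    public static class InterpretationException extends Exception {
        public InterpretationException(String message) { super(message); }
    }

    // Hypothetical fixed phrase table, just for the example.
    private final Map<String, String> responses =
            Map.of("turn on the light", "Ok, light turned on");

    public String interpret(Locale locale, String text) throws InterpretationException {
        String answer = responses.get(text.toLowerCase(locale).trim());
        if (answer == null) {
            // Not understood: throw instead of returning a "sorry" string.
            throw new InterpretationException("Sorry, I don't understand");
        }
        return answer;
    }
}
```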

@GiviMAD
Member Author

GiviMAD commented Apr 9, 2022

@lolodomo I didn't realize the "sorry" message is produced by throwing an exception; I will align the new interpreter with that.
And knowing that, yes, the proposal about adding a new method makes no sense.

> Then your idea is to have an ordered list of HLI to consider until one of them is able to handle a text. This could be done by a replacing the HLI parameter in the dialog processor by an ordered list of HLI. All "callers" should then be updated to replace the HLI ID string parameter by a comma separated list of HLI IDs.
> Some of the rule actions which allow the user to provide a HLI ID should also be updated the same way to give the user the opportunity to provide a list of HLI.
> This change will be backward compatible.

Sounds good to me. I thought it would be more difficult to keep this backward compatible, but after reading this explanation it seems easy. I will open a PR following this approach.

@lolodomo
Contributor

lolodomo commented Apr 9, 2022

> I will open a PR following this.

Take care to retrieve the latest code; the listen-and-answer mode is now merged. Your changes will also impact that part.

@lolodomo
Contributor

lolodomo commented Apr 9, 2022

Please remove getGrammar in a separate PR.

@J-N-K added the "enhancement (An enhancement or new feature of the Core)" and "PR pending" labels Apr 30, 2022
@lolodomo
Contributor

To be closed now that #2906 has been merged.

@J-N-K J-N-K closed this as completed May 16, 2022