Multimodal input #2367
Conversation
The way I'm expecting this to work (and what seems like the only sensible way to make it work reliably) doesn't seem to be how it actually is, so I'll explain it here and you can explain how the implementation differs.

* Client supports multimodal input && protocol is negotiated:
* Server supports multimodal input && protocol is negotiated:
* Compat layer:
  * Client has multimodal input but no protocol negotiated:
  * Server has multimodal but no protocol negotiated:
@The-personified-devil there is quite a bit of confusion. "Server supports multimodal" is not a variable to keep track of. SteamVR does not support multimodal input the way the Quest does; SteamVR always had the ability to have controller + hand skeleton at the same time. With SteamVR Input 2.0, when using two pairs of devices, we can assign different skeletal levels and switch between them. The real and only use of enabling multimodal on the client is to use the extra skeletal data to replace the fake skeletal animations we have when pressing controller buttons.
I'm aware; it's a version of the code that supports multimodal vs an outdated version of the server code that has no knowledge of multimodal existing.
I'm also aware
Right, but they still kinda have to play together so it's still relevant
I think that's where most of the confusion stems from: I was trying to figure out how the detached controllers were implemented, and assuming certain code was for detached controllers.
* feat: ✨ Multimodal input
* Fix controllers and hands dropping to 0,0,0 when not visible
* Actually fix multimodal input support
* Address review comments
This PR adds multimodal support on the client and adds a multimodal protocol extension. It's important to understand that multimodal support and the multimodal protocol are two distinct, orthogonal features.
The multimodal protocol describes how to interpret data sent by the client. Without the multimodal protocol, controller tracking is described only by the `HAND_LEFT`/`HAND_RIGHT` devices in `device_motions`, while hand tracking requires both the hand device motions and the skeletons to be present. With the multimodal protocol, only the hand skeleton is required for hand tracking. The old protocol was designed without anticipating a feature like multimodal, so now we have to negotiate multimodal protocol support and switch it on only when both client and server advertise compatibility.
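The interpretation rule above can be sketched as follows. This is a hedged illustration, not ALVR's actual code: `HandData`, `Motion`, and `is_hand_tracked` are hypothetical names, and the 26-joint skeleton size follows the OpenXR hand-tracking convention.

```rust
#[derive(Clone, Copy)]
struct Motion; // placeholder for a device pose + velocity sample

struct HandData {
    motion: Option<Motion>,          // HAND_LEFT/HAND_RIGHT entry in device_motions
    skeleton: Option<[Motion; 26]>,  // per-joint motions, if hand tracking is active
}

/// Decides whether this hand should be treated as hand-tracked,
/// depending on whether the multimodal protocol was negotiated.
fn is_hand_tracked(data: &HandData, multimodal_protocol: bool) -> bool {
    if multimodal_protocol {
        // New protocol: a skeleton alone implies hand tracking.
        data.skeleton.is_some()
    } else {
        // Legacy protocol: both the hand device motion and the
        // skeleton must be present for hand tracking.
        data.motion.is_some() && data.skeleton.is_some()
    }
}
```

The key difference: under the legacy protocol, a skeleton without a device motion is ignored, whereas the multimodal protocol lets the skeleton stand on its own so the device motion slot can carry the controller pose instead.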
Multimodal support is enabled only when supported by the headset, and is controlled by the `multimodal_input` setting. Multimodal input is enabled in the headset regardless of whether the server supports the multimodal protocol, but actual multimodal behavior can be used by the server only if both peers support the multimodal protocol.
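To make the two orthogonal flags concrete, here is a minimal sketch of the gating logic described above. All names (`ClientCaps`, `ServerCaps`, and the two functions) are illustrative assumptions, not ALVR's real types:

```rust
struct ClientCaps {
    headset_supports_multimodal: bool,      // hardware/runtime capability
    advertises_multimodal_protocol: bool,   // client-side protocol version
}

struct ServerCaps {
    advertises_multimodal_protocol: bool,   // server-side protocol version
}

/// Multimodal *input* in the headset: depends only on headset support
/// and the `multimodal_input` setting, not on the server.
fn multimodal_input_enabled(client: &ClientCaps, setting_enabled: bool) -> bool {
    client.headset_supports_multimodal && setting_enabled
}

/// Multimodal *protocol* on the wire: active only when both peers
/// advertise compatibility during negotiation.
fn multimodal_protocol_active(client: &ClientCaps, server: &ServerCaps) -> bool {
    client.advertises_multimodal_protocol && server.advertises_multimodal_protocol
}
```

Note that `multimodal_input_enabled` can be true while `multimodal_protocol_active` is false (e.g. against an older server), which is exactly the compat case discussed earlier in the thread.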