No, I'm not kidding. Let me explain.
My first thought was to take the big charts of hand gestures from Mridangam and turn them into a live online reference chart. Several people could log in and use the web app not only to signal which gestures they were showing each other, but to see the meanings listed right there, along with all possible response gestures.
Such a web app would also enable even more complex rule sets without breaking the seamless nature of the spoken portion of the game. Which of course leads me to my ulterior motive: I want a game that provides tools for making a dramatic improvised podcast. With as little editing as possible.
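To make the idea concrete, here's a minimal sketch of how such a reference chart might be represented. The gesture names, meanings, and response lists below are all invented for illustration; a real chart would be filled in from Mridangam's actual tables.

```python
# Hypothetical gesture chart: each gesture maps to its meaning and the
# gestures that are legal responses to it. All entries are made up.
GESTURE_CHART = {
    "flat_palm": {
        "meaning": "thought balloon: my words are thought, not spoken",
        "responses": ["nod", "closed_fist"],
    },
    "closed_fist": {
        "meaning": "threat",
        "responses": ["flat_palm", "open_hands"],
    },
}

def describe(gesture):
    """Return the meaning and legal responses for a signaled gesture."""
    entry = GESTURE_CHART.get(gesture)
    if entry is None:
        return f"unknown gesture: {gesture}"
    responses = ", ".join(entry["responses"])
    return f"{entry['meaning']} (responses: {responses})"
```

A player signaling a gesture would just see `describe()` output for it, and a more complex rule set is only a bigger dictionary.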
Which is why I got hung up on Mridangam's "thought balloon" mechanic, wherein you raise a flat palm to signify that your words are being thought, not spoken, by your character. Unless the web app you were using to send your threats also recorded the incidence and duration of those signals - making it significantly harder to program - this would result in a confusing sound recording. How would you know for sure which portions of the recording you then needed to go back and add that echoey, internal-monologue sound effect to?
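For what it's worth, the recording part isn't conceptually hard, just extra work: the app only needs to timestamp when each signal starts and stops, relative to the start of the recording. A rough sketch of that bookkeeping, with hypothetical names throughout:

```python
import time

class SignalLog:
    """Record when a gesture signal turns on and off, as offsets from the
    start of the session, so the matching spans of audio can get the
    internal-monologue effect in post. Purely illustrative."""

    def __init__(self):
        self.start_time = time.monotonic()
        self._open = {}      # gesture -> start offset, while held
        self.intervals = []  # finished (gesture, start, end) spans

    def signal_on(self, gesture):
        self._open[gesture] = time.monotonic() - self.start_time

    def signal_off(self, gesture):
        start = self._open.pop(gesture, None)
        if start is not None:
            end = time.monotonic() - self.start_time
            self.intervals.append((gesture, start, end))
```

After the session, `log.intervals` is exactly the list of spans you'd hand to your audio editor for the echoey effect.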
Then I realized: you'd just have to add it live. And sure enough, there's a Skype plugin called DoNaut
which adds realtime voice and sound effects to your Skype calls. I haven't gotten to play with it yet, but it has enough effects that it could possibly not just augment a (crude web-based simulation of a) gestural mechanic, but replace it entirely with sound effects.
I think some of the possibilities are obvious. Let's talk about them all anyway.