• SWISTWIT for greater clarity and accuracy in explanation and demonstration
• With white-boarding for illustration
• Screen-sharing to facilitate discussion
• Native Language Chat in text or speech, with automatic translation in real time among multiple participants in a conversation, each writing or speaking, and listening, in their preferred language
Use it as a Virtual Interpreter in any situation
• TMU™ with MRESENCE™ is available in Web, iOS App and Android App formats
Users may use any of the 3 formats of TMU™ to interact with one another!
Budroid with MRESENCE™ for remote care and recreation for the elderly; CJ MRESENCE™ for routine and incidental reporting; and eGovernment operations incorporating TMU™ with MRESENCE™ services so that registration forms, license renewals and utility-bill payments can be completed without in-person service. This will greatly reduce the long queues of consumers waiting for their turn to have a face-to-face conversation or interaction with a service agent.
TMU™ with MRESENCE™ is designed to offer better features than the Zoom video-conference service and WhatsApp. Its outstanding features include:
• Presence in Mixed Reality, a feature named SWISTWIT (See What I See, Touch What I Touch), for pin-pointing and finger-pointing to give greater clarity and accuracy in interaction and discussion
• Native Language Chat in text or speech with automatic translation in real time. MRESENCE™'s Native Language Chat feature works with the following:
Speech-to-Speech, Speech-to-Text and Text-to-Speech languages include:
Arabic, Catalan, Chinese, Czech, Danish, Dutch, English (UK) and (US), Finnish, French, Galician, German, Greek, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Norwegian, Polish, Portuguese, Portuguese (Brazilian), Russian, Slovak, Spanish, Swedish, Thai and Turkish.
Text-to-Text languages include:
Afrikaans, Albanian, Arabic, Assamese (India), Azeri (Turkish), Belarusian, Bengali (India), Bosnian, Bulgarian, Catalan, Chinese, Croatian, Czech, Danish, Dari (Afghanistan), Divehi, Dutch, English (UK) + (US) + (AUS), Estonian, Filipino, Finnish, French, Galician, Georgian, German, Greek, Gujarati (India), Hausa (Ghana/Africa), Hebrew, Hindi, Hungarian, Igbo (Nigeria), Indonesian, isiXhosa (Zimbabwe), Italian, Japanese, Kannada (India), Kazakh (Kazakhstan), Khmer (Cambodia), Kinyarwanda (Uganda), Kiswahili (Tanzania), Korean, Kurdish (Iran), Lao (Laos), Latvian, Lithuanian, Macedonian (Slavia), Malay, Malayalam (India), Maltese, Marathi (India), Maori, Mongolian, Nepali, Norwegian, Pashto (Afghanistan), Persian, Polish, Portuguese, Portuguese-Brazilian, Punjabi (India), Romanian, Russian, Serbian, Sinhala (Sri Lanka), Slovak, Slovenian, Somali, Spanish, Swedish, Tamil (India), Telugu (India), Thai, Tibetan (Tibet), Turkish, Ukrainian, Urdu (India), Uzbek (Uzbekistan), Vietnamese, Yoruba (Nigeria) and Zulu.
• White-Boarding for drawing with a finger on the screen of the Smartphone/Tablet or on the images of a VR stream, or, in the case of the Web version of TMU™ with MRESENCE™, drawing with a mouse
• Capture of the entire scenario of an incident or situation in multi-media for recording
• Curation of the multi-media images of the VR stream prior to storage, to facilitate retrieval of the images
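The Native Language Chat behaviour described above, where each participant writes and reads in a preferred language, can be sketched as a per-recipient translation relay: a message is translated once per distinct target language and fanned out to every other participant. Everything below is illustrative; `stub_translate` is a hypothetical stand-in for a real machine-translation back end, not the MRESENCE™ service itself.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Hypothetical stand-in for a real machine-translation service.
# It only tags the text with the language pair so the routing is visible.
def stub_translate(text: str, src: str, dst: str) -> str:
    if src == dst:
        return text
    return f"[{src}->{dst}] {text}"

@dataclass
class Participant:
    name: str
    language: str  # preferred language code, e.g. "en", "es", "zh"

def relay_message(sender: Participant,
                  text: str,
                  participants: List[Participant],
                  translate: Callable[[str, str, str], str] = stub_translate
                  ) -> Dict[str, str]:
    """Translate one chat message once per distinct target language
    and deliver the result to every participant except the sender."""
    cache: Dict[str, str] = {}   # target language -> translated text
    delivered: Dict[str, str] = {}
    for p in participants:
        if p.name == sender.name:
            continue
        if p.language not in cache:
            cache[p.language] = translate(text, sender.language, p.language)
        delivered[p.name] = cache[p.language]
    return delivered

room = [Participant("Ana", "es"), Participant("Ben", "en"), Participant("Chen", "zh")]
out = relay_message(room[1], "Hello everyone", room)
# Ana receives the Spanish rendering, Chen the Chinese one; Ben (the sender) gets nothing.
```

Caching per target language, rather than translating per recipient, keeps the translation cost proportional to the number of distinct languages in the room rather than the number of participants.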
Presence in Mixed Reality
SWISTWIT ("See What I See Touch What I Touch")
SWISTWIT improves video conference service by enabling greater clarity and accuracy in explanation and demonstration using finger-pointing or hand gestures on the remote party's video stream in real time.
The local user views the remote user's video stream on a smartphone and holds a hand behind the smartphone to point or gesture. The smartphone's rear camera captures the local user's hand, the hand is detected in the local video stream, and the images of the hand and the remote user's video stream are merged in real time for both users to see. The image of the local user's hand superimposed on the remote user's environment simulates what the local user would do if present in the same physical space and time as the remote user.
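The merge step described above can be sketched as a masked overlay: wherever an upstream hand-detection stage has marked a pixel as belonging to the hand, the local pixel replaces the remote one. This is a minimal illustration with toy grayscale frames, not the actual MRESENCE™ pipeline; the mask is assumed to come from a separate detector.

```python
import numpy as np

def composite_hand(remote_frame: np.ndarray,
                   local_frame: np.ndarray,
                   hand_mask: np.ndarray) -> np.ndarray:
    """Overlay the pixels of the locally detected hand onto the remote
    party's video frame. `hand_mask` is a boolean array marking which
    local pixels belong to the hand (assumed to be produced by an
    upstream hand-detection step)."""
    merged = remote_frame.copy()
    merged[hand_mask] = local_frame[hand_mask]
    return merged

# Toy 4x4 grayscale frames: remote scene is uniform 100, local hand is 255.
remote = np.full((4, 4), 100, dtype=np.uint8)
local = np.full((4, 4), 255, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True  # pretend the detector found a hand here

merged = composite_hand(remote, local, mask)
# merged shows the hand pixels on top of the remote scene; remote is untouched.
```

In a real pipeline the same per-pixel replacement would run on colour frames at video rate, with the detector (for example a hand-segmentation model) refreshing the mask each frame.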
MRESENCE™ service is available in either a web version or an App version for iOS-compliant or Android-OS-compliant Smartphones/Tablets. A user of MRESENCE™ in any of the 3 formats can communicate or interact over the Internet with others in a group conference.
(a) During the group interaction, a user may point the rear camera of the Smartphone (or Tablet) at an object or situation and have the entire scenario captured in multi-media and transmitted as VR (Virtual Reality) streaming to the other MRESENCE™ users in the group communication.
(b) Any of the users in the group interaction, while viewing another user's VR Streaming, can hold a hand behind his/her Smartphone so that the rear camera captures it, and the image of the hand is merged with the other user's VR Streaming. The user can finger-point (or pin-point or gesture) on the other user's VR Streaming while having a voice conversation with that user, adding clarity and accuracy to the visual presentation.
(c) The image showing the finger-pointing on the VR Streaming is transmitted to all the Smartphones in the group and appears on their screens.
(d) The users get to see the finger-pointing in real time while having voice discussion.
(e) In the case where a user in the group interaction is using the web version of MRESENCE at a computer (which has no rear camera) to view the other user's VR Streaming described in (b) above, the user may use the mouse of the computer to point/draw on the other user's VR Streaming. The image of the pointing/drawing made with the mouse is merged with the other user's VR Streaming.
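The mouse-based pointing/drawing in (e) can be sketched as burning a trail of sampled pointer positions into a copy of each frame before it is re-broadcast to the group. The function below is a simplified, hypothetical illustration using grayscale frames, not the actual merge used by MRESENCE™.

```python
import numpy as np

def overlay_pointer_trail(frame: np.ndarray,
                          points: list,
                          radius: int = 1,
                          value: int = 255) -> np.ndarray:
    """Stamp each sampled mouse position as a small bright square onto
    a copy of a grayscale video frame, leaving the original untouched."""
    h, w = frame.shape
    out = frame.copy()
    for (x, y) in points:
        y0, y1 = max(0, y - radius), min(h, y + radius + 1)
        x0, x1 = max(0, x - radius), min(w, x + radius + 1)
        out[y0:y1, x0:x1] = value
    return out

frame = np.zeros((8, 8), dtype=np.uint8)   # stand-in for one VR-stream frame
trail = [(2, 2), (3, 3), (4, 4)]           # sampled mouse positions as (x, y)
annotated = overlay_pointer_trail(frame, trail)
# annotated carries the drawn trail; frame itself is unchanged.
```

The annotated frame, rather than the raw one, would then be what is transmitted to every Smartphone in the group, matching step (c) above.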