The results of the project were brought together in interactive application prototypes that use sign language for both input and output in communication with Deaf users. These prototypes build on every part of the project: they rely on linguistic models of sign language during processing, exploit the multilingual lexicon and corpora, incorporate tools that automatically recognise many signs performed in front of a Microsoft Kinect camera, and use synthesis technology to present animated sign language with realistic 3D virtual characters.
Three prototypes have been developed:
- A search-by-example tool trained to identify the closest matches to a sign it is given
- A look-up tool which is able to display corresponding signs in several sign languages
- A Sign Wiki for collaborative development of sign language documents
The search-by-example system integrates recognition of isolated signs with an interface for searching an existing lexical database, demonstrating both the dictation capability of the user interface and its utility to sign language learners. The tool is trained to recognise signs in GSL and DGS. Users are presented with a list of potential matches for the sign they have performed in front of a Kinect camera.
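The matching step behind such a tool can be sketched as nearest-neighbour ranking. The sketch below is illustrative only: it assumes each sign is reduced to a fixed-length feature vector (for example, derived from Kinect skeletal tracking), and ranks lexicon entries by Euclidean distance to the query; the glosses and vectors are invented placeholders.

```python
import math

# Hypothetical lexicon: gloss -> feature vector (placeholder values).
LEXICON = {
    "HOUSE":  [0.9, 0.1, 0.3],
    "MOTHER": [0.2, 0.8, 0.5],
    "WORK":   [0.4, 0.4, 0.9],
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def closest_matches(query, k=3):
    """Return the k lexicon glosses closest to the query vector."""
    ranked = sorted(LEXICON.items(), key=lambda item: distance(query, item[1]))
    return [gloss for gloss, _ in ranked[:k]]

# A query vector close to HOUSE ranks HOUSE first.
print(closest_matches([0.85, 0.15, 0.25], k=2))
```

A real recogniser would of course use temporal models over motion data rather than a single static vector, but the ranked-candidate output shown to the user has this shape.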
Sign look-up tool
Natural sign languages differ from one another, just like spoken languages. While some signs are iconic and show similar formation across the languages used by the project, others are very different.
The sign look-up tool is a simple extension of the search tool: a Deaf user performs a sign and sees the corresponding sign in the four languages used by the project. A practical use is when sign language users encounter a sign from an unfamiliar language, for example in a video or when travelling to another country. They perform the sign for the look-up tool and, if the system recognises it, are shown a version in their own language.
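Conceptually, once recognition maps the performed sign to a language-independent concept, look-up is a table join from that concept to an entry in each project language. The sketch below assumes this structure; the concept IDs, language codes, and entry names are placeholders, not the project's actual data model.

```python
# Hypothetical parallel lexicon: concept -> per-language sign entry.
PARALLEL_LEXICON = {
    "concept:HOUSE": {
        "DGS": "dgs_haus",
        "GSL": "gsl_spiti",
        "BSL": "bsl_house",
        "LSF": "lsf_maison",
    },
}

def look_up(concept_id, target_language):
    """Return the target-language entry for a recognised concept,
    or None if the sign is not in the lexicon."""
    entry = PARALLEL_LEXICON.get(concept_id)
    if entry is None:
        return None
    return entry.get(target_language)

print(look_up("concept:HOUSE", "BSL"))
```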
Sign language Wiki (Sign-Wiki)
A major requirement of contemporary Web 2.0 applications is that user contributions are editable by an entire community. The oldest, and most popular, application of this type is a Wiki, where any contribution can be edited and refined, anonymously if so wished, by someone else.
In Dicta-Sign, a server was developed that provides the same service as a traditional Wiki, but in sign language. Instead of text as the output medium, a signing avatar presents the information.
The use of an avatar preserves the anonymity of the user and facilitates modification and reuse of information on the site. The system acts as a sign-language dictaphone, providing recording, playback, and editing. A user can put information onto the server in sign language by means of the Microsoft Kinect device. The system then matches the user's signs against a stored dictionary, and the matched signs are used to generate the movements of the signing avatar. The user may review the recognised signs as well as any other sign language content stored in the Sign-Wiki.