Open Captions lets the user select a word in the closed captions of a YouTube video and view its American Sign Language (ASL) representation. Users can search for videos on Open Captions and get back relevant results that have closed captions; while a video plays, the captions appear at the bottom of the screen. If no Sign Language representation is found for a word, the app falls back to fetching an image from the web and showing that instead. This project was initiated at Video Hackday in NYC and is still a work in progress. Suggestions, ideas, and feedback are welcome.
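A minimal sketch of that lookup-with-fallback flow; the endpoint paths and element ids here are hypothetical placeholders, not the project's actual API:

```javascript
// Minimal sketch of the word lookup with image fallback. The endpoint
// paths and element ids are hypothetical placeholders.
async function showSignForWord(word) {
  const panel = document.getElementById('sign-panel'); // assumed display area
  // Try the ASL dictionary first (hypothetical endpoint).
  const res = await fetch('/api/asl/' + encodeURIComponent(word));
  if (res.ok) {
    const { videoUrl } = await res.json();
    panel.innerHTML = '<video src="' + videoUrl + '" autoplay></video>';
  } else {
    // No sign found: fall back to a web image search (hypothetical endpoint).
    const img = await (await fetch('/api/image-search?q=' +
      encodeURIComponent(word))).json();
    panel.innerHTML = '<img src="' + img.imageUrl + '" alt="' + word + '">';
  }
}

// Each word in the caption strip triggers the lookup when clicked.
document.querySelectorAll('#captions .word').forEach(el =>
  el.addEventListener('click', () => showSignForWord(el.textContent.trim())));
```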
The code (still being improved) can be found here on GitHub.
A weekend project at Eyebeam, NYC. The video is below, and this blog entry covers the project in detail.
Accessible NYC is a mashup that shows which subways, parks, playgrounds, restrooms, and other public places around you are wheelchair accessible. It was built using Foursquare, which has a wealth of data on places in the city, and NYC Gov open data, which records which public places such as parks and subways are accessible. Built as a hack during the Foursquare Global Hackathon.
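Conceptually, the mashup joins nearby Foursquare venues against an accessibility lookup built from the NYC data. A rough sketch, where accessibleNames is a hypothetical set loaded from the NYC dataset, the credentials are placeholders, and the response shape is simplified:

```javascript
// Rough sketch of the mashup's core join: nearby venues from the Foursquare
// v2 venues API, filtered by an accessibility lookup built from NYC Gov data.
// CLIENT_ID/CLIENT_SECRET are placeholders; accessibleNames is a hypothetical
// Set of place names the NYC dataset marks as wheelchair accessible.
async function findAccessiblePlaces(lat, lng, accessibleNames) {
  const url = 'https://api.foursquare.com/v2/venues/search' +
    '?ll=' + lat + ',' + lng +
    '&client_id=CLIENT_ID&client_secret=CLIENT_SECRET&v=20111001';
  const data = await (await fetch(url)).json();
  // Keep only venues that appear in the accessibility dataset.
  return data.response.venues.filter(v => accessibleNames.has(v.name));
}

// Example: accessible places near Union Square.
// findAccessiblePlaces(40.7359, -73.9911, new Set(['14 St - Union Sq']));
```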
WORDS TO ASL
Research suggests that around 90% of hearing-impaired children are born to hearing parents, and if the parents do not know Sign Language, it is difficult for the child to learn ASL quickly. Wouldn’t it be great if people could learn sign language on the fly while reading articles on the web? Maybe they are reading stories to their children from a website and could have ready assistance for translating words from written language into Sign Language.
So, I thought of creating a simple Greasemonkey script that would show the ASL equivalent of a selected word.
The Center for Accessible Technology in Sign (CATS) at Georgia Tech has a huge database of words and their American Sign Language equivalents; they have created web pages for around 25,000 words. Whenever a word is selected, the script loads its ASL equivalent in an iframe at the top-right corner of the page.
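The core of the script is small. A simplified sketch (the CATS URL pattern below is illustrative, not the exact one the script uses):

```javascript
// ==UserScript==
// @name     Words to ASL (sketch)
// @include  http://*
// ==/UserScript==

// Simplified version of the behaviour: on selecting a word, load its CATS
// dictionary page into a fixed iframe at the top-right of the page.
const frame = document.createElement('iframe');
frame.style.cssText =
  'position:fixed; top:0; right:0; width:320px; height:260px; z-index:9999;';
document.body.appendChild(frame);

document.addEventListener('mouseup', function () {
  const word = window.getSelection().toString().trim().toLowerCase();
  if (/^[a-z]+$/.test(word)) {
    frame.src = 'http://cats.gatech.edu/asl/' + word + '.html'; // assumed URL pattern
  }
});
```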
How to try this out is documented here.
Facebook recently organized its first ever 24-hour hackathon at Georgia Tech, the first of five such events at campuses across the USA. Check this for pics and more information. These scripts constitute what Tarun, Manohar, Params, and I came up with as our hack submission.
We’ve seen a bunch of people on Facebook who post in their language but not in their script. To give you an example, an Indian student greeting a friend might type ‘Kaisa hai, yaar?’ instead of ‘कैसा है, यार?’. And it’s quite obvious why: a) most people only have English keyboards, and b) most people are used to typing in English, even if they’re thinking in another language.
On the flip side, when people do post in their own language, their friends from other countries or places probably won’t be able to tell what they’re saying. Of course, you could copy every post that looks interesting, find a translation engine, get the text translated, and discover that your time would’ve been better spent browsing cows you can buy on Farmville!
Now, Google has a pretty neat transliteration service that works with a bunch of Indic and other languages. They also have a translation service. Both are free and can be accessed via Ajax APIs.
Our hack was to let millions of multi-cultural Facebook users take advantage of these services with simple scripts for Greasemonkey. So you can now type away and see your text automatically get transliterated!
And when you come across posts from friends who’ve been using our script, you can figure out what they’re saying, even if you don’t speak their language!
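For the curious, here is roughly how the scripts call those services via the (since-deprecated) Google AJAX APIs; the status-box element id is a placeholder:

```javascript
// Sketch of the two features on top of the Google AJAX APIs. The loader
// script http://www.google.com/jsapi must be included first.
google.load('elements', '1', { packages: 'transliteration' }); // transliteration
google.load('language', '1');                                  // translation

google.setOnLoadCallback(function () {
  // Make Facebook's status box transliterate English typing into Hindi.
  var control = new google.elements.transliteration.TransliterationControl({
    sourceLanguage: google.elements.transliteration.LanguageCode.ENGLISH,
    destinationLanguage: [google.elements.transliteration.LanguageCode.HINDI],
    transliterationEnabled: true
  });
  control.makeTransliteratable(['status-box-id']); // placeholder element id

  // Translate a friend's post back to English.
  google.language.translate('कैसा है, यार?', 'hi', 'en', function (result) {
    if (!result.error) console.log(result.translation);
  });
});
```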
These scripts work for any Facebook page (the Greasemonkey script runs for URLs of the form http://*.facebook.com/* and https://*.facebook.com/*). To use the Greasemonkey script with Firefox, you’ll need to install the Greasemonkey extension; Chrome 4 supports Greasemonkey scripts natively. Once you’ve done that, download the scripts: Facebook Transliterate and Facebook Translate.
Once the script is installed, you’ll need to refresh any Facebook pages you may already have open, but other than that, it will load automatically. [Source: Tarun Yadav]
PiX-C (Pictures: Express & Communicate) is a tool for augmenting communication with visual input for children on the autism spectrum. This prototype application was built for the Student Contest at the ACM UIST 2010 Conference. The poster can be seen here, and the video demo is below.
VISUAL GUIDE TO INDIAN POLITICIANS v0.0
Overall Idea: PRS is one of the only organisations in the country that tracks the functioning of Parliament, providing a comprehensive and credible resource base for Parliament-specific data, background information, and analysis of key issues. Based on the rich data repository PRS has on politicians, I wanted to create interesting and slick data visualizations that give useful information to the viewer. This is my first attempt at doing so, and I would welcome more ideas from people.
Process: Initially a couple of us brainstormed on visualizations we could build from the data on Members of Parliament (MPs). The data consists of each MP’s name, sex, age, constituency, political party, educational qualifications, and participation in Parliament (questions asked, debates participated in). One idea was to plot the visualization on a map of India so that viewers can connect to their constituency more easily. The version I created shows two maps: the age distribution of MPs and the number of questions they asked in Parliament, the latter serving as a rough proxy for an elected representative’s involvement. The viewer can filter by political party and look at the age distribution of that party’s members. There is more data to be exploited and interesting correlations to be drawn from it.
I searched around for APIs that can plot visualizations on Google Maps. I could not find any great ones and finally ended up using this one from the Google Visualization Toolkit. It has a couple of limitations: it renders as a Flash object embedded in HTML, and only 400 data points can be plotted on the map. Since I wanted to get v0.0 out and gather feedback from people, I went ahead with this API, plotting around 350 data points on the Members of Parliament of India.
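For reference, plotting markers with this toolkit's Flash-based GeoMap looks roughly like the sketch below; the rows shown are illustrative, not actual PRS data:

```javascript
// Sketch of plotting MP data with the Flash-based GeoMap from the Google
// Visualization API (subject to the ~400-point limit mentioned above).
google.load('visualization', '1', { packages: ['geomap'] });

google.setOnLoadCallback(function () {
  var data = new google.visualization.DataTable();
  data.addColumn('number', 'Lat');
  data.addColumn('number', 'Long');
  data.addColumn('number', 'Questions asked'); // drives marker colour/size
  data.addColumn('string', 'MP');              // hover text
  // Illustrative rows; the real values come from the PRS repository.
  data.addRows([
    [28.61, 77.21, 120, 'MP from New Delhi'],
    [19.08, 72.88, 45, 'MP from Mumbai']
  ]);

  var geomap = new google.visualization.GeoMap(document.getElementById('map'));
  geomap.draw(data, { region: 'IN', dataMode: 'markers' });
});
```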
(Click on the image for the actual visualization; a Flash-enabled browser is required)
** NEW **
Data of political candidates of the 2009 Indian General Election is available at MyNeta.info – more in this blog post.
Age of MPs mapped against their activity (questions, debates, attendance) in the Lok Sabha. Visualization here.
Education level of Lok Sabha MPs mapped against their activity (questions, debates, attendance) in Parliament. Can be seen here.
RUBE GOLDBERG MACHINE
A Rube Goldberg machine or device is a deliberately over-engineered machine that performs a very simple task in a very complex fashion, usually including a chain reaction. [Source: Wikipedia] The simple task here was to raise a Georgia Tech flag.
Developed a working prototype, MiMiC (Muscle in Memory Controller). It’s a wearable system with sensors and force feedback that can be used in training for sports like archery and fencing to improve the precision of body positions.
Our project continues a previous SynLab project at Georgia Tech, emBodied Digital Creativity, which uses a puppet interface to play a simple teapot game. The puppet interface currently used to play the game is fragile and not adaptable to people of different sizes. Details of the Project
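The feedback loop at the heart of such a trainer can be sketched simply: compare live joint angles against a recorded reference pose and actuate feedback on joints that drift too far. The threshold and sample values below are assumptions for illustration, not MiMiC's actual parameters:

```javascript
// Sketch of a muscle-memory feedback loop: flag joints whose live angle
// deviates from the recorded reference pose by more than a threshold, so
// the system can actuate force feedback there. Values are assumed.
const THRESHOLD_DEG = 10; // allowed per-joint deviation, in degrees (assumed)

function feedbackPerJoint(liveAngles, referenceAngles) {
  // Returns true for each joint whose deviation exceeds the threshold.
  return liveAngles.map(function (a, i) {
    return Math.abs(a - referenceAngles[i]) > THRESHOLD_DEG;
  });
}

// Example: an archery draw with the elbow 15 degrees off the reference.
console.log(feedbackPerJoint([92, 45, 170], [90, 60, 172]));
// -> [false, true, false]: actuate feedback only at the elbow
```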
Video Demo -
Developed a working prototype of PROWESS (Proactive Wellness Environment Support System), which facilitates the collection of employees’ wellness data in an office environment using ubiquitous sensing technologies.
MOBILE MUSIC TOUCH
Designed the hardware and conducted a user study of the Mobile Music Touch (MMT) glove, a lightweight, wireless haptic music instruction system consisting of gloves and a mobile Bluetooth-enabled computing device (under Dr. Thad Starner).
Hand rehabilitation often consists of repetitive exercises, which may result in reduced patient compliance and decreased results. The Mobile Music Touch (MMT) is proposed as an engaging form of hand rehabilitation. MMT is a lightweight, wireless haptic music instruction system consisting of gloves and a mobile Bluetooth-enabled computing device, such as a mobile phone. Musical passages to be learned via “passive haptic learning” are loaded into the mobile device and played repeatedly while the user performs other tasks. As each note of the music plays, vibrators on each finger in the gloves activate, indicating which finger to use to play each note. We present observations from a pilot study of MMT used for hand rehabilitation for people with tetraplegia resulting from incomplete Spinal Cord Injury (SCI); observations from a study conducted on able-bodied people, providing baseline data for assessment methods; and observations on glove design for persons with tetraplegia.
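The passive haptic learning playback can be illustrated with a small sketch; the note-to-finger mapping and the buzz interface below are stand-ins for the actual MMT firmware:

```javascript
// Sketch of the passive haptic playback loop: as each note sounds, pulse the
// vibration motor on the finger assigned to that note. The mapping and motor
// interface are illustrative, not the actual MMT implementation.
const FINGER_FOR_NOTE = { C: 0, D: 1, E: 2, F: 3, G: 4 }; // thumb..pinky

function playPassage(notes, buzz) {
  // notes: [{ pitch: 'C', durationMs: 400 }, ...]
  // buzz(finger, ms): activates one glove vibrator (hardware abstracted).
  let t = 0;
  for (const note of notes) {
    setTimeout(() => buzz(FINGER_FOR_NOTE[note.pitch], note.durationMs), t);
    t += note.durationMs;
  }
}

// Example with a console-based stand-in for the glove.
playPassage(
  [{ pitch: 'C', durationMs: 400 }, { pitch: 'E', durationMs: 400 }],
  (finger, ms) => console.log('vibrate finger', finger, 'for', ms, 'ms'));
```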
This was published at the Pervasive Computing Technologies for Healthcare Conference, Munich, 2010.
Guide: Dr. Melody Moore Jackson
This project aims to develop and conduct feasibility testing of a P300 response based Neural Web Browser. After communication, one of the most profound quality-of-life improvements for people with locked-in syndrome is access to the internet. A Brain-Computer Interface (BCI) controlled web browser could enable control of finances, access to shopping, education, and possibly even employment. However, incorporating a BCI into a web browser requires a dynamically changing control interface to accommodate the nearly infinite possibilities of web page organization.
One of the main objectives of this project is to research and solve the fundamental issues in providing BCI access to the web in a natural, reliable, and effective way. We plan to use the P300 Browser built by Jeremy Johnson of the IMTC, Georgia Tech. This application can already emulate the Donchin matrix used in successful P300-based applications. We will first fully integrate the P300 Browser with the BCI2000 application framework, then use it to run a study with subjects and determine the fundamental factors that influence the control of a dynamic interface, such as a web browser, with a P300 response. Stimulus size, location, shape, color, and flash pattern are examples of the experimental variables we will study.
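To make the paradigm concrete, here is a sketch of the Donchin row/column flashing scheme that such a study manipulates; the matrix size is the classic 6x6 and the timing values are illustrative, not the study's actual parameters:

```javascript
// Sketch of the Donchin flash paradigm: rows and columns of a selection
// matrix flash in random order, and the target is inferred from which
// flashes evoke a P300 response. Timings below are illustrative.
const SIZE = 6;       // classic 6x6 Donchin matrix
const FLASH_MS = 100; // stimulus duration (assumed)
const ISI_MS = 75;    // inter-stimulus interval (assumed)

function flashSequence() {
  // 6 rows + 6 columns, shuffled so the flash order is unpredictable.
  const stimuli = [];
  for (let i = 0; i < SIZE; i++) {
    stimuli.push({ kind: 'row', index: i }, { kind: 'col', index: i });
  }
  for (let i = stimuli.length - 1; i > 0; i--) { // Fisher-Yates shuffle
    const j = Math.floor(Math.random() * (i + 1));
    [stimuli[i], stimuli[j]] = [stimuli[j], stimuli[i]];
  }
  return stimuli;
}

// One round of flashes; a real UI would highlight matrix cells instead.
let t = 0;
for (const s of flashSequence()) {
  setTimeout(() => console.log('flash', s.kind, s.index), t);
  t += FLASH_MS + ISI_MS;
}
```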
Based on our study results, we will propose a set of design standards that web pages need to adhere to in order to support such P300 response based browsing and navigation.
Such a study has not yet been conducted in Brain-Computer Interface research, so its results would be a novel (not easily predictable) and useful contribution to the field. Our study will cover the able-bodied population.