People whose native language is a sign language often find written text hard to understand. The objective of this project was to improve accessibility for hearing-impaired people in their daily activities, including but not limited to reading and entertainment. We used Natural Language Processing (NLP) to split the source text into vocabulary words, and Machine Learning to analyse the joints of the human body, capturing the skeleton in order to record sign language videos into a database. The methodology is to recall the prepared videos from the database, word by word, to translate text into sign language in real time. Data sources are essential to completing the translation; they include movie text files, direct device input, and text obtained through OCR with the Vision framework on smart devices. We implemented the application on a smart-device platform with real-time sign language captions for movies, sign language translation with OCR scanner support, and a sign language recorder, to reduce the inconvenience faced by hearing-impaired people. Hear Inclusive is an application for real-time sign language translation that helps hearing-impaired people have a better experience.
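
The core translation step described above (split text into vocabulary words, then recall a prerecorded sign-language clip for each word) can be sketched as follows. This is a minimal illustration, not the project's actual implementation: the database contents, file paths, and the `<missing:…>` fallback marker are all hypothetical.

```python
import re

# Hypothetical database mapping vocabulary words to prerecorded
# sign-language video clips (paths are illustrative only).
sign_video_db = {
    "hello": "clips/hello.mp4",
    "world": "clips/world.mp4",
    "read": "clips/read.mp4",
}

def tokenize(text: str) -> list[str]:
    """Split source text into lowercase vocabulary words."""
    return re.findall(r"[a-z']+", text.lower())

def translate_to_clips(text: str) -> list[str]:
    """Recall the prepared clip for each word in order; unknown
    words are marked so they can be recorded or fingerspelled."""
    clips = []
    for word in tokenize(text):
        clips.append(sign_video_db.get(word, f"<missing:{word}>"))
    return clips

if __name__ == "__main__":
    # Each recalled clip would then be played back in sequence
    # as the real-time sign language caption.
    print(translate_to_clips("Hello, world!"))
```

In the full system, the word-to-clip lookup would be driven by text arriving from any of the three sources (movie text files, device input, or OCR), and the resulting clips played back as an on-screen caption.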