Ironic that your captions are bad (no punctuation, occasionally wrong words, like ATT ADHD at 5:04), making it hard for deaf and hard-of-hearing viewers to fully access this video's content.
@paulmartin42 · 9 years ago
+J Freer I think you are being less than fair. Subtitles are not universally provided and are often far from perfect, but they usually give the sense. Speech recognition is hard enough in Glasgow, never mind through a camera with limited sound quality. I believe the tech industry has tried hard and has led the way in some cases.
@bizlibrarian · 9 years ago
+paul martin Hi! Thank you for your feedback on my comment. I agree I am being hard on Google as far as snark goes, but not in terms of expectations. Speech recognition is only the first step in captioning extemporaneous speech. It was impressive how well the speech recognition worked with so much potential noise in a large room and with the speaker walking around. That said, transcripts need to be cleaned up to account for punctuation, misspellings, or wrongly captured words. Captioning is not about conveying the essence of what is said; it is about relaying what was actually said and how it was said, and it should include other sounds in the room, like laughter or other sounds in the environment that are part of the experience.

I regularly caption my work videos using Camtasia's built-in captioning tool in combination with the Microsoft speech recognition software that comes with my computer. After I lay down a narrated video, Camtasia creates time-stamped captions, which I then edit to fix misheard words and add punctuation. It takes only a short amount of time to clean up the captions, and it is very easy to export the captioned file and upload it to YouTube when I post videos there. I can make and post a captioned five-minute video, in which I speak off the top of my head, in less than half an hour. In my situation the biggest barrier is the cost of Camtasia, around $300. A $25 headset with a microphone takes care of good audio, and YouTube takes care of server space for posting videos. I work for an employer that has no qualms about the time spent to do this and expects videos to be captioned cleanly and correctly.

Because this particular video is about accessibility, and because Google has the time, money, and labor available to take these extra steps, they should be called out. A small non-profit with limited financial and human resources could be excused for not adding captions, but then again, transcribing something by hand and then time-stamping the captions in YouTube is very doable. It just takes time, which most people choose not to take, or they choose to use imperfect speech recognition technology when a few extra steps could produce a more accessible end product. I do acknowledge that one big barrier is employers not giving employees the tools, resources, and mandate to caption.

Captions, and descriptive text for those who are visually challenged, should be provided universally. In the States this issue is getting litigated more and more as groups sue entities for not captioning: www.nytimes.com/2015/02/13/education/harvard-and-mit-sued-over-failing-to-caption-online-courses.html?_r=0 Maybe at some point YouTube will be the focus of one of these suits. Thanks again for adding to this conversation!
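For anyone who hasn't looked inside a caption file before, here is roughly what one cleaned-up, time-stamped cue looks like in the SRT format, which is one of the caption formats YouTube accepts for upload (the timestamps and wording below are invented purely for illustration, not taken from this video):

    1
    00:00:04,900 --> 00:00:08,200
    [audience laughter]
    So that's why we caption everything we publish.

Each cue is just a sequence number, a start and end time, and the text (including sound cues like laughter), which is why hand-correcting machine-generated captions is tedious but not technically hard.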