Receiving and Incorporating Visual Information in English-to-ASL Interpreting

2019-05-22
CEUs: 0.275 (Professional Studies)

One of the most frustrating things I hear us interpreters say about our job is "craning necks" to see visual aids that hearing speakers display behind us. Thus, are we not easily receiving both the auditory and visual information that together form the speakers' messages? Visual information conveys essential message content and context that we need to perceive before raising our hands. In this workshop, participants will learn the importance of visual information and new ways to receive it.

One of the most frustrating things I hear deaf people say about interpreting is "shifting eyes back and forth" trying to follow the speakers, visual aids, and interpreters. Thus, do deaf consumers have trouble knowing what to look at and in what sequence? The degree to which deaf consumers are able to keep up with talks determines their level of understanding, participation, and interaction. In this workshop, participants will learn the importance of proper sequencing and new ways to incorporate visual information into our work.

If these situations resonate with you and you are wondering how to handle them anew, then please attend my workshop, Receiving and Incorporating Visual Information in English-to-ASL Interpreting. I open my talk with the rationale for examining visual information. Then, I ask you, the participants, to share your experiences and answer, "How much of a frustration is receiving visual information for you, and what do you do to reduce it?"

Next, I define visual information, describe its different types, and explain why it is so important. I present examples of how failing to receive visual information can negatively affect our work. I also demonstrate existing and new methods of receiving visual information with our backs to the speakers; one new method uses a tablet, app, mount, and stand. Finally, I define Visual Split-Attention and show its negative impact on deaf people's understanding of interpreted activities. I demonstrate existing methods of handling Visual Split-Attention and a new method for incorporating visual information into our production. It involves learning the preferences of deaf consumers and developing hand signals that effectively sequence what we suggest deaf consumers should look at and when.

July 11
9:00 am — 11:45 am

Steve Frank