The term Auditory Environmental Stimuli (AES) refers to sounds that occur in the environment but are not part of the spoken message (e.g., someone tapping a pencil, a dog barking, or a dropped book) and that are conveyed by cued language transliterators.
The relevance of a particular sound to a deaf or hard of hearing person varies by individual and by situation. For example, to a deaf consumer who is allergic to bee stings, the buzzing of a nearby bee may be relevant, if not essential, information. Cued language transliterators provide access to AES so that deaf consumers can filter out or act upon this information as they choose.
There are three main ways to represent AES. These techniques were first devised by Language Matters and are taught in detail in the CLT Professional Education Series courses. They are:
It is important that the transliterator distinguish environmental sounds from spoken text. In other words, it should be visually clear to the receiver that no one said, “boom.” This can be done by switching hands or through body shifting.
Representing environmental sounds matters because the representation can convey information such as duration, intensity, and proximity. However, the cued rendition (e.g., “jingle”) may not sufficiently indicate whether the sound comes from the ringing of small bells or the shifting of coins in a pocket, even though a hearing person could easily distinguish the two by sound alone. To make the provision of AES more equitable, the transliterator can follow the cued rendition (in this case, “jingle”) with a specifier, pointing to himself and cueing identifying information (e.g., “sounds like coins”).