Animal Behavior for Shelter Veterinarians and Staff
The last basic arrangement is negative punishment. In this case, the removal of a stimulus decreases the target behavior. For example, if a dog jumps on their owner to get the person’s attention, the owner might remove that attention by walking away or turning their back to the dog in an attempt to decrease the behavior. If the jumping up behavior decreases when attention is removed, this is an example of negative punishment. Negative punishment occurs when a behavior results in the removal of a pleasant stimulus, causing a decrease in the behavior’s occurrence in the future.
3.4 Effectiveness of Consequences
There are two major factors that determine the effectiveness of reinforcement and punishment: when and how often the consequences occur. Remember that operant conditioning takes place when a behavior is paired, or associated, with a consequence. It becomes increasingly difficult for that association to form if the consequence is delayed from the moment the behavior occurs (Wilkenfield et al. 1992). Therefore, timing (the when) is one important factor in the effectiveness of consequences during the acquisition of new behaviors.
Browne et al. (2013) demonstrated the importance of timing by attempting to teach dogs to sniff the inside of one of two containers, with the reinforcer delivered either immediately or after a 1-second delay. Most dogs (86%) learned the behavior within 20 minutes when treats were delivered immediately. In contrast, only 40% of dogs learned the behavior when treats were delayed by 1 second. Moreover, if a consequence is delayed from the moment of the target behavior, other behaviors may become associated with the consequence instead.
The problem of timing is a common one with pet owners. The following scenario might be familiar: Many dog owners come home to find that their dog has rummaged through the trash. In an attempt to punish trash‐rummaging behavior, the owner scolds the dog, perhaps by yelling or confining the dog to a crate. The problem, though, is that the dog likely rummaged through the trash hours before the owner came home. So even though the dog was peacefully chewing on its dog bone upon the owner’s return, it experienced an aversive consequence. As a result, the scolding was associated with appropriate behavior rather than the trash‐rummaging behavior the owner attempted to punish. Timing, or more specifically immediacy, is crucial for the development of a behavior‐consequence association.
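The misfire in this scenario can be pictured with a toy model (the function, time stamps, and behavior log below are my own illustration, not from the text): if a consequence is simply paired with whatever behavior most recently preceded it, a delayed punisher lands on the wrong behavior.

```python
# Toy model of delayed consequences: the consequence gets associated with
# the most recent behavior at delivery time, not the behavior the owner
# intends to punish. (Hypothetical example; not the book's own code.)
def associated_behavior(behavior_log, consequence_time):
    """Return the behavior closest in time before (or at) the consequence."""
    candidates = [(t, b) for t, b in behavior_log if t <= consequence_time]
    return max(candidates)[1]  # latest time stamp wins

# Hour-stamped log: the trash raid happened at 9:00, bone chewing at 13:00.
log = [(9.0, "rummage through trash"), (13.0, "chew dog bone")]

# Owner comes home and scolds at 14:00 -> the scolding is paired with
# bone chewing, not the trash raid from five hours earlier.
print(associated_behavior(log, 14.0))  # -> chew dog bone
```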
The second major factor that determines the effectiveness of a reinforcer or punisher in establishing a new behavior or eliminating an unwanted one is how often the behavior is followed by the consequence. Formally, how often a consequence follows a behavior is called a schedule. If a consequence follows every instance of the behavior, the consequence is on a continuous schedule. In contrast, if a consequence does not follow every occurrence of the behavior, the consequence is on an intermittent schedule. For a strong association between a behavior and a consequence to develop, the consequence needs to follow the behavior every time it occurs. This is especially true when attempting to teach a new behavior with reinforcement or to reduce an unwanted behavior with punishment (Zimmerman and Ferster 1963).
Schedules of consequence delivery are usually referred to as reinforcement schedules, though they are relevant to punishment as well. Schedules of reinforcement can differ in two ways. First, they can differ based on whether the reinforcer is delivered after a certain number of responses or after a certain amount of time has passed. In ratio schedules, reinforcement is delivered following a particular number of responses. Interval schedules deliver reinforcement for the first response made after some amount of time has passed. Second, the response or time requirement can be fixed or variable. Intermittent schedules can thus be broken down into four types: fixed ratio, variable ratio, fixed interval, and variable interval (see Table 3.2).
In fixed schedules, the number of responses needed to obtain reinforcement, or the amount of time that needs to pass, is the same every time. With fixed ratio schedules, the number of responses required for reinforcement stays the same from one delivery to the next. The number of responses can be 1, 10, or more; regardless, the same number of responses is required each time reinforcement occurs. For example, in scent detection, dogs might not be reinforced with the target scent until the 10th bag they sniff. With fixed interval schedules, the amount of time that must pass before a response is reinforced is the same across deliveries. Whether the interval is one minute or one hour, the same amount of time must pass before a response is reinforced. For example, a dog begging at the table will not be reinforced for the begging behavior until the owner is done with dinner and gives the dog a handout.
In variable schedules, the number of responses or the interval duration required for reinforcement changes around some average. A variable ratio schedule requires a different number of responses each time reinforcement occurs. That is, the number of responses can change from one reinforcement to the next (e.g., 5 responses may occur before one reinforcement and 10 before the next, but overall the average number of responses per reinforcement is, for instance, 6). Similarly, with a variable interval schedule, the amount of time between reinforcement opportunities changes. For instance, on a variable interval schedule averaging five seconds, reinforcement might be delivered for a response after two seconds have passed one time and not until nine seconds have passed the next. Box 3.2 explores some examples of variable schedule reinforcement in the shelter.
Table 3.2 Reinforcement schedules.
| Reinforcement schedule | Definition | Example |
|---|---|---|
| Fixed interval | Response is reinforced after a fixed, predictable time interval has passed | Letting animals out in the play yard: every morning at 9 a.m. the caregiver opens the enclosure door, so the animal’s behavior of checking the door to go outside isn’t reinforced until it checks the door after 9 a.m. |
| Variable interval | Response is reinforced after an interval of time that varies but centers around some average amount of time | Animal feedings: the time of feeding may vary from day to day, but on average a caregiver provides food every eight hours, so the animal’s response of checking the bowl is reinforced after, on average, eight hours. |
| Fixed ratio | Response is reinforced only after a specified number of responses | Multiple repetitions: a trainer wants an animal to repeat the same behavior, so the trainer delivers reinforcement after every two correct responses. |
| Variable ratio | Response is reinforced after a number of responses that varies around an average | Opening the door: an animal might paw at the door several times to be let through; the owner lets the animal in after, on average, five paws. |
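The four schedules in Table 3.2 can be sketched as simple decision rules. In the sketch below (the terminology follows the text; the class names, parameters, and implementation details are my own assumptions), each call to `respond()` is one response, and the return value says whether that response earns a reinforcer:

```python
# Minimal sketch of the four intermittent reinforcement schedules as
# decision rules. Hypothetical illustration, not the book's own code.
import random

class FixedRatio:
    """FR-n: reinforce every n-th response."""
    def __init__(self, n):
        self.n, self.count = n, 0
    def respond(self):
        self.count += 1
        if self.count >= self.n:
            self.count = 0
            return True  # reinforcer delivered
        return False

class VariableRatio:
    """VR-n: reinforce after a varying number of responses averaging n."""
    def __init__(self, mean, rng=None):
        self.mean = mean
        self.rng = rng or random.Random()
        self.count, self.target = 0, self._draw()
    def _draw(self):
        # Uniform between 1 and 2*mean - 1, so the long-run average is `mean`.
        return self.rng.randint(1, 2 * self.mean - 1)
    def respond(self):
        self.count += 1
        if self.count >= self.target:
            self.count, self.target = 0, self._draw()
            return True
        return False

class FixedInterval:
    """FI-t: reinforce the first response made after t time units have passed."""
    def __init__(self, t):
        self.t, self.last = t, 0.0
    def respond(self, now):
        if now - self.last >= self.t:
            self.last = now
            return True
        return False

class VariableInterval:
    """VI-t: like FI, but the required interval varies around a mean of t."""
    def __init__(self, mean, rng=None):
        self.rng = rng or random.Random()
        self.last, self.mean = 0.0, mean
        self.wait = self._draw()
    def _draw(self):
        return self.rng.uniform(0, 2 * self.mean)
    def respond(self, now):
        if now - self.last >= self.wait:
            self.last, self.wait = now, self._draw()
            return True
        return False

# FR-2 matches the table's "every two correct responses" example:
fr = FixedRatio(2)
print([fr.respond() for _ in range(4)])  # -> [False, True, False, True]
```

Note that a continuous schedule is just the FR-1 special case: `FixedRatio(1)` reinforces every single response.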
Though intermittent schedules don’t work as well as continuous reinforcement for establishing a new behavior, they work very well for maintaining an already established behavior (Jenkins and Stanley 1950). Typically, after a dog is trained to sit, trainers reduce the number of reinforcers she receives for sitting, gradually transitioning from a continuous schedule of reinforcement to an intermittent one. As long as the dog receives a treat once in a while, she reliably sits on cue. Changing a continuous schedule of reinforcement to an intermittent one is often called “schedule thinning.” This procedure is beneficial for trainers because it not only reduces the number of reinforcers needed to maintain behavior but also leads the animal to perform consistently. Intermittent