This assignment assesses your progress in mastering the following course competencies:
Synthesize research and historical findings in experimental behavior analysis.
Evaluate the link between experimental analysis of behavior and its application.
Communicate in a professional manner consistent with the field of behavior analysis.
Goal: In this assignment, you will demonstrate your understanding of reinforcement schedules, their relationship to Skinner's theory of behavior, and the applied behavior analysis (ABA) technologies that have grown from that theory.
Instructions
Identify and describe three different types of schedules (such as fixed ratio, fixed interval, variable ratio, or variable interval) used in operant conditioning to address behavior deficits or excesses.
Analyze the connections between operant conditioning and schedules of reinforcement.
Evaluate the criticisms and possible deficits of operant conditioning.
Provide examples from your life or work environment that demonstrate your understanding of the core tenets of the applied theories.
Locate two examples of each of the three schedules you identified in the professional literature.
Using the literature you located, evaluate the advantages and disadvantages of using your selected types of reinforcement schedules.
Note: This is a PhD-level assignment.
References used throughout the course so far:
Iversen, I. H., & Lattal, K. A. (1991). Experimental analysis of behavior. Amsterdam, Netherlands: Elsevier Science.
Chapter 6, "Methods of Analyzing Behavior Patterns," pages 193–230.
Critchfield, T. S. (2011). Translational contributions of the experimental analysis of behavior. The Behavior Analyst, 34(1), 3–17.
Critchfield, T. S. (2015). What counts as high-quality practitioner training in applied behavior analysis? Behavior Analysis in Practice, 8(1), 3–6.
Poling, A., Picker, M., Grossett, D., Hall-Johnson, E., & Holbrook, M. (1981). The schism between experimental and applied behavior analysis: Is it real and who cares? The Behavior Analyst, 4(2), 93–102.
Skinner, B. F. (2012). The experimental analysis of behavior. American Scientist, 100(1), 54–59.
Virues-Ortega, J., Hurtado-Parrado, C., Cox, A. D., & Pear, J. J. (2014). Analysis of the interaction between experimental and applied behavior analysis. Journal of Applied Behavior Analysis, 47, 380–403.
Ward, T. A., & Houmanfar, R. (2011). Human simulations in behavior analysis (1987–2010): Facilitating research in complex human behavior. Behavior and Social Issues, 20, 72–101.
Iversen, I. H., & Lattal, K. A. (1991). Experimental analysis of behavior. Amsterdam, Netherlands: Elsevier Science.
Chapter 3, "Behavioral Neurochemistry: Application of Neurochemical and Neuropharmacological Techniques to the Study of Operant Behavior," pages 78–107.
Gershman, S. J., & Niv, Y. (2012). Exploring a latent cause theory of classical conditioning. Learning & Behavior, 40(3), 255–268.
Häderer, I. K., & Michiels, N. K. (2016). Successful operant conditioning of marine fish in their natural environment. Copeia, 104(2), 380–386.
Ludvig, E. A., Sutton, R. S., & Kehoe, E. J. (2012). Evaluating the TD model of classical conditioning. Learning & Behavior, 40(3), 305–319.
Miskovic, V., & Keil, A. (2012). Acquired fears reflected in cortical sensory processing: A review of electrophysiological studies of human classical conditioning. Psychophysiology, 49(9), 1230–1241.
Staddon, J. E. R., & Cerutti, D. T. (2003). Operant conditioning. Annual Review of Psychology, 54, 115–144.
Gerak, L. R., & France, C. P. (2014). Discriminative stimulus effects of pregnanolone in rhesus monkeys. Psychopharmacology, 231(1), 181–190.
Katz, J. L. (2016). Contributions to drug abuse research of Steven R. Goldberg's behavioral analysis of stimulus-stimulus contingencies. Psychopharmacology, 233(10), 1921–1932.
Rispoli, M., O'Reilly, M., Lang, R., Machalicek, W., Kang, S., Davis, T., & Neely, L. (2016). An examination of within-session responding following access to reinforcing stimuli. Research in Developmental Disabilities, 48, 25–34.
Coen, K. M., Adamson, K. L., & Corrigall, W. A. (2009). Medication-related pharmacological manipulations of nicotine self-administration in the rat maintained on fixed- and progressive-ratio schedules of reinforcement. Psychopharmacology, 201(4), 557–568.
Derenne, A., & Baron, A. (2001). Time-out punishment of long pauses on fixed-ratio schedules of reinforcement. The Psychological Record, 51(1), 39–51.
Derenne, A., & Flannery, K. A. (2007). Within-session changes in the preratio pause on fixed-ratio schedules of reinforcement. The Behavior Analyst Today, 8(2), 175–186.
Leslie, J. C., Boyle, C., & Shaw, D. (2000). Effects of reinforcement magnitude and ratio values on behaviour maintained by a cyclic ratio schedule of reinforcement. Quarterly Journal of Experimental Psychology: Section B, 53(4), 289–308.
May, M. E., & Kennedy, C. H. (2009). Aggression as positive reinforcement in mice under various ratio- and time-based reinforcement schedules. Journal of the Experimental Analysis of Behavior, 91(2), 185–196.
Roane, H. S. (2008). On the applied use of progressive-ratio schedules of reinforcement. Journal of Applied Behavior Analysis, 41(2), 155–161.
Schlinger, H. D., Derenne, A., & Baron, A. (2008). What 50 years of research tell us about pausing under ratio schedules of reinforcement. The Behavior Analyst, 31(1), 39–60.
Shillinglaw, J. E., Everitt, I. K., & Robinson, D. L. (2014). Assessing behavioral control across reinforcer solutions on a fixed-ratio schedule of reinforcement in rats. Alcohol, 48(4), 337–344.
Wilson, A. N., & Gratz, O. H. (2016). Using a progressive ratio schedule of reinforcement as an assessment tool to inform treatment. Behavior Analysis in Practice, 9(3), 257–260.
Andrzejewski, M. E. (2001). An experimental analysis of pigeons' choosing between fixed- and random-interval schedules of reinforcement. ProQuest Dissertations & Theses Global.
Read pages 1–31.
Berry, M. S., Kangas, B. D., & Branch, M. N. (2012). Development of key-pecking, pause, and ambulation during extended exposure to a fixed-interval schedule of reinforcement. Journal of the Experimental Analysis of Behavior, 97(3), 333–346.
Gutierrez, C. (2011). Are progressive ratio and fixed interval schedules of reinforcement related measures of impulsivity? ProQuest Dissertations & Theses Global.
Read pages 1–26.
Reed, P., & Morgan, T. A. (2008). Effect on subsequent fixed-interval schedule performance of prior exposure to ratio and interval schedules of reinforcement. Learning & Behavior, 36(2), 82–91.
Saville, B. K. (2009). Performance under competitive and self-competitive fixed-interval schedules of reinforcement. The Psychological Record, 59(1), 21–38.
Shurtleff, D., & Silberberg, A. (1990). Income maximizing on concurrent ratio-interval schedules of reinforcement. Journal of the Experimental Analysis of Behavior, 53(2), 273–284.
Aoyama, K. (2007). Effects of post-session wheel running on within-session changes in operant responding. Learning and Motivation, 38(3), 284–293.
Fichtner, C. S., & Tiger, J. H. (2015). Teaching discriminated social approaches to individuals with Angelman syndrome. Journal of Applied Behavior Analysis, 48(4), 734–748.
Lippman, L. G., & Tragesser, S. L. (2003). Contingent magnitude of reward in modified human-operant DRL-LH and CRF schedules. The Psychological Record, 53(3), 429–442.
Reynolds, G., & Reed, P. (2011). Effects of schedule of reinforcement on over-selectivity. Research in Developmental Disabilities, 32(6), 2489–2501.
Yukl, G. A., Latham, G. P., & Pursell, E. D. (1976). The effectiveness of performance incentives under continuous and variable ratio schedules of reinforcement. Personnel Psychology, 29(2), 221–231.
Yukl, G., Wexley, K. N., & Seymore, J. D. (1972). Effectiveness of pay incentives under variable ratio and continuous reinforcement schedules. Journal of Applied Psychology, 56(1), 19–23.