Tuesday, August 21, 2007

Motivating operations

The concept of motivating operation (MO) is defined and discussed quite differently in Ch. 16 of Applied Behavior Analysis and in Ch. 9 of Principles of Behavior. In the former, Michael defines and describes MOs as having two kinds of effects – behavior-altering (BA) effects and value-altering (VA) effects. BA effects are the temporary effects of the MO on the frequency of current behavior. For example, the MO of food deprivation temporarily increases the frequency of behaviors that have been reinforced by food in the past. VA effects are the temporary effects of the MO on the reinforcing or punishing effectiveness of a stimulus, event, object, or condition. For example, the MO of food deprivation temporarily increases the reinforcing effectiveness of food.

These two effects of an MO are usually presented as if they were two different and independent types of effects that are brought about by an MO. But in my opinion this is an incorrect understanding. An alternative description of an MO's effect, which I prefer, is that MOs have only one kind of effect – a behavior-altering effect. An MO causes a change in the frequency of behaviors that have been reinforced or punished by a stimulus, event, object, or condition in the past. The so-called value-altering effect is not a second, different effect that's independent of the BA effect. We see that when we realize that the value or effectiveness of a reinforcer or punisher can only be understood in terms of whatever changes in behavioral frequency are observed. In other words, when we talk about an MO's value-altering effect, it's really just another way of talking about its behavior-altering effect.

Malott seems to be on the same track, although he doesn't say so explicitly. But he defines MO as "a procedure or condition that affects learning and performance with respect to a particular reinforcer or aversive stimulus." By "affects learning and performance" he can only mean "changes the frequency of the target behavior." So this definition focuses on the MO's BA effects and says nothing about the value or effectiveness of the relevant reinforcer or punisher (which he calls "aversive stimulus"), that is, it says nothing about the MO's VA effect.

As Michael points out in Ch. 16 of ABA, there's still a lot of work to be done before we'll fully understand MOs, especially MOs for punishment. In the meantime, I think Malott's definition is not only simpler to understand but also more conceptually accurate, because it focuses on the MO's BA effect without claiming that MOs also have a VA effect.

7 comments:

jennifer said...

Are we supposed to be reading a book by Michael as well? I'm a little confused by this post.

PW said...

No. The main readers of this blog include students from several different classes. Some of them are reading Applied Behavior Analysis by Cooper, Heron, & Heward. But in that book, Ch. 16 is actually written by Jack Michael.

ajones said...

I understood the article as saying that we should ignore VA effects on behavior because BAs are stimuli, such as events, objects, or conditions, that can influence the frequency with which a behavior occurs. I'm not fully sure this is the correct understanding of BA and VA.

Anonymous said...

I'm struggling through chapter 16 of the 2nd edition Cooper et al text and wonder if anybody out there could help me clearly distinguish between an SD and an MO? Any good examples to illustrate the difference? How does each affect responding and strength of the consequence? Thanks!

Sean said...

Sorry to disagree here, but the VA and BA effects can be dissociated experimentally, and, therefore, they represent different effects (although in practice the two effects are often observed together).

For a behavior to be reinforced or punished, there must have been an effective consequence to do so. MOs make consequences more or less effective. This is called the VA effect. It is true that the VA effect is seen only after an effective consequence has been delivered (i.e., in the future), but a relevant MO must also be in effect at that time (the BA effect). For example, a water-deprived rat that receives a water drop for pressing a lever will make more lever presses in the future. The VA effect refers to future changes in behavior.

The BA effect can be seen before the consequence has been delivered in the current situation. For example, placing the aforementioned rat in the chamber at a later date, the rat will press the lever even if no water drops are forthcoming. Conversely, put the same rat in the chamber after having given it a lot of water, and the rat will make few, if any, lever presses, even though the reinforcer used to establish lever pressing was initially effective in doing so. The BA effect refers to behavior changes seen in the present before consequences are delivered.

Once a behavior is established (thanks to the VA effect of an MO in the past), you can manipulate the current frequency of behavior by changing MO conditions, even if no reinforcer is delivered ever again. This is an experimental dissociation of the two effects. Thus, they are not the same thing. Current behavior is the product of the joint VA and BA effects (among other factors, of course). The VA effect establishes the behavior in the first place, and the BA effect determines the current probability of emission of that behavior.

I would further disagree with your interpretation of Malott's definition of the MO. "Learning" would represent the VA effect: a change in future behavior (learning) as a result of the change in the strength of a reinforcing stimulus. "Performance" would represent a change in current behavior as a result of the current strength of the MO. If Malott only meant changes in future behavior (i.e., learning), he would not have added "performance," as this would be redundant and against common usage of the term in and outside of psychology. In this context, the term "performance" can only mean the current emission of a behavior and not possible, future emissions. Thus, Malott's definition does not correspond to your interpretation.

I do applaud your interest in the MO concept. Best wishes, Sean Laraway

PW said...

Sean,

Thanks for this. I've delayed for so long in replying because doing so properly will require a stretch of time when I can carefully re-read and think about what you've written and how I want to respond. Right now is not that time. But on this holiday morning, one of the things I'm doing is going through and cleaning up some things in my mailbox. Having re-read what you wrote, I thought I'd go ahead and attempt a point or two, though not yet a comprehensive response.

Would you agree that the direct effect of an MO is to change the organism? And it's because the pre-MO & post-MO organisms are different that we see differences in behavior. The differences in behavior are observable and, in every sense, empirically real, even if they are secondary effects of the MO (the primary, direct effect being the change in the organism).

My argument against the notion of a VA effect is that in positing it, we're dealing with a horse of an entirely different color. Changes in behavior can be observed. And the changes in the organism that are the direct effects of an MO can also be observed in some cases, and with advances in biology it's reasonable to expect that some day all such changes will be observable.

But you can't observe changes in the value of a reinforcer, and the reason for that is not simply that our technology is not yet sufficiently sophisticated to do so. The reason is that reinforcer value or strength has no empirical reality. To put it simply, there's no such thing as reinforcer value/strength. It's a metaphor, an explanatory fiction. Explanatory fictions can be useful for some purposes, but they have no role in a genuinely scientific explanation.

In explaining MOs to my students, I would, of course, discuss the behavior-altering effect. I'd also talk about the value-altering effect, but if I stopped there, I'd be remiss. It's only legitimate to say that MOs function as they do because it's AS IF they change the value of the reinforcer. You then have to go on to explain this illusion that reinforcer value has been changed.

Joanna said...

Dear Anonymous,
For me, the easiest and simplest way to distinguish between an MO and an SD is that an MO changes what the subject wants (or doesn't want), while an SD signals that reinforcement is available for engaging in a specific behavior. This is extremely elementary, but pretty easy to remember when first learning to distinguish them. Depending on the situation, an MO and an SD can be one and the same: what is an SD in one situation may be an MO in a different situation.