
Special Educator Academy

Free Resources, Ep. 13: How to Write Useful FBA Hypothesis Statements

How to write useful and meaningful FBA hypothesis statements like a pro with a free download


Welcome back, and I am so glad that you have joined us again. We are talking about behavior, which I know is an issue for many of us in special education classrooms. I am Chris Reeve, your host. Up to now we have taken our data and gathered all of our information, and today we're going to start getting to the good stuff, because we're getting to the point where we look at why in the world this behavior is happening in the first place and what we are going to do about it.

You also will see a number of visual examples that I obviously cannot give you on a podcast, so that may make it a little bit easier. So you can go to the blog post and you can see all the different examples of hypothesis statements, download the template and you’ll also be able to download a transcript or read this if you would rather make sense of it that way. It gets a little interesting when I start to talk about these things without any visuals, because you know how I love visuals. So let me give you just a quick disclaimer as well.

SYNTHESIZE FBA INFORMATION

I’m going to give you hypothesis statements in this podcast, and I’m going to give you a summary of the information about the student’s behavior. It’s going to sound like that information came from a single instance, but it didn’t.

We have to triangulate all of our information, our information from staff, our information from families, our data collection, our record review, all the things we’ve talked about up until this point are going to go into that hypothesis statement. So they are all very important and I’m going to pick up from where we’ve triangulated all that information. We’ve got some idea about some setting events, we’ve seen what happens before, we’ve seen what happens afterwards and put it in kind of a compilation. So it isn’t as easy as I make it sound because as I often say, human behavior is just not simple. But when you just hear me talk about it, the cases kind of sound like I’m just picking out one instance. I’m not picking out a single episode of behavior, I’m using a composite of all the different information. So let’s get started.

BUILDING EFFECTIVE HYPOTHESIS STATEMENTS

We’re now moving into step 3 in our 5-step process of meaningful behavioral support, and that is developing our hypothesis statements. Now keep in mind that a hypothesis is a best guess. We don’t know that this is what’s actually driving the behavior until we confirm our hypothesis, and I’ll be talking later in the series about how we can do that. Within a school setting, you will more likely test your hypotheses by developing interventions that address them and seeing if they work.

We want to make sure that when we are developing our hypothesis statements, we are clearly tying them to the data and not getting lost in our interviews and things like that. We want to account for the interviews and that less objective information, but we also want to make sure that our data is solidly supporting our hypothesis. That’s why we took the data.

FREE DOWNLOAD OF GRAPHIC ORGANIZER

Writing our hypothesis statements is critical to the success of the intervention plan because they should lead you to your behavioral solutions. In the blog post that goes with today’s podcast, you will find a free download that actually structures your hypothesis statements.

One of the things that I like about using this hypothesis statement structure is that I can take my antecedent information and my setting event information and put it in one block. Then my behavior goes in the next block, and how the environment is responding, or what’s happening in the environment afterwards, comes last. So it’s very easy to take my ABCs and translate them into this. I can then take this setup and say, when this happens, he is likely to engage in this behavior, and in the environment this commonly happens, if that’s what my data trends are telling me.

That then allows me to take those antecedents and make adjustments to the environment so that we can prevent the behavior from happening. It allows me to know that if he starts off with smaller behaviors, that should be an indicator to me that something bigger is coming and I should intervene earlier. And it lets me know what we need to change about how we react or respond to the behavior, or what’s happening in the environment after the behavior, so that we can reduce the reinforcement for it. All of that gets directly mapped from the hypothesis statement. So go to autismclassroomresources/episode13 and download the hypothesis statement graphic, and it will walk you through how to put that together. You can also download a transcript or read this post if you’d rather do that than listen.

WHAT GOES INTO A HYPOTHESIS STATEMENT

So let’s talk for a few minutes about what goes into your hypothesis statement.

SETTING EVENTS

One is the setting events. These lead us to how we can eliminate or reduce the impact of distant factors that might influence the behavior. The setting events in our hypothesis, “He is more likely to engage in this behavior when X, Y, and Z,” tell us we need to address X, Y, and Z in some way. Now, as we’ve said in episode 11, we cannot always make X, Y, and Z go away. If I could make him sleep through the night, I’d do it.

But I do know that maybe I can modify what I ask you to do on a day when you didn’t sleep well at night, or a day that you don’t feel well, or a day that you didn’t take your medicine. Maybe I modify my demands. Maybe I have you participate in group activities less. And that’s where that brainstorming process that we’ll talk about when we get to intervention plans becomes really key. But your setting events are going to tell you what you need to try to accommodate for if you cannot change it.

ANTECEDENTS

Your antecedents are going to lead us to know exactly how to restructure the environment to prevent the behavior.

BEHAVIOR

Our behavior tells us whether or not the form of the behavior is relevant to the function. Does he only scream to get attention, while when he hits, people ignore him? It was probably the other way around, but the question is whether the form is related to the function. Most of the time, in my experience, it’s not, but it is possible that you will have a student who engages in one kind of behavior because it brings people to him and another type of behavior because it gets people to go away.

CONSEQUENCES

The consequence tells us what might be maintaining the behavior, so we know how we need to change our response to try to prevent the behavior from increasing over time. When we use the graphic organizer for the hypothesis statements, we have three boxes: “When the student…,” and we fill that in; “he will…,” and that’s the behavior; and “as a result, this happens.” The setting events kind of sit over all of that. So when this situation is in place, and the student does this or encounters this, he engages in this behavior, and this is what happens in the environment.

FBA HYPOTHESES STATEMENT EXAMPLES

So to give you an example of a hypothesis, the “when the student…” part might be “when faced with situations with social or academic demands, particularly those involving language.” So very specific. I’ve been able to take my data and say this almost always happens in situations with social or academic demands, so not other kinds of demands, and those that involve making him practice language-related tasks are much more likely to produce problem behaviors. The behavior is: when faced with those situations, he sometimes, because it’s not every single time, hits, screams, and/or bites others. And then what happens as a result: he is sometimes removed from the situation, the task is delayed by the behavior, or staff provide assistance in completing the task. Those consequences often differ based on what situation and setting he’s in, but they were common consequences to this behavior that basically kept him from having to do the activity or delayed it in some way.

INTEGRATE WHAT YOU’VE LEARNED

Now that’s a whole lot more descriptive than a function that just says he engages in this behavior to escape. Because now I know that when he’s faced with situations with social or academic demands, in particular those involving language, we maybe need to mix easier tasks in with our harder language demands. We maybe need to give him more breaks during that time.

We know what his behavior is, and he does a constellation of behaviors; there’s not one specific form of behavior related to this situation. Then we need to give him a way to replace this behavior, because it is an escape. We need to give him a way to ask for a break, because the result of his behavior is being removed or having the task delayed. It’s essentially escape related, so we want to make sure that we’ve got a replacement behavior that focuses on that. We will talk about replacement behaviors in a whole episode, because they aren’t often what many people think they are. But back to the task: you can see how that gives me much more specific information about where I’m going to address my behavior intervention plan.

MORE SPECIFIC

Now I may get even more specific. I may say something like…

James appears to engage in challenging behavior to escape from tasks that are difficult for him. Some of these tasks are work-related. Some may be overwhelming or difficult socially, and some may be things that are frustrating for him like waiting. Engaging in significant challenging behaviors serves to gain assistance or removal from these situations effectively.

You may also have,

James sometimes engages in challenging behavior to protest or express frustration about not being allowed to have something that he wants.

BEHAVIOR OFTEN HAS MULTIPLE FUNCTIONS

So we know what situations he’s likely to have the problem in. And we also know that his behavior is complex. You’ve heard me say this throughout this series: human behavior isn’t simple. Rarely, except occasionally in very young children, do we see behavior serving only one function. Very frequently we see it having a main function but also another function.

So often we will see a student who engages in behavior to escape. But when you give him just a break where nobody interacts with him, you continue to see problems, because that behavior was also to get attention. It got him out of the task and it got people engaging with him at the same time. So never think, when you’re writing your hypothesis statements, that you have to be limited to one function. We will have to pick what we’re going to do when we get to the behavior plan based on that. Our setting events factor into the “When the student…” section of the hypothesis, and they help us explain why behaviors happen on one day in relation to an antecedent and not on another day.

COMPLEX PROBLEMS HAVE COMPLEX HYPOTHESIS STATEMENTS

So James’ data indicated that the behaviors occurred on some days and not on others. Further investigation into the data showed us that days on which he hadn’t had his medicine were more likely to result in challenging behavior. One solution: make sure he always takes his medicine. We may be able to do that, and I’ve certainly had students for whom we’ve said, “You know what? Send his medicine to school. We’re happy to give it to him first thing in the morning if you’re having a hard time getting him to take it.” But sometimes, even at school, James wouldn’t necessarily take his medicine. He’d put it in his mouth and spit it out, and twenty minutes later we’d find out he hadn’t taken it. So another solution was factored into his program and the hypothesis statements:

On days when James has not had his medication and he is presented with a language task, he is likely to engage in these behaviors, which then result in being removed from the task.

So maybe on the days when we know he hasn’t taken his medicine, we adjust our demands so that we weaken the antecedent that sets that behavior off.

MORE EXAMPLES OF HYPOTHESIS STATEMENTS

So let’s look at a few other examples for different kinds of functions.

ESCAPE FUNCTION HYPOTHESIS STATEMENT

So let’s look at Sammy. Sammy’s data show that when he has been in more group activities during a day, these behaviors are more likely to occur. When he checks his schedule and sees the teacher icon, he falls on the ground and screams. Sometimes he does this when he transitions out of the room for assembly and group activities. So this is kind of my summary of what we see in his data. Sammy screams and cries, and when the staff tries to redirect him, he screams louder. If given the opportunity to go to a quiet area and calm down, he stops screaming and is calm. But when the staff moves him to the work table or the upcoming activity, his behavior continues.

So that tells us that when we look at Sammy’s behavior,

Sammy appears to engage in challenging behaviors to escape from tasks that are difficult for him. Some of these tasks are work-related, some may be overwhelming or difficult socially and some may be things that are frustrating for him like waiting. Engaging in significant challenging behavior serves to gain assistance or removal from these situations effectively. Sammy is more likely to engage in these behaviors when he’s had a lot of group work during the day.

So I put my setting event kind of at the end of that one. But you can see it’s obviously an escape from work and social situations that is the real underlying function. But I now know that there are certain tasks that I need to adjust to prevent the behaviors. I can teach him a way to escape appropriately as a replacement behavior.

And my outcome needs to be that the behavior doesn’t get him out of the task as quickly as the replacement behavior. And we’ll talk about all of that more when we talk about behavior support plans. But I want you to understand how it all lines up.

ESCAPING FROM WHAT?

Let’s think about Simon. Simon has had several instances while talking to his friends in the atrium of the high school. Suddenly, in the middle of the conversation, he started telling the other kids that he was going to kill them. The other kids left him alone and went to tell the teacher. So let’s think about the function, or the hypothesis, for Simon…

When presented with unstructured social interactions, meaning that when Simon is in the atrium of the school there is nobody setting up interactions, Simon’s violent threats have been successful in extricating him from the social situation and escaping from the social demands.

So what we’re saying is that he is trying to escape social situations. Unstructured social situations set him up to have these behaviors, and this is a very efficient manner of getting people to leave him alone. So I now know that maybe I need to structure his social interactions a little bit more. I need to teach him a better way to get people to leave him alone more appropriately. And then we need to work on probably some underlying social skills as part of that as well.

ESCAPE FUNCTION WITH SETTING EVENT

Let’s look at Jimmy. Jimmy was playing with the other kids on the playground. They were playing horse with the basketball, and when it was Jimmy’s turn, he missed the basket. The other kids told him he got the letter S, and another student, Bobby, told him “better luck next time” and slapped him on the back. Okay, a very common “hey, I’m trying to make you feel better” kind of behavior. Jimmy then hit Bobby and they got into a fight. When the playground supervisor asked what happened, Jimmy told her Bobby was bullying him. When we looked at Jimmy’s data, we found a large pattern of difficulty in social situations as the antecedent, and that he was misreading others’ perspectives. When we talked to the other kids he was accusing of bullying him or fighting with him, he would tell them that they had done something to him.

And all of the things that he described were things that, from the perspective of the person who did them, were meant to be supportive, not problematic. So in knowing Jimmy and everything we know about Jimmy, we know that Jimmy has significant difficulty interpreting the perspectives of others and therefore understanding their intentions in his environment. He frequently interprets their behavior as a negative action toward himself. So….

When presented with an action, he interprets it negatively and he responds in a way to escape from that situation.

So he gets removed from the situation because he’s fighting. It gets him removed from the difficult situation. And so we’ve got an escape from social situations, but there’s an underlying setting event of not understanding the perspectives of other people.

And this is something we see a lot with our students with autism, that social piece is a big piece, but it’s also something I see a lot with students who have other types of disabilities other than autism where people aren’t necessarily picking up on the social thinking and the social perspective piece of it because they don’t have that diagnosis. So keep that in mind as we’re working with some of our students with emotional disturbances and things like that.

TANGIBLE SEEKING FUNCTION

Let’s look at two more. It’s time for Jimmy to be doing some math seat work, and instead he gets up and runs to the computer and sits down. When the teacher tries to move him back to his desk, he throws himself on the floor and kicks her. So in this case we’ve got a kid who clearly wants something he can’t have. It’s time to do work, so he’s going to the thing that he wants and behaving this way until it ends up being his turn. So we’ve got an obtaining function for a tangible item.

Jimmy is highly interested in the computer. When presented with a situation in which he has to wait his turn on the computer, he falls on the floor and kicks and screams until it is his turn.

AUTOMATIC FUNCTION HYPOTHESIS STATEMENT

Now let’s look at one that has an automatic function, because I think that’s a really hard one to focus on. Abe engages in a variety of repetitive movements throughout the day, including hitting his forehead and head with his hand. He will engage in these behaviors when there are no demands and there is no one to attend to him. These behaviors appear more frequently during downtime and appear to provide some type of internal reinforcement. So they occur more often when people are not around, and the staff report that he seems calmer after he hits himself. That’s kind of a summary of Abe. Our automatic reinforcement hypothesis might begin: when asked to wait, or left to work independently, or without someone specifically engaging him…

Because remember, we can only conclude an automatic function if the behavior happens when nothing else is there and no one is around, because that means there are no other factors. That’s how we rule everything else out. It can’t simply be that we don’t know what the function is, so we assume it’s automatic. Some people call an automatic function a sensory function; I think that’s a little misleading, and I talk about all of that in our episode on functions, which I’ll link in the show notes. But we really want to make sure that our antecedent is that he’s kind of left alone with nothing to do.

The behavior is that he frequently hits his head with his fist and following this behavior, his demeanor appears calmer. If stopped, he’ll begin to hit himself harder and scream. So that’s kind of our consequence for that behavior. So our hypothesis might be…

Abe engages in a variety of repetitive movements throughout the day, including hitting his forehead and head with his hand. He will engage in these behaviors when there are no demands and there is no one around to attend to him. These behaviors appear more frequently during downtime and appear to provide some type of internal reinforcement. His demeanor appears calmer after completing them.

So that tells us that if we leave Abe alone, we need to give him something to do that he will engage with, because not having that is going to be a trigger for the automatic self-injurious behavior. We know that when he does this, we need to engage him in something so that the behaviors decrease, rather than simply trying to stop him. This then leads us to our behavior support plan.

HYPOTHESIS STATEMENT DOS AND DON’TS

So I want to finish with a few dos and don’ts about hypothesis statements. You want to make sure that you do include as much information as possible. I realize that when I talk about hypothesis statements, some people think they’re kind of wordy, but I find that wordiness gives a good summary of the function of the behavior that can lead us directly into our behavior support plan. I’ll talk in our next episode about how we do that.

How you write the hypothesis statements for your functional behavior assessment is critical to how strong your behavior support plan will be.

HYPOTHESIS STATEMENT DOS

Do: only describe what you can see and observe.

And we talked about that when we talked about the data collection, and I’ll link to that episode. Earlier in the series we talked about the fact that if I can’t see it, I don’t know that it’s happened, so I really have to focus on the behaviors that I see.

DO: INCLUDE SETTING EVENTS

You want to make sure that we do include our setting events into our hypothesis statements because they are things we’re going to have to address in our behavior support plan.

DO: VERIFY HYPOTHESIS STATEMENTS

And so one thing that we can do is set up a situation similar to the thing that we think is setting off and reinforcing the behavior and see if it happens. So if the behavior is not self-injurious or really dangerous, then we could actually set up situations, take data and see if the behavior occurs in the situations that we think that they do.

DO: DEVELOP HYPOTHESIS STATEMENTS TIED TO OUR DATA

Another thing that we can do is develop a behavior support plan that we know is tightly tied to our hypotheses and take data to see whether or not the behavior continues. If the behavior decreases, that confirms our hypothesis. If it continues, that tells us we need to go back and re-examine our hypothesis. So we can use our intervention as our way to verify our hypotheses. But when we do that, it’s critical that our hypothesis statements and our behavior support plans are very tightly linked. The format that you can download on the blog page will actually give you that linkage.

HYPOTHESIS STATEMENT DON’TS

So let’s talk about some things you shouldn’t do with your hypothesis statements.

DON’T GET DISTRACTED BY THE FORM OF BEHAVIOR

Don’t get misled by the form of the behavior. In other words, don’t assume that because somebody is biting, or eating things they’re not supposed to have, the behavior is automatically reinforced. Those behaviors can be driven by antecedents and outward consequences as well. So just because a behavior involves a sense does not mean it has a sensory function.

DON’T ASSUME FUNCTIONS.

I think a lot of times we assume the automatic function, or the sensory function, because we can’t see what the pattern is. But that’s not really a valid way to make that decision, as I talked about earlier.

DON’T ASSUME THAT A BEHAVIOR HAS ONLY ONE FUNCTION.

Very frequently, behavior has more than one function, and you might have more than one hypothesis that describes the range of behaviors the student is showing or the range of situations in which the behaviors occur.

DON’T STOP TAKING DATA.

Now you don’t necessarily need to continue to take ABC data unless you really don’t know what your functions are. So if you haven’t been able to come up with a hypothesis statement, you need more data.

If you have a hypothesis statement, make sure you’ve got solid baseline data on how often the behaviors are occurring now. You can do that, if you’ve been taking ABC data throughout the day, by adding up the incidents. Then look at taking something like frequency data or duration data to monitor your plan; we’ll talk about that in a future episode. But it’s important that we don’t stop taking data just because we’ve developed our hypothesis.

So I will be back next week, and I will talk more about designing behavior support plans: how we take this information and actually turn it into something that may change the behavior of the student in your classroom, which I know is the piece that all of you have been waiting for. But you have to have these pieces in place in order to get there. So that will be our next topic, and I will give you some examples and walk through how you take this information and turn it into that.

If you would like to do a bigger deep dive into behavioral problem solving, I highly encourage you to check out the  Special Educator Academy . That is where you’ll find me. I’m available in our forums to answer questions, provide support and also our behavioral course has a wide variety of data sheets, strategies, videos and information about this entire process and hopefully pulls it all together. And then when there are questions about it, people can come to the community and ask them and we’re all working off of the same page.

You can find more information about the Special Educator Academy at specialeducatoracademy.com. Come try our free 7-day trial and see if it’s for you.

Thank you so much for spending this time with me. I really appreciate it. I hope that this has been helpful in giving you some ideas about formulating hypotheses for your students, and I hope to see you again in our next episode.

I hope that you’re enjoying the podcast, and I’d love it if you’d hop over to iTunes and leave a review and/or subscribe so that you will continue to get episodes.


Hanley, G. P., Iwata, B. A., & McCord, B. E. (2003). Functional analysis of problem behavior: A review. Journal of Applied Behavior Analysis, 36 (2), 147–185.

Iwata, B. A., DeLeon, I. G., & Roscoe, E. M. (2013). Reliability and validity of the functional analysis screening tool. Journal of Applied Behavior Analysis, 46 (1), 271–284.

Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1994). Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis, 27 (2), 197–209.

Iwata, B. A., Pace, G. M., Cowdery, G. E., & Miltenberger, R. G. (1994). What makes extinction work: An analysis of procedural form and function. Journal of Applied Behavior Analysis, 27 (1), 131–144.

Johnson, A. H., Goldberg, T. S., Hinant, R. L., & Couch, L. K. (2019). Trends and practices in functional behavior assessments completed by school psychologists. Psychology in the Schools, 56 (3), 360–377.

Kahng, S., Iwata, B. A., Fischer, S. M., Page, T. J., Treadwell, K. R., Williams, D. E., et al. (1998). Temporal distributions of problem behavior based on scatter plot analysis. Journal of Applied Behavior Analysis, 31 (4), 593–604.

Kelley, M. E., LaRue, R., Roane, H. S., & Gadaire, D. M. (2011). Indirect behavioral assessments: Interviews and rating scales. In W. W. Fisher, C. C. Piazza, & H. S. Roane (Eds.), Handbook of applied behavior analysis (pp.182–190). New York, NY: Guilford.

Lerman, D. C., Hovanetz, A., Strobel, M., & Tetreault, A. (2009). Accuracy of teacher-collected descriptive analysis data: A comparison of narrative and structured recording formats. Journal of Behavioral Education, 18 (2), 157–172.

Lerman, D. C., & Iwata, B. A. (1993). Descriptive and experimental analyses of variables maintaining self-injurious behavior. Journal of Applied Behavior Analysis, 26 (3), 293–319.

Luna, O., Petri, J. M., Palmier, J., & Rapp, J. T. (2018). Comparing accuracy of descriptive assessment methods following a group training and feedback. Journal of Behavioral Education, 27 (4), 488–508.

Maas, A. P., Didden, R., Bouts, L., Smits, M. G., & Curfs, L. M. (2009). Scatter plot analysis of excessive daytime sleepiness and severe disruptive behavior in adults with Prader-Willi syndrome: A pilot study. Research in Developmental Disabilities, 30 (3), 529–537.

Matson, J. L., & Vollmer, T. R. (1995). The questions about behavioral function (QABF) user’s guide . Baton Rouge, LA: Scientific Publishers.

Matson, J. L., Kuhn, D. E., Dixon, D. R., Mayville, S. B., Laud, R. B., Cooper, C. L., et al. (2003). The development and factor structure of the Functional Assessment for multiple CausaliTy (FACT). Research in Developmental Disabilities, 24 (6), 485–495.

Matson, J. L., & Wilkins, J. (2008). Reliability of the autism spectrum disorders-comorbid for children (ASD-CC). Journal of Developmental and Physical Disabilities, 20 (4), 327–336.

Mayer, G. R., Sulzer-Azaroff, B., & Wallace, M. (2014). Behavior analysis for lasting change (3rd ed.). Cornwall-on-Hudson, NY: Sloan Publishing.

Mayer, K. L., & DiGennaro Reed, F. D. (2013). Effects of a training package to improve the accuracy of descriptive analysis data recording. Journal of Organizational Behavior Management, 33 (4), 226–243.

Neidert, P. L., Rooker, G. W., Bayles, M. W., & Miller, J. R. (2013). Functional analysis of problem behavior. In Handbook of crisis intervention and developmental disabilities (pp. 147–167). New York: Springer.

O’Neil, R. E., Horner, R. H., Ablin, R. W., Sprague, J. R., Storey, K., & Newton, J. S. (1997). Functional assessment and program development for problem behaviors: A practical handbook . New York: Brooks/Cole.

Oliver, A. C., Pratt, L. A., & Normand, M. P. (2015). A survey of functional behavior assessment methods used by behavior analysts in practice. Journal of Applied Behavior Analysis, 48 (4), 817–829.

Paclawskyj, T. R., Matson, J. L., Rush, K. S., Smalls, Y., & Vollmer, T. R. (2000). Questions about behavioral function (QABF): A behavioral checklist for functional assessment of aberrant behavior. Research in Developmental Disabilities, 21 (3), 223–229.

Payne, L. D., Scott, T. M., & Conroy, M. (2007). A school-based examination of the efficacy of function-based intervention. Behavioral Disorders, 32 (3), 158–174.

Pence, S. T., Roscoe, E. M., Bourret, J. C., & Ahearn, W. H. (2009). Relative contributions of three descriptive methods: Implications for behavioral assessment. Journal of Applied Behavior Analysis, 42 (2), 425–446.

Rooker, G. W., DeLeon, I. G., Borrero, C. S., Frank-Crawford, M. A., & Roscoe, E. M. (2015). Reducing ambiguity in the functional assessment of problem behavior. Behavioral Interventions, 30 (1), 1–35.

Roscoe, E. M., Phillips, K. M., Kelly, M. A., Farber, R., & Dube, W. V. (2015). A statewide survey assessing practitioners’ use and perceived utility of functional assessment. Journal of Applied Behavior Analysis, 48 (4), 830–844.

Saini, V., Ubdegrove, K., Biran, S., & Duncan, R. (2019). A preliminary evaluation of interrater reliability and concurrent validity of open-ended indirect assessment. Behavior Analysis in Practice, 13 (1), 114–125. https://doi.org/10.1007/s40617-019-00364-3

Scott, T. M., McIntyre, J., Liaupsin, C., Nelson, C. M., Conroy, M., & Payne, L. D. (2005). An examination of the relation between functional behavior assessment and selected intervention strategies with school-based teams. Journal of Positive Behavior Interventions, 7 (4), 205–215.

Slocum, T. A., Detrich, R., Wilczynski, S. M., Spencer, T. D., Lewis, T., & Wolfe, K. (2014). The evidence-based practice of applied behavior analysis. The Behavior Analyst, 37 (1), 41–56.

Sloman, K. N. (2010). Research trends in descriptive analysis. The Behavior Analyst Today, 11 (1), 20.

Smith, C. M., Smith, R. G., Dracobly, J. D., & Pace, A. P. (2012). Multiple-respondent anecdotal assessments: An analysis of interrater agreement and correspondence with analogue assessment outcomes. Journal of Applied Behavior Analysis, 45 (4), 779–795.

Spencer, T. D., Detrich, R., & Slocum, T. A. (2012). Evidence-based practice: A framework for making effective decisions. Education and Treatment of Children, 35 (2), 127–151.

Sugai, G., Lewis-Palmer, T., & Hagan, S. (1998). Using functional assessments to develop behavior support plans. Preventing School Failure: Alternative Education for Children and Youth, 43 (1), 6–13.

Tarbox, J., Wilke, A. E., Najdowski, A. C., Findel-Pyles, R. S., Balasanyan, S., Caveney, A. C., et al. (2009). Comparing indirect, descriptive, and experimental functional assessments of challenging behavior in children with autism. Journal of Developmental and Physical Disabilities, 21 (6), 493.

Thompson, R. H., & Iwata, B. A. (2007). A comparison of outcomes from descriptive and functional analyses of problem behavior. Journal of Applied Behavior Analysis, 40 (2), 333–338.

Touchette, P. E., MacDonald, R. F., & Langer, S. N. (1985). A scatter plot for identifying stimulus control of problem behavior. Journal of Applied Behavior Analysis, 18 (4), 343–351.

Vollmer, T. R., Borrero, J. C., Wright, C. S., Camp, C. V., & Lalli, J. S. (2001). Identifying possible contingencies during descriptive analyses of severe behavior disorders. Journal of Applied Behavior Analysis, 34 (3), 269–287.

Walker, V. L., Chung, Y. C., & Bonnet, L. K. (2018). Function-based intervention in inclusive school settings: A meta-analysis. Journal of Positive Behavior Interventions, 20 (4), 203–216.

Zaja, R. H., Moore, L., Van Ingen, D. J., & Rojahn, J. (2011). Psychometric comparison of the functional assessment instruments QABF, FACT and FAST for self-injurious, stereotypic and aggressive/destructive behaviour. Journal of Applied Research in Intellectual Disabilities, 24 (1), 18–28.

Zarcone, J. R., Rodgers, T. A., Iwata, B. A., Rourke, D. A., & Dorsey, M. F. (1991). Reliability analysis of the Motivation Assessment Scale: A failure to replicate. Research in Developmental Disabilities, 12 (4), 349–360.

Download references

Author information

Authors and Affiliations

Melmark New England, Andover, MA, USA

Jill M. Harper & Juliya Krasnopolsky

Van Loan School, Endicott College, Beverly, MA, USA

Jill M. Harper, Juliya Krasnopolsky, Melissa C. Theodore, Christen E. Russell & Eris J. Dodds


Corresponding author

Correspondence to Jill M. Harper.

Editor information

Editors and Affiliations

James K. Luiselli


Copyright information

© 2021 Springer Nature Switzerland AG

About this chapter

Harper, J.M., Krasnopolsky, J., Theodore, M.C., Russell, C.E., & Dodds, E.J. (2021). Functional Behavior Assessment. In: Luiselli, J.K. (Ed.), Applied Behavior Analysis Treatment of Violence and Aggression in Persons with Neurodevelopmental Disabilities. Advances in Preventing and Treating Violence and Aggression. Springer, Cham. https://doi.org/10.1007/978-3-030-68549-2_2

DOI: https://doi.org/10.1007/978-3-030-68549-2_2

Published: 30 March 2021

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-68548-5

Online ISBN: 978-3-030-68549-2

eBook Packages: Behavioral Science and Psychology (R0)


Functional Behavioral Assessment

Functional behavioral assessment (FBA) is a process used to gather details about the events that predict and maintain a student's problem behavior. The purpose of the FBA is to provide information that will be used to design effective positive behavior support plans. To support a student who is engaging in problem behaviors in your classroom, it is important to consider the reasons why a student may be engaging in problem behavior. Behaviors are not repeated unless they serve a function for the student. 

Why Do Students Engage in Problem Behavior?

Although there are many reasons why a student may engage in problem behavior, they fall into two major categories: to avoid or escape something unpleasant and to obtain something desirable. For instance, a student may try to escape from a difficult or boring task by becoming disruptive in class because he knows the teacher will send him to the office for misbehaving. In other situations, a student tells jokes and makes funny noises during independent seat work because she is seeking attention from her teacher and peers. In this way, problem behavior can be seen as a form of communication. It is the student's way of telling others that he or she is tired, bored, needs a break, and/or wants attention.

Some students do not have the skills to communicate and have learned over time that engaging in problem behavior results in desirable outcomes. Students may also engage in problem behavior even though they know how to communicate in more appropriate ways because problem behavior is usually more effective and efficient for them. Imagine a student who raises his hand to gain his teacher's attention, but the teacher doesn't respond because she is busy working on another task. However, when the student yells loudly, the teacher immediately turns around, tells him to be quiet, and asks what he wants. If the teacher responds this way frequently, over time the student will learn that the most efficient and effective way to get the teacher's attention is to engage in problem behavior.

Problem behavior may occur in order to escape from or obtain internal events as well. In some cases, students with too much energy are unable to sit still or participate in class. Students with developmental disabilities may engage in repetitive behaviors (including rocking, eye poking, or self-injury) which are maintained by internal physiological factors.
Students with mental health concerns, or students whose problem behaviors are maintained by physiological factors, can still benefit from an FBA. Although the behaviors in these cases may not be maintained by social situations or events, the environment still has an impact on the frequency and intensity of problem behavior. When teams understand the environmental variables associated with positive social interactions, students show lower levels of problem behavior, which leads to a higher quality of life for the student. This can help your student's team build an effective PBS plan.

Sometimes, a student's behavior may initially be maintained by physiological factors, but over time the student learns that her behavior has an impact on the environment. For instance, a small child with an earache may strike at her ears with her fist because it decreases the pain she is experiencing. The child's self-injury results in immediate concern from her teacher, who provides comfort and high levels of positive attention. Once the earache is gone, the student may still strike at her head because she knows her teacher will give her immediate comfort and attention.

How is a Functional Behavioral Assessment Completed?

An FBA is not completed in the same way every time. The type of information collected varies depending upon the individual student's problem behavior, strengths, and needs. In some cases, specific tools are needed in an FBA to collect information about medications, sleeping patterns, or social and interactional skills. The level of complexity needed to complete an FBA varies as well. A teacher may conduct a simple and time-efficient FBA to better understand a student's minor disruptive behaviors. However, a student who engages in serious aggression or self-injury at home, in school, and in the community may need higher levels of support from his teacher, parents, and other important people in his life. In this case, the FBA may require more time and energy to complete. Even though the FBA tools and level of intensity vary, the process remains the same.

The FBA is considered complete when the following products have been documented:

  • a clear and measurable definition of the problem behavior
  • events that predict when problem behaviors will occur and will not occur
  • consequences that maintain problem behaviors
  • one or more hypotheses about the function maintaining problem behavior
  • direct observation data supporting the hypotheses.
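The completion criteria above can be sketched as a simple checklist. This is a minimal illustration only; the item names and their documented/undocumented states are invented:

```python
# Hypothetical completeness checklist for an FBA, mirroring the products
# listed above. True means the product has been documented.
fba_products = {
    "measurable definition of the problem behavior": True,
    "events predicting when the behavior will and will not occur": True,
    "consequences maintaining the behavior": True,
    "hypothesis about the maintaining function": True,
    "direct observation data supporting the hypothesis": False,
}

# The FBA counts as complete only when every product is documented.
missing = [product for product, documented in fba_products.items() if not documented]
status = "complete" if not missing else "incomplete"
print(status, missing)
```

In this invented example the team still needs supporting observation data, so the checklist reports the FBA as incomplete.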

Hypothesis Statements

The hypothesis about the function maintaining a student's problem behavior is a very important outcome of the FBA. The hypothesis statement starts with any setting events identified in the FBA that increase the likelihood of problem behavior.

Setting Events → Antecedents (Triggers) → Problem Behavior → Consequences

Setting events affect how a student will respond to situations by temporarily increasing or decreasing the value of reinforcers in the environment. For instance, a classroom activity a student usually enjoys may not be as reinforcing right before the holidays. Math class may be difficult for a student who has a learning disability, but on most days the student copes well. However, on days when this particular student has a bad headache, the presentation of math problems may be more aversive than usual. Setting events can occur immediately before a problem behavior or days in advance. Some setting events are obvious, while others can be more difficult to identify. For example, the death of a close family member months before school starts can increase the likelihood that the student will engage in problem behavior when school begins. Setting events can be social (e.g., arguments), physiological (e.g., illness), or environmental (e.g., noisy or crowded rooms).

Events that directly precede and serve as a "trigger" for a problem behavior are called antecedents. Antecedents serve as cues signaling when a behavior will be reinforced. A substitute teacher can sometimes be an antecedent for problem behavior. In this situation, the presence of someone other than the students' teacher signals that talking loudly, pretending to have homework already turned in, and off task behavior in general will be reinforced, allowing the students to escape from their school work. Antecedents can be related to the physical setting, materials, time of day or social situations. Examples of common antecedents include verbal demands, criticism, teasing, the absence of attention, and the presence or absence of specific people, materials, or events. The difference between an antecedent and a setting event is that setting events increase the likelihood that an antecedent will trigger problem behavior.   

One or more problem behaviors identified within a hypothesis statement may be maintained by the same function. Sometimes problem behaviors occur in a chain with less intense behaviors (complaining, tapping pencil loudly, placing head on desk) starting first and leading to more serious problem behavior (shouting, throwing pencil or books, pushing desk over). This important information can be used to intervene early in an escalating sequence of problem behaviors. 

A student's problem behavior may increase to obtain or avoid something. Consequences are the events that directly follow a behavior. Toys, praise, physical attention, and even "negative" attention are examples of events or items that may be identified as reinforcers. These events, items, or people immediately following a behavior are considered positive reinforcers if behavior increases when the consequence is presented. A behavior can also be reinforced by escaping or avoiding an event, item, or activity. If the consequence following a behavior results in escape or avoidance of events, items, or activities and behavior increases, it is referred to as negative reinforcement. Punishment, on the other hand, results in a decrease in behavior. A common mistake is to assume that a consequence is punishing for a student without considering whether the student's behavior is increasing or decreasing when the consequence is presented. The use of consequences such as time out, detention, and in-school suspension may actually be increasing the likelihood of problem behavior for students who engage in problem behavior to escape class or obtain attention from teachers and peers.

At times, there is not a clear social function for problem behavior. In these situations, internal sensory feedback can be positively or negatively reinforcing a person's problem behavior. Behaviors that continue to occur when the students are alone or occur across many situations and settings are sometimes maintained by internal reinforcers.
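The reinforcement logic in this paragraph can be summarized in a small sketch. The function name and its string inputs are hypothetical, chosen only to make the classification explicit:

```python
def classify_consequence(stimulus_change: str, behavior_trend: str) -> str:
    """Classify a consequence by its effect on future behavior.

    stimulus_change: "presented" (something is added after the behavior)
                     or "removed" (something is escaped or avoided)
    behavior_trend:  "increases" or "decreases" over repeated occurrences
    """
    if behavior_trend == "increases":
        # Reinforcement: the behavior becomes more likely over time.
        return ("positive reinforcement" if stimulus_change == "presented"
                else "negative reinforcement")
    # Punishment: the behavior becomes less likely over time.
    return "punishment"

# A "time out" that is followed by MORE disruption is functioning as
# reinforcement, not punishment, regardless of the adult's intent.
print(classify_consequence("removed", "increases"))  # negative reinforcement
```

The key point the sketch encodes: the label depends on what happens to the behavior, not on what the consequence was meant to do.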

Functional Behavioral Assessment Process

The process for conducting an FBA involves three different types of strategies: indirect assessment, direct observation, and functional analysis. These activities are completed by a team, including the teacher (or teachers), the student, parents, and other important individuals. A team approach ensures that the FBA gathers accurate information that reflects the perspectives of the student and the people within his or her social network. Sharing responsibilities for completing a more complicated FBA can reduce stress for any one person in the group. Schools that are implementing school-wide PBS often embed the FBA and PBS planning process into already existing student support teams.

Indirect Assessment

Indirect assessment strategies are often the first type of FBA strategy conducted and involve a combination of activities, including:

  • interviews,
  • record reviews, and
  • checklists and questionnaires.

Interviews

Interviews with key people are used to determine the concerns and perspectives about the student and to begin identifying the events associated with the occurrence and nonoccurrence of problem behavior. Teachers who report that the student engages in problem behavior in their classrooms are interviewed to gather initial information. However, teachers who indicate the student does not engage in problem behavior in their classes may also be able to share important details about the setting, teaching strategies, or other characteristics of the class that result in the student's success. The student (whenever possible), parents, and others are also interviewed to gain their perspectives.

Record Reviews

Reviewing a student's academic, behavioral, and psychological reports can uncover important information about possible setting events, social skills, issues related to quality of life, and academic strengths and problems.

Checklists and Questionnaires

A variety of checklists and questionnaires are available to assist in the FBA. Quality of life measures highlight the social aspects of the individual's life that may need attention. Checklists and rating scales related to social skills and problem behavior provide insight into the function maintaining the student's problem behavior.

Indirect assessment measures should be used in combination with direct observation methods.

Direct Observation

Direct observations of a student should be used to develop and support the hypothesis you have about why problem behaviors are occurring. Often, direct observations include gathering information about when problem behavior occurs, what happens right before problem behavior (e.g., antecedent triggers), what problem behavior looks like, and how people respond to the occurrence of problem behavior (e.g., consequences). There are many types of direct observation methods available. Here are some common strategies for collecting direct observation data.

Scatter Plot

A method called the scatter plot is frequently used to collect information about a problem behavior during specific time intervals across the day. The scatter plot helps identify whether problem behaviors occur at predictable time periods. This information can be used to identify specific routines and settings where interventions might occur.

ABC Chart

The Antecedent-Behavior-Consequence (ABC) chart is used to record descriptive information while observing a student in natural classroom, recess, lunch, home, or community settings. The ABC chart assists in the development and confirmation of the hypothesis statement.

Direct Measures of Behavior

Measurement methods can include recording the frequency, duration, latency, and intensity of problem behavior. Permanent products refer to a result of the behavior that can be measured. For instance, the number of assignments turned in to the teacher or completed office referral forms are examples of permanent products. Direct measures of behavior collected during the FBA process are often used later to compare with measures of a problem behavior once an intervention has been implemented. If there is a decrease in problem behavior or an increase in adaptive behavior compared to the data collected during the FBA (the baseline data), there is support for the PBS plan's effectiveness.
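As a rough sketch of how ABC-chart entries might be tallied to surface patterns, consider the following. All of the records here are invented for illustration; a real chart would come from live observation:

```python
from collections import Counter

# Hypothetical ABC-chart entries: (antecedent, behavior, consequence).
abc_records = [
    ("math worksheet handed out", "rips worksheet", "sent to office"),
    ("math worksheet handed out", "rips worksheet", "sent to office"),
    ("asked to read aloud", "puts head on desk", "teacher moves on"),
    ("math worksheet handed out", "shouts at teacher", "sent to office"),
]

# Tally which antecedents and consequences co-occur with problem behavior.
antecedents = Counter(a for a, _, _ in abc_records)
consequences = Counter(c for _, _, c in abc_records)

print(antecedents.most_common(1))   # [('math worksheet handed out', 3)]
print(consequences.most_common(1))  # [('sent to office', 3)]
```

In this invented data the recurring antecedent (math work) and consequence (leaving class) together suggest an escape-maintained hypothesis, which the team would then confirm with further observation.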

Functional Analysis

A "functional analysis" systematically tests hypotheses by manipulating the events that are thought to be associated with the occurrence of problem behavior. A functional analysis is a formal test of the relationship between environmental events and problem behavior. Each event that is suspected to contribute to the occurrence of a problem behavior is presented by itself while controlling other possible sources of variance. Researchers often use this approach because it is the most rigorous way to test a hypothesis about the function maintaining problem behavior.

To conduct an FBA effectively, it is necessary to combine indirect assessment with either direct observation strategies or functional analysis. Interviews, checklists, and rating scales may seem to save time. Unfortunately, the information gathered can be highly subjective and inaccurate. Without more objective methods to verify the indirect assessment information, your FBA will be incomplete. In most applied situations, a combination of indirect assessment and direct observation data will provide the information necessary to support your hypothesis.

If you have not completed an FBA before, the best way to learn how to use the tools in this module is to find someone who has a background and expertise in positive behavior support or applied behavior analysis. Ask this person to coach you as you complete your first FBA. This person can help you learn more about the FBA process and teach you how to make decisions about when a functional analysis may be necessary.

Developed by Rachel Freeman, University of Kansas
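The core comparison in the functional analysis described in this section can be sketched as follows. This is a minimal illustration, not a clinical procedure; the condition names follow common functional-analysis conventions, but the rates are invented:

```python
from statistics import mean

# Hypothetical responses-per-minute observed under each test condition,
# with other variables held constant (a simplified functional analysis).
condition_rates = {
    "attention": [2.1, 1.8, 2.4],  # attention delivered contingent on behavior
    "demand":    [0.3, 0.2, 0.4],  # escape from a task contingent on behavior
    "alone":     [0.1, 0.0, 0.1],  # no social consequences available
    "play":      [0.0, 0.1, 0.0],  # control: free attention, no demands
}

# The condition with the clearly highest rate points to the maintaining function.
likely_function = max(condition_rates, key=lambda c: mean(condition_rates[c]))
print(likely_function)  # attention
```

Because each condition isolates one suspected variable while the control ("play") holds reinforcement freely available, an elevated rate in a single condition supports that function as the maintaining one.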


Test the hypothesis statement to ensure it is correct

Once a hypothesis statement, or best guess, has been developed, the next step is to test that guess about the purpose of the behavior to ensure it is correct, as long as there is no risk of injury or damage. If the behavior involves risk of injury or damage, then proceed to Step 6. In this step, caregivers or service providers test the hypothesis by modifying the setting/activity to increase the probability that the behavior occurs.

To test the example hypothesis statement, Tino's mom could alternate between asking him to clean up his toys and asking him to wash his hands. In addition, she would need to change how she responds to the behaviors. Rather than taking him to his room and rocking him, she might ask him to clean up before being rocked.

If changing the tasks and consequences results in an increase in the interfering behavior (because Tino is no longer getting what he wants, which is to avoid cleaning up his toys and to get attention from mom), then the hypothesis is most likely correct. However, if Tino continues to have tantrums in both situations, the team would need to re-examine the hypothesis. The function of Tino's behavior might instead be avoiding the transition to another activity.
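The decision rule described here can be sketched numerically. The session counts below are invented for illustration only:

```python
from statistics import mean

# Hypothetical tantrums-per-session, before and after the caregiver modifies
# tasks and consequences as described above (all numbers are invented).
baseline = [1, 2, 1, 2]  # usual routine: behavior still produces escape/attention
test = [3, 4, 3, 4]      # modified routine: behavior no longer pays off

# An increase under the modified conditions is consistent with the
# hypothesized function; no change suggests re-examining the hypothesis.
hypothesis_supported = mean(test) > mean(baseline)
print(hypothesis_supported)  # True
```

The counterintuitive part this captures: when the hypothesis is right and the reinforcer is withheld, the behavior often gets temporarily worse, and that increase is evidence the team guessed the function correctly.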


This project is a program of the Frank Porter Graham Child Development Institute at the University of North Carolina at Chapel Hill.

What is a functional behavioral assessment (FBA)?


By Andrew M.I. Lee, JD

Expert reviewed by Amanda Morin


At a glance

Some students struggle to learn in school because of behavior challenges.

Functional behavioral assessment (FBA) is a process schools use to figure out what’s causing challenging behavior.

An FBA leads to a plan with strategies to improve the behavior.

When students run into trouble at school, it’s not always because of academics. Often, behavior is the reason kids struggle. Kids may disrupt class, become withdrawn, or even cut class. 

To help students, schools use a process to identify and understand challenging behaviors, and come up with possible solutions. It’s called functional behavioral assessment (FBA). An FBA is like an evaluation focused on behavior.

An FBA tries to figure out what’s behind behavior challenges. The basic idea is that behavior serves a purpose. Whether kids know it or not, they act in certain ways for a reason. If schools and families know what’s causing a behavior, they can find ways to change it.

Here’s an example.

Aaron has strong math skills. But when the class gets a math word problem, he gets angry and argues with the teacher. The behavior continues and the teacher doesn't know what to do. The school does an FBA and learns that Aaron has trouble showing work on word problems. That's why he's acting out: to avoid this stressful math situation.

A school team works on the FBA. The team is led by a person trained in understanding behavior, like a school psychologist or a behavior specialist. The FBA team may also include teachers, school staff, service providers, the student, and their family. 

When an FBA is complete, the school should have a good idea of what’s causing the behavior and how to help. The next step is to create a behavior intervention plan (BIP) with strategies and interventions to improve the behavior. Over time, the plan may change depending on the student’s needs.

Dive deeper

The steps of an FBA

During an FBA, the school team gathers information and uses it to create a plan to improve behavior. Here are the steps the team takes.

1. Define the challenging behavior.

An FBA starts by defining the student’s behavior in a specific and objective way. For example, instead of saying the student is “defiant,” the team might say the student “rips up worksheets and doesn’t respond when asked to show work in math class.”

2. Gather and analyze information.

Next, the team pulls together information and data about the behavior. It may look at school records, interview school staff who know and work with the student, and screen or test the student. The goal is to answer questions like:

When and where is this behavior happening?

Where is it not happening?

How often is the behavior occurring?

Who is around when it occurs?

What tends to happen right before and right after the behavior?

The student can provide this information, too. Only kids know how they feel in the moment. Asking them to keep track of feelings and emotions helps the team. The team might also note how classmates react.
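The questions above amount to structured ABC (antecedent-behavior-consequence) data collection. As a minimal sketch of such a log, with illustrative field names and made-up entries (not a standard format):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ABCEntry:
    """One observed instance of the target behavior."""
    when: str          # time or class period
    where: str         # setting
    antecedent: str    # what happened right before
    behavior: str      # the objectively defined behavior
    consequence: str   # what happened right after

# Hypothetical observations for a student like Aaron.
log = [
    ABCEntry("9:10", "math", "word problem handed out", "rips worksheet", "sent to hall"),
    ABCEntry("9:45", "math", "asked to show work", "rips worksheet", "task removed"),
    ABCEntry("13:00", "reading", "silent reading", "rips worksheet", "task removed"),
]

# Tally the conditions surrounding the behavior to spot patterns.
print(Counter(e.antecedent for e in log).most_common())
print(Counter(e.consequence for e in log).most_common())
```

Tallying antecedents and consequences like this is one simple way a team might surface the pattern (here, the behavior most often ends with the task being removed, hinting at an escape function).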

3. Find out the reason for the behavior.

Using the information collected, the team makes their best guess about what’s causing the behavior. It may be that the student is trying to escape or avoid something, for example.

4. Make a plan.

Finally, led by the school psychologist or a behavior specialist, the team creates a plan based on its best guess. Here’s where the school creates the BIP to teach and encourage positive behavior by the student. Often, as the team learns more, it will need to adjust the plan.

Learn about behavior intervention plans (BIPs).

Who has the right to an FBA

Not all students get an FBA, even if they have a behavior challenge. There are three situations where schools typically use this process.

As part of a school evaluation for special education. An evaluation looks at all aspects of a student’s learning. If the team thinks behavior is getting in the way of the student’s learning or the learning of classmates, it will do an FBA.

If new behavior concerns arise with kids who have an IEP or a 504 plan. When this happens, schools will do an FBA. The law requires schools to conduct an FBA whenever not doing one would deny kids a free appropriate public education.

In certain school discipline situations. Federal law requires an FBA in specific situations when a student is disciplined or removed from school. The rules are complex. If a student’s behavior was caused by, or had a direct relationship to, their disability, then an FBA is required. It’s also required when law enforcement, weapons, drugs, or serious injury are involved. Schools often do FBAs to evaluate risk for students with serious behavior issues.

What about kids who don’t have an IEP or a 504 plan? Or those who aren’t in a school discipline situation? An FBA isn’t commonly used for these kids. The law doesn’t require it, either.

However, schools often have other systems, like PBIS, to help students with behavior. Learn about PBIS.

Next steps: Educators

As an educator, you may be asked to work on an FBA for a student. It’s important to learn as much as you can about the process, and to communicate with parents and caregivers about what’s happening.

Learn more about how to apply positive behavior strategies in the classroom. And find out why kids may act out to communicate their needs.

Next steps: Parents and caregivers

Keep in mind that an FBA may not be a quick solution to behavior challenges. It will take a lot of work by the school, you, and your child to improve things. But the FBA can give a more complete picture of why your child is struggling. 

You can help by observing your child and telling the school what you’re seeing at home. Use a frustration log to help you keep track of behavior patterns.


Reevaluating the Neural Noise Hypothesis in Dyslexia: Insights from EEG and 7T MRS Biomarkers

Agnieszka Glica, Katarzyna Wasilewska, Julia Jurkowska, Jarosław Żygierewicz, Bartosz Kossowski, Katarzyna Jednoróg

  • Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Pasteur 3 Street, 02-093 Warsaw, Poland
  • Faculty of Physics, University of Warsaw, Pasteur 5 Street, 02-093 Warsaw, Poland
  • Laboratory of Brain Imaging, Nencki Institute of Experimental Biology, Polish Academy of Sciences, Pasteur 3 Street, 02-093 Warsaw, Poland
  • https://doi.org/10.7554/eLife.99920.1
  • Open access

The neural noise hypothesis of dyslexia posits an imbalance between excitatory and inhibitory (E/I) brain activity as an underlying mechanism of reading difficulties. This study provides the first direct test of this hypothesis using both indirect EEG power spectrum measures in 120 Polish adolescents and young adults (60 with dyslexia, 60 controls) and direct glutamate (Glu) and gamma-aminobutyric acid (GABA) concentrations from magnetic resonance spectroscopy (MRS) at a 7T MRI scanner in half of the sample. Our results, supported by Bayesian statistics, show no evidence of E/I balance differences between groups, challenging the hypothesis that cortical hyperexcitability underlies dyslexia. These findings suggest alternative mechanisms must be explored and highlight the need for further research into the E/I balance and its role in neurodevelopmental disorders.

eLife assessment

The authors combined neurophysiological (electroencephalography [EEG]) and neurochemical (magnetic resonance spectroscopy [MRS]) measures to empirically evaluate the neural noise hypothesis of developmental dyslexia. Their results are solid, supported by consistent findings from the two complementary methodologies and Bayesian statistics. Additional analyses, particularly on the neurochemical measures, are necessary to further substantiate the results. This study is useful for understanding the neural mechanisms of dyslexia and neural development in general.

  • https://doi.org/ 10.7554/eLife.99920.1.sa3

Introduction

According to the neural noise hypothesis of dyslexia, reading difficulties stem from an imbalance between excitatory and inhibitory (E/I) neural activity ( Hancock et al., 2017 ). The hypothesis predicts increased cortical excitation leading to more variable and less synchronous neural firing. This instability supposedly results in disrupted sensory representations and impedes phonological awareness and multisensory integration skills, crucial for learning to read ( Hancock et al., 2017 ). Yet, studies testing this hypothesis are lacking.

A non-invasive estimate of the E/I balance can be derived through assessment of glutamate (Glu) and gamma-aminobutyric acid (GABA) neurotransmitter concentrations via magnetic resonance spectroscopy (MRS) ( Finkelman et al., 2022 ) or through global, indirect estimations from the electroencephalography (EEG) signal ( Ahmad et al., 2022 ).

Direct measurements of Glu and GABA yielded conflicting findings. Higher Glu concentrations in the midline occipital cortex correlated with poorer reading performance in children ( Del Tufo et al., 2018 ; Pugh et al., 2014 ), while elevated Glu levels in the anterior cingulate cortex (ACC) corresponded to greater phonological skills ( Lebel et al., 2016 ). Elevated GABA in the left inferior frontal gyrus was linked to reduced verbal fluency in adults ( Nakai and Okanoya, 2016 ), and increased GABA in the midline occipital cortex in children was associated with slower reaction times in a linguistic task ( Del Tufo et al., 2018 ). However, notable null findings exist regarding dyslexia status and Glu levels in the ACC among children ( Horowitz-Kraus et al., 2018 ) as well as Glu and GABA levels in the visual and temporo-parietal cortices in both children and adults ( Kossowski et al., 2019 ).

Both beta (∼13-28 Hz) and gamma (> 30 Hz) oscillations may serve as E/I balance indicators ( Ahmad et al., 2022 ), as greater GABA-ergic activity has been associated with greater beta power ( Jensen et al., 2005 ; Porjesz et al., 2002 ) and gamma power or peak frequency ( Brunel and Wang, 2003 ; Chen et al., 2017 ). Resting-state analyses have often reported nonsignificant associations between beta power and dyslexia ( Babiloni et al., 2012 ; Fraga González et al., 2018 ; Xue et al., 2020 ); however, one study indicated lower beta power in dyslexic compared to control boys ( Fein et al., 1986 ). Mixed results have also been observed during tasks: one study found decreased beta power in the dyslexic group ( Spironelli et al., 2008 ), while another found increased beta power relative to the control group ( Rippon and Brunswick, 2000 ). A nonsignificant relationship between resting gamma power and dyslexia has also been reported ( Babiloni et al., 2012 ; Lasnick et al., 2023 ). When analyzing auditory steady-state responses, the dyslexic group had a lower gamma peak frequency, while no significant differences in gamma power were observed ( Rufener and Zaehle, 2021 ). Essentially, the majority of dyslexia studies examining gamma frequencies evaluated cortical entrainment to auditory stimuli ( Lehongre et al., 2011 ; Marchesotti et al., 2020 ; Van Hirtum et al., 2019 ), so their results do not provide direct evidence of differences in either gamma power or peak frequency between the dyslexic and control groups.

The EEG signal comprises both oscillatory, periodic activity, and aperiodic activity, characterized by a gradual decrease in power as frequencies rise (1/f signal) ( Donoghue et al., 2020 ). Recently recognized as a biomarker of E/I balance, a lower exponent of signal decay (flatter slope) indicates a greater dominance of excitation over inhibition in the brain, as shown by the simulation models of local field potentials, ratio of AMPA/GABA a synapses in the rat hippocampus ( Gao et al., 2017 ) and recordings under propofol or ketamine in macaques and humans ( Gao et al., 2017 ; Waschke et al., 2021 ). However, there are also pharmacological studies providing mixed results ( Colombo et al., 2019 ; Salvatore et al., 2024 ). Nonetheless, the 1/f signal has shown associations with various conditions putatively characterized by changes in E/I balance, such as early development in infancy ( Schaworonkow and Voytek, 2021 ), healthy aging ( Voytek et al., 2015 ) and neurodevelopmental disorders like ADHD ( Ostlund et al., 2021 ), autism spectrum disorder ( Manyukhina et al., 2022 ) or schizophrenia ( Molina et al., 2020 ). Despite its potential relevance, the evaluation of the 1/f signal in dyslexia remains limited to one study, revealing flatter slopes among dyslexic compared to control participants at rest ( Turri et al., 2023 ), thereby lending support to the notion of neural noise in dyslexia.
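The study fit the aperiodic component with a dedicated spectral parameterization algorithm that separates it from oscillatory peaks; as a rough illustration of what the exponent and offset mean, a plain log-log least-squares fit on a peak-free synthetic spectrum looks like this (a simplification, not the study's pipeline):

```python
import numpy as np

def aperiodic_fit(freqs, psd):
    """Fit log10(power) = offset - exponent * log10(freq).
    A smaller exponent means a flatter slope, read as relatively
    more excitation under the E/I interpretation."""
    log_f, log_p = np.log10(freqs), np.log10(psd)
    slope, offset = np.polyfit(log_f, log_p, 1)
    return offset, -slope

# Synthetic 1/f spectrum with exponent 1.5 and no oscillatory peaks.
freqs = np.linspace(2, 40, 100)
psd = 10.0 / freqs**1.5
offset, exponent = aperiodic_fit(freqs, psd)
print(round(exponent, 2))  # → 1.5
```

On real EEG spectra, oscillatory peaks (e.g. alpha, beta) bias such a naive fit, which is why peak-aware parameterization is used in practice.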

Here, we examined both indirect (1/f signal, beta, and gamma oscillations during both rest and a spoken language task) and direct (Glu and GABA) biomarkers of E/I balance in participants with dyslexia and age-matched controls. The neural noise hypothesis predicts flatter slopes of 1/f signal, decreased beta and gamma power, and higher Glu concentrations in the dyslexic group. Furthermore, we tested the relationships between different E/I measures. Flatter slopes of 1/f signal should be related to higher Glu level, while enhanced beta and gamma power to increased GABA level.

No evidence for group differences in the EEG E/I biomarkers

We recruited 120 Polish adolescents and young adults – 60 with a dyslexia diagnosis and 60 controls matched in sex, age, and family socio-economic status. The dyslexic group scored lower in all reading and reading-related tasks and higher in the Polish version of the Adult Reading History Questionnaire (ARHQ-PL) ( Bogdanowicz et al., 2015 ), where a higher score indicates a higher risk of dyslexia (see Table S1 in the Supplementary Material). Although all participants were within the intellectual norm, the dyslexic group scored lower on the IQ scale (including nonverbal subscale only) than the control group. However, the Bayesian statistics did not provide evidence for the difference between groups in the nonverbal IQ.

We analyzed the aperiodic (exponent and offset) components of the EEG signal at rest and during a spoken language task, where participants listened to a sentence and had to indicate its veracity. Due to a technical error, the signal from one person (a female from the dyslexic group) was not recorded during most of the language task and was excluded from the analyses. Hence, the results are provided for 119 participants – 59 in the dyslexic and 60 in the control group.

First, aperiodic parameter values were averaged across all electrodes and compared between groups (dyslexic, control) and conditions (resting state, language task) using a 2×2 repeated measures ANOVA. Age negatively correlated both with the exponent ( r = -.27, p = .003, BF 10 = 7.96) and offset ( r = -.40, p < .001, BF 10 = 3174.29) in line with previous investigations ( Cellier et al., 2021 ; McSweeney et al., 2021 ; Schaworonkow and Voytek, 2021 ; Voytek et al., 2015 ), therefore we included age as a covariate. Post-hoc tests are reported with Bonferroni corrected p -values.

For the mean exponent, we found a significant effect of age ( F (1,116) = 8.90, p = .003, η 2 p = .071, BF incl = 10.47), while the effects of condition ( F (1,116) = 2.32, p = .131, η 2 p = .020, BF incl = 0.39) and group ( F (1,116) = 0.08, p = .779, η 2 p = .001, BF incl = 0.40) were not significant and Bayes Factor did not provide evidence for either inclusion or exclusion. Interaction between group and condition ( F (1,116) = 0.16, p = .689, η 2 p = .001, BF incl = 0.21) was not significant and Bayes Factor indicated against including it in the model.

For the mean offset, we found significant effects of age ( F (1,116) = 22.57, p < .001, η 2 p = .163, BF incl = 1762.19) and condition ( F (1,116) = 23.04, p < .001, η 2 p = .166, BF incl > 10000) with post-hoc comparison indicating that the offset was lower in the resting state condition ( M = -10.80, SD = 0.21) than in the language task ( M = -10.67, SD = 0.26, p corr < .001). The effect of group ( F (1,116) = 0.00, p = .964, η 2 p = .000, BF incl = 0.54) was not significant while Bayes Factor did not provide evidence for either inclusion or exclusion. Interaction between group and condition was not significant ( F (1,116) = 0.07, p = .795, η 2 p = .001, BF incl = 0.22) and Bayes Factor indicated against including it in the model.

Next, we restricted analyses to language regions and averaged exponent and offset values from the frontal electrodes corresponding to the left (F7, FT7, FC5) and right inferior frontal gyrus (F8, FT8, FC6), as well as temporal electrodes corresponding to the left (T7, TP7, TP9) and right superior temporal sulcus, STS (T8, TP8, TP10) ( Giacometti et al., 2014 ; Scrivener and Reader, 2022 ). A 2×2×2×2 (group, condition, hemisphere, region) repeated measures ANOVA with age as a covariate was applied. Power spectra from the left STS at rest and during the language task are presented in Figure 1A and C , while the results for the exponent, offset, and beta power are presented in Figure 1B and D .

Figure 1.

Overview of the main results obtained in the study. (A) Power spectral densities averaged across 3 electrodes (T7, TP7, TP9) corresponding to the left superior temporal sulcus (STS) separately for dyslexic (DYS) and control (CON) groups at rest and (C) during the language task. (B) Plots illustrating results for the exponent, offset, and the beta power from the left STS electrodes at rest and (D ) during the language task. (E) Group results (CON > DYS) from the fMRI localizer task for words compared to the control stimuli (p < .05 FWE cluster threshold) and overlap of the MRS voxel placement across participants. (F) MRS spectra separately for DYS and CON groups. (G) Plots illustrating results for the Glu, GABA, Glu/GABA ratio and the Glu/GABA imbalance. (H ) Semi-partial correlation between offset at rest (left STS electrodes) and Glu controlling for age and gray matter volume (GMV).

For the exponent, there were significant effects of age ( F (1,116) = 14.00, p < .001, η 2 p = .108, BF incl = 11.46) and condition ( F (1,116) = 4.06, p = .046, η 2 p = .034, BF incl = 1.88), however, Bayesian statistics did not provide evidence for either including or excluding the condition factor. Furthermore, post-hoc comparisons did not reveal significant differences between the exponent at rest ( M = 1.51, SD = 0.17) and during the language task ( M = 1.51, SD = 0.18, p corr = .546). There was also a significant interaction between region and group, although Bayes Factor indicated against including it in the model ( F (1,116) = 4.44, p = .037, η 2 p = .037, BF incl = 0.25). Post-hoc comparisons indicated that the exponent was higher in the frontal than in the temporal region both in the dyslexic ( M frontal = 1.54, SD frontal = 0.15, M temporal = 1.49, SD temporal = 0.18, p corr < .001) and in the control group ( M frontal = 1.54, SD frontal = 0.17, M temporal = 1.46, SD temporal = 0.20, p corr < .001). The difference between groups was not significant either in the frontal ( p corr = .858) or temporal region ( p corr = .441). The effects of region ( F (1,116) = 1.17, p = .282, η 2 p = .010, BF incl > 10000) and hemisphere ( F (1,116) = 1.17, p = .282, η 2 p = .010, BF incl = 12.48) were not significant, although Bayesian statistics indicated in favor of including them in the model. Furthermore, the interactions between condition and group ( F (1,116) = 0.18, p = .673, η 2 p = .002, BF incl = 3.70), and between region, hemisphere, and condition ( F (1,116) = 0.11, p = .747, η 2 p = .001, BF incl = 7.83) were not significant, however Bayesian statistics indicated in favor of including these interactions in the model. The effect of group ( F (1,116) = 0.12, p = .733, η 2 p = .001, BF incl = 1.19) was not significant, while Bayesian statistics did not provide evidence for either inclusion or exclusion. Any other interactions were not significant and Bayes Factor indicated against including them in the model.

In the case of offset, there were significant effects of condition ( F (1,116) = 20.88, p < .001, η 2 p = .153, BF incl > 10000) and region ( F (1,116) = 6.18, p = .014, η 2 p = .051, BF incl > 10000). For the main effect of condition, post-hoc comparison indicated that the offset was lower in the resting state condition ( M = -10.88, SD = 0.33) than in the language task ( M = -10.76, SD = 0.38, p corr < .001), while for the main effect of region, post-hoc comparison indicated that the offset was lower in the temporal ( M = -10.94, SD = 0.37) as compared to the frontal region ( M = -10.69, SD = 0.34, p corr < .001). There was also a significant effect of age ( F (1,116) = 20.84, p < .001, η 2 p = .152, BF incl = 0.23) and interaction between condition and hemisphere, ( F (1,116) = 4.35, p = .039, η 2 p = .036, BF incl = 0.21), although Bayes Factor indicated against including these factors in the model. Post-hoc comparisons for the condition*hemisphere interaction indicated that the offset was lower in the resting state condition than in the language task both in the left ( M rest = -10.85, SD rest = 0.34, M task = -10.73, SD task = 0.40, p corr < .001) and in the right hemisphere ( M rest = -10.91, SD rest = 0.31, M task = -10.79, SD task = 0.37, p corr < .001) and that the offset was lower in the right as compared to the left hemisphere both at rest ( p corr < .001) and during the language task ( p corr < .001). The interactions between region and condition ( F (1,116) = 1.76, p = .187, η 2 p = .015, BF incl > 10000), hemisphere and group ( F (1,116) = 1.58, p = .211, η 2 p = .013, BF incl = 1595.18), region and group ( F (1,116) = 0.27, p = .605, η 2 p = .002, BF incl = 9.32), as well as between region, condition, and group ( F (1,116) = 0.21, p = .651, η 2 p = .002, BF incl = 2867.18) were not significant, although Bayesian statistics indicated in favor of including them in the model. 
The effect of group ( F (1,116) = 0.18, p = .673, η 2 p = .002, BF incl < 0.00001) was not significant and Bayesian statistics indicated against including it in the model. Any other interactions were not significant and Bayesian statistics indicated against including them in the model or did not provide evidence for either inclusion or exclusion.

Then, we analyzed the aperiodic-adjusted brain oscillations. Since the algorithm did not find the gamma peak (30-43 Hz) above the aperiodic component in the majority of participants, we report the results only for the beta (14-30 Hz) power. We performed a similar regional analysis as for the exponent and offset with a 2×2×2×2 (group, condition, hemisphere, region) repeated measures ANOVA. However, we did not include age as a covariate, as it did not correlate with any of the periodic measures. The sample size was 117 (DYS n = 57, CON n = 60) since in 2 participants the algorithm did not find the beta peak above the aperiodic component in the left frontal electrodes during the task.

The analysis revealed a significant effect of condition ( F (1,115) = 8.58, p = .004, η 2 p = .069, BF incl = 5.82) with post-hoc comparison indicating that the beta power was greater during the language task ( M = 0.53, SD = 0.22) than at rest ( M = 0.50, SD = 0.19, p corr = .004). There were also significant effects of region ( F (1,115) = 10.98, p = .001, η 2 p = .087, BF incl = 23.71), and hemisphere ( F (1,115) = 12.08, p < .001, η 2 p = .095, BF incl = 23.91). For the main effect of region, post-hoc comparisons indicated that the beta power was greater in the temporal ( M = 0.52, SD = 0.21) as compared to the frontal region ( M = 0.50, SD = 0.19, p corr = .001), while for the main effect of hemisphere, post-hoc comparisons indicated that the beta power was greater in the right ( M = 0.52, SD = 0.20) than in the left hemisphere ( M = 0.51, SD = 0.20, p corr < .001). There was a significant interaction between condition and region ( F (1,115) = 12.68, p < .001, η 2 p = .099, BF incl = 55.26) with greater beta power during the language task as compared to rest significant in the temporal ( M rest = 0.50, SD rest = 0.20, M task = 0.55, SD task = 0.24, p corr < .001), while not in the frontal region ( M rest = 0.49, SD rest = 0.18, M task = 0.51, SD task = 0.22, p corr = .077). Also, greater beta power in the temporal as compared to the frontal region was significant during the language task ( p corr < .001), while not at rest ( p corr = .283). The effect of group ( F (1,115) = 0.05, p = .817, η 2 p = .000, BF incl < 0.00001) was not significant and Bayes Factor indicated against including it in the model. Any other interactions were not significant and Bayesian statistics indicated against including them in the model or did not provide evidence for either inclusion or exclusion.

Additionally, building upon previous findings which demonstrated differences in dyslexia in aperiodic and periodic components within the parieto-occipital region ( Turri et al., 2023 ), we have included analyses for the same cluster of electrodes in the Supplementary Material. However, in this region, we also did not find evidence for group differences either in the exponent, offset or beta power.

No evidence for group differences in Glu and GABA concentrations in the left STS

In total, 59 out of 120 participants underwent an MRS session on a 7T MRI scanner - 29 from the dyslexic group (13 females, 16 males) and 30 from the control group (14 females, 16 males). The MRS voxel was placed in the left STS, in the region showing the highest activation for both visual and auditory words (compared to control stimuli), localized individually in each participant based on an fMRI task (see Figure 1E for overlap of the MRS voxel placement across participants and Figure 1F for MRS spectra). We decided to analyze the neurometabolites’ levels derived from the left STS, as this region is consistently related to functional and structural differences in dyslexia across languages ( Yan et al., 2021 ).

Due to insufficient magnetic homogeneity or interruption of the study by the participants, 5 participants from the dyslexic group had to be excluded. We excluded a further 4 participants due to poor quality of the obtained spectra; thus, the results for Glu are reported for 50 participants - 21 in the dyslexic (12 females, 9 males) and 29 in the control group (13 females, 16 males). In the case of GABA, we additionally excluded 3 participants based on the Cramér-Rao Lower Bounds (CRLB) > 20%. Therefore, the results for GABA, Glu/GABA ratio and Glu/GABA imbalance are reported for 47 participants - 20 in the dyslexic (12 females, 8 males) and 27 in the control group (11 females, 16 males). Demographic and behavioral characteristics for the subsample of 47 participants are provided in Table S2.

For each metabolite, we performed a separate univariate ANCOVA with the effect of group being tested and voxel’s gray matter volume (GMV) as a covariate (see Figure 1G ). For the Glu analysis, we also included age as a covariate, due to negative correlation between variables ( r = -.35, p = .014, BF 10 = 3.41). The analysis revealed significant effect of GMV ( F (1,46) = 8.18, p = .006, η 2 p = .151, BF incl = 12.54), while the effects of age ( F (1,46) = 3.01, p = .090, η 2 p = .061, BF incl = 1.15) and group ( F (1,46) = 1.94, p = .170, η 2 p = .040, BF incl = 0.63) were not significant and Bayes Factor did not provide evidence for either inclusion or exclusion.

Conversely, GABA did not correlate with age ( r = -.11, p = .481, BF 10 = 0.23), thus age was not included as a covariate. The analysis revealed a significant effect of GMV ( F (1,44) = 4.39, p = .042, η 2 p = .091, BF incl = 1.64), however Bayes Factor did not provide evidence for either inclusion or exclusion. The effect of group was not significant ( F (1,44) = 0.49, p = .490, η 2 p = .011, BF incl = 0.35) although Bayesian statistics did not provide evidence for either inclusion or exclusion.

Also, Glu/GABA ratio did not correlate with age ( r = -.05, p = .744, BF 10 = 0.19), therefore age was not included as a covariate. The results indicated that the effect of GMV was not significant ( F (1,44) = 0.95, p = .335, η 2 p = .021, BF incl = 0.43) while Bayes Factor did not provide evidence for either inclusion or exclusion. The effect of group was not significant ( F (1,44) = 0.01, p = .933, η 2 p = .000, BF incl = 0.29) and Bayes Factor indicated against including it in the model.

Following a recent study examining developmental changes in both EEG and MRS E/I biomarkers ( McKeon et al., 2024 ), we calculated an additional measure of Glu/GABA imbalance, computed as the absolute residual value from the linear regression of Glu predicted by GABA, with greater values indicating greater Glu/GABA imbalance. Like the previous work ( McKeon et al., 2024 ), we took the square root of this value to ensure a normal distribution of the data. This measure did not correlate with age ( r = -.05, p = .719, BF 10 = 0.19); thus, age was not included as a covariate. The results indicated that the effect of GMV was not significant ( F (1,44) = 0.63, p = .430, η 2 p = .014, BF incl = 0.37) while Bayes Factor did not provide evidence for either inclusion or exclusion. The effect of group was not significant ( F (1,44) = 0.74, p = .396, η 2 p = .016, BF incl = 0.39) although Bayesian statistics did not provide evidence for either inclusion or exclusion.
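The imbalance measure can be written out directly: regress Glu on GABA, take the absolute residual, then its square root. A sketch with synthetic numbers (not the study's data):

```python
import numpy as np

def glu_gaba_imbalance(glu, gaba):
    """Square root of |residual| from regressing Glu on GABA;
    larger values mean Glu is further from the level predicted by GABA."""
    slope, intercept = np.polyfit(gaba, glu, 1)
    residuals = glu - (slope * gaba + intercept)
    return np.sqrt(np.abs(residuals))

# Synthetic metabolite estimates for 50 hypothetical participants.
rng = np.random.default_rng(0)
gaba = rng.normal(2.0, 0.3, 50)
glu = 1.5 * gaba + rng.normal(0, 0.2, 50)
imbalance = glu_gaba_imbalance(glu, gaba)
print(imbalance.shape)  # one non-negative score per participant
```

The square root compresses large residuals, which is what normalizes the distribution of the scores.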

Correspondence between Glu and GABA concentrations and EEG E/I biomarkers is limited

Next, we investigated correlations between Glu and GABA concentrations in the left STS and EEG markers of E/I balance. Semi-partial correlations were performed ( Table 1 ) to control for confounding variables - for Glu the effects of age and GMV were regressed, for GABA, Glu/GABA ratio and Glu/GABA imbalance the effect of GMV was regressed, while for exponents and offsets the effect of age was regressed. For zero-order correlations between variables see Table S3.
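A semi-partial correlation of this kind removes the confound from one variable only, then correlates that residual with the other, untouched variable. A minimal numpy sketch (not the study's code; the toy numbers are chosen so the result is exactly -1):

```python
import numpy as np

def semi_partial_corr(x, y, covariates):
    """Correlate x with y after regressing the covariates out of x only."""
    X = np.column_stack([np.ones_like(x), *covariates])
    beta, *_ = np.linalg.lstsq(X, x, rcond=None)
    x_resid = x - X @ beta
    return np.corrcoef(x_resid, y)[0, 1]

# Toy example: after removing the group-mean structure in z from x,
# the residuals of x are perfectly anticorrelated with y.
x = np.array([1., 2., 3., 4.])
z = np.array([0., 0., 1., 1.])
y = np.array([1., 0., 1., 0.])
print(round(semi_partial_corr(x, y, [z]), 6))  # → -1.0
```

This differs from a partial correlation, which would regress the covariates out of both variables before correlating.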

Table 1.

Semi-partial Correlations Between Direct and Indirect Markers of Excitatory-Inhibitory Balance. For Glu the Effects of Age and Gray Matter Volume (GMV) Were Regressed, for GABA, Glu/GABA Ratio and Glu/GABA Imbalance the Effect of GMV was Regressed, While for Exponents and Offsets the Effect of Age was Regressed

Glu negatively correlated with offset in the left STS both at rest ( r = -.38, p = .007, BF 10 = 6.28; Figure 1H ) and during the language task ( r = -.37, p = .009, BF 10 = 5.05), while any other correlations between Glu and EEG markers were not significant and Bayesian statistics indicated in favor of null hypothesis or provided absence of evidence for either hypothesis. Furthermore, Glu/GABA imbalance positively correlated with exponent at rest both averaged across all electrodes ( r = .29, p = .048, BF 10 = 1.21), as well as in the left STS electrodes ( r = .35, p = .017, BF 10 = 2.87) although Bayes Factor provided absence of evidence for either alternative or null hypothesis. Conversely, GABA and Glu/GABA ratio were not significantly correlated with any of the EEG markers and Bayesian statistics indicated in favor of null hypothesis or provided absence of evidence for either hypothesis.

Testing the paths from neural noise to reading

The neural noise hypothesis of dyslexia predicts an impact of neural noise on reading through the impairment of (1) phonological awareness, (2) lexical access and generalization, and (3) multisensory integration ( Hancock et al., 2017 ). Therefore, we analyzed correlations between these variables, reading skills, and direct and indirect markers of E/I balance. For the composite score of phonological awareness, we averaged z-scores from phoneme deletion, phoneme and syllable spoonerisms tasks. For the composite score of lexical access and generalization, we averaged z-scores from objects, colors, letters and digits subtests from the rapid automatized naming (RAN) task, while for the composite score of reading we averaged z-scores from words and pseudowords read per minute, and text reading time in the reading comprehension task. The outcomes from the RAN and reading comprehension task were transformed from raw time scores to items/time scores in order to provide the same direction of relationships for all z-scored measures, with greater values indicating better skills. For the multisensory integration score we used results from the redundant target effect task reported in our previous work ( Glica et al., 2024 ), with greater values indicating a greater magnitude of multisensory integration.
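The composite construction (z-score each measure, invert timed scores to items/time so that higher always means better, then average) can be sketched with made-up numbers:

```python
import numpy as np

def zscore(a):
    """Standardize to mean 0, sample SD 1."""
    return (a - a.mean()) / a.std(ddof=1)

# Illustrative scores for 4 hypothetical participants (not study data).
words_per_min = np.array([80., 95., 60., 110.])
pseudo_per_min = np.array([40., 55., 30., 70.])
text_time_s = np.array([120., 100., 150., 90.])

# Invert the timed score so greater = faster reading,
# matching the direction of the per-minute measures.
text_items_per_s = 1.0 / text_time_s

reading_composite = np.mean(
    [zscore(words_per_min), zscore(pseudo_per_min), zscore(text_items_per_s)],
    axis=0,
)
print(reading_composite.round(2))
```

Because each component is z-scored first, no single measure dominates the composite by virtue of its raw scale.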

Age positively correlated with multisensory integration ( r = .38, p < .001, BF 10 = 87.98), composite scores of reading ( r = .22, p = .014, BF 10 = 2.24) and phonological awareness ( r = .21, p = .021, BF 10 = 1.59), while not with the composite score of RAN ( r = .13, p = .151, BF 10 = 0.32). Hence, we regressed the effect of age from multisensory integration, reading and phonological awareness scores and performed semi-partial correlations ( Table 2 , for zero-order correlations see Table S4).

Table 2.

Semi-partial Correlations Between Reading, Phonological Awareness, Rapid Automatized Naming, Multisensory Integration and Markers of Excitatory-Inhibitory Balance. For Reading, Phonological Awareness and Multisensory Integration the Effect of Age was Regressed, for Glu the Effects of Age and Gray Matter Volume (GMV) Were Regressed, for GABA, Glu/GABA Ratio and Glu/GABA Imbalance the Effect of GMV was Regressed, While for Exponents and Offsets the Effect of Age was Regressed

Phonological awareness positively correlated with offset in the left STS at rest ( r = .18, p = .049, BF 10 = 0.77) and with beta power in the left STS both at rest ( r = .23, p = .011, BF 10 = 2.73; Figure 2A ) and during the language task ( r = .23, p = .011, BF 10 = 2.84; Figure 2B ), although Bayes Factor provided absence of evidence for either alternative or null hypothesis. Furthermore, multisensory integration positively correlated with GABA concentration ( r = .31, p = .034, BF 10 = 1.62) and negatively with Glu/GABA ratio ( r = -.32, p = .029, BF 10 = 1.84), although Bayes Factor provided absence of evidence for either alternative or null hypothesis. Any other correlations between reading skills and E/I balance markers were not significant and Bayesian statistics indicated in favor of null hypothesis or provided absence of evidence for either hypothesis.


Associations between beta power, phonological awareness and reading. (A) Semi-partial correlation between phonological awareness controlling for age and beta power (in the left STS electrodes) at rest and (B) during the language task. (C) Partial correlation between phonological awareness and reading controlling for age. (D) Mediation analysis results. Unstandardized b regression coefficients are presented. Age was included in the analysis as a covariate. 95% CI - 95% confidence intervals. left STS - values averaged across 3 electrodes corresponding to the left superior temporal sulcus (T7, TP7, TP9).

Given that beta power correlated with phonological awareness, and considering the prediction that neural noise impedes reading by affecting phonological awareness, we examined this relationship with a mediation model. Since phonological awareness correlated with beta power in the left STS both at rest and during the language task, the outcomes from these two conditions were averaged prior to the mediation analysis. We employed the PROCESS macro v4.2 ( Hayes, 2017 ) for IBM SPSS Statistics v29, model 4 (simple mediation), with 5000 bootstrap samples to assess the significance of the indirect effect. Since age correlated with both phonological awareness and reading, we included age as a covariate.
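A minimal sketch of what PROCESS model 4 estimates: the indirect effect a*b (predictor → mediator, mediator → outcome controlling for the predictor), with a percentile-bootstrap confidence interval. The simulated data and function names are hypothetical; this is not the PROCESS implementation:

```python
import numpy as np

def ols_coefs(dep, preds):
    """OLS coefficients: intercept first, then one per predictor."""
    X = np.column_stack([np.ones(len(dep))] + list(preds))
    return np.linalg.lstsq(X, dep, rcond=None)[0]

def indirect_effect(x, m, y, cov):
    """a*b for simple mediation with one covariate:
    a = effect of x on m; b = effect of m on y, controlling for x."""
    a = ols_coefs(m, [x, cov])[1]
    b = ols_coefs(y, [x, m, cov])[2]
    return a * b

def bootstrap_ci(x, m, y, cov, n_boot=1000, seed=0):
    """Percentile bootstrap 95% CI for the indirect effect."""
    rng = np.random.default_rng(seed)
    n = len(x)
    est = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        est.append(indirect_effect(x[idx], m[idx], y[idx], cov[idx]))
    return np.percentile(est, [2.5, 97.5])

# Hypothetical data with a true x -> m -> y pathway plus a covariate
rng = np.random.default_rng(1)
x = rng.normal(size=300)
cov = rng.normal(size=300)
m = 0.8 * x + rng.normal(scale=0.5, size=300)
y = 0.7 * m + rng.normal(scale=0.5, size=300)
lo, hi = bootstrap_ci(x, m, y, cov)
```

An indirect effect is deemed significant when the bootstrap CI excludes zero, which is the criterion applied in the analysis below.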

The results indicated that both effects of beta power in the left STS ( b = .96, t (116) = 2.71, p = .008, BF incl = 7.53) and age ( b = .06, t (116) = 2.55, p = .012, BF incl = 5.98) on phonological awareness were significant. The effect of phonological awareness on reading was also significant ( b = .69, t (115) = 8.16, p < .001, BF incl > 10000), while the effects of beta power ( b = -.42, t (115) = -1.25, p = .213, BF incl = 0.52) and age ( b = .03, t (115) = 1.18, p = .241, BF incl = 0.49) on reading were not significant when controlling for phonological awareness. Finally, the indirect effect of beta power on reading through phonological awareness was significant ( b = .66, SE = .24, 95% CI = [.24, 1.18]), while the total effect of beta power was not significant ( b = .24, t (116) = 0.61, p = .546, BF incl = 0.41). The results from the mediation analysis are presented in Figure 2D .

Although similar mediation analysis could have been conducted for the Glu/GABA ratio, multisensory integration, and reading based on the correlations between these variables, we did not test this model due to the small sample size (47 participants), which resulted in insufficient statistical power.

The current study aimed to validate the neural noise hypothesis of dyslexia ( Hancock et al., 2017 ) utilizing E/I balance biomarkers from EEG power spectra and ultra-high-field MRS. Contrary to its predictions, we did not observe differences either in 1/f slope, beta power, or Glu and GABA concentrations in participants with dyslexia. Relations between E/I balance biomarkers were limited to significant correlations between Glu and the offset when controlling for age, and between Glu/GABA imbalance and the exponent.

In terms of indirect markers, our study found no evidence of group differences in the aperiodic components of the EEG signal. In most of the models, we did not find evidence for either including or excluding the effect of the group when Bayesian statistics were evaluated. The only exception was the regional analysis for the offset, where results indicated against including the group factor in the model. These findings diverge from previous research on an Italian cohort, which reported decreased exponent and offset in the dyslexic group at rest, specifically within the parieto-occipital region, but not the frontal region ( Turri et al., 2023 ). Despite our study involving twice the number of participants and utilizing a longer acquisition time, we observed no group differences, even in the same cluster of electrodes (refer to Supplementary Material). The participants in both studies were of similar ages. The only methodological difference – EEG acquisition with eyes open in our study versus both eyes-open and eyes-closed in the work by Turri and colleagues (2023) – cannot fully account for the overall lack of group differences observed. The diverging study outcomes highlight the importance of considering potential inflation of effect sizes in studies with smaller samples.

Although a lower exponent of the EEG power spectrum has been associated with other neurodevelopmental disorders, such as ADHD ( Ostlund et al., 2021 ) or ASD (but only in children with IQ below average) ( Manyukhina et al., 2022 ), our study suggests that this is not the case for dyslexia. Considering the frequent comorbidity of dyslexia and ADHD ( Germanò et al., 2010 ; Langer et al., 2019 ), increased neural noise could serve as a common underlying mechanism for both disorders. However, our specific exclusion of participants with a comorbid ADHD diagnosis indicates that the EEG spectral exponent cannot serve as a neurobiological marker for dyslexia in isolation. No information regarding such exclusion criteria was provided in the study by Turri et al. (2023) ; thus, potential comorbidity with ADHD may explain the positive findings related to dyslexia reported therein.

Regarding the aperiodic-adjusted oscillatory EEG activity, Bayesian statistics for beta power indicated in favor of excluding the group factor from the model. Non-significant group differences in beta power at rest have been previously reported in studies that did not account for aperiodic components ( Babiloni et al., 2012 ; Fraga González et al., 2018 ; Xue et al., 2020 ). This again contrasts with the study by Turri et al. (2023) , which observed lower aperiodic-adjusted beta power (at 15-25 Hz) in the dyslexic group. Concerning beta power during the task, our results also contrast with previous studies which showed either reduced ( Spironelli et al., 2008 ) or increased ( Rippon and Brunswick, 2000 ) beta activity in participants with dyslexia. Nevertheless, since both of these studies employed phonological tasks and involved children’s samples, their relevance to our work is limited.

In terms of direct neurometabolite concentrations derived from the MRS, we found no evidence for group differences in either Glu, GABA or Glu/GABA imbalance in the language-sensitive left STS. Conversely, the Bayes Factor suggested against including the group factor in the model for the Glu/GABA ratio. While no previous study has localized the MRS voxel based on the individual activation levels, nonsignificant group differences in Glu and GABA concentrations within the temporo-parietal and visual cortices have been reported in both children and adults ( Kossowski et al., 2019 ), as well as in the ACC in children ( Horowitz-Kraus et al., 2018 ). Although our MRS sample size was half that of the EEG sample, previous research reporting group differences in Glu concentrations involved an even smaller dyslexic cohort (10 participants with dyslexia and 45 typical readers in Pugh et al., 2014 ). Consistent with earlier studies that identified group differences in Glu and GABA concentrations ( Del Tufo et al., 2018 ; Pugh et al., 2014 ) we reported neurometabolite levels relative to total creatine (tCr), indicating that the absence of corresponding results cannot be ascribed to reference differences. Notably, our analysis of the fMRI localizer task revealed greater activation in the control group as compared to the dyslexic group within the left STS for words than control stimuli (see Figure 1E and the Supplementary Material) in line with previous observations ( Blau et al., 2009 ; Dębska et al., 2021 ; Yan et al., 2021 ).

Irrespective of dyslexia status, we found negative correlations between age and exponent and offset, consistent with previous research ( Cellier et al., 2021 ; McSweeney et al., 2021 ; Schaworonkow and Voytek, 2021 ; Voytek et al., 2015 ) and providing further evidence for maturational changes in the aperiodic components (indicative of increased E/I ratio). At the same time, in line with previous MRS works ( Kossowski et al., 2019 ; Marsman et al., 2013 ), we observed a negative correlation between age and Glu concentrations. This suggests a contrasting pattern to EEG results, indicating a decrease in neuronal excitation with age. We also found a condition-dependent change in offset, with a lower offset observed at rest than during the language task. The offset value represents the uniform shift in power across frequencies ( Donoghue et al., 2020 ), with a higher offset linked to increased neuronal spiking rates ( Manning et al., 2009 ). Change in offset between conditions is consistent with observed increased alpha and beta power during the task, indicating elevated activity in both broadband (offset) and narrowband (alpha and beta oscillations) frequency ranges during the language task.

In regard to relationships between EEG and MRS E/I balance biomarkers, we observed a negative correlation between the offset in the left STS (both at rest and during the task) and Glu levels, after controlling for age and GMV. This correlation was not observed in zero-order correlations (see Supplementary Material). Contrary to our predictions, informed by previous studies linking the exponent to E/I ratio ( Colombo et al., 2019 ; Gao et al., 2017 ; Waschke et al., 2021 ), we found the correlation with Glu levels to involve the offset rather than the exponent. This outcome was unexpected, as none of the referenced studies reported results for the offset. However, given the strong correlation between the exponent and offset observed in our study ( r = .68, p < .001, BF 10 > 10000 and r = .72, p < .001, BF 10 > 10000 at rest and during the task, respectively), it is conceivable that a similar association might have been identified for the offset had those studies analyzed it.

Nevertheless, previous studies examining relationships between EEG and MRS E/I balance biomarkers ( McKeon et al., 2024 ; van Bueren et al., 2023 ) did not identify a similar negative association between Glu and the offset. Instead, one study noted a positive correlation between the Glu/GABA ratio and the exponent ( van Bueren et al., 2023 ), which was significant in the intraparietal sulcus but not in the middle frontal gyrus. This finding presents counterintuitive evidence, suggesting that an increased E/I balance, as indicated by MRS, is associated with a higher aperiodic exponent, considered indicative of decreased E/I balance. In line with this pattern, another study discovered a positive relationship between the exponent and Glu levels in the dorsolateral prefrontal cortex ( McKeon et al., 2024 ). Furthermore, they observed a positive correlation between the exponent and the Glu/GABA imbalance measure, calculated as the absolute residual value of a linear relationship between Glu and GABA ( McKeon et al., 2024 ), a finding replicated in the current work. This implies that a higher spectral exponent might not be directly linked to MRS-derived Glu or GABA levels, but rather to a greater disproportion (in either direction) between these neurotransmitters. These findings, alongside the contrasting relationships between EEG and MRS biomarkers and age, suggest that these methods may reflect distinct biological mechanisms of E/I balance.

Evidence regarding associations between neurotransmitter levels and oscillatory activity also remains mixed. One study found a positive correlation between gamma peak frequency and GABA concentration in the visual cortex ( Muthukumaraswamy et al., 2009 ), a finding later challenged by a study with a larger sample ( Cousijn et al., 2014 ). Similarly, one study noted a positive correlation between GABA in the left STS and gamma power ( Balz et al., 2016 ), while another found a non-significant relation between these measures ( Wyss et al., 2017 ). Moreover, in a simultaneous EEG and MRS study, an event-related increase in Glu following visual stimulation was found to correlate with greater gamma power ( Lally et al., 2014 ). We could not investigate such associations, as the algorithm failed to identify a gamma peak above the aperiodic component for the majority of participants. Also, contrary to previous findings showing associations between GABA in the motor and sensorimotor cortices and beta power ( Cheng et al., 2017 ; Gaetz et al., 2011 ) or beta peak frequency ( Baumgarten et al., 2016 ), we observed no correlation between Glu or GABA levels and beta power. However, these studies placed MRS voxels in motor regions, which are typically linked to movement-related beta activity ( Baker et al., 1999 ; Rubino et al., 2006 ; Sanes and Donoghue, 1993 ), and did not adjust beta power for aperiodic components, making direct comparisons with our findings limited.

Finally, we examined pathways posited by the neural noise hypothesis of dyslexia, through which increased neural noise may impact reading: phonological awareness, lexical access and generalization, and multisensory integration ( Hancock et al., 2017 ). Phonological awareness was positively correlated with the offset in the left STS at rest, and with beta power in the left STS, both at rest and during the task. Additionally, multisensory integration showed correlations with GABA and the Glu/GABA ratio. Since the Bayes Factor did not provide conclusive evidence supporting either the alternative or null hypothesis, these associations appear rather weak. Nonetheless, given the hypothesis’s prediction of a causal link between these variables, we further examined a mediation model involving beta power, phonological awareness, and reading skills. The results suggested a positive indirect effect of beta power on reading via phonological awareness, whereas both the direct (controlling for phonological awareness and age) and total effects (without controlling for phonological awareness) were not significant. This finding is noteworthy, considering that participants with dyslexia exhibited reduced phonological awareness and reading skills, despite no observed differences in beta power. Given the cross-sectional nature of our study, further longitudinal research is necessary to confirm the causal relation among these variables. The effects of GABA and the Glu/GABA ratio on reading, mediated by multisensory integration, warrant further investigation. Additionally, considering our finding that only males with dyslexia showed deficits in multisensory integration ( Glica et al., 2024 ), sex should be considered as a potential moderating factor in future analyses. We did not test this model here due to the smaller sample size for GABA measurements.

Our findings suggest that the neural noise hypothesis, as proposed by Hancock and colleagues (2017) , does not fully explain the reading difficulties observed in dyslexia. Despite the innovative use of both EEG and MRS biomarkers to assess excitatory-inhibitory (E/I) balance, neither method provided evidence supporting an E/I imbalance in dyslexic individuals. Importantly, our study focused on adolescents and young adults, and the EEG recordings were conducted during rest and a spoken language task. These factors may limit the generalizability of our results. Future research should include younger populations and incorporate a broader array of tasks, such as reading and phonological processing, to provide a more comprehensive evaluation of the E/I balance hypothesis. Additionally, our findings are consistent with another study by Tan et al. (2022) which found no evidence for increased variability (’noise’) in behavioral and fMRI response patterns in dyslexia. Together, these results highlight the need to explore alternative neural mechanisms underlying dyslexia and suggest that cortical hyperexcitability may not be the primary cause of reading difficulties.

In conclusion, while our study challenges the neural noise hypothesis as a sole explanatory framework for dyslexia, it also underscores the complexity of the disorder and the necessity for multifaceted research approaches. By refining our understanding of the neural underpinnings of dyslexia, we can better inform future studies and develop more effective interventions for those affected by this condition.

Materials and methods

Participants.

A total of 120 Polish participants aged between 15.09 and 24.95 years ( M = 19.47, SD = 3.06) took part in the study. This included 60 individuals with a clinical diagnosis of dyslexia performed by the psychological and pedagogical counseling centers (28 females and 32 males) and 60 control participants without a history of reading difficulties (28 females and 32 males). All participants were right-handed, born at term, without any reported neurological/psychiatric diagnosis and treatment (including ADHD), without hearing impairment, with normal or corrected-to-normal vision, and IQ higher than 80 as assessed by the Polish version of the Abbreviated Battery of the Stanford-Binet Intelligence Scale-Fifth Edition (SB5) ( Roid et al., 2017 ).

The study was approved by the institutional review board at the University of Warsaw, Poland (reference number 2N/02/2021). All participants (or their parents in the case of underaged participants) provided written informed consent and received monetary remuneration for taking part in the study.

Reading and Reading-Related Tasks

Participants’ reading skills were assessed by multiple paper-pencil tasks described in detail in our previous work ( Glica et al., 2024 ). Briefly, we evaluated the number of words and pseudowords read in one minute ( Szczerbiński and Pelc-Pękała, 2013 ), rapid automatized naming ( Fecenec et al., 2013 ), and reading comprehension speed. We also assessed phonological awareness with a phoneme deletion task ( Szczerbiński and Pelc-Pękała, 2013 ) and spoonerisms tasks ( Bogdanowicz et al., 2016 ), as well as orthographic awareness (Awramiuk and Krasowicz-Kupis, 2013). Furthermore, we evaluated non-verbal perception speed ( Ciechanowicz and Stańczak, 2006 ) and short-term and working memory with the forward and backward conditions of the Digit Span subtest from the WAIS-R ( Wechsler, 1981 ). We also assessed participants’ multisensory audiovisual integration with a redundant target effect task, the results of which have been reported in our previous work ( Glica et al., 2024 ).

Electroencephalography Acquisition and Procedure

EEG was recorded from 62 scalp and 2 ear electrodes using the Brain Products system (actiCHamp Plus, Brain Products GmbH, Gilching, Germany). Data were recorded in BrainVision Recorder Software (Vers. 1.22.0002, Brain Products GmbH, Gilching, Germany) with a 500 Hz sampling rate. Electrodes were positioned in line with the extended 10-20 system. Electrode Cz served as an online reference, while the Fpz as a ground electrode. All electrodes’ impedances were kept below 10 kΩ. Participants sat in a chair with their heads on a chin-rest in a dark, sound-attenuated, and electrically shielded room while the EEG was recorded during both a 5-minute eyes-open resting state and the spoken language comprehension task. The paradigm was prepared in the Presentation software (Version 20.1, Neurobehavioral Systems, Inc., Berkeley, CA, www.neurobs.com ).

During rest, participants were instructed to relax and fixate their eyes on a white cross presented centrally on a black background. After 5 minutes, the spoken language comprehension task automatically started. The task consisted of 3 to 5 word-long sentences recorded in a speech synthesizer which were presented binaurally through sound-isolating earphones. After hearing a sentence, participants were asked to indicate whether the sentence was true or false by pressing a corresponding button. In total, there were 256 sentences – 128 true (e.g., “Plants need water”) and 128 false (e.g., “Dogs can fly”).

Sentences were presented in a random order in two blocks of 128 trials. At the beginning of each trial, a white fixation cross was presented centrally on a black background for 500 ms, then a blank screen appeared for either 500, 600, 700, or 800 ms (durations set randomly and equiprobably) followed by an auditory sentence presentation. The length of sentences ranged between 1.17 and 2.78 seconds and was balanced between true ( M = 1.82 seconds, SD = 0.29) and false sentences ( M = 1.82 seconds, SD = 0.32; t (254) = -0.21, p = .835; BF 10 = 0.14). After a sentence presentation, a blank screen was displayed for 1000 ms before starting the next trial. To reduce participants’ fatigue, a 1-minute break between two blocks of trials was introduced, and it took approximately 15 minutes to complete the task.
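The trial timing above can be sketched as a schedule generator; the durations come from the text, while the function and key names are illustrative:

```python
import random

def trial_schedule(n_trials, seed=0):
    """Per-trial timing in ms: 500 ms fixation, a jittered blank chosen
    equiprobably from {500, 600, 700, 800} ms, the auditory sentence,
    then a 1000 ms blank before the next trial."""
    rng = random.Random(seed)
    return [dict(fixation=500,
                 blank=rng.choice([500, 600, 700, 800]),
                 post_sentence=1000)
            for _ in range(n_trials)]

schedule = trial_schedule(128)  # one block of 128 trials
```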

fMRI Acquisition and Procedure

MRI data were acquired using Siemens 3T Trio system with a 32-channel head coil. Structural data were acquired using whole brain 3D T1-weighted image (MP_RAGE, TI = 1100 ms, GRAPPA parallel imaging with acceleration factor PE = 2, voxel resolution = 1mm 3 , dimensions = 256×256×176). Functional data were acquired using whole-brain echo planar imaging sequence (TE = 30ms, TR = 1410 ms, flip angle FA = 90°, FOV = 212 mm, matrix size = 92×92, 60 axial slices 2.3mm thick, 2.3×2.3 mm in-plane resolution, multiband acceleration factor = 3). Due to a technical issue, data from two participants were acquired with a 12-channel coil (see Supplementary Material).

The fMRI task served as a localizer for later MRS voxel placement in the language-sensitive left STS. The task was prepared using Presentation software (Version 20.1, Neurobehavioral Systems, Inc., Berkeley, CA, www.neurobs.com ) and consisted of three runs, each lasting 5 minutes and 9 seconds. Two runs involved the presentation of visual stimuli, while the third involved auditory stimuli. In each run, stimuli were presented in 12 blocks, with 14 stimuli per block. In the visual runs, there were four blocks from each category: 1) 3 to 4 letter-long words, 2) the same words presented as false font strings (BACS font) ( Vidal et al., 2017 ), and 3) strings of 3 to 4 consonants. Similarly, in the auditory run, there were four blocks from each category: 1) words recorded in a speech synthesizer, 2) the same words presented backward, and 3) consonant strings recorded in a speech synthesizer. Stimuli within each block were presented for 800 ms with a 400 ms break in between. The duration of each block was 16.8 seconds. Between blocks, a fixation cross was displayed for 8 seconds. Participants performed a 1-back task to maintain focus. The blocks were presented in a pseudorandom order and each block included 2 to 3 repeated stimuli.

MRS Acquisition and Procedure

The GE 7T system with a 32-channel coil was utilized. Structural data were acquired using whole brain 3D T1-weighted image (3D-SPGR BRAVO, TI = 450ms, TE = 2.6ms, TR = 6.6ms, flip angle = 12 deg, bandwidth = ±32.5kHz, ARC acceleration factor PE = 2, voxel resolution = 1mm, dimensions = 256 x 256 x 180). MRS spectra with 320 averages were acquired from the left STS using single-voxel spectroscopy semiLaser sequence ( Deelchand et al., 2021 ) (voxel size = 15 x 15 x 15 mm, TE = 28ms, TR = 4000ms, 4096 data points, water suppressed using VAPOR). Eight averages with unsuppressed water as a reference were collected.

To localize left STS, T1-weighted images from fMRI and MRS sessions were coregistered and fMRI peak coordinates were used as a center of voxel volume for MRS. Voxels were then adjusted to include only the brain tissue. During the acquisition, participants took part in a simple orthographic task.

Statistical Analyses

The continuous EEG signal was preprocessed in the EEGLAB ( Delorme and Makeig, 2004 ). The data were filtered between 0.5 and 45 Hz (Butterworth filter, 4th order) and re-referenced to the average of both ear electrodes. The data recorded during the break between blocks, as well as bad channels, were manually rejected. The number of rejected channels ranged between 0 and 4 ( M = 0.19, SD = 0.63). Next, independent component analysis (ICA) was applied. Components were automatically labeled by ICLabel ( Pion-Tonachini et al., 2019 ), and those classified with 50-100% source probability as eye blinks, muscle activity, heart activity, channel noise, and line noise, or with 0-50% source probability as brain activity, were excluded. Components labeled as “other” were visually inspected, and those identified as eye blinks and muscle activity were also rejected. The number of rejected components ranged between 11 and 46 ( M = 28.43, SD = 7.26). Previously rejected bad channels were interpolated using the nearest neighbor spline ( Perrin et al., 1989 , 1987 ).
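The ICLabel-based rejection rule can be expressed as a small predicate. The label strings below are paraphrased from the text and are not necessarily ICLabel's exact class names; the thresholds follow the description above:

```python
def reject_component(label, prob):
    """Exclusion rule: artifact labels with 50-100% source probability,
    or 'brain activity' with 0-50% probability, are rejected."""
    artifacts = {"eye blink", "muscle activity", "heart activity",
                 "channel noise", "line noise"}
    if label in artifacts and prob >= 0.5:
        return True
    if label == "brain activity" and prob < 0.5:
        return True
    return False
```

Components labeled "other" would additionally go through visual inspection, as described above.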

The preprocessed data were divided into a 5-minute resting-state signal and a signal recorded during the spoken language comprehension task using MNE ( Gramfort, 2013 ) and custom Python scripts. The signal from the task was segmented based on the event markers indicating the beginning and end of a sentence. Only trials with correct responses given between 0 and 1000 ms after the end of a sentence were included. The signal from every trial was then multiplied by a Tukey window with α = 0.01 in order to normalize signal amplitudes at the beginning and end of the trial. This allowed a smooth concatenation of the signals recorded during task trials, resulting in a continuous signal covering only the periods when participants were listening to the sentences.
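The tapering-and-concatenation step might look like this sketch, using scipy's Tukey window with α = 0.01; the trial lengths are hypothetical:

```python
import numpy as np
from scipy.signal.windows import tukey

def taper_and_concat(trials, alpha=0.01):
    """Taper each trial's edges to zero with a Tukey window, then
    concatenate the trials into one continuous signal."""
    return np.concatenate([t * tukey(len(t), alpha) for t in trials])

# Hypothetical trials of varying length (samples at 500 Hz)
rng = np.random.default_rng(2)
trials = [rng.normal(size=n) for n in (900, 1200, 1050)]
continuous = taper_and_concat(trials)
```

Because the window is zero at both ends of every trial, the joins between trials are amplitude-matched, avoiding discontinuities in the concatenated signal.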

The continuous signal from the resting state and the language task was epoched into 2-second-long segments. An automatic rejection criterion of +/-200 μV was applied to exclude epochs with excessive amplitudes. The number of epochs retained in the analysis ranged between 140–150 ( M = 149.66, SD = 1.20) in the resting state condition and between 102–226 ( M = 178.24, SD = 28.94) in the spoken language comprehension task.
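The ±200 μV epoch-rejection criterion can be sketched as follows, with hypothetical data shaped (epochs, channels, samples):

```python
import numpy as np

def reject_epochs(epochs, threshold_uv=200.0):
    """Drop epochs whose absolute amplitude exceeds the threshold (µV)
    on any channel at any sample."""
    keep = np.abs(epochs).max(axis=(1, 2)) <= threshold_uv
    return epochs[keep], keep

# Hypothetical: 3 epochs, 2 channels, 100 samples; epoch 1 has an artifact
epochs = np.zeros((3, 2, 100))
epochs[1, 0, 50] = 350.0  # exceeds the +/-200 µV criterion
clean, keep = reject_epochs(epochs)
```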

Power spectral density (PSD) for 0.5-45 Hz in 0.5 Hz increments was calculated for every artifact-free epoch using Welch’s method for 2-second-long data segments windowed with a Hamming window with no overlap. The estimated PSDs were averaged for each participant and each channel separately for the resting state condition and the language task. Aperiodic and periodic (oscillatory) components were parameterized using the FOOOF method ( Donoghue et al., 2020 ). For each PSD, we extracted parameters for the 1-43 Hz frequency range using the following settings: peak_width_limits = [1, 12], max_n_peaks = infinite, peak_threshold = 2.0, min_peak_height = 0.0, aperiodic_mode = ‘fixed’. Apart from broad-band aperiodic parameters (exponent and offset), we also extracted power, bandwidth, and the center frequency parameters for the theta (4-7 Hz), alpha (7-14 Hz), beta (14-30 Hz) and gamma (30-43 Hz) bands. Since in the majority of participants the algorithm did not find a peak above the aperiodic component in the theta and gamma bands, we calculated the results only for the alpha and beta bands. The results for periodic parameters other than beta power are reported in the Supplementary Material.
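The PSD step can be sketched as follows on stand-in data; the Welch call mirrors the described parameters (2-second Hamming windows, no overlap, yielding 0.5 Hz resolution at a 500 Hz sampling rate), while the FOOOF fit itself (requiring the `fooof`/`specparam` package) is shown only as a settings dictionary:

```python
import numpy as np
from scipy.signal import welch

fs = 500                            # sampling rate (Hz)
rng = np.random.default_rng(3)
signal = rng.normal(size=fs * 60)   # 60 s of stand-in data

# 2-second Hamming windows, no overlap -> 0.5 Hz frequency resolution
freqs, psd = welch(signal, fs=fs, window="hamming",
                   nperseg=2 * fs, noverlap=0)

# Settings for the FOOOF parameterization over 1-43 Hz, as described
fooof_settings = dict(peak_width_limits=[1, 12], max_n_peaks=np.inf,
                      peak_threshold=2.0, min_peak_height=0.0,
                      aperiodic_mode="fixed")
```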

Apart from the frequentist statistics, we also performed Bayesian statistics using JASP ( JASP Team, 2023 ). For Bayesian repeated measures ANOVA, we reported the Bayes Factor for the inclusion of a given effect (BF incl ) with the ’across matched model’ option, as suggested by Keysers and colleagues (2020) , calculated as a likelihood ratio of models with a presence of a specific factor to equivalent models differing only in the absence of the specific factor. For Bayesian t -tests and correlations, we reported the BF 10 value, indicating the ratio of the likelihood of an alternative hypothesis to a null hypothesis. We considered BF incl/10 > 3 and BF incl/10 < 1/3 as evidence for alternative and null hypotheses respectively, while 1/3 < BF incl/10 < 3 as the absence of evidence ( Keysers et al., 2020 ).
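The evidence thresholds above can be expressed as a small helper:

```python
def interpret_bf(bf):
    """Evidence categories used in the text: BF > 3 supports the
    alternative, BF < 1/3 supports the null, otherwise the data
    provide absence of evidence for either hypothesis."""
    if bf > 3:
        return "evidence for alternative"
    if bf < 1 / 3:
        return "evidence for null"
    return "absence of evidence"
```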

MRS voxel localization in the native space

The data were analyzed using Statistical Parametric Mapping (SPM12, Wellcome Trust Centre for Neuroimaging, London, UK) run on MATLAB R2020b (The MathWorks Inc., Natick, MA, USA). First, all functional images were realigned to the participant’s mean. Then, T1-weighted images were coregistered to functional images for each subject. Finally, fMRI data were smoothed with a 6mm isotropic Gaussian kernel.

In each subject, the left STS was localized in the native space as a cluster in the middle and posterior left superior temporal sulcus, exhibiting higher activation for visual words versus false font strings and auditory words versus backward words (logical AND conjunction) at p < .01 uncorrected. For 6 participants, the threshold was lowered to p < .05 uncorrected, while for another 6 participants, the contrast from the auditory run was changed to auditory words versus fixation cross due to a lack of activation for other contrasts.

In the Supplementary Material, we also performed the group-level analysis of the fMRI data (Tables S5-S7 and Figure S1).

MRS data were analyzed using fsl-mrs version 2.0.7 ( Clarke et al., 2021 ). Data stored in pfile format were converted into NIfTI-MRS using spec2nii tool. We then used the fsl_mrs_preproc function to automatically perform coil combination, frequency and phase alignment, bad average removal, combination of spectra, eddy current correction, shifting frequency to reference peak and phase correction.

To obtain information about the percentage of WM, GM and CSF in the voxel we used the svs_segmentation with results of fsl_anat as an input. Voxel segmentation was performed on structural images from a 3T scanner, coregistered to 7T structural images in SPM12. Next, quantitative fitting was performed using fsl_mrs function. As a basis set, we utilized a collection of 27 metabolite spectra simulated using FID-A ( Simpson et al., 2017 ) and a script tailored for our experiment. We supplemented this with synthetic macromolecule spectra provided by fsl_mrs . Signals acquired with unsuppressed water served as water reference.

Spectra underwent quantitative assessment and visual inspection and those with linewidth higher than 20Hz, %CRLB higher than 20%, and poor fit to the model were excluded from the analysis (see Table S8 in the Supplementary Material for a detailed checklist). Glu and GABA concentrations were expressed as a ratio to total-creatine (tCr; Creatine + Phosphocreatine).
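The spectrum-inclusion criteria can be expressed as a predicate; `fit_ok` stands in for the visual inspection of model fit, which is not reducible to a numeric threshold:

```python
def passes_qc(linewidth_hz, crlb_percent, fit_ok=True):
    """Inclusion rule from the text: linewidth <= 20 Hz,
    %CRLB <= 20, and an acceptable fit to the model."""
    return linewidth_hz <= 20 and crlb_percent <= 20 and fit_ok
```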

Data Availability Statement

Behavioral data, raw and preprocessed EEG data, 2nd-level fMRI data, preprocessed MRS data and the Python script for the analysis of preprocessed EEG data can be found at OSF: https://osf.io/4e7ps/

Acknowledgements

This study was supported by the National Science Centre grant (2019/35/B/HS6/01763) awarded to Katarzyna Jednoróg.

We gratefully acknowledge valuable discussions with Ralph Noeske from GE Healthcare for his support in setting up the protocol for an ultra-high field MR spectroscopy and sharing the set-up for basis set simulation in FID-A.


Article and author information

Katarzyna Jednoróg, for correspondence

Version history

  • Sent for peer review : June 11, 2024
  • Preprint posted : June 12, 2024
  • Reviewed Preprint version 1 : September 5, 2024

© 2024, Glica et al.

This article is distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use and redistribution provided that the original author and source are credited.

