SPSS syntax for analyzing matched-pairs data, either as a within-subjects GLM or as a paired t-test:

GLM ps_ctl ps_tx
  /WSFACTOR = group 2
  /EMMEANS = TABLES(group)
  /PRINT = DESCRIPTIVE
  /WSDESIGN = group .

T-TEST PAIRS = ps_ctl WITH ps_tx (PAIRED).

III. Matching in Quasi-Experimental Designs: Normative Group Matching

Suppose that you have a quasi-experiment in which you want to compare an experimental group (e.g., people who have suffered mild head injury) with a sample from a normative population, and suppose that there are several hundred people in that normative population. One strategy is to randomly select the same number of people from the normative population as you have in your experimental group. If the demographic characteristics of the normative group approximate those of your experimental group, this process may be appropriate. But what if the normative group contains equal numbers of males and females ranging in age from 6 to 102, while the people in your experimental condition are all males ranging in age from 18 to 35? Then it is unlikely that the demographic characteristics of the people sampled from the normative group will match those of your experimental group. For that reason, simple random selection is rarely appropriate when sampling from a normative population.

The Normative Group Matching Procedure

Determine the relevant characteristics (e.g., age, gender, SES) of each person in your experimental group. For example, if Exp person #1 is a 27-year-old male, randomly select one of the 27-year-old males from the normative population as his match; if Exp person #2 is a 35-year-old male, randomly select one of the 35-year-old males as his match. If you have done randomized normative group matching, then the matching variable should be used as a blocking factor in the ANOVA.

If you have a limited number of people in the normative group, you can do caliper matching. In caliper matching you select the matching person based on a range of scores; for example, you can caliper match within a range of three years, so that the match for Exp person #1 would be randomly selected from males whose ages ranged from 26 to 28 years. If you used a five-year caliper for age, then for Exp person #1 you would randomly select a male from those whose ages ranged from 25 to 29 years. You would want a narrower age caliper for children and adolescents than for adults.

This procedure becomes very difficult to carry out when you try to match on more than one variable. Think of the problem of finding exact matches when several variables are used, e.g., an exact match for a 27-year-old white female with an IQ score of 103 and 5 children.

Analysis of a Normative Group Matching Design

The analysis is the same as for a matched random assignment design. If the matching variable is related to the dependent variable, then you can incorporate the matching variable as a blocking variable in your analysis of variance.

III. Matching in Quasi-Experimental Designs: Normative Group Equivalence

Because of the problems in selecting people in a normative group matching design, and the potential problems with the data analysis of that design, you may want instead to make the normative comparison group equivalent on selected demographic characteristics. You might want the same proportion of males and females, and the mean age (and SD) of the normative group should be the same as those of the experimental group.
If the ages of the people in the experimental group ranged from 18 to 35, then your normative group might contain an equal number of participants randomly selected from those aged 18 to 35 in the normative population.

Analysis of a Normative Group Equivalence Design

In the case of normative group equivalence there is no special ANOVA procedure, as there is in normative group matching. In general, demographic characteristics themselves rarely predict the dependent variable, so you have not lost anything by using the group-equivalence method.

A Semantic Caution

The term "matching" implies a one-to-one matching, and it implies that you have incorporated that matched variable into your ANOVA design. Please don't use the term "matching" when you mean mere "equivalence."
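To make the matching procedure concrete, here is a minimal sketch of randomized caliper matching followed by the paired analysis described above. It is not part of the original handout: the column names, the two-year caliper, and the toy data are illustrative assumptions, and it uses Python (pandas/SciPy) rather than SPSS.

import numpy as np
import pandas as pd
from scipy import stats

def caliper_match(exp_df, norm_df, caliper=2, seed=0):
    """For each experimental case, randomly draw one unused normative case
    of the same sex whose age is within +/- `caliper` years."""
    rng = np.random.default_rng(seed)
    available = norm_df.copy()
    matches = []
    for _, person in exp_df.iterrows():
        pool = available[(available["sex"] == person["sex"]) &
                         (available["age"].sub(person["age"]).abs() <= caliper)]
        if pool.empty:
            raise ValueError(f"no normative match within caliper for {person.to_dict()}")
        pick = pool.sample(n=1, random_state=rng)
        matches.append(pick)
        available = available.drop(pick.index)   # sample without replacement
    return pd.concat(matches)

# Toy data: a small experimental group and a larger normative pool.
exp = pd.DataFrame({"sex": ["M", "M"], "age": [27, 35], "score": [92, 88]})
norm = pd.DataFrame({"sex": ["M"] * 6 + ["F"] * 2,
                     "age": [26, 28, 34, 36, 50, 27, 30, 33],
                     "score": [95, 97, 90, 93, 85, 96, 91, 89]})

control = caliper_match(exp, norm)
# The matched design is then analyzed as paired data (cf. the T-TEST PAIRS
# syntax above): each experimental case against its normative match.
t, p = stats.ttest_rel(exp["score"].to_numpy(), control["score"].to_numpy())
print(f"paired t = {t:.2f}, p = {p:.3f}")

With real data you would also carry the matching variable into the ANOVA as a blocking factor, as the handout recommends.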
Physical Review Accelerators and Beams
Bayesian optimization algorithms for accelerator physics
Ryan Roussel et al., Phys. Rev. Accel. Beams 27, 084801 – published 6 August 2024.
Contents:
- Introduction
- Background and Motivation
- Gaussian Process Modeling
- Acquisition Function Definition
- Acquisition Function Optimization
- Acknowledgments
Accelerator physics relies on numerical algorithms to solve optimization problems in online accelerator control and tasks such as experimental design and model calibration in simulations. The effectiveness of optimization algorithms in discovering ideal solutions for complex challenges with limited resources often determines the problem complexity these methods can address. The accelerator physics community has recognized the advantages of Bayesian optimization algorithms, which leverage statistical surrogate models of objective functions to effectively address complex optimization challenges, especially in the presence of noise during accelerator operation and in resource-intensive physics simulations. In this review article, we offer a conceptual overview of applying Bayesian optimization techniques toward solving optimization problems in accelerator physics. We begin by providing a straightforward explanation of the essential components that make up Bayesian optimization techniques. We then give an overview of current and previous work applying and modifying these techniques to solve accelerator physics challenges. Finally, we explore practical implementation strategies for Bayesian optimization algorithms to maximize their performance, enabling users to effectively address complex optimization challenges in real-time beam control and accelerator design.

Received 9 December 2023; accepted 3 June 2024; published 6 August 2024. Vol. 27, Iss. 8 — August 2024. DOI: https://doi.org/10.1103/PhysRevAccelBeams.27.084801. Published by the American Physical Society under the terms of the Creative Commons Attribution 4.0 International license; further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation, and DOI.
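To ground the components described in this abstract (a Gaussian process surrogate plus an acquisition function, updated in a loop), here is a minimal, self-contained sketch of Bayesian optimization on a toy one-dimensional objective. It is a pedagogical illustration, not the article's implementation (production work typically builds on libraries such as BoTorch, mentioned below); the RBF kernel, fixed hyperparameters, grid-based acquisition maximization, and exploration weight are all simplifying assumptions.

import numpy as np

def rbf_kernel(a, b, length_scale=0.2, variance=1.0):
    """Squared-exponential (RBF) covariance between two sets of 1D points."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(x_train, y_train, x_test, noise=1e-4):
    """GP posterior mean and standard deviation at x_test (zero prior mean)."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test)
    mu = K_s.T @ np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, K_s)
    var = np.diag(rbf_kernel(x_test, x_test)) - np.sum(K_s * v, axis=0)
    return mu, np.sqrt(np.clip(var, 0.0, None))

def objective(x):
    """Toy stand-in for an expensive beamline measurement or simulation."""
    return np.exp(-8.0 * (x - 0.35) ** 2) + 0.3 * np.sin(6.0 * x)

rng = np.random.default_rng(0)
x_grid = np.linspace(0.0, 1.0, 201)
x_obs = rng.uniform(0.0, 1.0, 3)           # a few initial random observations
y_obs = objective(x_obs)

beta = 2.0                                 # exploration weight for UCB
for _ in range(10):
    mu, sigma = gp_posterior(x_obs, y_obs, x_grid)
    ucb = mu + beta * sigma                # acquisition: mean plus weighted uncertainty
    x_next = x_grid[np.argmax(ucb)]        # next measurement maximizes the acquisition
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, objective(x_next))

print(f"best observed: f({x_obs[np.argmax(y_obs)]:.3f}) = {y_obs.max():.3f}")

A real deployment would learn the kernel hyperparameters from data (e.g., by maximizing the marginal log-likelihood, as several figure captions below note), respect safety constraints, and optimize the acquisition function with gradient-based methods rather than a dense grid.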
Figure captions from the article (the figures themselves are not reproduced here):

- Overview of challenges in using optimization algorithms for online accelerator control. Accelerator control algorithms make decisions about setting a wide variety of accelerator parameters in order to control beam parameters at target locations. Optimal decision making takes into account limited online accelerator measurements, as well as various sources of prior knowledge about the accelerator, including previous measurements, physics simulations, and physics principles. Optimization must also consider complicated aspects of realistic accelerator operation, including external conditions, feedback systems, safety constraints, and repeatability errors.
- Overview of optimization challenges in accelerator physics simulations. Ideal algorithms aim to minimize the computational cost of performing optimization by orchestrating parallel simulation evaluations at multiple fidelities ranging from analytical models to high-fidelity (computationally expensive) simulations. Correlations between simulation predictions at different fidelities can be leveraged to reduce the number of high-fidelity simulation evaluations needed to find an ideal solution at the highest fidelity level.
- Illustration of the Bayesian optimization process to find the maximum of a simple function. A Gaussian process (GP) model makes predictions of the function value (solid blue line) along with associated uncertainties (blue shading) based on previously collected data. An acquisition function then uses the GP model to predict the "value" of making potential future measurements, balancing both exploration and exploitation. The next observation is chosen by maximizing the acquisition function in parameter space. This process is repeated iteratively until optimization goals have been reached.
- Illustration of Bayesian regression using a linear model f(x) = w_1 x + w_0. (a),(d),(g) Posterior probability density of the linear weights {w_1, w_0} conditioned on N observations of the function y = f(x) + ε. (b),(e),(h) Model predictions using random samples of {w_1, w_0} drawn from the posterior probability distribution. (c),(f),(i) Predictive mean (solid line) and 90% uncertainty intervals (shading) of the posterior model. Red cross and black dashes denote true parameters and values of the function f(x), respectively. Reproduced with permission from [39].
- Illustration of GP model predictions. (a) Prior model prediction of the function mean (solid blue line) and confidence interval (blue shading) at a set of test points in parameter space. The probability of the output value y at any given test point x* is a normal distribution. (b) The posterior GP model also predicts normal probability distributions at each test point, conditioned on the dataset D. (c) Individual function samples can also be drawn from the posterior GP model and can be used for Monte Carlo computations of function quantities.
- Visualization of how the length scale hyperparameter l affects GP modeling. Three GP models are trained on the same dataset using a Matérn kernel with fixed length scales of (a) 0.1, (b) 1, and (c) 10. Remaining hyperparameters are trained by maximizing the marginal log-likelihood.
- Examples of GP modeling with varying treatment of measurement noise. (a) A GP model containing zero noise, forcing the GP prediction to fit experimental data exactly. (b) A GP model trained on the same data with a fixed (homoskedastic) noise parameter. (c) A GP model incorporating heteroskedastic noise, where the data variance for each point is explicitly specified.
- Illustration of the improvement in prediction accuracy that can be gained by including expected correlations, such as those that arise from adjacent quadrupoles, into the GP kernel design. Here, a 2D function has input correlations that are similar to what one might observe between adjacent quadrupoles (a). For a fixed set of training data points (shown in orange), a GP model using an uncorrelated kernel (b) produces less accurate posterior predictions of the true function than a model with an accurate correlated kernel (c). In the context of BO, learning a more accurate model with fewer training data points translates to faster convergence in optimization.
- Illustration of nonzero prior mean. In the absence of local data, the mean of the posterior distribution reverts to (a) zero or (b) the nonzero prior mean. The variance remains unchanged.
- Transmission optimization at an ATLAS subsection using different prior mean functions. Solid and dashed lines depict the medians and the shaded areas depict the corresponding 90% confidence levels across 10 to 20 runs. Reproduced from [24].
- Example of using log transformations in GP modeling for strictly positive output values. Data in real space (a) are transformed to log space before fitting a GP model (b). Samples drawn from the GP model in log space can then be transformed back into real space to make GP predictions. The resulting likelihood in real space is then a log-normal distribution, which is strictly positive.
- Simulated application of standard and time-aware BO in a drifting trajectory stabilization problem. Simple BO settles on the mean value of the oscillations. ABO-ISO (isotropic) follows the changes but lags them because it only uses an isotropic (local) kernel. ABO-SM (spectral mixture) captures long-range correlations and eventually correctly predicts necessary future changes in phase. By default, ABO-SM continues to explore around the maximum value for optimization, producing a small step jitter. This can be eliminated by using the posterior mean as the acquisition function at the cost of convergence speed.
- Illustration of the prediction of a multifidelity Gaussian process, comparing (a) a single-fidelity Gaussian process trained only on high-fidelity data, and (b),(c) a multifidelity Gaussian process trained on both high-fidelity and low-fidelity data, in the case where (b) the high- and low-fidelity data are highly correlated, as well as (c) largely uncorrelated. In this particular example, the multifidelity GP is a multitask GP [61], as implemented in the library BoTorch. Dashed lines denote ground truth values of the low- and high-fidelity functions.
- Demonstration of combining GP models with a differentiable physics model of magnetic hysteresis. (a) Measured beam charge after passing through an aperture in the APS injector is plotted over three cycles of varying the current in an upstream quadrupole. Transmitted beam charge measurements are not repeatable due to hysteresis effects in the upstream quadrupole. (b) GP modeling with the differentiable hysteresis model included accurately predicts beam charge over multiple hysteresis cycles with improved (reduced) uncertainty predictions. Reproduced from [28].
- Examples of the EI and UCB acquisition functions for objective function maximization given the same GP model and training data. (a) EI acquisition function, where the dashed horizontal line denotes the best previously observed value f(x*). (b) UCB acquisition function.
- Example of sampling behavior of Bayesian exploration (BE). (a) The BE acquisition function is maximized at locations in parameter space where the model uncertainty is highest, usually at locations farthest away from previous measurements. (b) In cases where the function is less sensitive to one parameter (x_2 in this example), the model uncertainty is smaller along that axis, resulting in less frequent sampling along that dimension.
- Comparison between different constrained Bayesian optimization algorithms. (a) Weighting the acquisition function by the probability of satisfying the constraining function [73]. (b) Acquisition function optimization within a safe set using MoSaOpt in exploitation mode [32] and (c) SafeOpt [75]. (d) The constraint function, where valid regions satisfy c(x) > 0.
- Summary of multiobjective BO (MOBO) using expected hypervolume improvement (EHVI). (a) Given Pareto front P and corresponding hypervolume H, the increase in hypervolume H_I due to a new measurement y is given by the shaded green area. (b) Comparison between multiobjective optimization algorithms for optimizing the AWA injector problem. NSGA-II is a standard evolutionary algorithm [81]; I-NN is surrogate-model-assisted NSGA-II [54]. (c) Projected hypervolume after a set number of MOBO iterations, with insets showing hypervolume improvement due to fill-in points (i) and measurement of newly dominant points (ii). Reproduced from [34].
- Visualization of the BAX process for beam steering through quadrupole magnets. (a) Experimental measurements are used to build a GP model of the horizontal beam centroid position at a downstream screen C_x as a function of the quadrupole strength and steering parameter. Note that the GP model is built with a first-order polynomial kernel, constraining predictions to planar surfaces. Dashed lines denote cross sections of the GP model shown in (b). (c) The BAX acquisition function that predicts the information gained about the ideal steering current by making future measurements.
- Demonstration of proximal biasing effects during Bayesian exploration (BE) of the constrained TNK test problem. (a) Normal BE. (b) BE using proximal biasing with l = 0.1. The green arrow highlights a step where a larger jump in parameter space was allowed by proximal biasing. Reproduced from [86].
- One-dimensional visualization of trust region BO (TuRBO) applied to a minimization problem with the UCB acquisition function. (a)-(d) Sequential evolution of the GP model and sampling pattern. Orange circles denote objective function measurements and green circles denote the most recent sequential measurement at each step.
- Trust region BO (TuRBO), simplex, and UCB applied to the minimization of total losses (maximization of lifetime) at the ESRF-EBS storage ring. Adapted from [96].
- Comparison of optimization performance between a local optimization algorithm (Nelder-Mead simplex), BO using the UCB acquisition function (β = 2), and BO using the UCB acquisition strongly weighted toward exploration (β = 100). All algorithms are initialized with a single observation at x = 0.75 and aim to minimize the objective function. (a)-(c) Observations of the objective function in parameter space for each algorithm. The dashed line denotes the true objective function. (d)-(f) Objective function values as a function of algorithm iteration. Note that simplex terminates after reaching a convergence criterion.
- Comparison between GP modeling of hard and soft constraining functions. (a) GP modeling of a Heaviside constraining function does not accurately predict constraint values due to a single sharp feature that cannot be learned without dense sampling on either side of the constraint boundary. (b) Smooth constraining functions with a single characteristic length scale are more accurately modeled with GP modeling. Inset: visualization of the bounding box constraint function f(x) = max_i {||C − S_i(x)||} used to keep beam distributions inside an ROI, where r is the radius of a circular ROI, C is the center coordinates of the ROI, and S_i are corner coordinates of a bounding box around the beam.
- Comparison between GP modeling of the two-dimensional sphere function f(x_1, x_2) = x_1^2 + x_2^2 with and without interpolated measurements. (a) The posterior mean of the GP model with four measurements taken sequentially. (b) The same four measurements taken sequentially but with interpolated points in between each measurement. Incorporating interpolated points in the dataset leads to higher modeling accuracy, leading to accurate identification of the sphere function minimum at the origin.
- Performance scaling with dataset size for the BoTorch/GPyTorch (0.9.4/1.11) libraries on a single-objective optimization run. A synthetic five-variable quadratic objective was used with the Monte Carlo version of the UCB acquisition function and 100 Adam optimizer iterations. GPU memory usage is only applicable to GPU runs.
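For reference, the EI and UCB acquisition functions that recur in these captions have standard textbook forms; the expressions below are the usual definitions, not formulas quoted from the article (conventions differ on whether β or √β multiplies σ). Writing μ(x) and σ(x) for the GP posterior mean and standard deviation, f(x*) for the best observed value, and Φ and φ for the standard normal CDF and PDF:

\alpha_{\mathrm{UCB}}(x) = \mu(x) + \sqrt{\beta}\,\sigma(x)

\alpha_{\mathrm{EI}}(x) = \bigl(\mu(x) - f(x^*)\bigr)\,\Phi(z) + \sigma(x)\,\varphi(z),
\qquad z = \frac{\mu(x) - f(x^*)}{\sigma(x)}

Larger β weights exploration more heavily, which is exactly the contrast drawn in the Nelder-Mead/UCB comparison caption above (β = 2 versus β = 100).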
- Create an account
Article LookupPaste a citation or doi, enter a citation. |
Methods used to analyze quasi-experimental data include 2-group tests, regression analysis, and time-series analysis, and they all have specific assumptions, data requirements, strengths, and ...
Revised on January 22, 2024. Like a true experiment, a quasi-experimental design aims to establish a cause-and-effect relationship between an independent and dependent variable. However, unlike a true experiment, a quasi-experiment does not rely on random assignment. Instead, subjects are assigned to groups based on non-random criteria.
A quasi-experimental design is a method for identifying causal relationships that does not randomly assign participants to the experimental groups. Instead, researchers use a non-random process. For example, they might use an eligibility cutoff score or preexisting groups to determine who receives the treatment.
Quasi-experimental designs (QEDs) are increasingly employed to achieve a better balance between internal and external validity. Although these designs are often referred to and summarized in terms of logistical benefits versus threats to internal validity, there is still uncertainty about: (1) how to select from among various QEDs, and (2 ...
Quasi-experimental design is a research method that seeks to evaluate the causal relationships between variables, but without the full control over the independent variable(s) that is available in a true experimental design. ... Inferential Statistics. This method involves using statistical tests to determine whether the results of a study are ...
23. Quasi-experimental. In most cases, it means that you have pre- and post-intervention data. Great resources for causal inference include Causal Inference Mixtape and Recent Advances in Micro, especially if you like to read about the history of causal inference as a field as well (codes for Stata, R, and Python ...
1.5: Common Quasi-Experimental Designs. Recall that when participants in a between-subjects design are randomly assigned to treatment conditions, the resulting groups are likely to be quite similar. In fact, researchers consider them to be equivalent. When participants are not randomly assigned to conditions, however, the resulting groups are ...
Design and statistical techniques for full coverage of quasi-experimentation are collected in an accessible format, in a single volume. The book begins with a general overview of quasi-experimentation. Chapter 2 defines a treatment effect and the hurdles over which one must leap to draw credible causal inferences.
Describe three different types of quasi-experimental research designs (nonequivalent groups, pretest-posttest, and interrupted time series) and identify examples of each one. The prefix quasi means "resembling." Thus quasi-experimental research is research that resembles experimental research but is not true experimental research.
A quasi-experimental study (also known as a non-randomized pre-post intervention) is a research design in which the independent variable is manipulated, but participants are not randomly assigned to conditions. Commonly used in medical informatics (a field that uses digital information to ensure better patient care), researchers generally use ...
This article discusses four of the strongest quasi-experimental designs for identifying causal effects: regression discontinuity design, instrumental variable design, matching and propensity score designs, and the comparative interrupted time series design. For each design we outline the strategy and assumptions for identifying a causal effect ...
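To make one of these designs concrete: an interrupted time series is commonly analyzed with segmented regression, which estimates an immediate level change and a slope change at the intervention point. The sketch below is illustrative only; the simulated data and variable names are assumptions, not an analysis from the quoted article.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 48                                      # monthly observations
t = np.arange(n)
post = (t >= 24).astype(int)                # intervention begins at month 24
t_since = np.where(post == 1, t - 24, 0)    # time elapsed since the intervention

# Simulated outcome: baseline trend, a level drop, and a slope change at month 24.
y = 10 + 0.2 * t - 3.0 * post - 0.1 * t_since + rng.normal(0, 1, n)
df = pd.DataFrame({"y": y, "t": t, "post": post, "t_since": t_since})

# Segmented regression: `post` estimates the immediate level change,
# `t_since` estimates the change in slope after the intervention.
fit = smf.ols("y ~ t + post + t_since", data=df).fit()
print(fit.params)

A comparative interrupted time series adds a control series and interacts these terms with a group indicator.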
Quasi-experimental study designs are frequently used to assess interventions that aim to limit the emergence of antimicrobial-resistant pathogens. However, previous studies using these designs have often used suboptimal statistical methods, which may result in researchers making spurious conclusions.
Experimental and Quasi-Experimental Methods. Research designs are central to research projects in that they constitute the projects' basic structure that will permit researchers to address their main research questions. Designs include, for example, the selection of relevant samples or groups, measures, treatments or programs, and methods of ...
1. Sources of Invalidity for Designs 1 through 6; 2. Sources of Invalidity for Quasi-Experimental Designs 7 through 12; 3. Sources of Invalidity for Quasi-Experimental Designs 13 through 16. Figures: 1. Regression in the Prediction of Posttest Scores from Pretest, and Vice Versa; 2. Some Possible Outcomes of a 3 × 3 Factorial Design; 3. ...
A quasi-experiment is an empirical interventional study used to estimate the causal impact of an intervention on a target population without random assignment. Quasi-experimental research shares similarities with the traditional experimental design or randomized controlled trial, but it specifically lacks the element of random assignment to ...
In medical informatics, the quasi-experimental, sometimes called the pre-post intervention, design often is used to evaluate the benefits of specific interventions. The increasing capacity of health care institutions to collect routine clinical data has led to the growing use of quasi-experimental study designs in the field of medical ...
A quasi-experimental design can be a great option when ethical or practical concerns make true experiments impossible, but the research methodology does have its drawbacks.
Featuring engaging examples from diverse disciplines, this book explains how to use modern approaches to quasi-experimentation to derive credible estimates of treatment effects under the demanding constraints of field settings. Foremost expert Charles S. Reichardt provides an in-depth examination of the design and statistical analysis of pretest-posttest, nonequivalent groups, regression ...
Quasi-experimental designs do not randomly assign participants to treatment and control groups. Quasi-experimental designs identify a comparison group that is as similar as possible to the treatment group in terms of pre-intervention (baseline) characteristics. There are different types of quasi-experimental designs and they use different ...
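One common way to build a comparison group that is "as similar as possible" on baseline characteristics is propensity score matching. The sketch below is a minimal illustration under stated assumptions: the covariates, the logistic model, and the greedy 1:1 nearest-neighbor rule are illustrative choices, not a prescription from the quoted source.

import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 200
# Hypothetical baseline covariates and a nonrandom treatment assignment.
age = rng.normal(40, 12, n)
severity = rng.normal(0, 1, n)
p_treat = 1 / (1 + np.exp(-(0.05 * (age - 40) + 0.8 * severity)))
df = pd.DataFrame({"age": age, "severity": severity,
                   "treated": rng.random(n) < p_treat})

# Estimate each person's propensity score from the baseline covariates.
ps_model = LogisticRegression().fit(df[["age", "severity"]], df["treated"])
df["ps"] = ps_model.predict_proba(df[["age", "severity"]])[:, 1]

# Greedy 1:1 nearest-neighbor matching on the propensity score, without replacement.
controls = df[~df["treated"]].copy()
pairs = []
for i, row in df[df["treated"]].iterrows():
    if controls.empty:
        break
    j = (controls["ps"] - row["ps"]).abs().idxmin()
    pairs.append((i, j))
    controls = controls.drop(j)
print(f"matched {len(pairs)} treated-control pairs")

After matching, covariate balance should be checked (e.g., standardized mean differences) before estimating the treatment effect on the matched sample.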
A quasi-experimental design is one in which treatment allocation is not random. An example of this is given in Table 9.1, in which injuries are compared in two dropping zones. This is subject to potential biases, in that the reason why a person is allocated to a particular dropping zone may be related to their risk of a sprained ankle.
What statistical analysis should I use for a quasi-experimental design?

All Answers (3)

First, a quasi-experimental design requires a comparison between those who were exposed to some factor and those in a "non-equivalent control group" who did not have any ...
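As one concrete answer to the thread's question: with a nonequivalent control group and pretest-posttest data, a common analysis is ANCOVA, regressing the posttest on group membership with the pretest as a covariate. The sketch below is illustrative and not taken from the thread's answers; the variable names and simulated data are assumptions, and the usual cautions about covariate adjustment in nonrandomized groups (e.g., differential selection, Lord's paradox) still apply.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 60
group = np.repeat(["control", "treated"], n)

# Simulated pre/post scores with a baseline difference between the
# nonequivalent groups and a treatment effect of about 4 points.
pre = rng.normal(50, 10, 2 * n) + np.where(group == "treated", 3, 0)
post = pre + rng.normal(2, 5, 2 * n) + np.where(group == "treated", 4, 0)
df = pd.DataFrame({"group": group, "pre": pre, "post": post})

# ANCOVA: the coefficient on group estimates the adjusted treatment effect.
fit = smf.ols("post ~ pre + group", data=df).fit()
print(fit.params["group[T.treated]"])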
Related Publications
Ever since the ground-breaking isolation of graphene, numerous two-dimensional (2D) materials have emerged, with 2D metal dihalides gaining significant attention due to their intriguing electrical and magnetic properties. In this study, we introduce an innovative approach via anhydrous solvent-induced recrystallization of bulk powders to obtain crystals of metal dihalides (MX2, with M = Cu, Ni ...