What Is an Easy Way to See if a Study Instrument Has Been Used in Other Studies?

Creating and Validating an Instrument

To determine whether an appropriate instrument is available, a researcher can search the literature and commercially available databases for something suitable to the study. If no available instrument measures the variables in the study, there are four rigorous phases for developing an instrument that accurately measures the variables of interest (Creswell, 2005). Those four phases are planning, construction, quantitative evaluation, and validation, and each consists of several steps that must be completed to satisfy the phase.

The first phase is planning, and its first step is to identify the purpose of the test and the target group. In this step, the researcher should identify the purpose of the test, specify the content area to be studied, and identify the target group. The second step of phase one is to, again, review the literature to be certain no instrument already exists for evaluating the variables of interest. Places to look for existing instruments include the ERIC website (www.eric.ed.gov), the Mental Measurements Yearbook (Impara & Plake, 1999), and Tests in Print (Murphy, Impara, & Plake, 1999). Once the researcher is certain no other instrument exists, the researcher should review the literature to determine the operational definitions of the constructs to be measured. This can be an arduous task because operationalizing a variable does not automatically produce good measurement, so the researcher must review multiple bodies of literature to arrive at an accurate and meaningful construct. From this information, the researcher should develop open-ended questions to present to a sample that is representative of the target group. The open-ended questions help the researcher identify areas of concern around the constructs to be measured. The responses to the open-ended questions and the review of the literature should then be used together to create and refine accurate measures of the constructs.

The second phase is construction, and it begins with identifying the objectives of the instrument and developing a table of specifications. Those specifications should narrow the purpose and identify the content areas. In the specification process, each variable should be associated with a concept and an overarching theme (Ford, http://www.blaiseusers.org/2007/papers/Z1%20-%20Survey%20Specifications%20Mgmt%20at%20Stats%20Canada.pdf). Once the table of specifications is complete, the researcher can write the items in the instrument. The researcher must determine the format to be used (e.g., Likert scale, multiple choice), and the format of the questions should be driven by the type of data that needs to be collected. Depending on the financial resources of the research project, experts within the field may be hired to write the items. Once the items are written, they need to be reviewed for clarity, formatting, acceptable response options, and wording. After several rounds of review, the questions should be presented to peers and colleagues in the format in which the instrument will be administered. The peers and colleagues should match the items against the specification table, and if any item lacks an exact match, revisions must be made. An instrument is content valid when the items adequately reflect the process and content dimensions of the objectives of the instrument (Benson & Clark, 1982). Again, the instrument should be distributed to a sample that is representative of the target group; this time the group should take the survey and critique the quality of the individual items and the overall instrument.
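The item-to-specification matching step can be expressed as a simple check. Below is a minimal sketch in Python, assuming the specification table and draft items are stored as dictionaries; every item name, construct, and theme shown here is hypothetical.

```python
# Minimal sketch of matching draft items against a table of specifications.
# All item texts, constructs, and themes are hypothetical examples.

spec_table = {
    "Q1": {"construct": "job satisfaction", "theme": "work environment"},
    "Q2": {"construct": "job satisfaction", "theme": "compensation"},
    "Q3": {"construct": "burnout", "theme": "workload"},
}

draft_items = {
    "Q1": "I feel supported by my coworkers.",
    "Q2": "My pay reflects the work that I do.",
    "Q3": "I feel emotionally drained at the end of the workday.",
    "Q4": "I plan to stay in this job next year.",  # no entry in the spec table
}

# Reviewers should be able to match every item to a row of the table;
# any item without an exact match flags a needed revision.
unmatched = [name for name in draft_items if name not in spec_table]
print("Items needing revision or a specification entry:", unmatched)
```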

Phase three is quantitative evaluation and includes administering a pilot study to a representative sample. It may be helpful to ask the participants for feedback to allow for further refinement of the instrument. The pilot study provides quantitative data that the researcher can test for internal consistency by computing Cronbach's alpha. The reliability coefficient can range from 0.00 to 1.00, with values of 0.70 or higher indicating acceptable reliability (George & Mallery, 2003). If the instrument is intended to predict future behavior, it should also be administered to the same sample at two different time points and the two sets of responses correlated; a strong correlation indicates the scores are stable over time (test-retest reliability). These measurements can be examined to help the researcher make informed decisions about revisions to the instrument.
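For illustration, here is a minimal Python sketch of both checks, assuming the pilot responses are stored as a respondents-by-items NumPy array of Likert-type scores; the data and the retest totals are hypothetical.

```python
import numpy as np
from scipy.stats import pearsonr

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) score matrix."""
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # sample variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of the total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 6 respondents answering 4 Likert items.
pilot = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
print(f"Cronbach's alpha = {cronbach_alpha(pilot):.2f}")  # >= .70 is acceptable

# Test-retest reliability: correlate total scores from two administrations.
time1 = pilot.sum(axis=1)
time2 = time1 + np.array([0, 1, -1, 0, 1, 0])  # hypothetical retest totals
r, p = pearsonr(time1, time2)
print(f"Test-retest r = {r:.2f} (p = {p:.3f})")
```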

Phase four is validation. In this phase the researcher analyzes the data from the quantitative pilot study to establish validity, beginning by determining which concept of validity matters most for the instrument. The three types of validity are content, criterion-related, and construct. Content validity is the extent to which the questions on a survey are representative of the questions that could be asked to assess a particular construct; to examine content validity, the researcher should consult two to three experts. Criterion-related validity is used when the researcher wants to determine whether the scores from an instrument are a good predictor of an expected outcome. To assess this type of validity, the researcher must be able to define the expected outcome; a correlation coefficient of .60 or above indicates a strong, positive relationship (Creswell, 2005). Construct validity is established by determining whether the scores recorded by an instrument are meaningful, significant, useful, and purposeful. To determine whether construct validity has been achieved, the scores need to be assessed statistically and practically. This can be done by comparing the relationship of a question from the scale to the overall scale, by testing a theory to determine whether the outcome supports it, and by correlating the scores with other similar or dissimilar measures. Correlation with similar instruments is referred to as convergent validity, and correlation with dissimilar instruments as divergent validity.
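These construct-validity checks are themselves simple correlations. The sketch below, which reuses the hypothetical respondents-by-items pilot matrix from the phase-three example and adds hypothetical scores on one similar and one dissimilar established measure, shows a corrected item-total correlation alongside the convergent and divergent comparisons.

```python
import numpy as np

# Same hypothetical pilot matrix as in the phase-three sketch.
pilot = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [3, 2, 3, 3],
])
total = pilot.sum(axis=1)

# Relationship of each question to the overall scale (item-total correlation,
# with the item removed from the total so it does not inflate r).
for i in range(pilot.shape[1]):
    rest = total - pilot[:, i]
    r = np.corrcoef(pilot[:, i], rest)[0, 1]
    print(f"Item {i + 1}: corrected item-total r = {r:.2f}")

# Convergent validity: scores should correlate with a similar measure.
similar = np.array([16, 12, 19, 9, 17, 11])  # hypothetical
# Divergent validity: scores should correlate weakly with a dissimilar measure.
dissimilar = np.array([7, 8, 6, 8, 9, 7])    # hypothetical
print("Convergent r =", round(float(np.corrcoef(total, similar)[0, 1]), 2))
print("Divergent r  =", round(float(np.corrcoef(total, dissimilar)[0, 1]), 2))
```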

Creswell, J. W. (2005). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (2nd ed.). Upper Saddle River, NJ: Pearson Education.

George, D., & Mallery, P. (2003). SPSS for Windows step by step: A simple guide and reference, 11.0 update (4th ed.). Boston, MA: Allyn and Bacon.

Murphy, L. L., Impara, J. C., & Plake, B. S. (Eds.). (1999). Tests in print V. Lincoln, NE: Buros Institute of Mental Measurements.

Source: https://www.statisticssolutions.com/creating-and-validating-an-instrument/
