CAT Score Computation Methodology: Is the procedure really justified?

5 Posts  ·  3 Users
http://www.pagalguy.com/forum/cat-an...stice-261.html (CAT 2009 results - is it injustice?)
For the protest, please continue on the link above.

Here you can discuss what Prometric has actually done. Anyone can post once they work out how it was actually done.
Commenting on this post has been disabled by the moderator.

The information provided is not sufficient. What about the varied difficulty levels?
How was the scaling done? How was the factor for the linear transformation derived? How was the day-wise comparison done? Where are the answers to these?

Does anybody know?


It's just a mockery! A pure play of words, nothing else!

Scoring

Prometric employed an industry-standard, psychometrically-sound approach to the scoring process for all IIM candidates. The three-step process is outlined here and is supported by the Standards for Educational and Psychological Testing and the ETS Standards for Quality and Fairness.

Step 1: Raw Score is Calculated

Your raw scores are calculated for each section based on the number of questions you answered correctly, incorrectly, or that you omitted.
+3 points for each question answered correctly
−1 point for each question answered incorrectly
0 points for each question you did not answer

This scoring methodology ensures that candidates are only awarded points for what they know. Candidates are not awarded inappropriate points for random guessing. This is a standard process in the testing industry and is a methodology employed in scoring similar admissions tests such as the Graduate Record Examination (GRE).
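The marking scheme in Step 1 can be sketched as a few lines of Python. The response codes ('C', 'I', 'O') and the function name are illustrative assumptions, not part of Prometric's published procedure; only the +3 / −1 / 0 weights come from the note above.

```python
def raw_score(responses):
    """Raw section score under the stated scheme:
    +3 per correct ('C'), -1 per incorrect ('I'), 0 per omitted ('O')."""
    points = {'C': 3, 'I': -1, 'O': 0}
    return sum(points[r] for r in responses)

# Example: 10 correct, 5 incorrect, 5 omitted -> 30 - 5 + 0 = 25
print(raw_score(['C'] * 10 + ['I'] * 5 + ['O'] * 5))  # 25
```

The negative marking is what makes random guessing unprofitable on average: with four options, a blind guess yields an expected (1/4)(+3) + (3/4)(−1) = 0 points.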

Step 2: Raw Score is "Equated"

Equating is a statistical process used to adjust scores on two or more alternate forms of an assessment so that the scores may be used interchangeably. Industry standard processes were used for equating, such as those outlined within the ETS Standards for Quality and Fairness.
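Prometric does not say which equating method was used, which is exactly the complaint in this thread. One common linear approach from the equating literature is the mean-sigma method, where scores on one form are mapped onto a reference form by matching the two distributions' means and standard deviations. This is a hypothetical sketch of that technique, not Prometric's actual procedure:

```python
import statistics

def mean_sigma_equate(x, ref_mean, ref_sd, form_scores):
    """Map raw score x from this test form onto the reference form's
    scale by matching means and standard deviations (mean-sigma method).
    ref_mean/ref_sd describe the reference form's score distribution;
    form_scores are the raw scores observed on this form."""
    m = statistics.mean(form_scores)
    s = statistics.stdev(form_scores)
    return ref_mean + ref_sd * (x - m) / s
```

For example, if this form's scores have mean 50 and SD 10 while the reference form has mean 60 and SD 12, a raw 50 maps to 60 and a raw 60 maps to 72. In practice, operational equating designs also need common (anchor) items or randomly equivalent groups to be defensible, which is precisely the information candidates here are asking for.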

Step 3: Equated Raw Score is "Scaled"

In order to ensure appropriate interpretation of an equated raw score, the scores must be placed on a common scale or metric. A linear transformation was used for this scaling process, which is an industry standard practice (Kolen & Brennan, 2004).

The IIM scaling model is as follows:
Section scores: 0 to 150
Total exam score: 0 to 450
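A linear transformation onto the 0-150 section scale has this general shape. The anchor points (the equated raw scores that map to the ends of the scale) are assumptions for illustration; Prometric has not published the actual transformation constants, which is another of the unanswered questions in this thread:

```python
def scale_score(equated, eq_min, eq_max, lo=0.0, hi=150.0):
    """Linearly map an equated raw score onto the [lo, hi] reporting
    scale. eq_min and eq_max are the equated raw scores chosen to map
    to the bottom and top of the scale (hypothetical anchors)."""
    return lo + (hi - lo) * (equated - eq_min) / (eq_max - eq_min)

# With assumed anchors eq_min=-20 and eq_max=180:
print(scale_score(30, -20, 180))   # 37.5
print(scale_score(180, -20, 180))  # 150.0
```

Any linear transformation preserves the rank order of candidates; it only changes the units in which scores are reported.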

Four scaled scores are presented for each candidate: an overall scaled score and a separate scaled score for each of the three sections. As the three sections evaluate three distinct sets of knowledge and skills, scores do not correlate across sections: a high score in one section does not guarantee a high score in another. Percentile rankings are provided for each individual section as well as for the overall exam score.
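The percentile rankings mentioned above are straightforward once the scaled scores exist. A minimal sketch, assuming the common convention that a candidate's percentile is the percentage of candidates scoring strictly below them (Prometric does not specify which convention it used):

```python
def percentile_rank(score, all_scores):
    """Percentage of candidates in all_scores who scored strictly
    below the given score (one common percentile convention)."""
    below = sum(1 for s in all_scores if s < score)
    return 100.0 * below / len(all_scores)

# Among five candidates, a score of 80 beats three of them -> 60.0
print(percentile_rank(80, [50, 60, 70, 80, 90]))  # 60.0
```

Other conventions (e.g. counting ties as half) shift the numbers slightly, which matters at IIM-cutoff percentiles like 99.x.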

About Test Difficulty
The CAT exam was developed to accurately identify top-performing candidates, and that design makes use of a scaled score range of 0-450. In order to appropriately identify the top-performing candidates, the CAT exam was, by design, very difficult. As would be expected with such a difficult exam, no candidate answered 100% of the items correctly and no candidate achieved the top theoretical score. The exam design accomplished the goal of identifying the top-performing candidates, who were, indeed, ranked at the top of the list. If the exam were designed to be substantially easier, it would be theoretically possible for a candidate to achieve a score of 450. However, an exam constructed to be that easy would not serve the distinct purposes of the IIM.


References
American Educational Research Association (AERA), American Psychological Association (APA), and the National Council on Measurement in Education (NCME). (1999). Standards for Educational and Psychological Testing. Washington, D.C.: Author.
Educational Testing Service (ETS). (2002). ETS Standards for Quality and Fairness. Princeton, N.J.: Author.
Kolen, M. J., & Brennan, R. L. (2004). Test Equating, Scaling, and Linking: Methods and Practices (2nd ed.). Springer.
