Comments are separated by dashes.
-------
We have been using assessment for our programs for years. Why not use some of the information we already collect annually? I don't know that we have national test comparisons (we don't require students to take those). So clarification needs to be given for the laundry list provided under this criterion. What, specifically, are we looking for? E.g., faculty awards, service on editorial boards, student awards, or professional involvement?
--------
• Question A: Offering examples of exemplary performance is fine, except that it falls into the 'testimonial' category, and exemplary performance is usually due more to the individual's innate abilities and drive than to the educational institution. We assist, of course, but I'm not sure we can take full credit (IMHO). I LIKE the 'congruence between intended/actual outcomes' measure, but it is not something one can decide to measure today if you haven't been keeping good data.
• Satisfaction—this is a standard measure, but alumni data are hard to get and employer information is very difficult to get. Nursing knows where its grads are working—I have NO idea how some departments (English, math, psychology, etc.) would address that.
• 'Professional' admissions? Does this mean, for example, whether the pre-engineering students got into engineering programs?
• Question B: I think this is irrelevant.
• Question C: How does one address 'external validation of quality' of faculty (certifications?) or 'outcomes mirror best practices' (too vague)? And what counts as 'recognition to institution'? We also seem to be missing student evaluations of individual teaching effectiveness.
• Needed: external accreditations or recognitions.
-------
• This is another place where I would suggest student research and/or scholarly and creative activities are deliberately identified. Perhaps using SAC as a measure?
• National test comparisons: costs for these can be prohibitive, and for other test scores (GRE, MCAT) we are reliant on students to send their data to the school; we can't request it ourselves.
• Grade inflation? How will average GPAs be compared when courses may be vastly different and teaching styles have been deliberately tailored to help students succeed?
• Professional productivity - isn't this already addressed in Criterion 4?
-------
This entire section can be condensed into “Demonstrate Student Success.”
B. You will be provided with average GPA for graduating seniors from 10 and 5 years ago vs. today. This misrepresents the intent of the book. Reviewing GPA trends as a means of detecting grade inflation was just an example of how one single institution decided to take a quirky approach to measuring teaching effectiveness. Besides, a rising GPA trend does not prove that grade inflation is the cause. What of the program that spends a decade carefully refining its first-year foundations program to better prepare students for success in the program? Should it be lumped in with grade inflation?
C. Faculty
o Peer reviewed or juried competitions
o External validation of quality
o Outcomes mirror best practices
Faculty effectiveness has already been covered under Criterion 4. Only in a research institution is faculty a measure of program outcomes. In our type of institution it is better suited to the aforementioned Criterion 4, Quality Inputs.
o Recognition to institution
B. Has the program brought beneficial recognition to the institution?
(This should be its own item, not a bullet.)
-------
Wednesday, February 4, 2009