Evaluating a practical is just like evaluating a study, except you are the psychologist who conducted the study!
This means that GRAVE is a good starting point!
- Think about your sample, e.g. sample size, sampling technique (how participants were recruited) and the characteristics of the participants; this AO1 knowledge of what you did will help you to make points about the generalisability of the conclusions you have drawn.
- Think about the research method you used: questionnaire, observation, experiment or correlation. What are the strengths and limitations of these methods? But beware of generic criticism: you are evaluating YOUR STUDY, so each chain of reasoning must carry under-pinning knowledge (procedural detail, i.e. what you did) in order to explain the points you are making.
- Was the data qualitative or quantitative? This should provide more ideas for evaluation.
- If it was an experiment what design did you use?
- How did you operationalise the IV and DV? Were these valid ways to research this topic?
- Were there demand characteristics?
- To what extent was the procedure replicable?
As your research methods lexicon (vocabulary) develops you will find more and more ways to evaluate your practicals and studies in general :0)
Some quick but critical tips for practicals questions:
- Every year, people somewhere in the country write about the wrong practical. Don’t let it be you! Take time to check which section of the paper you are in and think carefully before putting pen to paper: “am I writing about the correct practical?!”
- When giving improvements to practicals, make sure you choose something sensible which can be elaborated effectively for the marks available; think HOW and WHY. Make sure the improvement is specific and gives explicit details of what exactly would be changed (HOW), AND you must say what impact this would have: WHY is this an improvement?