The good news is that we seem to be entering a period when games are becoming easier to make, with authoring tools such as ThinkingWorlds from Caspian Learning, Flash and so on. Participation in games results, as I always suspected, in much deeper processing than many other learning methods - lectures etc. - so that retention is higher, but more importantly, learnt skills result in actual behavioural change. We've known this from flight simulators for decades. The nice thing about this study is the elegance of the experiment, which isolates the causality.
The experiment looks like it is more about conditioning than learning. The colour of the jersey determines the taste of the drink. What if, in a real-life situation, the associations are reversed every few incidents? This learning would be futile there. This kind of learning can only be applied where the element of surprise is missing and the associations between things are simple.
For example, suppose I give a mango to a one-year-old, and this mango is sweet. The child will associate the sweet taste with the mango. Next time, if I give him an unripe sour mango, that will be another step in learning: mango is sweet, but it can also be sour sometimes. As the neural circuits process the new information, they will register the association between the colour of the fruit and the taste. The next time the child is given the fruit, he will have a task. The learning will come into action in building his anticipation, but there can be further surprises, where he encounters a ripe mango with a sour taste.
I don't know if I have put my point across clearly, but that is why I do not get an Xbox for my son, who is whining for the gadget all the time.
The games may give you quick reflexes, but they might undo the learning by conditioning the mind and taking away from the learning that comes from analysis and exploration of established facts. A very good research link. Thanks for sharing.
Kia ora Donald
I think we all have to take care with what we are really witnessing in reported findings of this type. It is well known that drawing useful conclusions from so-called behavioural data is not easy.
There is much scope for the practice of what I'd call pseudoscience, based on observations assumed to be related to certain factors when, in fact, not all factors have been taken into account and not all variables are known. I hesitate to use the word shonky, but that's how it is if there are uncharted variables at play. Similarly if results are not tested for reproducibility.
The difference between drawing a conclusion from a reproducible pattern and making an anecdotal, isolated inference is what I refer to here. Simple though performing a scientific experiment may be, it is often too simple given the real conditions present when the observations are made and the data gathered.
I had a demonstration of the Caspian tool at Learning Technologies and was very impressed. I showed the demo to some computer games students we are working with, and they were quite excited by the prospect of working with the tool.
The simulation element of the tool will be very useful when working with clients where real-world 'training' is not possible.
I find this research a bit spurious unless I'm missing something. It seems to me it is saying that if you give a subject a reward for one thing and a penalty for another in a controlled situation, then in real life they will tend towards the thing for which they were rewarded. The fact that the control in this particular case was a video game is, I feel, irrelevant, so it doesn't actually further the debate about the value or otherwise of video games as such.
But then if you read my post ‘Forget research. Let’s look at the evidence’ you will see that rather than academic research I am more in favour of what I like to call ‘observational evidence’. That is, looking around us to see what is going on in the real world rather than the academic one.
I am not saying there is no value in academic research. It’s just that I think we get the balance wrong with a tendency towards requiring academic research which often turns out to simply confirm the obvious.
Psychology does both - the strengths and weaknesses of observational 'field' studies versus designed 'experimental' work are well understood.
Observational field studies often lack the controls necessary to isolate the key variable(s) being tested. Experimental work is better for isolating hypothetical ideas and variables, but can often be artificial and interfere with the outcomes.
It's a balance, but both are necessary. This is a good experiment, as it truly isolates the key variable.