Sunday, October 9, 2016

4:3 Comments: Quantitative research

I disagree with your claim that many researchers do not combine methods; I'm unsure what reasoning you are basing this on.

You argue that a reason for excluding a research method is that it takes "too much time and effort". This sounds like a planning decision more than anything else. It is true that using a second method can take more time, but if the alternative is that your research isn't seen as valid, this should be an easy choice to prioritize.

I do agree on the difference between paper-based and web-based questionnaires: some eye-openers for sure. It's also nice that you reflect on the importance and potential impact of question formulation in questionnaires.

The way you describe quantitative research with measurement (the method) as central, rather than the data (the object), is pedagogical. I also like how you continuously make an effort to incorporate media and its role in your reflections.

Hans Rosling changed my view of poverty and overpopulation from seeing the world as headed for disaster to a more positive outlook. He uses statistics in a fantastic way, with simple tools such as fruit to illustrate for ordinary people what can seem too big to grasp. Because it is through ordinary people and their (baby) steps that the world can change. Thank you for taking the opportunity to highlight him in this context!

3. Kristina Nyberg, October 9, 2016 at 1:39 PM
Your reflection would have been improved by including a contrast with what you already knew about qualitative methods and giving some concrete examples.

The questioning of a hidden agenda is useful, though. This is true for many scenarios online, such as businesses posting fake reviews on their own sites and bloggers who are paid to write product reviews. Google has started taking action against some of these techniques, but in the world of online marketing there are frequently new questionable techniques to promote certain websites and gain higher visibility in search engines. These are called "grey hat" or "black hat" techniques (when a technique is allowed, it's "white hat"). Presumably the same terminology could be applied to research methods to label what's allowed, questionable, or disallowed.

4. Kristina Nyberg, October 9, 2016 at 1:46 PM
Although I agree that it's important to have structure and conditions, I disagree that this should be done during the study. I think it should be done before the research, in the planning stage.

Your questioning of bias is important: there may be "socially acceptable" answers, but also unexpected hidden biases that researchers did not foresee.

The time consumption of research is a topic that could be expanded on later, but I also reacted to peer review in particular. The length of the peer review process conflicts with the turbulence of technology. This risks that quality-checked and perhaps fully valid research never even gets seen, as people browsing may only include more recent research. This is something that needs to be addressed, and the process itself perhaps needs to be peer reviewed.

6. Kristina Nyberg, October 10, 2016 at 2:56 AM
Manipulation of data is a topic I've already come across while commenting on other blogs. Perhaps you had a discussion about it, but I agree that one should always consider the intent behind research, and this would be an important part of it. As another commenter said, I feel you could have expanded more on the topic rather than just specifying a generic procedure.

Nice connection to knowledge production. The practical example about drumming also shows that you have grasped quantitative research and how it can be applied. Specifying where the hypotheses are used helps to avoid confusion as well.

On another note, I've also studied research methods before but from an entirely different perspective. This course is more philosophical and theoretical, whereas my previous courses were applied and practical. All useful, but at the same time this feels new.

8. Kristina Nyberg, October 10, 2016 at 3:18 AM
Stating that the objective of quantitative research is to prove the hypothesis is in itself a description of a problem that can occur. Instead of being open to the hypothesis being accepted or rejected based on the outcome, researchers can become inclined to manipulate the study to fit the hypothesis. This is also known as "research bias" (https://explorable.com/research-bias).

9. Kristina Nyberg, October 10, 2016 at 12:35 PM
As always with your posts, it's clear that you've spent time on your reflection and it's a pleasure to read.

I like that you highlight the necessity of keeping research relevant and give practical examples of when manipulation is justified. I also like that you highlight the possibility of a rejected hypothesis; this can sometimes be a problem with researcher bias (not seeing a rejected hypothesis as an option and therefore manipulating the research, for example).

Using your own study as an example is good because it shows practical application and understanding of the theme. It was also very interesting to read that you chose this topic. Was it not hard to remain neutral, given that (if I've understood correctly) you yourself have been one of the Bulgarians entering the British labor market?

Thanks for a well-prepared and thorough reflection! It is clear that you've grasped the topic and its relevant processes.

Because quantitative research deals more with numbers and hard data than qualitative research does, I think that although math as a skill isn't strictly necessary, it is an advantage.

I want to comment on the part at the end, where you mention a problem with the Twitter study. I think the conflict between rapidly changing technology and the sometimes very lengthy peer review process is also a problem that needs to be resolved. We want valid research, but unless it's recent enough it may never even be seen. Maybe the process of peer review needs to change.
