In physics, it’s common to develop a formula and then attach a constant to account for the unknown. For example, Newton’s theory of gravity uses the gravitational constant G in the formula F = G * m_1 * m_2 / r^2; later, Einstein gave a more accurate explanation with the theory of relativity, which does not rely on such a constant (E = m * c^2). Constants have provided a good-enough description of the laws of physics that stayed useful for centuries.

I was wondering: what’s the equivalent in social studies? How do researchers deal with the uncertainty of human behaviour?

Edit: Comments reminded me how much I don’t understand the theory of relativity. Terrible example, sorry for the confusion. I need to rephrase the question, but I don’t know how.

I am looking for “glue” concepts: things that help connect observations with theory. That is, if I calculate m_1 * m_2 / r^2 the result is slightly off, but if I account for G, an empirical constant derived from observation, then everything makes sense for the observable universe.
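To make the “glue constant” idea concrete, here is a toy sketch in Python. All the numbers are made up for illustration: pretend we have a few measured forces alongside the raw m_1 * m_2 / r^2 predictions, and we estimate the proportionality constant empirically, the same way G is derived from observation.

```python
# Toy sketch: estimating an empirical proportionality constant from
# (simulated) observations. All numbers below are invented for illustration.

# "Raw" predictions m1 * m2 / r**2 for three configurations (kg^2 / m^2),
# and the corresponding "measured" forces in newtons.
raw_predictions = [2.0e20, 5.5e18, 7.3e22]
measured_forces = [1.335e10, 3.670e8, 4.872e12]

# Least-squares estimate of the constant k in F ≈ k * raw:
#   k = sum(F_i * raw_i) / sum(raw_i ** 2)
num = sum(f * p for f, p in zip(measured_forces, raw_predictions))
den = sum(p * p for p in raw_predictions)
k = num / den

print(f"estimated constant: {k:.3e}")  # close to G ≈ 6.674e-11
```

The point is not the fitting method (any regression through the origin works); it’s that the constant is whatever number reconciles the formula with the data, with no deeper explanation attached.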

Also, as someone said, I am referring to social studies.

  • JoBo@feddit.uk · 11 months ago

    You reminded me of this exchange between Robert Cousins and Andrew Gelman:

    Our [particle physicists’] problems and the way we approach them are quite different from some other fields of science, especially social science. As one example, I think I recall reading that you do not mind adding a parameter to your model, whereas adding (certain) parameters to our models means adding a new force of nature (!) and a Nobel Prize if true. As another example, a number of statistics papers talk about how silly it is to claim a 10^{-4} departure from 0.5 for a binomial parameter (ESP examples, etc), using it as a classic example of the difference between nominal (probably mismeasured) statistical significance and practical significance. In contrast, when I was a grad student, a famous experiment in our field measured a 10^{-4} departure from 0.5 with an uncertainty of 10% of itself, i.e., with an uncertainty of 10^{-5}. (Yes, on the order of 10^10 Bernoulli trials—counting electrons being scattered left or right.) This led quickly to a Nobel Prize for Steven Weinberg et al., whose model (now “Standard”) had predicted the effect.

    I replied:

    This interests me in part because I am a former physicist myself. I have done work in physics and in statistics, and I think the principles of statistics that I have applied to social science also apply to the physical sciences. Regarding the discussion of Bem’s experiment, what I said was not that an effect of 0.0001 is unimportant, but rather that if you were to really believe Bem’s claims, there could be effects of +0.0001 in some settings, -0.002 in others, etc. If this is interesting, fine: I’m not a psychologist. One of the key mistakes of Bem and others like him is to suppose that, if they happen to have discovered an effect in some scenario, it represents some sort of universal truth. Humans differ from each other in ways that elementary particles do not.

    And Cousins replied:

    Indeed, in the binomial experiment I mentioned, controlling unknown systematic effects to the level of 10^{-5}, so that what they were measuring (a constant of nature called the Weinberg angle, now called the weak mixing angle) was what they intended to measure, was a heroic effort by the experimentalists.
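    As a back-of-envelope check (my own sketch, not part of the quoted exchange): for a binomial proportion near 0.5, the standard error of the estimate is about 0.5 / sqrt(n), so pinning the proportion down to ±10^{-5} really does require on the order of 10^9–10^10 Bernoulli trials, matching the number Cousins quotes.

    ```python
    # Standard error of a binomial proportion p-hat is sqrt(p * (1 - p) / n),
    # which is approximately 0.5 / sqrt(n) for p near 0.5.
    target_se = 1e-5                    # the uncertainty Cousins quotes

    # Solve 0.5 / sqrt(n) = target_se for n:
    n_needed = (0.5 / target_se) ** 2

    print(f"trials needed: {n_needed:.1e}")  # → trials needed: 2.5e+09
    ```

    So the “order of 10^10 trials” figure is exactly what the binomial arithmetic demands, not an exaggeration.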