When bad people do good things they are generally seen as sinister, as if they are concealing a horrible action behind a facade of good will. So if you believe the government is fundamentally evil, and you see it trying to do something good (which is the whole purpose of FEMA), then its actions are going to look sinister to you. Stories about FEMA having camps (at their core, these are stories about the government using the facade of aid and assistance to hide something evil) will make sense to you because they are consistent with your sentiments about what the government is. So too would stories about FEMA using disasters as a pretext for land grabs, or stories about FEMA ignoring people in peril, because these are all stories about an evil government. To the extent that they are consistent with your sentiments about the government, they are easy to accept as true, even if they contradict each other.
One of my big beefs with ML/AI is that these tools can be used to wrap bad ideas in what I will call "machine legitimacy." That's another way of saying that there are many cases where these models are built on a bunch of unrealistic assumptions, or trained on data that doesn't actually generalize to the applied situation, but they will still spit out a value. That value becomes the truth because it came from some automated process. People can't critically interrogate it because the bad assumptions are hidden behind automation.
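To make that concrete, here is a toy sketch (the data, model, and numbers are invented for illustration, not drawn from any real system): a regression fit on one narrow slice of data will happily return a confident-looking number for inputs far outside anything it was trained on, and nothing in the output signals that the core assumption, that the training data resembles the applied situation, has broken down.

```python
# A minimal, hypothetical sketch: a model fit on one population still returns
# a number for a very different population, with no warning attached.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Training data: incomes for people aged 25-35 in one (made-up) city.
ages = rng.uniform(25, 35, size=200)
incomes = 30_000 + 1_500 * (ages - 25) + rng.normal(0, 2_000, size=200)

model = LinearRegression().fit(ages.reshape(-1, 1), incomes)

# The model will "spit out a value" for a teenager or a 70-year-old retiree,
# even though the training data says nothing about either group.
for age in (15, 30, 70):
    pred = model.predict(np.array([[age]]))[0]
    print(f"age {age}: predicted income ${pred:,.0f}")
```

The printed number looks authoritative precisely because it came out of an automated pipeline, but the bad assumption is invisible unless someone goes looking for it.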