They sampled an incredibly small group, and it's extremely easy to get skewed results if you assume a poll like that is representative of Americans as a whole. Where I live, most people in the state want him locked up (or, you know, burned at the stake), but go an hour outside the cities and even the Democrats there would likely express some hesitancy, because it's Trump country out there.
And that assumes the poll wasn't designed to get this result (for example, by sampling in ways that reach more conservative Democrats, or with people simply lying and saying they're Democrats).
Actually the sample size checks out. I love it when people see "Smol number not as big as big number, therefore sample size bad" and I am going to pull a very elitist argument here and say that people at Harvard University likely know more about polling than you do, just saying.
Small sample size is fine when it's representative of the population. Trying to extract nationwide sentiment, a very diverse thing, from a small sample is unlikely to be representative.
Except the numbers work out, and studies have made very, very sweeping generalizations based on much smaller sample sizes of much larger demographics (for example, the 1 in 5 myth comes from a study that had fewer than 100 respondents). This study is a dream compared to those.
Just because studies have made sweeping statements doesn't mean they're right. I could say I've got the longest member in the world based on a study I conducted in my basement, but it doesn't change reality.
Where did I say making sweeping statements equals correctness? Man, people are getting so emotional over this because it turns out the majority don't agree with them. Guess it shreds the narrative that they're the majority.
The fact that people who still do take stats classes likely know what they're talking about is not contradictory to the point that these classes may no longer be required in most degrees. Hope this helps! Try not to blow a head gasket trying to process this info.
A sample size of 2090, as in this study, is large enough to bring the margin of error down to 2%.
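For what it's worth, the ~2% figure follows from the standard margin-of-error formula for a proportion at 95% confidence. A minimal sketch, assuming a simple random sample (the poll's real methodology, with weighting, would change this somewhat):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a proportion at ~95% confidence.

    p=0.5 maximizes p*(1-p), giving the most conservative (largest)
    margin for a simple random sample of size n.
    """
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(2090) * 100, 1))  # roughly 2.1 percentage points
```

So 2,090 respondents really does get you to about plus or minus 2 points, regardless of how big the overall population is.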
Furthermore, there is no need to speculate about who they polled, because this information is available. Questioning the results of the poll is as unreasonable as 2020 Trump supporters questioning every poll that showed Biden with an advantage.
The section that says "Results were weighted for age within gender, region, race/ethnicity, marital status, household size, income, employment, education, political party, and political ideology where necessary to align them with their actual proportions in the population. Propensity score weighting was also used to adjust for respondents’ propensity to be online." kinda sticks out to me, too.
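That "propensity to be online" part is a standard fix for online panels: people who are more likely to end up in the panel get smaller weights, and people who are less likely get larger ones. A minimal inverse-propensity sketch with made-up numbers (illustrative only, not the poll's actual data or method):

```python
# Inverse-propensity weighting sketch (illustrative numbers only).
# Each respondent is weighted by 1 / P(being in the online panel),
# so over-represented groups count less and under-represented more.

respondents = [
    {"answer": 1, "p_online": 0.8},  # very likely to be online -> small weight
    {"answer": 0, "p_online": 0.2},  # unlikely to be online -> large weight
    {"answer": 1, "p_online": 0.5},
]

for r in respondents:
    r["weight"] = 1.0 / r["p_online"]

weighted_mean = (sum(r["answer"] * r["weight"] for r in respondents)
                 / sum(r["weight"] for r in respondents))
print(round(weighted_mean, 3))  # lower than the raw mean of 2/3
```

The hard part in practice is estimating those propensities, which is where the quality of a pollster's model matters.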
Yeah, that admission kind of makes me pause when considering the results. There should have been a page of the published poll that better described how it was conducted. For instance, doing a landline-only poll skews results considerably.
But it's only the beginning of the fed case against Trump, so I'm sure opinion will change.
That's how all reputable election polling was done in 2020. For example, if you take a random sample that happens to be 52% men and 48% women, it is completely appropriate to overweight the women's responses to match their actual percentage in the US, 50.5%.
In fact, in the 2020 election there were a bunch of Trump supporters who had the same doubts as you, and they would "unskew" polls with 52% men responding to give men 52% of the final weighting. Lo and behold, their "unskewed" polls showed Trump in the lead. But the proof of the method is in the election results...
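The reweighting described above is simple to sketch. Assuming a toy sample that came back 52% men / 48% women against census-style targets of 49.5% / 50.5% (illustrative numbers, not the actual poll's), post-stratification just scales each group by target share over observed share:

```python
# Toy post-stratification sketch: scale each group's weight so the
# weighted sample matches known population proportions.
# Numbers are illustrative, not from the actual poll.

sample = {"men": 0.52, "women": 0.48}        # observed shares in the sample
population = {"men": 0.495, "women": 0.505}  # target shares (e.g., census)

# Each respondent in a group gets weight = target share / observed share
weights = {g: population[g] / sample[g] for g in sample}

# Weighted shares now match the population exactly
weighted = {g: sample[g] * weights[g] for g in sample}
print(weights)   # men slightly below 1 (downweighted), women above 1
print(weighted)  # matches the population targets
```

The "unskewing" crowd did the same arithmetic but swapped in the wrong targets, which is why their adjusted numbers drifted away from the actual results.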
Agreed. This poll is hard to believe. There was another one last week saying a majority agreed with the indictment. There are lies, damned lies, and polls.
Take this with a grain of salt. A pollster can come up with any results they want if they ask the question carefully.
This is almost certainly something put out by Trump’s team to manipulate public opinion. It’s bullshit and not worth anyone’s energy.
Harris Poll has been described as "when Harvard Poll meets Fox News" and as one that "cherrypicks to advance agendas". Just like when looking into the bias of news sources, it's important to look into the bias of polling sources.
It seems like you're thinking of this article, which is about Fox News misrepresenting a Harris poll, with the Fox "journalists" cherry-picking to fit an agenda. That article isn't criticizing Harris itself, which if anything overestimated Democratic performance in the 2020 elections. Not saying they're not biased, but it seems like you may have misunderstood the source?
I replied to a different comment linking that article, so I'll copy it here:
Thank you for showing where that phrase was used in writing, but that is not the only time the irony of his position has been pointed out. He is a former pollster for the Clintons who became very "trumpy" (to use Politico's word), and instead of appearing on all the news shows, the only one that will bring him on is Fox.
The thing about polling is that one can write the questions to get the answers they want, and data can be cut to portray whatever is needed. Without the raw data, we really don't know what was asked or how the reported numbers were pulled.