So I've been reading about the Santa Clara antibody study, and it sounds like there are a lot of issues...
https://statmodeling.stat.columbia.edu/2020/04/19/fatal-flaws-in-stanford-study-of-coronavirus-prevalence/

The primary issue is the reliability of the test. Out of roughly 3,300 people tested, they found 50 with antibodies. If you apply standard statistical tools given the test's false positive rate, it's entirely possible that zero of those 3,300 people had antibodies and you'd still end up with 50 positive tests within standard confidence intervals... which means there's very little you can draw from it.
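To make that concrete, here's a rough sketch. The 1.5% false positive rate below is my assumption, roughly the upper end of what the test's small specificity-validation sample allows; it's not the study's point estimate, which was lower.

```python
import math

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), summed in log space to avoid underflow."""
    log_p, log_q = math.log(p), math.log(1 - p)
    total = 0.0
    for i in range(k, n + 1):
        log_pmf = (math.lgamma(n + 1) - math.lgamma(i + 1) - math.lgamma(n - i + 1)
                   + i * log_p + (n - i) * log_q)
        total += math.exp(log_pmf)
    return total

n_tested = 3300
fpr = 0.015  # assumed false positive rate (see lead-in above)

# Probability of seeing 50+ positives even if NOBODY in the sample has antibodies:
p_50_or_more = binom_sf(50, n_tested, fpr)
print(f"P(>=50 positives | zero true prevalence) = {p_50_or_more:.2f}")
```

With that false positive rate, the expected number of false positives alone is about 49.5, so observing 50 positives is completely unsurprising even at zero true prevalence.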
That doesn't even get into the other statistical / sampling / post-sampling adjustments, some of which were questionable. 50 out of 3,300 is about 1.5%, yet the authors concluded from this that 2.5% to 4.2% of Santa Clara County residents had likely been previously infected. The lower bound of the authors' range implies an infection rate about 66% higher than what they actually measured. That's... questionable.
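A quick sanity check on that gap (depending on rounding you get roughly 65-66%), using the figures already quoted above:

```python
# Gap between the raw positive rate and the authors' reported lower bound.
raw_rate = 50 / 3300          # ~1.5% measured positive
reported_lower = 0.025        # 2.5%, low end of the authors' range

inflation = reported_lower / raw_rate - 1
print(f"raw rate: {raw_rate:.2%}, lower bound is {inflation:.0%} higher")
```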
So upon reading that, I place about zero confidence in the study. The Los Angeles study might have some similar statistical issues, because its antibody rate was also quite low (though much higher than Santa Clara's, so probably more reliable). I believe the LA study measured a 4.1% antibody rate. However, IIRC it had <1000 participants, which widens the potential ranges too.
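To show how sample size alone widens the range, here's a normal-approximation 95% margin of error for a measured proportion. The n=900 is a placeholder, since I only recall the LA study having fewer than 1,000 participants:

```python
import math

def margin_95(p, n):
    """Rough 95% margin of error for a proportion (normal approximation)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

p_la = 0.041  # LA's measured antibody rate
for n in (900, 3300):  # placeholder LA sample size vs. Santa Clara's
    m = margin_95(p_la, n)
    print(f"n={n}: 4.1% +/- {m:.1%} (sampling error alone, ignoring test error)")
```

Even before accounting for test accuracy, the smaller sample roughly doubles the sampling noise around that 4.1%.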
It suggests that the NY study is probably more reliable. Once you start getting numbers like 15% statewide and even 25% in NYC, you're no longer in the range where low rates of false positive/negative results make a meaningful difference to your confidence in the numbers. I.e. you could still be wrong, but the error bars on how far wrong you are aren't dominated by test accuracy anymore.
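One way to see this is the standard correction for imperfect tests (the Rogan-Gladen estimator): true = (apparent + specificity - 1) / (sensitivity + specificity - 1). The sensitivity/specificity values below are illustrative assumptions, not the actual figures for any of these tests:

```python
def rogan_gladen(apparent, sens=0.90, spec=0.985):
    """Correct an apparent prevalence for imperfect test sensitivity/specificity.
    The default sens/spec are illustrative assumptions only."""
    return (apparent + spec - 1) / (sens + spec - 1)

for apparent in (0.015, 0.25):  # Santa Clara-like vs. NYC-like apparent rates
    corrected = rogan_gladen(apparent)
    rel_change = abs(corrected - apparent) / max(apparent, 1e-12)
    print(f"apparent {apparent:.1%} -> corrected {corrected:.1%} "
          f"({rel_change:.0%} relative shift)")
```

With these assumed values, a 1.5% apparent rate corrects all the way down to zero, while a 25% apparent rate shifts by only a few percent relative. That's the whole point: at high prevalence, test error nudges the estimate; at low prevalence, it can swallow it entirely.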
Note that the NY studies imply an infection fatality rate probably somewhere in the range of 0.5% to 0.78%... much higher than the seasonal flu's.
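For the shape of that calculation (deaths divided by estimated infections), here's a back-of-the-envelope sketch. The death count is a purely illustrative placeholder, not the actual NY figure:

```python
# Back-of-the-envelope infection fatality rate: deaths / estimated infections.
population = 19_450_000   # approx. NY State population
prevalence = 0.15         # the ~15% statewide antibody figure mentioned above
deaths = 16_000           # HYPOTHETICAL death count, for illustration only

infections = population * prevalence
ifr = deaths / infections
print(f"implied IFR: {ifr:.2%}")
```

With these placeholder inputs the implied IFR lands around 0.55%, i.e. inside the 0.5-0.78% range the studies suggest.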