I don't think antibody testing is very widespread yet, and at least one version of it seems to have been deemed unreliable by the medical/scientific community.
Ultimately, a reliable and widely available antibody test will be the real key to helping things open up and making people feel safer-- that is, of course, IFF (if and only if) some reasonable amount of immunity is conferred.
The issue I've seen with antibody testing is that the false positive rate is too high relative to the true prevalence when only a small fraction of the population has actually been exposed.
In the Santa Clara study, they used a test they claimed had only a 0.5% false positive rate. That figure came from pooling the results of several validation studies of the test, but several of those studies individually showed much higher false positive rates (some at 1.5% and higher).
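For context, pooling validation studies like that just means dividing the total false positives by the total known-negative samples across all the studies. A minimal sketch of the mechanism -- the counts below are made up purely for illustration, NOT the study's actual validation data -- showing how one bad-looking study gets diluted:

```python
# Pooled false positive rate: total false positives over total known-negative
# samples, summed across validation studies. Hypothetical counts only.
validation_studies = [
    (2, 400),  # 2 false positives out of 400 known negatives (0.5% FPR)
    (3, 200),  # 3 out of 200 (1.5% FPR) -- a much worse-looking study
]
false_pos = sum(fp for fp, _ in validation_studies)
negatives = sum(n for _, n in validation_studies)
print(f"pooled FPR: {false_pos / negatives:.2%}")  # 0.83% -- the 1.5% study gets averaged down
```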
Even taking their 0.5% figure at face value, they tested 3330 people and got 50 positive results (1.5%). By their own estimate of the false positive rate, fully a third of those positives could have been false (0.5% of 3330 is about 17). Their confidence range for the false positive rate was 0.3-0.8%, so if the true rate sat at the top of that range, more than half of their positives could have been false (0.8% of 3330 is about 27). And if the false positive rate was as high as some of the other validation studies of that test suggest (1.5%)? Then every single one of the 50 positives could have been false -- they could literally have tested zero people who had actually been exposed to/infected with COVID-19.
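A quick back-of-the-envelope check of those claims (the 3330 tests and 50 positives are from the study as described above; the false positive rates are the ones discussed):

```python
# How many of the 50 positives could be false at various false positive rates?
# Approximation: with prevalence this low, expected false positives ~= FPR * total tested.
n_tested, n_positive = 3330, 50

for fpr in (0.003, 0.005, 0.008, 0.015):
    expected_false = fpr * n_tested
    share = min(expected_false / n_positive, 1.0)
    print(f"FPR {fpr:.1%}: ~{expected_false:.0f} false positives expected "
          f"({share:.0%} of the 50 observed)")
```

That prints roughly 10 (20%), 17 (33%), 27 (53%), and 50 (100%) -- the "a third," "more than half," and "all of them" scenarios above.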
Then there were other issues. It wasn't a random sample. They basically advertised the test on Facebook, so you can't be certain that what they got was anywhere near representative of the population--it could have skewed toward people who thought they might have had it and wanted to check, or toward people with a higher risk tolerance for going outside (since they showed up for antibody testing during a stay-at-home order), etc.
Finally, when they tried to account for false positives/negatives, they said the likely real infection rate in their sample was 1.2%. They then weighted that sample to match the county's demographics, based on where the positive results came from, and concluded the likely infection rate in the county was 2.8% (95% CI 1.3-4.7%). So the rate they themselves calculated after adjusting for test performance (1.2%) fell below the lower bound of the 95% confidence interval they got after weighting.
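Mechanically, that demographic weighting looks something like the sketch below. The strata and numbers are entirely hypothetical; the point is just that reweighting can push the county estimate well above the raw sample rate when the positives happen to come from groups underrepresented in the sample:

```python
# Post-stratification: reweight each demographic stratum's sample positive
# rate by that stratum's share of the county population.
# (name, sample positive rate, share of sample, share of county) -- all hypothetical
strata = [
    ("stratum A", 0.010, 0.50, 0.30),
    ("stratum B", 0.014, 0.40, 0.40),
    ("stratum C", 0.030, 0.10, 0.30),  # few sampled, but a high positive rate
]
raw_rate = sum(rate * sample_share for _, rate, sample_share, _ in strata)
weighted_rate = sum(rate * county_share for _, rate, _, county_share in strata)
print(f"raw sample rate: {raw_rate:.2%}, county-weighted rate: {weighted_rate:.2%}")
```

The catch is that if a big chunk of those positives are false to begin with, the weighting amplifies the error right along with the signal.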
They should have just thrown out all their data and walked away... or expanded the study to a much larger sample size, or done something, ANYTHING, other than publish a result suggesting the true mortality rate was between 0.12-0.20%, which is quite frankly BS.
So per your bolded part: I don't think there's any problem with the test itself. The issue is that you KNOW there's a false positive rate, and if the positive rate you measure in a sample is close to that false positive rate, there just isn't enough prevalence to draw strong conclusions. If your false positive rate is 1.5% and your measured positive rate is near 25% (as in NYC), then false positives are only a small part of your potential measurement error. But if your false positive rate is 1.5% and your measured positive rate is 2.5%, then somewhere near 60% of your positive results are suspect and you can't really say much about the result.
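Putting numbers on that last comparison (the 1.5% FPR and the 25% vs. 2.5% observed rates are the scenarios from the paragraph above):

```python
# Upper bound on the fraction of observed positives that could be false,
# assuming nearly everyone tested is truly negative, so expected false
# positives ~= FPR * number tested.
def suspect_fraction(fpr: float, observed_rate: float) -> float:
    return min(fpr / observed_rate, 1.0)

fpr = 0.015
for observed in (0.25, 0.025):  # NYC-like scenario vs. low-prevalence scenario
    print(f"observed positive rate {observed:.1%}: up to "
          f"{suspect_fraction(fpr, observed):.0%} of positives could be false")
```

That's 6% suspect in the NYC-like case versus 60% suspect in the low-prevalence case -- same test, wildly different confidence in the result.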