I don't know what this means. When I get tested, it's either accurate or it's not.
I described it above. All tests have error.
Let's assume a test has a 1% false positive rate and a 5% false negative rate, and we go to the ultimate extremes.
If a population has 0% infection, i.e. not a single case, and you test that entire population, you'll get a 1% positive rate. Obviously for those 1%, the test is inaccurate, because we already stipulated 0% infection.
If a population has a 100% infection rate, and you test that entire population, you'll get only a 95% positive rate. Obviously for the 5% that tested negative, the test is inaccurate, because we already stipulated 100% infection.
Those are the easy cases...
Now for the hard one. If a population has a 50% infection rate, you'll gain an extra 0.5% in positive results from the 1% false positive rate applied to the half of the population who aren't infected, and you'll lose 2.5% of your true positives to the 5% false negative rate on the half who ARE infected, so you should expect a 48% positive rate when testing the entire population.
If your population is 100,000 people, that means that you have 500 people for whom the positive result was inaccurate, and 2500 people for whom the negative result was inaccurate.
You know 50,000 people have the infection and 50,000 don't, but 3,000 people got an inaccurate test result. And no individual can say with 100% certainty whether their own result was one of the accurate ones or one of the inaccurate ones.
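The arithmetic above can be sketched out directly (using the same hypothetical 1% false positive and 5% false negative rates):

```python
# Expected results from testing an entire 100,000-person population
# with 50% prevalence, assuming the hypothetical error rates above.
population = 100_000
prevalence = 0.50
fpr = 0.01  # false positive rate
fnr = 0.05  # false negative rate

infected = int(population * prevalence)      # 50,000 truly infected
uninfected = population - infected           # 50,000 truly uninfected

false_positives = int(uninfected * fpr)      # uninfected who test positive
false_negatives = int(infected * fnr)        # infected who test negative
true_positives = infected - false_negatives  # infected who test positive

total_positive = true_positives + false_positives
print(total_positive / population)                 # observed positive rate
print(false_positives + false_negatives)           # people with a wrong result
```

Running it reproduces the numbers above: a 48% observed positive rate, 500 false positives, 2,500 false negatives, 3,000 wrong results in total.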
This is why we distinguish between the accuracy of a test and its predictive power. A very accurate test can have low predictive power depending on the prevalence of infection.
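To make the accuracy-vs-predictive-power point concrete: the positive predictive value (the chance that a positive result is a true positive) collapses as prevalence falls, even though the test itself never changes. A sketch with the same hypothetical error rates:

```python
# Positive predictive value (PPV) at different prevalence levels,
# for the same hypothetical test: 1% false positive, 5% false negative.
fpr = 0.01
fnr = 0.05
sensitivity = 1 - fnr  # 0.95

def ppv(prevalence):
    """Fraction of positive results that are true positives."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * fpr
    return true_pos / (true_pos + false_pos)

for p in (0.50, 0.10, 0.01):
    print(f"prevalence {p:.0%}: PPV {ppv(p):.1%}")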
And in this case the false positives matter MUCH more: you don't want those 500 people thinking they're invincible and can't be infected, when in reality they were never infected in the first place and have no immunity at all.