Oxbladder
Quote:
Originally Posted by CatmanAmerica

Quote:
Originally Posted by JazzyJeffie
@CatmanAmerica Those points are good, but they can get complicated. How can CBCS keep track of a book that's been cracked out again and again so new artists can sign it over and over? Sounds tedious.
I'll be content with CBCS simply showing a population count for each CBCS-graded book, because I can look up the CGC info if I need to and judge the value myself. And it's just a start: CBCS can later evolve its population database into a more accurate census that accounts for reholdering and crack-and-sign instances.
The first rule of developing an honest census is realizing that it will never be 100% accurate. You have to work from reliable data and extrapolate based on likelihood.
While it is entirely feasible to develop totally separate census data at each third-party grading facility, reliability diminishes if competing data isn't taken into account.
This is less important with low- and mid-grade books, since fewer of those get resubmitted for a potential grade bump. However, submitting books to different third-party grading services can produce duplications on a broader scale, and on frequently resubmitted high-grade books it can skew the numbers dramatically.
This is where a front/back-cover (FC/BC) image database would prove useful for submissions of rarer or popular SA books and higher-profile GA books that command high prices.
I'm not suggesting that building a reliable census database would be easy, but if properly developed, its enhanced usefulness would be extremely valuable to the collecting community and a huge feather in the cap of the grading service that constructs it.
Good points, but probably easier said than done. My main issue would be just how reliable the data in the CGC census is, considering how many crack-and-resubs have been done without turning in the previous label. When the Manufactured Gold thread existed over at CGC, along with the database at NOD, it was quite apparent that not only were there many instances of books existing twice or more in the census, but some were also losing their pedigree designation. The Curmudgeon's idea of fingerprinting books is more than possible, but can it be done quickly enough that it won't hinder processing books through the facility?
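(For anyone curious how "fingerprinting" a book could be fast enough not to slow down processing: one common approach is a perceptual hash of the cover scan, which reduces each image to a few dozen bits that can be compared in microseconds. This is only a sketch of the idea under my own assumptions, not anything CGC or CBCS actually does; real pipelines would use a library like ImageHash on full scans, while here an "image" is modeled as a tiny grayscale grid to keep the example self-contained.)

```python
# Minimal average-hash ("aHash") sketch for matching cover scans.
# Assumption: each scan has already been downsampled to a small
# grayscale grid (list of rows of 0-255 brightness values).

def average_hash(pixels):
    """Return a bit string: '1' where a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if p > mean else "0" for p in flat)

def hamming(a, b):
    """Count differing bits; a small distance suggests the same cover."""
    return sum(x != y for x, y in zip(a, b))

# Two scans of the same cover (second slightly lighter) and a different one.
scan_a = [[10, 200], [220, 30]]
scan_b = [[12, 205], [218, 28]]
other  = [[200, 10], [30, 220]]

h_a, h_b, h_o = map(average_hash, (scan_a, scan_b, other))
print(hamming(h_a, h_b))  # 0 -> likely the same book, resubmitted
print(hamming(h_a, h_o))  # 4 -> a different copy or cover
```

Because the hash tolerates small lighting and scan differences, a resubmitted book would collide with its earlier entry even after a crack-and-sign, which is exactly the duplicate the census needs to catch.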
How the heck would you get the two companies to even agree to share such data?
I really like your points and I had never even considered such things but just how feasible is it?
(Just as an aside: because of the duplicate data points in the CGC census, I don't know how much to trust it beyond a snapshot of what has flowed through CGC's doors. I can't bring myself to trust it enough to use it for marketing scarcity the way some have.)
8 years ago | Post 26