From what I could find:
Specifically for the 2016 Mendez paper: they recovered around 120 kilobases of Y-chromosome sequence from El Sidrón. The entire human Y chromosome is about 57 megabases, so they captured roughly 0.2% of it, which is far from ideal. They got that using exome capture... essentially baiting specific target sequences with synthetic probes to fish fragments of interest out of a soup of mostly degraded or contaminated material.
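For what it's worth, the ~0.2% figure is just the ratio of the two numbers above (the 120 kb and 57 Mb are the approximate values from the paper, not exact counts):

```python
recovered_bp = 120_000       # ~120 kb of Y sequence recovered from El Sidrón
y_chromosome_bp = 57_000_000  # ~57 Mb, approximate size of the human Y chromosome

fraction = recovered_bp / y_chromosome_bp
print(f"{fraction:.2%}")  # → 0.21%
```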
Now, for the 2020 Petr paper they did better, achieving something closer to full Y-chromosome coverage, but the raw DNA in the samples is still extremely old, damaged, and fragmented. The "sequences" are reconstructed from millions of tiny overlapping reads, not pulled out as intact strands.
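To make the "reconstructed from overlapping reads" point concrete, here's a deliberately toy sketch of the core idea: align short fragments to positions on a target, then take a majority vote at each position. (This is my illustration, not the papers' actual pipeline, which also models ancient-DNA damage patterns, contamination, and mapping uncertainty.)

```python
from collections import Counter

def consensus(reads, length):
    """Toy majority-vote consensus from overlapping read fragments.

    Each read is (start_position, sequence). Positions covered by no
    read come out as 'N' (unknown), just as real assemblies have gaps.
    """
    columns = [Counter() for _ in range(length)]
    for start, seq in reads:
        for offset, base in enumerate(seq):
            columns[start + offset][base] += 1
    return "".join(
        col.most_common(1)[0][0] if col else "N" for col in columns
    )

# Overlapping fragments of a hypothetical 12-bp target region
reads = [(0, "ACGTA"), (3, "TACGG"), (6, "GGTTA"), (2, "GTACG")]
print(consensus(reads, 12))  # → ACGTACGGTTAN
```

The trailing "N" is the honest part: wherever no fragment happens to cover a position, the "sequence" is simply unknown, which is why coverage numbers matter so much in these studies.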
And apologies for the use of AI next, but on your second question, I didn't want my personal bias (which agrees with yours) to affect the analysis of the data. This is what plugging in the studies and asking the question produced:
"The legitimate criticism buried in your question
There is a real concern worth raising: when scientists say "we sequenced the Neandertal Y chromosome," what they mean is they reconstructed a statistical consensus sequence from millions of tiny, damaged, overlapping fragments, filtered for contamination, corrected for damage patterns, and assembled computationally. It is not like reading a book. It's more like reconstructing a shredded manuscript from confetti, after mice have eaten some of it.
That doesn't make it wrong — the methods are rigorous and have been extensively validated — but the popular science framing ("scientists read Neandertal DNA!") does obscure how indirect and probabilistic the process is. The researchers themselves are usually careful about this in the methods sections, even when the headlines aren't."