I have a method that takes a string of HTML from a page and parses the nodes, looking for particular elements. I've used similar code successfully on other pages. If I run the code directly, the method occasionally returns null nodes partway through the page; if I run it in debug mode, it works correctly all the way through. There is no pattern to where the null node appears, and the page text is identical each run. I've added debug statements to confirm the method is never called when the page is empty, and no ParserException is thrown.
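For context, the empty-page guard I mentioned is essentially this (simplified sketch; the class and method names here are placeholders, not my actual code):

```java
public class PageGuard {
    // Returns true only when the page text is non-null and has visible content,
    // so the parse method is never invoked on an empty page.
    static boolean hasContent(String pageText) {
        return pageText != null && !pageText.trim().isEmpty();
    }

    public static void main(String[] args) {
        System.out.println(hasContent(null));              // false
        System.out.println(hasContent("   "));             // false
        System.out.println(hasContent("<html>x</html>"));  // true
    }
}
```

So the failures happen even though the input string definitely has content.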

The code is basically this:

Java Code:
import org.htmlparser.Parser;
import org.htmlparser.util.NodeList;
import org.htmlparser.nodes.*;
import org.htmlparser.util.ParserException;
import org.htmlparser.lexer.*;
import org.htmlparser.filters.*;

(I've removed the standard class boilerplate; my actual code compiles and runs.)

Lexer lexer = new Lexer(pageText);
Parser parser = new Parser(lexer);
HasAttributeFilter nameFilter = new HasAttributeFilter("class", "name");
NodeList namelist = parser.parse(nameFilter);
AbstractNode nameNode = (AbstractNode) namelist.elementAt(0);
if (nameNode == null) {
    System.out.println("name node is null: " + namelist.toHtml());
}
When it fails, namelist.toHtml() prints nothing as well, so the whole list is empty, not just the first element.

I'm guessing there is some kind of race condition going on. I'm hoping someone on the forum can give me some additional debugging tips on how to nail down this issue.
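One thing I've been considering, to make the intermittent failure reproducible outside the debugger, is repeating the parse in a tight loop on identical input and logging the first iteration that fails. A minimal sketch of what I mean (the parse call is stubbed out here; `firstFailure` and the stub are illustrative placeholders, not real htmlparser API):

```java
import java.util.function.Function;

public class RepeatProbe {
    // Runs the parse step many times on identical input and returns the index
    // of the first iteration that yields null, or -1 if none fail.
    static int firstFailure(Function<String, Object> parse, String page, int runs) {
        for (int i = 0; i < runs; i++) {
            if (parse.apply(page) == null) {
                return i;
            }
        }
        return -1; // no failure observed
    }

    public static void main(String[] args) {
        // Deterministic stub standing in for the real parse: fails on the 3rd call.
        final int[] count = {0};
        Function<String, Object> stub = s -> (count[0]++ == 2) ? null : s;
        System.out.println(firstFailure(stub, "<html></html>", 10)); // prints 2
    }
}
```

If the failure rate changes when I add logging or synchronization around the real call, that would at least point toward a timing issue.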