Yep... have done that. Thanks.

The problem is exactly what Wikipedia states early on in its robots.txt article:

Quote
Despite the use of the terms "allow" and "disallow", the protocol is purely advisory. It relies on the cooperation of the web robot, so that marking an area of a site out of bounds with robots.txt does not guarantee exclusion of all web robots. In particular, malicious web robots are unlikely to honor robots.txt; some may even use the robots.txt as a guide and go straight to the disallowed URLs.

I'm looking for some additional POWER to push back. *grin*
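
For the sake of discussion, below is a rough sketch of the kind of "power" I have in mind: a honeypot path that appears only in robots.txt and is never linked anywhere, so any client that requests it has either ignored the file or used it as a roadmap, and gets banned. The trap path, port, and the toy Python server are placeholder assumptions on my part; real enforcement would sit in the web server or firewall rather than in application code.

Code
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical trap URL: listed in robots.txt as Disallow, never linked anywhere.
TRAP_PATH = "/secret-trap/"
ROBOTS_TXT = ("User-agent: *\nDisallow: %s\n" % TRAP_PATH).encode()
banned_ips = set()  # IPs that have fetched the trap URL

class TrapHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ip = self.client_address[0]

        # Anything already caught is refused outright.
        if ip in banned_ips:
            self.send_error(403, "Banned for ignoring robots.txt")
            return

        if self.path == "/robots.txt":
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(ROBOTS_TXT)
        elif self.path.startswith(TRAP_PATH):
            # Only a robot that ignored (or abused) robots.txt ends up here.
            banned_ips.add(ip)
            self.send_error(403, "Banned for ignoring robots.txt")
        else:
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"Normal page content\n")

if __name__ == "__main__":
    HTTPServer(("", 8000), TrapHandler).serve_forever()

In practice the ban list would be persisted and fed into a firewall rule or a deny list in the web server config; the sketch only shows the trap-and-ban idea end to end.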


--Bill B