Description: External SecureAuth realms can be discovered through search engines.
Cause: These external sites are publicly accessible.
Resolution: To prevent search engines from crawling your website, update the robots.txt file located at D:\inetpub\wwwroot\robots.txt.
This configuration allows all search engines to crawl your website.
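For example, a robots.txt that permits all crawlers uses the wildcard User-agent with an empty Disallow directive (an empty value means nothing is blocked):

```text
User-agent: *
Disallow:
```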
This configuration disallows any search engine from crawling your website.
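To block all crawlers, the Disallow directive is set to the site root so every path is excluded:

```text
User-agent: *
Disallow: /
```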
This configuration disallows a specific search engine from crawling your website.
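As an illustration (using Googlebot as the example crawler; substitute the User-agent string of the engine you want to block), the named agent is disallowed while all other crawlers remain unaffected:

```text
User-agent: Googlebot
Disallow: /
```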
Top 3 U.S. search engine User-agents:
Google: Googlebot
Bing: Bingbot
Yahoo: Slurp
Example: blocking common search engine User-agents:
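As a sketch, a robots.txt that blocks Googlebot, Bingbot, and Slurp (the crawlers for Google, Bing, and Yahoo) while leaving the site open to other crawlers might look like:

```text
User-agent: Googlebot
Disallow: /

User-agent: Bingbot
Disallow: /

User-agent: Slurp
Disallow: /
```

Note that robots.txt is advisory: well-behaved crawlers honor it, but it does not enforce access control, so it should not be relied on to secure sensitive content.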
SecureAuth Knowledge Base Articles provide information based on specific use cases and may not apply to all appliances or configurations. Be advised that these instructions could cause harm to the environment if not followed correctly or if they do not apply to the current use case.
Customers are responsible for their own due diligence prior to utilizing this information and agree that SecureAuth is not liable for any issues caused by misconfiguration directly or indirectly related to SecureAuth products.