This exercise covers the robots.txt file.
For this challenge, your goal is to retrieve the robots.txt file from the main website, hackycorp.com.
The robots.txt file tells web spiders how to crawl a website. To avoid having confidential information indexed and made searchable, webmasters often use this file to tell spiders to skip specific pages. This is done with the Disallow directive. You can learn more by reading about the Robots Exclusion Standard.
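In practice, retrieving the file is just an HTTP GET for /robots.txt at the site root. The sketch below, a minimal example using only Python's standard library, shows a fetch helper and a small parser that pulls out the Disallow paths; the parser is demonstrated on a sample file rather than the live site, and the sample paths are purely illustrative.

```python
from urllib.request import urlopen

def fetch_robots_txt(base_url: str) -> str:
    """Retrieve /robots.txt from a site, e.g. fetch_robots_txt('http://hackycorp.com')."""
    with urlopen(base_url.rstrip("/") + "/robots.txt") as resp:
        return resp.read().decode("utf-8", errors="replace")

def disallowed_paths(robots_txt: str) -> list[str]:
    """Extract the paths named by Disallow directives."""
    paths = []
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and surrounding whitespace
        if line.lower().startswith("disallow:"):
            path = line.split(":", 1)[1].strip()
            if path:  # an empty Disallow value means "allow everything"
                paths.append(path)
    return paths

# Parsing demonstrated on a sample file (hypothetical paths, not the challenge's):
sample = """User-agent: *
Disallow: /admin/
Disallow: /backup/
"""
print(disallowed_paths(sample))  # ['/admin/', '/backup/']
```

The paths listed under Disallow are exactly the ones a webmaster did not want crawled, which is why this file is often the first stop when exploring a target site.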