New Project: Spyder

I decided to write a simple web spider in order to learn Python, and to generate a list of URLs for webserver benchmarking & stress testing… and so Spyder was born. It is written in Python 3.


When called on a URL, it will spider that page and any links it finds, up to the specified depth.
When it's done, it will print a list of the resources it found.
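The depth-limited crawl described above can be sketched roughly like this. This is a hedged illustration of the idea, not Spyder's actual code: `get_links` stands in for real HTTP fetching so the traversal logic is visible on its own.

```python
# A minimal sketch of depth-limited, breadth-first spidering.
# 'get_links' is a hypothetical callback that returns the links on a page;
# a real spider would fetch and parse the page over HTTP instead.
from collections import deque

def crawl(start_url, max_depth, get_links):
    """Visit start_url and every linked page up to max_depth hops away."""
    seen = {start_url}           # avoid visiting the same URL twice
    queue = deque([(start_url, 0)])
    visited = []
    while queue:
        url, depth = queue.popleft()
        visited.append(url)
        if depth < max_depth:    # only follow links above the depth limit
            for link in get_links(url):
                if link not in seen:
                    seen.add(link)
                    queue.append((link, depth + 1))
    return visited

# A tiny stand-in link graph instead of real pages:
graph = {'/': ['/a', '/b'], '/a': ['/c'], '/b': [], '/c': ['/d']}
order = crawl('/', 2, lambda u: graph.get(u, []))
```

With `max_depth=2`, the crawl visits `/`, `/a`, `/b`, and `/c`, but never follows `/c`'s link out to `/d`, which is three hops from the start.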
Currently, the resources it tries to find are:

images   -  any images found on the page (ie: <img src="THIS"/>)
styles   -  any external stylesheets found on the page.  CSS included via '@import' is currently only supported if within a style tag!
(ie: <link rel="stylesheet" href="THIS"/>  OR <style>@import url('THIS');</style> )
scripts  -  any external scripts found in the page (ie: <script src="THIS"> )
links    -  any urls found on the page.  'Fragments' are discarded. (ie: <a href="THIS#this-is-a-fragment"> )
emails   -  any email addresses found on the page (ie: <a href="mailto:THIS"> )
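Collecting the resource types above from a single page can be done with nothing but the standard library. The sketch below (an illustration under my own assumptions, not Spyder's actual code) uses `html.parser.HTMLParser` to pick out each category, including `@import` inside a `<style>` tag and discarding URL fragments:

```python
# Hypothetical single-page resource collector, standard library only.
from html.parser import HTMLParser
import re

class ResourceCollector(HTMLParser):
    """Collects images, stylesheets, scripts, links, and emails from HTML."""

    def __init__(self):
        super().__init__()
        self.images, self.styles, self.scripts = [], [], []
        self.links, self.emails = [], []
        self._in_style = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == 'img' and attrs.get('src'):
            self.images.append(attrs['src'])
        elif tag == 'link' and attrs.get('rel') == 'stylesheet' and attrs.get('href'):
            self.styles.append(attrs['href'])
        elif tag == 'script' and attrs.get('src'):
            self.scripts.append(attrs['src'])
        elif tag == 'a' and attrs.get('href'):
            href = attrs['href']
            if href.startswith('mailto:'):
                self.emails.append(href[len('mailto:'):])
            else:
                # Discard any '#fragment' part before recording the link.
                self.links.append(href.split('#', 1)[0])
        elif tag == 'style':
            self._in_style = True

    def handle_endtag(self, tag):
        if tag == 'style':
            self._in_style = False

    def handle_data(self, data):
        if self._in_style:
            # @import inside a <style> tag, as noted above.
            self.styles.extend(re.findall(r"@import\s+url\(['\"]?([^'\")]+)", data))

page = """<img src="a.png"/><link rel="stylesheet" href="b.css"/>
<style>@import url('c.css');</style><script src="d.js"></script>
<a href="/page#frag">x</a><a href="mailto:me@example.com">y</a>"""

collector = ResourceCollector()
collector.feed(page)
```

After `feed()`, the collector holds each resource in its own list, e.g. `collector.links` is `['/page']` with the `#frag` fragment already stripped.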

An example script for doing exactly that, '', is included.  It uses ApacheBench as an example.
Eventually I'll be experimenting with 'siege' for benchmarking & server stress testing.

NOTE: The spider can currently throw exceptions in certain cases (mainly around character encoding, but there are probably other bugs too).
Getting *working* character encoding detection is a goal, and it is sorta working... ish?  Help in this area would be appreciated!
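One common best-effort approach to the encoding problem (a hedged sketch, not Spyder's implementation): trust the HTTP `Content-Type` header first, fall back to a `<meta charset>` declaration in the page head, and finally fall back to UTF-8 with replacement so a bad charset never crashes the crawl. `detect_encoding` and `decode_page` are hypothetical names:

```python
# Best-effort charset detection: header, then <meta>, then UTF-8 fallback.
import re

def detect_encoding(content_type_header, raw_bytes):
    """Guess a page's character encoding from header and markup hints."""
    # 1. HTTP header, e.g. 'text/html; charset=iso-8859-1'
    if content_type_header:
        m = re.search(r'charset=([\w-]+)', content_type_header)
        if m:
            return m.group(1).lower()
    # 2. <meta charset="..."> (or the older http-equiv form) near the top
    #    of the page; latin-1 decodes any byte, so this scan can't fail.
    head = raw_bytes[:2048].decode('latin-1', errors='replace')
    m = re.search(r'<meta[^>]+charset=["\']?([\w-]+)', head, re.I)
    if m:
        return m.group(1).lower()
    # 3. Nothing declared: assume UTF-8.
    return 'utf-8'

def decode_page(content_type_header, raw_bytes):
    """Decode raw page bytes without ever raising on bad input."""
    enc = detect_encoding(content_type_header, raw_bytes)
    try:
        return raw_bytes.decode(enc)
    except (LookupError, UnicodeDecodeError):
        # Declared charset was wrong or unknown: degrade gracefully.
        return raw_bytes.decode('utf-8', errors='replace')
```

The `errors='replace'` fallback trades fidelity for robustness, which seems like the right trade for a spider that just needs URL lists rather than perfect text.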
Filtering the results by domain is almost working, too.
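Domain filtering can be sketched with `urllib.parse` from the standard library. This is an illustration of one reasonable policy (the start domain and its subdomains count as in-scope), not Spyder's actual filter, and `same_domain` is a hypothetical helper:

```python
# Keep only URLs whose host is base_domain or a subdomain of it.
from urllib.parse import urlsplit

def same_domain(url, base_domain):
    """True if url's hostname is base_domain or ends with '.base_domain'."""
    host = urlsplit(url).hostname or ''
    return host == base_domain or host.endswith('.' + base_domain)

urls = ['http://example.com/a', 'http://www.example.com/b',
        'http://evil.com/example.com']
filtered = [u for u in urls if same_domain(u, 'example.com')]
```

Note that comparing hostnames (not raw string matching) keeps `http://evil.com/example.com` out, since `example.com` only appears in its path.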