I built SpiderSuite to consolidate several fragmented web security workflows into one high-performance platform. Most existing tools excel at either static crawling or proxying, but often struggle with modern JavaScript-heavy SPAs or specialized environments like Onion services.
Features:
Hybrid Crawling: Five distinct engines—Standard (static), Headless (JS-execution), Bruteforce (directory discovery), Onion/TOR, and a specialized Links crawler for file formats.
Intercepting Proxy: Real-time HTTP(S) traffic manipulation with minimal overhead.
Precision Request Crafter: Manual control over headers, payloads, and authentication flows.
Automated Fuzzer: Multi-technique fuzzing with response analysis to find edge cases and vulnerabilities.
Surface Mapping: Generates interactive graphs to visualize site architecture.
Serialization: SpiderSuite supports importing from common security tools and exporting results in multiple formats: JSON, CSV, XML, and HTML.
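To give a sense of how exported results can feed into downstream tooling, here's a minimal sketch of post-processing a JSON export. Note the schema here (the `url`, `status`, and `content_type` fields) is an illustrative assumption, not SpiderSuite's actual export format:

```python
import json

# Hypothetical crawl-result export; SpiderSuite's real JSON schema may differ.
export = """
[
  {"url": "https://example.com/", "status": 200, "content_type": "text/html"},
  {"url": "https://example.com/admin", "status": 403, "content_type": "text/html"},
  {"url": "https://example.com/app.js", "status": 200, "content_type": "application/javascript"}
]
"""

results = json.loads(export)

# A typical post-processing step: pull out non-200 responses,
# which often mark interesting surface (auth walls, hidden paths).
interesting = [r["url"] for r in results if r["status"] != 200]
print(interesting)
```

The point is that a machine-readable export lets you pipe crawl data into your own triage scripts rather than eyeballing it in the UI.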
Why I Built This:
I wanted a tool that was faster than Java-based alternatives but more comprehensive than simple CLI Go/Rust scripts. SpiderSuite is designed for attack surface mapping, bug bounty hunting, and deep reconnaissance.
I'm looking for feedback on the headless crawler's efficiency and how the UI handles large-scale crawl data. I'll be here to answer any questions about the architecture or roadmap.