They don't have that much in common. As you've already seen, you have to deploy your spiders to scrapyd and then schedule crawls. scrapyd is a standalone service running on a server where you can deploy and run any project/spider you like.
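For example, once a project is deployed, scheduling a crawl is a single HTTP call to scrapyd's `schedule.json` endpoint. Here's a minimal sketch, assuming scrapyd is running on `localhost:6800` and using hypothetical project/spider names:

```python
import requests

# Schedule a crawl on a running scrapyd instance.
# "myproject" and "myspider" are hypothetical names; use your own.
response = requests.post(
    "http://localhost:6800/schedule.json",
    data={"project": "myproject", "spider": "myspider"},
)

# scrapyd replies with a job id; items and logs stay on the server.
print(response.json())  # e.g. {"status": "ok", "jobid": "..."}
```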
With ScrapyRT you choose one of your projects, `cd` to that directory, and run e.g. `scrapyrt`. You can then start crawls for the spiders in that project through a simple (and very similar to scrapyd's) REST API, and you get the crawled items back as part of the JSON response.
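A minimal sketch of such a request, assuming ScrapyRT is running on its default port 9080 and using a hypothetical spider name:

```python
import requests

# Trigger a crawl through ScrapyRT; the url parameter is required,
# and "myspider" is a hypothetical spider name.
response = requests.get(
    "http://localhost:9080/crawl.json",
    params={"spider_name": "myspider", "url": "http://example.com/"},
)

# Unlike scrapyd, the scraped items come back in the response itself.
data = response.json()
print(data["items"])
```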
It's a very nice idea and it looks fast, lean and well defined. Scrapyd, on the other hand, is more mature and more generic.
Here are some key differences:
- Scrapyd supports multiple versions of spiders and multiple projects. As far as I can see, if you want to run two different projects (or versions) with ScrapyRT, you will have to use a different port for each.
- Scrapyd provides infrastructure for keeping items on the server, while ScrapyRT sends them back to you in the response, which, for me, means that they should be on the order of a few MBs (instead of potentially GBs). Similarly, logging is handled more generically in scrapyd than in ScrapyRT.
- Scrapyd queues jobs (potentially persistently) and gives you control over the number of Scrapy processes that run in parallel (see the sketch after this list). ScrapyRT does something simpler, which as far as I can tell is to start a crawl for every request as soon as the request arrives; blocking code in one of the spiders will block the others as well.
- ScrapyRT requires a `url` argument which, as far as I can tell, overrides any `start_urls`-related logic.
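To see scrapyd's queueing in action, you can poll its `listjobs.json` endpoint; a sketch, again assuming a local scrapyd and a hypothetical project name:

```python
import requests

# Inspect scrapyd's job queues for a project ("myproject" is hypothetical).
jobs = requests.get(
    "http://localhost:6800/listjobs.json",
    params={"project": "myproject"},
).json()

# Jobs beyond the configured process limit wait in "pending"
# until a slot frees up, instead of all running at once.
for state in ("pending", "running", "finished"):
    print(state, [job.get("id") for job in jobs[state]])
```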
I would say that ScrapyRT and Scrapyd very cleverly don't overlap at this point in time. Of course, you never know what the future holds.