I am trying to build an academic paper recommender system that resembles arxiv-sanity but uses fancier algorithms. The first step is to query the arXiv API and fetch all the paper metadata for further processing.
However, the crawler I wrote often gets terminated by the arXiv server even though I set time.sleep(10), where 10 seconds is (I think) a relatively long pause between requests. Fortunately, since my crawler depends on the index in the URL, I can save the idx and rerun the program later. So here is my question: how can I automatically rerun my crawler after it has been terminated? Here is my current code:
import time
import requests

area = "cat:cs.ML+OR+cat:cs.CV"
base_url = "http://export.arxiv.org/api/query?"

for idx in range(0, 10000, 100):
    # Page through the results 100 at a time via the start index.
    query = "search_query={}&sortBy=lastUpdatedDate&start={}&max_results={}".format(area, idx, 100)
    response = requests.get(base_url + query)
    # do some processing
    time.sleep(10)
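For reference, below is a rough sketch of the save-and-resume idea I mentioned (the checkpoint file last_idx.txt, the 30-second timeout, and the 60-second backoff are just placeholders I made up). I am not sure whether catching the exception and retrying in-process like this is the right approach, or whether I should restart the whole script from outside instead.

import os
import time
import requests

area = "cat:cs.ML+OR+cat:cs.CV"
base_url = "http://export.arxiv.org/api/query?"
checkpoint = "last_idx.txt"  # hypothetical checkpoint file

# Resume from the saved index if a previous run was terminated.
start_idx = 0
if os.path.exists(checkpoint):
    with open(checkpoint) as f:
        start_idx = int(f.read().strip())

idx = start_idx
while idx < 10000:
    query = "search_query={}&sortBy=lastUpdatedDate&start={}&max_results={}".format(area, idx, 100)
    try:
        response = requests.get(base_url + query, timeout=30)
        response.raise_for_status()
    except requests.RequestException:
        # Back off, then retry the same idx instead of letting the program die.
        time.sleep(60)
        continue

    # do some processing

    # Record progress so a fresh run picks up where this one left off.
    with open(checkpoint, "w") as f:
        f.write(str(idx + 100))

    idx += 100
    time.sleep(10)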