May-08-2020, 10:09 AM
Hi, I need to scrape all the links on a website.
I want a crawler like Screaming Frog, but in Python.
This is my code:
```python
import urllib.request
from bs4 import BeautifulSoup

data = urllib.request.urlopen('https://consultarsimit.co').read().decode()
soup = BeautifulSoup(data, 'html.parser')
tags = soup('a')
for tag in tags:
    print(tag.get('href'))
```

How can I save the links in a database and query them with multi-threading?
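Something like this is what I have in mind for the database part: a minimal sketch that stores a hard-coded list of links (standing in for the scraped ones) in SQLite and queries them from a thread pool. The `links` list, the table name, and `query_link` are just placeholders I made up.

```python
import os
import sqlite3
import tempfile
from concurrent.futures import ThreadPoolExecutor

# Placeholder links; in practice these would come from the scraper above
links = ['https://example.com/a', 'https://example.com/b', 'https://example.com/c']

db_path = os.path.join(tempfile.mkdtemp(), 'links.db')

# Create the table and insert the links once, from the main thread
con = sqlite3.connect(db_path)
con.execute('CREATE TABLE IF NOT EXISTS links (url TEXT PRIMARY KEY)')
con.executemany('INSERT OR IGNORE INTO links (url) VALUES (?)',
                [(u,) for u in links])
con.commit()
con.close()

def query_link(url):
    # Each worker opens its own connection: sqlite3 connections
    # must not be shared across threads by default
    con = sqlite3.connect(db_path)
    row = con.execute('SELECT url FROM links WHERE url = ?', (url,)).fetchone()
    con.close()
    return row[0] if row else None

# Query the database concurrently with a pool of worker threads
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(query_link, links))

print(results)
```

Is opening a connection per thread the right approach, or should I use a connection pool?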
Thanks!