I have used rudimentary scrapers for years but have always found that training them doesn’t scale as well as I expect. In a scenario where one needs hundreds of scrapers all grabbing the same five or ten pieces of information (dates, names, addresses, and ID numbers), is it possible to just grab the raw HTML and train an NLP model to find the relevant data rather than hand-picking CSS selectors/XPaths for each site?
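For concreteness, here is a minimal sketch of the extraction side I have in mind, using an off-the-shelf transformers NER pipeline (the model name is just an example, and a generic model like this only knows person/org/location types, so our specific fields would need fine-tuning):

    from bs4 import BeautifulSoup
    from transformers import pipeline

    # Strip the markup so the model only sees visible text.
    html = "<div class='byline'>Posted by Jane Doe, Springfield</div>"
    text = BeautifulSoup(html, "html.parser").get_text(" ", strip=True)

    # Off-the-shelf NER model, used here purely as an illustration.
    ner = pipeline("ner", model="dslim/bert-base-NER",
                   aggregation_strategy="simple")
    for ent in ner(text):
        print(ent["entity_group"], ent["word"], round(float(ent["score"]), 3))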
It seems to me that finding these items in an HTML doc would be a trivial task for NLP, and it would let us forgo scraper maintenance whenever a site changes its layout. Assuming I already have hundreds of thousands or millions of structured records that could be used for training, would modern tools make this a more efficient way of ingesting data?
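On the training side, this is roughly the weak-labeling step I am imagining: align the field values from the existing structured records against the page text to produce span labels, then fine-tune a token-classification model on those spans. A minimal sketch (the record and field names here are hypothetical):

    import re

    def weak_label(text, record):
        # Emit (start, end, field) character spans wherever a known
        # field value appears verbatim in the page text.
        spans = []
        for field, value in record.items():
            for m in re.finditer(re.escape(str(value)), text):
                spans.append((m.start(), m.end(), field))
        return spans

    record = {"date": "2023-02-10", "name": "Jane Doe", "id_number": "A-12345"}
    text = "Posted 2023-02-10 by Jane Doe (ref A-12345)."
    print(weak_label(text, record))
    # -> [(7, 17, 'date'), (21, 29, 'name'), (35, 42, 'id_number')]

Exact-match alignment like this will miss values a site reformats (e.g. "02/10/2023" vs "2023-02-10"), so some normalization would presumably be needed before the labels are trustworthy.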
Comments URL: https://news.ycombinator.com/item?id=34755877
Points: 2
# Comments: 1