Data Harvesting

Harvest new and modified data from public websites

Website Harvesting and AI Transformations That Deliver Structured Data to Your Systems 


Organizations need to harvest and structure data and content posted and maintained on public websites. Websites are often the version of record for policy, procedure, legal, and regulatory content. Many businesses benefit from daily robotic scans of updated website content with structured XML feeds back into internal systems.

The volume and complexity of this type of information means that manual approaches are slow, error-prone, and cost-prohibitive. We provide automated website scraping configured to your business needs with customized XML feeds back to your organization.

Tools: Data Harvester, GATE, Lucene tokenizer, Java, JAPE, Perl, TensorFlow

A deeper solution beyond simple website scraping

DCL provides a truly useful solution that goes beyond web scraping. DCL has developed methods and bots to facilitate high-volume data retrieval from hundreds of websites, in a variety of source formats (HTML, RTF, DOCX, TXT, XML, etc.), in both European and Asian languages. We produce a unified data stream that is converted to XML for ingestion into derivative databases, data analytics platforms, and other downstream systems. This process of normalization and transformation of content to automate import into a customer’s business system maximizes business value. A key to successful projects is the depth and quality of up-front analysis to ensure complete and accurate results.



A major financial institution selected Data Conversion Laboratory to accurately track financial compliance requirements across hundreds of jurisdictions. 



I've got personal experience with DCL. Their Business Development department is very responsive, and their reputation is stellar. I've never heard anyone in my business have a bad word to say about them. Highly recommended.

Data Harvesting & Mining FAQ

How do you harvest data from websites?

DCL starts with a human analysis of the target websites and content by our expert engineering team. We then use tools like our in-house data harvesting software and custom scripts to scrape, harvest, restructure, and validate the collected data. Special care is taken to ensure harvesting does not overload or accidentally DDoS the target services.
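The "special care" above amounts to throttling requests and honoring each site's robots.txt rules. A minimal sketch of that idea follows; the `PoliteFetcher` class, the user-agent string, the example domain, and the robots rules are all illustrative assumptions, not DCL's actual configuration.

```python
import time
from urllib import robotparser

class PoliteFetcher:
    """Throttled fetcher that honors robots.txt rules (illustrative sketch)."""

    def __init__(self, robots_lines, user_agent="dcl-harvester", min_delay=1.0):
        self.rp = robotparser.RobotFileParser()
        self.rp.parse(robots_lines)      # in practice, fetched from the site's /robots.txt
        self.user_agent = user_agent
        self.min_delay = min_delay       # minimum seconds between requests
        self._last_request = 0.0

    def allowed(self, url):
        """True if robots.txt permits this user agent to fetch the URL."""
        return self.rp.can_fetch(self.user_agent, url)

    def wait_turn(self):
        """Sleep just long enough to keep request spacing at min_delay."""
        elapsed = time.monotonic() - self._last_request
        if elapsed < self.min_delay:
            time.sleep(self.min_delay - elapsed)
        self._last_request = time.monotonic()

# Hypothetical robots.txt for a target site
robots = [
    "User-agent: *",
    "Disallow: /private/",
]
fetcher = PoliteFetcher(robots)
print(fetcher.allowed("https://example.com/policy.html"))     # True
print(fetcher.allowed("https://example.com/private/x.html"))  # False
```

A real harvester would call `wait_turn()` before each download, so even a large crawl stays well under the rate that could strain the target server.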

What source and file formats can data be harvested from?

Data harvesting can extract data from HTML, RTF, DOCX, TXT, XML, RSS, XLSX, CSV, and practically every imaginable file format.

What types of data can be collected during data harvesting?

Data harvesting can gather text, metadata, images, videos, and other files from online sources.

Can data harvesting produce structured data in a particular format?

DCL data harvesting can output the data in whatever format is desired. The most common formats are XML, DITA, HTML, and S1000D.
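To illustrate what structured XML output can look like, here is a minimal sketch using Python's standard library; the `record_to_xml` function, the element names, and the sample record are hypothetical, not DCL's actual schema.

```python
import xml.etree.ElementTree as ET

def record_to_xml(record):
    """Wrap one harvested record in a simple XML envelope (hypothetical schema)."""
    root = ET.Element("document", attrib={"source": record["url"]})
    ET.SubElement(root, "title").text = record["title"]
    ET.SubElement(root, "body").text = record["body"]
    return ET.tostring(root, encoding="unicode")

xml_out = record_to_xml({
    "url": "https://example.com/policy",
    "title": "Policy Update",
    "body": "Revised filing rules.",
})
print(xml_out)
```

In practice the target schema (XML, DITA, S1000D, etc.) is agreed with the customer up front, and the transformation is validated against it before delivery.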

What's the difference between data mining and data harvesting?

Data mining typically refers to analyzing large datasets, often with AI or machine learning, to uncover hidden trends or statistics that traditional analysis methods may miss. Data harvesting is closely related, but focuses on collecting data from online sources so it can be analyzed or reused. Data harvesting and data mining often go hand-in-hand, with harvesting gathering the data to be mined.

How can data harvesting be used for data analytics?

Analytics are only as good as the data analyzed. DCL’s data harvesting services streamline the collection, validation, and structuring process so analytics are faster and more reliable.

Is web scraping the same as data harvesting?

Web scraping is a common term for crawling websites and downloading their contents. At DCL, we differentiate our data harvesting from simple web scraping by also incorporating machine learning and natural language processing to ensure the final output is well structured and ready for reuse. In casual conversation, terms like web scraping, web mining, data scraping, data extraction, and other names are often used interchangeably.

Can DCL harvest data in languages other than English?

Yes, we can harvest data in European and Asian languages.

How is harvested data cleaned and checked for errors?

At every stage of the data harvesting process, DCL uses a combination of human and machine validation processes to verify the quality of the collected data. Our system will flag errors so they can be quickly corrected. High-quality, standardized data is DCL's specialty.

What is data harvesting?

Data harvesting is the automated collection of information from online sources like websites or databases, usually by "scraping" the content of pages for text, images, etc. This data can then be used for other purposes like statistics, analytics, machine learning, or integration with other data sources.

DCL Data Harvester comprises

  • Filtering programs

  • Downloading handler

  • Metadata gatherer

  • File differencing programs

  • Natural Language Processing programs

  • Data and content transformation programs

  • Secure repository
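The file-differencing component above is what lets daily scans report only new and modified content rather than re-delivering everything. A rough sketch of the idea, comparing content hashes between two scans, is shown below; the `diff_scan` function and the sample page data are illustrative assumptions.

```python
import hashlib

def digest(text):
    """Content fingerprint for change detection."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def diff_scan(previous, current):
    """Classify pages as new, modified, or unchanged between two scans.

    `previous` and `current` map URL -> page text (illustrative structure).
    """
    report = {"new": [], "modified": [], "unchanged": []}
    for url, text in current.items():
        if url not in previous:
            report["new"].append(url)
        elif digest(previous[url]) != digest(text):
            report["modified"].append(url)
        else:
            report["unchanged"].append(url)
    return report

# Yesterday's scan vs. today's scan (hypothetical pages)
prev = {"a.html": "old rules", "b.html": "unchanged text"}
curr = {"a.html": "new rules", "b.html": "unchanged text", "c.html": "brand new"}
print(diff_scan(prev, curr))
# {'new': ['c.html'], 'modified': ['a.html'], 'unchanged': ['b.html']}
```

Only the "new" and "modified" buckets would flow on to the transformation and delivery stages, keeping the daily XML feed small and focused on what actually changed.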

DCL’s solution harnesses Natural Language Processing and Machine Learning technology to enable solutions powered by Artificial Intelligence. With sophisticated automated processes, DCL optimizes content: collecting information, streamlining compliance, facilitating migration to new systems and databases, maximizing reuse potential, and readying content for delivery to every output.

Mark Gross, President, DCL

DCL Data Harvester is an ideal website scraping solution for all industries that rely on regulatory and compliance management data and need to keep up to date with constantly changing website content. DCL conducts upfront human analysis of target websites and content to ensure your content and metadata are captured, structured, and complete.

Industries Served