Understanding the core concepts of our scraping platform.
Everything starts with an Account. Think of an Account as your team's workspace or a specific project environment. You can create multiple accounts (e.g., "Production", "Staging", "Client A") to keep your data and configurations separate.
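As a minimal sketch of how that separation might look in practice, the snippet below creates one Account per environment. The base URL, `/v1/accounts` endpoint, and bearer-token auth are illustrative assumptions, not the documented API:

```python
import requests

BASE_URL = "https://api.onscrape.example/v1"  # placeholder host
API_KEY = "your-api-key"                      # hypothetical auth scheme

# Create one Account per environment to keep data and configs isolated
for name in ("Production", "Staging", "Client A"):
    resp = requests.post(
        f"{BASE_URL}/accounts",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"name": name},
    )
    resp.raise_for_status()
```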
Crawlers are the infrastructure units that power your scraping operations. When you provision a Crawler, you get a dedicated instance that handles the heavy lifting: text processing and network requests.
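Provisioning might then look like the following sketch, continuing the same hypothetical REST API (the `/v1/crawlers` endpoint and its fields are assumptions):

```python
import requests

BASE_URL = "https://api.onscrape.example/v1"  # placeholder host
API_KEY = "your-api-key"                      # hypothetical auth scheme

# Provision a dedicated Crawler instance inside an Account
resp = requests.post(
    f"{BASE_URL}/crawlers",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={"account": "Production", "region": "us-east"},
)
resp.raise_for_status()
crawler_id = resp.json()["id"]  # keep this id to attach Workers later
```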
A Worker is a specific job or task configuration that runs on a Crawler. While the Crawler provides the capability, the Worker defines what to scrape.
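A Worker, then, can be pictured as a small configuration object pointed at a Crawler. The endpoint, field names, and selector syntax below are illustrative assumptions:

```python
import requests

BASE_URL = "https://api.onscrape.example/v1"  # placeholder host
API_KEY = "your-api-key"                      # hypothetical auth scheme

# A Worker couples a job definition to the Crawler that will run it
worker = {
    "crawler_id": "crw_123",                        # the provisioned Crawler
    "start_urls": ["https://example.com/products"],
    "selectors": {"title": "h1", "price": ".price"},
    "schedule": "0 * * * *",                        # hourly, cron syntax
}
resp = requests.post(
    f"{BASE_URL}/workers",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=worker,
)
resp.raise_for_status()
```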
Connectors act as the bridge between OnScrape and your own systems. They define where the extracted data should go once a Worker completes its job.
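To illustrate, a webhook-style Connector that forwards a Worker's results to your own service might look like this (again, the endpoint and fields are assumptions, not the confirmed API):

```python
import requests

BASE_URL = "https://api.onscrape.example/v1"  # placeholder host
API_KEY = "your-api-key"                      # hypothetical auth scheme

# Route a Worker's output to an external system via a webhook Connector
connector = {
    "worker_id": "wrk_456",                       # the Worker to deliver from
    "type": "webhook",
    "target": "https://your-app.example/ingest",  # your receiving endpoint
}
resp = requests.post(
    f"{BASE_URL}/connectors",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=connector,
)
resp.raise_for_status()
```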