Getting Started

Learn how to use the Scraping Browser Builder or run scrapers programmatically.

1. Sign up

Using the Scraping Browser requires a Thordata account. If you do not have one yet, please complete the registration process first. If you are already familiar with the setup, you can proceed directly to the dashboard, log in, and get started.

2. Purchase a Data Package

After logging in, navigate to the 【Scraping Browser】 > 【Pricing】 page. Select a suitable package based on your needs and complete the payment. Upon successful purchase, a sub-account will be automatically assigned to you.

3. Proxy User Setup

In the user dashboard, go to the 【Scraping】 > 【Scraping Browser】 > 【Users】 page.

  • Create a Single User: Click the 【Add User】 button. In the pop-up, set the username and password, select the account status (Enabled or Disabled), and configure the data usage limit (unlimited or a specific MB cap). Save the configuration when done.

  • Bulk Create Users: For rapid creation, use the 【Quick Add】 function. The system will automatically generate usernames and passwords.

4. Account Management

In the User Management interface, you can perform the following operations:

  • Filter by Time: Set start and end dates to retrieve users created within that timeframe.

  • Precise Search: Enter a username to locate a specific account directly.

  • Edit Information: Click the 【Edit】 button next to the target user to update their username, password, status, limit, and remarks.

  • Delete Accounts:

    • Single Delete: Click the 【Delete】 button within the edit interface.

    • Bulk Delete: Check the boxes in front of multiple users and then use the 【Quick Delete】 function at the top.

5. Start Scraping

  • Access the Scraping Interface: Navigate to 【Scraping】 > 【Scraping Browser】 > 【Playground】.

  • Select Template: Choose an example template (e.g., News, E-commerce).

  • Select User: From the dropdown menu, select a proxy user you configured beforehand.

  • Get Credentials: The system will automatically generate credentials compatible with Puppeteer/Playwright and Selenium. You can copy these directly for integration with third-party tools.

  • Execute Request:

    • The page will display sample request code (the Puppeteer/Playwright version is shown by default).

    • Click the 【Run】 button to execute the scraping task.

    • Note: Selenium is also supported, along with sample code in other programming languages.

6. Preview and Download Results

After the task execution is complete:

  • Result Preview: The right panel of the dashboard will display a visually rendered preview or the raw HTML code, depending on your selected output type.

  • Console Information:

    • Displays request and response logs with timestamps.

    • Shows detailed metadata for each search result.

If you require further assistance, please feel free to contact us at: [email protected].
