Web Scraper API Reddit Scraping Parameters
Configure the Reddit scraping parameters for Thordata's Web Scraper API, including URL, keyword, date, maximum number of posts, sort order, subreddit URL, time sort, posting-days limit, reply loading, and comment limit.
Unique identifier:
token, access token (required)
This parameter serves as the API access token to verify that the scraping request is legitimate.
Example request:
Authorization: Bearer ********************
curl -X POST "https://scraperapi.thordata.com/builder" ^
-H "Authorization: Bearer ********************" ^
-H "Content-Type: application/x-www-form-urlencoded" ^
-d "spider_name=reddit.com" ^
-d "spider_id=reddit_posts_by-url" ^
-d "spider_parameters=[{\"url\": \"https://www.reddit.com/r/battlefield2042/comments/1cmqs1d/official_update_on_the_next_battlefield_game/\"},{\"url\": \"https://reddit.com/r/datascience/comments/1cmnf0m/technical_interview_python_sql_problem_but_not/\"}]" ^
-d "spider_errors=true" ^
-d "file_name={{TasksID}}"
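The curl example above can be sketched in Python. The endpoint and form-field names (spider_name, spider_id, spider_parameters, spider_errors, file_name) are taken directly from that example; `build_reddit_task` itself is a hypothetical helper, not part of the API:

```python
import json
from urllib.parse import urlencode

# Hypothetical helper mirroring the curl example above: it builds the
# Authorization header and the form fields for the /builder endpoint.
def build_reddit_task(token, spider_id, parameters, file_name):
    headers = {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    data = {
        "spider_name": "reddit.com",
        "spider_id": spider_id,
        # spider_parameters is a JSON array serialized to a string;
        # json.dumps produces the quoting that curl escapes by hand.
        "spider_parameters": json.dumps(parameters),
        "spider_errors": "true",
        "file_name": file_name,
    }
    return headers, data

headers, data = build_reddit_task(
    token="********************",
    spider_id="reddit_posts_by-url",
    parameters=[{"url": "https://www.reddit.com/r/battlefield2042/comments/1cmqs1d/official_update_on_the_next_battlefield_game/"}],
    file_name="{{TasksID}}",
)
body = urlencode(data)  # the form-encoded request body
```

The body could then be POSTed to https://scraperapi.thordata.com/builder with any HTTP client (for example `urllib.request` or `requests`).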
1. Product: Scrape Reddit post information
Reddit - Scrape post information by URL
spider_id, scraper tool (required)
Defines the scraper tool to use.
Example request:
spider_id=reddit_posts_by-url
curl -X POST "https://scraperapi.thordata.com/builder" ^
-H "Authorization: Bearer Token-ID" ^
-H "Content-Type: application/x-www-form-urlencoded" ^
-d "spider_name=reddit.com" ^
-d "spider_id=reddit_posts_by-url" ^
-d "spider_parameters=[{\"url\": \"https://www.reddit.com/r/battlefield2042/comments/1cmqs1d/official_update_on_the_next_battlefield_game/\"},{\"url\": \"https://reddit.com/r/datascience/comments/1cmnf0m/technical_interview_python_sql_problem_but_not/\"}]" ^
-d "spider_errors=true" ^
-d "file_name={{TasksID}}"
url, URL (required)
This parameter specifies the URL of the Reddit post to scrape.
Example request:
"url": "https://www.reddit.com/r/battlefield2042/comments/1cmqs1d/official_update_on_the_next_battlefield_game/"
curl -X POST "https://scraperapi.thordata.com/builder" ^
-H "Authorization: Bearer Token-ID" ^
-H "Content-Type: application/x-www-form-urlencoded" ^
-d "spider_name=reddit.com" ^
-d "spider_id=reddit_posts_by-url" ^
-d "spider_parameters=[{\"url\": \"https://www.reddit.com/r/battlefield2042/comments/1cmqs1d/official_update_on_the_next_battlefield_game/\"},{\"url\": \"https://reddit.com/r/datascience/comments/1cmnf0m/technical_interview_python_sql_problem_but_not/\"}]" ^
-d "spider_errors=true" ^
-d "file_name={{TasksID}}"
Reddit - Scrape post information by keyword
spider_id, scraper tool (required)
Defines the scraper tool to use.
Example request:
spider_id=reddit_posts_by-keywords
curl -X POST "https://scraperapi.thordata.com/builder" ^
-H "Authorization: Bearer Token-ID" ^
-H "Content-Type: application/x-www-form-urlencoded" ^
-d "spider_name=reddit.com" ^
-d "spider_id=reddit_posts_by-keywords" ^
-d "spider_parameters=[{\"keyword\": \"datascience\",\"date\": \"All time\",\"num_of_posts\": \"10\",\"sort_by\": \"Hot\"}]" ^
-d "spider_errors=true" ^
-d "file_name={{TasksID}}"
keyword, keyword (required)
This parameter specifies the search keyword used to scrape Reddit posts.
Example request:
"keyword": "datascience"
curl -X POST "https://scraperapi.thordata.com/builder" ^
-H "Authorization: Bearer Token-ID" ^
-H "Content-Type: application/x-www-form-urlencoded" ^
-d "spider_name=reddit.com" ^
-d "spider_id=reddit_posts_by-keywords" ^
-d "spider_parameters=[{\"keyword\": \"datascience\",\"date\": \"All time\",\"num_of_posts\": \"10\",\"sort_by\": \"Hot\"}]" ^
-d "spider_errors=true" ^
-d "file_name={{TasksID}}"
date, date (optional)
This parameter limits the time range of the posts to scrape. Accepted values: All time, Past year, Past month, Past week, Today, Past hour.
Example request:
"date": "All time"
curl -X POST "https://scraperapi.thordata.com/builder" ^
-H "Authorization: Bearer Token-ID" ^
-H "Content-Type: application/x-www-form-urlencoded" ^
-d "spider_name=reddit.com" ^
-d "spider_id=reddit_posts_by-keywords" ^
-d "spider_parameters=[{\"keyword\": \"datascience\",\"date\": \"All time\",\"num_of_posts\": \"10\",\"sort_by\": \"Hot\"}]" ^
-d "spider_errors=true" ^
-d "file_name={{TasksID}}"
num_of_posts, maximum number of posts (optional)
This parameter specifies the maximum number of posts to scrape.
Example request:
"num_of_posts": "10"
curl -X POST "https://scraperapi.thordata.com/builder" ^
-H "Authorization: Bearer Token-ID" ^
-H "Content-Type: application/x-www-form-urlencoded" ^
-d "spider_name=reddit.com" ^
-d "spider_id=reddit_posts_by-keywords" ^
-d "spider_parameters=[{\"keyword\": \"datascience\",\"date\": \"All time\",\"num_of_posts\": \"10\",\"sort_by\": \"Hot\"}]" ^
-d "spider_errors=true" ^
-d "file_name={{TasksID}}"
sort_by, sort order (optional)
This parameter specifies how scraped posts are sorted. Accepted values: Relevance, Hot, Top, New, Comment count.
Example request:
"sort_by": "Hot"
curl -X POST "https://scraperapi.thordata.com/builder" ^
-H "Authorization: Bearer Token-ID" ^
-H "Content-Type: application/x-www-form-urlencoded" ^
-d "spider_name=reddit.com" ^
-d "spider_id=reddit_posts_by-keywords" ^
-d "spider_parameters=[{\"keyword\": \"datascience\",\"date\": \"All time\",\"num_of_posts\": \"10\",\"sort_by\": \"Hot\"}]" ^
-d "spider_errors=true" ^
-d "file_name={{TasksID}}"
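Assuming the accepted values listed above are exhaustive, a small helper can validate keyword-search parameters before a task is submitted. The helper is a hypothetical convenience, not part of the API; note that the examples pass num_of_posts as a string:

```python
# Accepted values copied from the date and sort_by descriptions above.
ALLOWED_DATE = {"All time", "Past year", "Past month", "Past week", "Today", "Past hour"}
ALLOWED_SORT = {"Relevance", "Hot", "Top", "New", "Comment count"}

# Hypothetical convenience helper, not part of the API itself.
def keyword_params(keyword, date="All time", num_of_posts=10, sort_by="Hot"):
    if date not in ALLOWED_DATE:
        raise ValueError(f"invalid date: {date!r}")
    if sort_by not in ALLOWED_SORT:
        raise ValueError(f"invalid sort_by: {sort_by!r}")
    # the API examples pass num_of_posts as a string, so convert here
    return {"keyword": keyword, "date": date,
            "num_of_posts": str(num_of_posts), "sort_by": sort_by}

params = keyword_params("datascience")
```

The returned dict is one entry of the spider_parameters array shown in the curl examples above.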
Reddit - Scrape post information by subreddit URL
spider_id, scraper tool (required)
Defines the scraper tool to use.
Example request:
spider_id=reddit_posts_by-subredditurl
curl -X POST "https://scraperapi.thordata.com/builder" ^
-H "Authorization: Bearer Token-ID" ^
-H "Content-Type: application/x-www-form-urlencoded" ^
-d "spider_name=reddit.com" ^
-d "spider_id=reddit_posts_by-subredditurl" ^
-d "spider_parameters=[{\"url\": \"https://www.reddit.com/r/battlefield2042\",\"sort_by\": \"Hot\",\"num_of_posts\": \"10\",\"sort_by_time\": \"All Time\"}]" ^
-d "spider_errors=true" ^
-d "file_name={{TasksID}}"
url, subreddit URL (required)
This parameter specifies the subreddit URL from which to scrape Reddit posts.
Example request:
"url": "https://www.reddit.com/r/battlefield2042"
curl -X POST "https://scraperapi.thordata.com/builder" ^
-H "Authorization: Bearer Token-ID" ^
-H "Content-Type: application/x-www-form-urlencoded" ^
-d "spider_name=reddit.com" ^
-d "spider_id=reddit_posts_by-subredditurl" ^
-d "spider_parameters=[{\"url\": \"https://www.reddit.com/r/battlefield2042\",\"sort_by\": \"Hot\",\"num_of_posts\": \"10\",\"sort_by_time\": \"All Time\"}]" ^
-d "spider_errors=true" ^
-d "file_name={{TasksID}}"
sort_by, sort order (optional)
This parameter specifies how scraped posts are sorted. Accepted values: Hot, Top, New, Rising.
Example request:
"sort_by": "Hot"
curl -X POST "https://scraperapi.thordata.com/builder" ^
-H "Authorization: Bearer Token-ID" ^
-H "Content-Type: application/x-www-form-urlencoded" ^
-d "spider_name=reddit.com" ^
-d "spider_id=reddit_posts_by-subredditurl" ^
-d "spider_parameters=[{\"url\": \"https://www.reddit.com/r/battlefield2042\",\"sort_by\": \"Hot\",\"num_of_posts\": \"10\",\"sort_by_time\": \"All Time\"}]" ^
-d "spider_errors=true" ^
-d "file_name={{TasksID}}"
num_of_posts, maximum number of posts (optional)
This parameter specifies the maximum number of posts to scrape.
Example request:
"num_of_posts": "10"
curl -X POST "https://scraperapi.thordata.com/builder" ^
-H "Authorization: Bearer Token-ID" ^
-H "Content-Type: application/x-www-form-urlencoded" ^
-d "spider_name=reddit.com" ^
-d "spider_id=reddit_posts_by-subredditurl" ^
-d "spider_parameters=[{\"url\": \"https://www.reddit.com/r/battlefield2042\",\"sort_by\": \"Hot\",\"num_of_posts\": \"10\",\"sort_by_time\": \"All Time\"}]" ^
-d "spider_errors=true" ^
-d "file_name={{TasksID}}"
sort_by_time, time sort (optional)
This parameter specifies the time range used when sorting scraped posts. Accepted values: Now, Today, This Week, This Month, This Year, All Time.
Example request:
"sort_by_time": "All Time"
curl -X POST "https://scraperapi.thordata.com/builder" ^
-H "Authorization: Bearer Token-ID" ^
-H "Content-Type: application/x-www-form-urlencoded" ^
-d "spider_name=reddit.com" ^
-d "spider_id=reddit_posts_by-subredditurl" ^
-d "spider_parameters=[{\"url\": \"https://www.reddit.com/r/battlefield2042\",\"sort_by\": \"Hot\",\"num_of_posts\": \"10\",\"sort_by_time\": \"All Time\"}]" ^
-d "spider_errors=true" ^
-d "file_name={{TasksID}}"
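The subreddit parameters above can be assembled the same way. The helper below is illustrative only, with accepted values copied from the sort_by and sort_by_time descriptions:

```python
import json

# Accepted values copied from the parameter descriptions above.
SORT_BY = {"Hot", "Top", "New", "Rising"}
SORT_BY_TIME = {"Now", "Today", "This Week", "This Month", "This Year", "All Time"}

# Hypothetical helper that builds one entry of the spider_parameters array.
def subreddit_params(url, sort_by="Hot", num_of_posts=10, sort_by_time="All Time"):
    if sort_by not in SORT_BY:
        raise ValueError(f"invalid sort_by: {sort_by!r}")
    if sort_by_time not in SORT_BY_TIME:
        raise ValueError(f"invalid sort_by_time: {sort_by_time!r}")
    return {"url": url, "sort_by": sort_by,
            "num_of_posts": str(num_of_posts), "sort_by_time": sort_by_time}

# spider_parameters is submitted as a JSON array serialized to a string
spider_parameters = json.dumps([subreddit_params("https://www.reddit.com/r/battlefield2042")])
```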
2. Product: Scrape Reddit post comments
Reddit - Scrape post comments by URL
spider_id, scraper tool (required)
Defines the scraper tool to use.
Example request:
spider_id=reddit_comment_by-url
curl -X POST "https://scraperapi.thordata.com/builder" ^
-H "Authorization: Bearer Token-ID" ^
-H "Content-Type: application/x-www-form-urlencoded" ^
-d "spider_name=reddit.com" ^
-d "spider_id=reddit_comment_by-url" ^
-d "spider_parameters=[{\"url\": \"https://www.reddit.com/r/datascience/comments/1cmnf0m/comment/l32204i/?utm_source=share%26utm_medium=web3x%26utm_name=web3xcss%26utm_term=1%26utm_content=share_button\",\"days_back\": \"10\",\"load_all_replies\": \"true\",\"comment_limit\": \"5\"}]" ^
-d "spider_errors=true" ^
-d "file_name={{TasksID}}"
url, URL (required)
This parameter specifies the URL of the Reddit comment or post to scrape.
Example request:
"url": "https://www.reddit.com/r/datascience/comments/1cmnf0m/comment/l32204i/?utm_source=share%26utm_medium=web3x%26utm_name=web3xcss%26utm_term=1%26utm_content=share_button"
curl -X POST "https://scraperapi.thordata.com/builder" ^
-H "Authorization: Bearer Token-ID" ^
-H "Content-Type: application/x-www-form-urlencoded" ^
-d "spider_name=reddit.com" ^
-d "spider_id=reddit_comment_by-url" ^
-d "spider_parameters=[{\"url\": \"https://www.reddit.com/r/datascience/comments/1cmnf0m/comment/l32204i/?utm_source=share%26utm_medium=web3x%26utm_name=web3xcss%26utm_term=1%26utm_content=share_button\",\"days_back\": \"10\",\"load_all_replies\": \"true\",\"comment_limit\": \"5\"}]" ^
-d "spider_errors=true" ^
-d "file_name={{TasksID}}"
days_back, posting-days limit (optional)
This parameter scrapes all comments posted within the number of days you specify.
Example request:
"days_back": "10"
curl -X POST "https://scraperapi.thordata.com/builder" ^
-H "Authorization: Bearer Token-ID" ^
-H "Content-Type: application/x-www-form-urlencoded" ^
-d "spider_name=reddit.com" ^
-d "spider_id=reddit_comment_by-url" ^
-d "spider_parameters=[{\"url\": \"https://www.reddit.com/r/datascience/comments/1cmnf0m/comment/l32204i/?utm_source=share%26utm_medium=web3x%26utm_name=web3xcss%26utm_term=1%26utm_content=share_button\",\"days_back\": \"10\",\"load_all_replies\": \"true\",\"comment_limit\": \"5\"}]" ^
-d "spider_errors=true" ^
-d "file_name={{TasksID}}"
load_all_replies, load replies (optional)
This parameter specifies whether to scrape the replies to comments; when set to true, all comments and all of their replies are returned. Accepted values: true, false.
Example request:
"load_all_replies": "true"
curl -X POST "https://scraperapi.thordata.com/builder" ^
-H "Authorization: Bearer Token-ID" ^
-H "Content-Type: application/x-www-form-urlencoded" ^
-d "spider_name=reddit.com" ^
-d "spider_id=reddit_comment_by-url" ^
-d "spider_parameters=[{\"url\": \"https://www.reddit.com/r/datascience/comments/1cmnf0m/comment/l32204i/?utm_source=share%26utm_medium=web3x%26utm_name=web3xcss%26utm_term=1%26utm_content=share_button\",\"days_back\": \"10\",\"load_all_replies\": \"true\",\"comment_limit\": \"5\"}]" ^
-d "spider_errors=true" ^
-d "file_name={{TasksID}}"
comment_limit, comment limit (optional)
This parameter limits the number of comments returned.
Example request:
"comment_limit": "5"
curl -X POST "https://scraperapi.thordata.com/builder" ^
-H "Authorization: Bearer Token-ID" ^
-H "Content-Type: application/x-www-form-urlencoded" ^
-d "spider_name=reddit.com" ^
-d "spider_id=reddit_comment_by-url" ^
-d "spider_parameters=[{\"url\": \"https://www.reddit.com/r/datascience/comments/1cmnf0m/comment/l32204i/?utm_source=share%26utm_medium=web3x%26utm_name=web3xcss%26utm_term=1%26utm_content=share_button\",\"days_back\": \"10\",\"load_all_replies\": \"true\",\"comment_limit\": \"5\"}]" ^
-d "spider_errors=true" ^
-d "file_name={{TasksID}}"
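For the comment scraper, the optional parameters above need only be included when used. This hypothetical sketch mirrors the examples, which pass numbers and booleans as strings:

```python
# Hypothetical helper for the reddit_comment_by-url parameters above;
# optional fields are omitted entirely when not supplied.
def comment_params(url, days_back=None, load_all_replies=None, comment_limit=None):
    p = {"url": url}
    if days_back is not None:
        p["days_back"] = str(days_back)  # the examples pass "10", not 10
    if load_all_replies is not None:
        p["load_all_replies"] = "true" if load_all_replies else "false"
    if comment_limit is not None:
        p["comment_limit"] = str(comment_limit)
    return p

params = comment_params(
    "https://www.reddit.com/r/datascience/comments/1cmnf0m/comment/l32204i/",
    days_back=10, load_all_replies=True, comment_limit=5,
)
```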
If you need further assistance, please contact us by email at [email protected].