Scraping All Products and Their Information for a Taobao Keyword

Posted in Python, Web Scraping, Programming Languages 2018-11-24 · Views: 40

Source code: https://github.com/Ckend/taobao_scrapy

Language: Python 3

Scraped platform: mobile site

PS: create a result folder in the current directory before running.
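If you prefer, the folder can also be created from Python itself; this small snippet is a convenience addition, not part of the original repo:

```python
import os

# Create the result folder if it does not exist yet;
# exist_ok=True makes this safe to run repeatedly.
os.makedirs("result", exist_ok=True)
```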

The source contains two Python files. get_items.py fetches basic product information for a keyword, including product name, shop, sales volume, comment count, image, promotions, shipping origin, original price, and so on. Output is a .csv file.

The other file, get_review_cloud.py, takes a userId and an itemId (both obtained from get_items.py) and fetches a product's review word cloud, also output as a .csv file.

The source is short, a dozen-odd lines, and the logic is:

  1. Request the API endpoint (f = urllib.request.urlopen(url))

  2. Read the response, decode it as UTF-8, and parse it as JSON

  3. Write the result to a .csv file.

Below is get_items.py; see GitHub for the rest. If you find it useful, please give it a Star~
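The three steps above can be sketched generically like this (fetch_json and save_rows are illustrative names, not functions from the repo):

```python
import csv
import json
import urllib.request

def fetch_json(url):
    # Steps 1 and 2: open the endpoint, decode the response as UTF-8, parse JSON
    with urllib.request.urlopen(url) as f:
        return json.loads(f.read().decode('utf-8'))

def save_rows(rows, filename, fieldnames):
    # Step 3: append the rows to a .csv file, dropping keys not in fieldnames
    with open(filename, 'a', newline='', encoding='utf-8') as outf:
        writer = csv.DictWriter(outf, fieldnames=fieldnames, extrasaction='ignore')
        writer.writerows(rows)
```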

get_items.py:


import urllib.request
import urllib.parse
import json
import csv

def write_csv(data, filename):
    with open(filename, 'a', newline='', encoding='utf-8') as outf:
        dw = csv.DictWriter(outf, fieldnames=["title","sold","commentCount","item_id","shipping","fastPostFee","userId","nick","userType","isB2c","location","sellerLoc","pic_path","type","tItemType","zkType","zkGroup","priceColor","priceWithRate","auctionURL","isP4p","itemNumId","originalPrice","freight","act","coinLimit","priceWap","price","category","auctionType","url","img2","wwimUrl","previewUrl","favoriteUrl","isMobileEcard","iswebp","name","iconList","icons","area"])
        if outf.tell() == 0:
            # Only write the header row when the file is still empty
            dw.writeheader()
        for row in data:
            dw.writerow(row)

def get_items(searchWords, page):
    # Mobile search API; the keyword must be percent-encoded before insertion
    url = "https://s.m.taobao.com/search?event_submit_do_new_search_auction=1&_input_charset=utf-8&topSearch=1&atype=b&searchfrom=1&action=home%3Aredirect_app_action&from=1&q="+ str(urllib.parse.quote(searchWords)) +"&sst=1&n=20&buying=buyitnow&m=api4h5&token4h5=&abtest=20&wlsort=20&page="+str(page)
    f = urllib.request.urlopen(url)
    result = json.loads(f.read().decode('utf-8'))
    return result['listItem']

def get_items_by_keywords(searchWords, page):
    for i in range(1, page):
        try:
            result = get_items(searchWords, i)
            write_csv(result, "./result/"+str(searchWords)+"_result.csv")
        except (KeyError, IndexError):
            # No more result pages for this keyword
            print("All products for this keyword have been scraped")
            break

get_items_by_keywords("空气清新器", 100)
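One detail worth noting in get_items: Chinese keywords cannot go into the URL as-is; urllib.parse.quote percent-encodes their UTF-8 bytes first:

```python
from urllib.parse import quote

# A Chinese keyword becomes percent-escaped UTF-8 bytes
print(quote("空气清新器"))
# %E7%A9%BA%E6%B0%94%E6%B8%85%E6%96%B0%E5%99%A8
```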

All code is for learning and reference only.
