
    Wang Si's Blog: Scraping the Tencent Video Danmu (Bullet Comments) for "扫黑风暴" with Python

    Posted: 2021-09-11 16:50

    The key step is finding the URL that serves the danmu.

    Wait for the pre-roll ad to finish, then press F12 to open the browser's developer tools.
    Press Ctrl+R to refresh the page and look for the danmu request in the Network panel.
    (Screenshot: the danmu request captured in the Network panel)
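
    Before running the full script, it helps to sanity-check the URL copied from the Network panel. The snippet below is a minimal check using the same target_id and vid that appear in the full script further down; the exact endpoint and parameters are whatever Tencent served at the time and may change.

    import json
    import requests

    # URL copied from the captured request (same target_id/vid as the full script below)
    url = ('https://mfm.video.qq.com/danmu?otype=json&timestamp=15'
           '&target_id=5938032297%26vid%3Dx0034hxucmw&count=80')
    headers = {'User-Agent': 'Mozilla/5.0'}

    resp = requests.get(url, headers=headers)
    data = json.loads(resp.text, strict=False)   # strict=False tolerates stray control characters
    comments = data.get('comments', [])
    print(resp.status_code, len(comments))       # expect 200 and up to 80 comments
    if comments:
        print(comments[0]['content'])            # text of the first danmu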

    The full code is as follows:

    # -*- coding: utf-8 -*-
    # @Time : 2020/9/26 4:35 PM
    # @Author : 公众号 菜J学Python (WeChat official account)
    # @File : tengxun_danmu-1.py
    
    import requests
    import json
    import time
    import pandas as pd
    
    df = pd.DataFrame()
    # The timestamp parameter pages through the danmu from 15 to 45 in steps of 30;
    # raise the upper bound to fetch danmu for more of the episode
    for page in range(15, 46, 30):
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/63.0.3239.132 Safari/537.36'}
        url = 'https://mfm.video.qq.com/danmu?otype=json&timestamp={}&target_id=5938032297%26vid%3Dx0034hxucmw&count=80'.format(page)
        print("正在提取第" + str(page) + "页")
        html = requests.get(url,headers = headers)
        print(html)
        bs = json.loads(html.text.encode('utf-8').decode('utf-8'),strict = False)  #strict参数解决部分内容json格式解析报错
        time.sleep(1)
        print(bs)
        # Loop over the comments and pull out the target fields
        for i in bs['comments']:
            content = i['content']             # danmu text
            upcount = i['upcount']             # number of likes
            user_degree = i['uservip_degree']  # VIP level
            timepoint = i['timepoint']         # time posted
            comment_id = i['commentid']        # danmu id
            cache = pd.DataFrame({'danmu': [content], 'vip_level': [user_degree],
                                  'post_time': [timepoint], 'likes': [upcount], 'danmu_id': [comment_id]})
            df = pd.concat([df, cache])
    
    df.to_csv('tengxun_danmu.csv', encoding='utf-8-sig')  # set the encoding to avoid garbled Chinese text
    print(df.shape)
    
    
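    As a quick check of the result, the CSV can be read back with pandas. This is just an illustrative read-back; the file name and column names match the script above.

    import pandas as pd

    danmu = pd.read_csv('tengxun_danmu.csv', index_col=0)       # index_col=0 skips the index column written by to_csv
    print(danmu.shape)                                          # number of rows and columns scraped
    print(danmu.sort_values('likes', ascending=False).head())   # five most-liked danmu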