
    Logging in to a Website with the Scrapy Framework: Worked Examples

    Category: Code | Published: 2020-02-06 12:10

    This article demonstrates three ways to log in to a website with the Scrapy framework. The examples are shared for your reference; the details follow.

    1. Logging in with cookies

    import scrapy
    class LoginSpider(scrapy.Spider):
      name = 'login'
      allowed_domains = ['xxx.com']
      start_urls = ['https://www.xxx.com/xx/']
      # Session cookies copied from a logged-in browser, as a {name: value} dict
      cookies = {}
      def start_requests(self):
        # Attach the saved cookies to every initial request
        for url in self.start_urls:
          yield scrapy.Request(url, cookies=self.cookies, callback=self.parse)
      def parse(self, response):
        # Dump the page so the login can be verified by inspecting the HTML
        with open("01login.html", "wb") as f:
          f.write(response.body)
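The `cookies` dict above is typically filled by copying the `Cookie` header from a logged-in browser session (via the browser's developer tools). As a minimal sketch with an invented header string, the standard library can turn that raw header into the dict Scrapy expects:

```python
from http.cookies import SimpleCookie

def cookie_header_to_dict(header):
    """Parse a raw 'Cookie:' header string into a {name: value} dict."""
    parsed = SimpleCookie()
    parsed.load(header)
    return {name: morsel.value for name, morsel in parsed.items()}

# Hypothetical header copied from the browser's developer tools
raw = "sessionid=abc123; csrftoken=xyz789"
cookies = cookie_header_to_dict(raw)
print(cookies)  # {'sessionid': 'abc123', 'csrftoken': 'xyz789'}
```

Note that cookie-based login only works while the copied session is still valid on the server side; once it expires, the spider silently receives logged-out pages.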
    
    

    2. Logging in with a POST request, manually parsing the page for the login parameters

    import scrapy
    class LoginSpider(scrapy.Spider):
      name = 'login_code'
      allowed_domains = ['xxx.com']
      # 1. The login page
      start_urls = ['https://www.xxx.com/login/']
      def parse(self, response):
        # 2. Log in programmatically
        login_url = 'https://www.xxx.com/login'
        formdata = {
          "username": "xxx",
          "pwd": "xxx",
          # Hidden form fields the server expects to be echoed back
          "formhash": response.xpath("//input[@id='formhash']/@value").extract_first(),
          "backurl": response.xpath("//input[@id='backurl']/@value").extract_first()
        }
        # 3. Send the login POST request
        yield scrapy.FormRequest(login_url, formdata=formdata, callback=self.parse_login)
      def parse_login(self, response):
        # 4. Visit the target page with the logged-in session
        member_url = "https://www.xxx.com/member"
        yield scrapy.Request(member_url, callback=self.parse_member)
      def parse_member(self, response):
        with open("02login.html", 'wb') as f:
          f.write(response.body)
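The `formhash` and `backurl` values are hidden `<input>` fields the server embeds in the login page; the POST is rejected unless they are echoed back. Inside the spider, `response.xpath(...)` extracts them. As a standalone stdlib sketch of the same idea (the HTML fragment below is invented):

```python
from html.parser import HTMLParser

class HiddenInputParser(HTMLParser):
    """Collect the value of every <input> tag that carries an id attribute."""
    def __init__(self):
        super().__init__()
        self.values = {}

    def handle_starttag(self, tag, attrs):
        if tag == "input":
            attrs = dict(attrs)
            if "id" in attrs:
                self.values[attrs["id"]] = attrs.get("value")

# Invented login-page fragment containing the two hidden fields
html = ('<form><input type="hidden" id="formhash" value="a1b2c3">'
        '<input type="hidden" id="backurl" value="/member"></form>')
parser = HiddenInputParser()
parser.feed(html)
print(parser.values)  # {'formhash': 'a1b2c3', 'backurl': '/member'}
```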
    
    

    3. Logging in with a POST request, letting Scrapy extract the login parameters automatically

    import scrapy
    class LoginSpider(scrapy.Spider):
      name = 'login_code2'
      allowed_domains = ['xxx.com']
      # 1. The login page
      start_urls = ['https://www.xxx.com/login/']
      def parse(self, response):
        # 2. Log in programmatically; from_response reads the target URL
        #    from the form's action attribute, so no login_url is needed
        formdata = {
          "username": "xxx",
          "pwd": "xxx"
        }
        # 3. Send the login POST request; hidden fields such as formhash
        #    are picked up from the form automatically
        yield scrapy.FormRequest.from_response(
          response,
          formxpath="//*[@id='login_pc']",
          formdata=formdata,
          method="POST",  # override the form's default method if it is GET
          callback=self.parse_login
        )
      def parse_login(self, response):
        # 4. Visit the target page with the logged-in session
        member_url = "https://www.xxx.com/member"
        yield scrapy.Request(member_url, callback=self.parse_member)
      def parse_member(self, response):
        with open("03login.html", 'wb') as f:
          f.write(response.body)
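`FormRequest.from_response` locates the form matched by `formxpath`, collects its pre-filled fields (including hidden ones such as `formhash`), and overlays the `formdata` you pass in. Ignoring details like submit buttons, the merge behaves roughly like a dict update; a sketch with invented field values:

```python
# Fields scraped from the <form id="login_pc"> element (invented values)
form_defaults = {"formhash": "a1b2c3", "backurl": "/member", "username": ""}

# Fields supplied by the spider via the formdata argument
formdata = {"username": "xxx", "pwd": "xxx"}

# from_response keeps the hidden defaults and overrides overlapping keys
payload = {**form_defaults, **formdata}
print(payload)
# {'formhash': 'a1b2c3', 'backurl': '/member', 'username': 'xxx', 'pwd': 'xxx'}
```

This is why the third spider needs no manual XPath extraction: the hidden fields ride along for free, and only the credentials have to be spelled out.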
    
    


    Hopefully this article is helpful to readers writing Python programs based on the Scrapy framework.