
Automated web testing with Python


https://opensourceforu.com/2017/10/splinter-easy-way-test-web-applications/

https://splinter.readthedocs.io/en/latest/index.html

https://sites.google.com/a/chromium.org/chromedriver/home
Download the chromedriver.exe matching your Chrome version and add it to the PATH environment variable.
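To confirm the driver is actually reachable once it is on PATH, the stdlib `shutil.which` performs the same lookup the shell does. A minimal sketch (assuming the executable is named `chromedriver`, as above):

```python
import shutil

# shutil.which searches the directories listed in PATH, like the shell does;
# it returns the full path of the executable, or None if it is not found
driver_path = shutil.which('chromedriver')
print(driver_path or 'chromedriver is NOT on PATH')
```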


Anaconda3 (64-bit)
Open the Anaconda Prompt and run
jupyter nbconvert --to html your_notebook_name.ipynb
to convert an .ipynb file to HTML:
(base) C:\Users\DZL>jupyter nbconvert --to html Untitled40.ipynb
[NbConvertApp] Converting notebook Untitled40.ipynb to html
[NbConvertApp] Writing 361412 bytes to Untitled40.html
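The same conversion can be scripted for a whole folder by shelling out to the command shown above. A hedged sketch (the helper names and folder layout are my own, not from the post):

```python
from pathlib import Path
import subprocess

def nbconvert_cmd(notebook, to='html'):
    # build the `jupyter nbconvert --to html <notebook>` command line
    return ['jupyter', 'nbconvert', '--to', to, str(notebook)]

def convert_all(folder):
    # convert every .ipynb file in the folder to HTML
    for nb in sorted(Path(folder).glob('*.ipynb')):
        subprocess.run(nbconvert_cmd(nb), check=True)
```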


When I open Jupyter Notebook (formerly IPython), the working directory defaults to C:\Users\USERNAME.
To open the current working directory in Explorer (note the space before the dot):
(base) C:\Users\DZL>start .

Alternatively, run pwd or cd in a notebook cell to print the current directory.
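The same information is available from Python itself via the os module, which also works outside a notebook:

```python
import os

print(os.getcwd())           # the directory pwd would report in a notebook cell
# os.chdir(r'D:\projects')   # change it for the current session (example path)
```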

jupyter notebook --help-all


txt = ",,,,,rrttgg.....banana....rrr"
x = txt.strip(",.grt")
print(x)
# banana
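Note that strip treats its argument as a set of characters and only trims the two ends, stopping at the first character not in the set; lstrip and rstrip do one side each:

```python
txt = ",,,,,rrttgg.....banana....rrr"
print(txt.lstrip(",.grt"))   # banana....rrr  (left end only)
print(txt.rstrip(",.grt"))   # ,,,,,rrttgg.....banana  (right end only)
```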


from selenium import webdriver
from bs4 import BeautifulSoup
import requests
import time
import os

# point Selenium at the chromedriver binary
driver = webdriver.Chrome(r'D:\chromedriver')
driver.get('https://www.kuaikanmanhua.com/web/topic/1338/')
# driver.page_source

soup = BeautifulSoup(driver.page_source, 'lxml')
for i in soup.find_all("div", class_="title fl"):
    if i.a['href'] != 'javascript:void(0);':
        name = i.text.strip()
        url = 'https://www.kuaikanmanhua.com/' + i.a['href']
        os.mkdir(r'C:\Users\DANG\Pictures\海贼王\{}'.format(name))
        driver.get(url)
        soup = BeautifulSoup(driver.page_source, 'lxml')
        imglist = soup.select_one(".imgList")
        # enumerate yields (index, element) pairs, giving each image a running number
        for id_num, img in enumerate(imglist.find_all('img'), start=1):
            imgurl = img.get('data-src')
            headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.75 Safari/537.36'}
            res = requests.get(imgurl, headers=headers)
            with open(r'C:\Users\DANG\Pictures\海贼王\{}\{}.jpg'.format(name, id_num), 'wb') as f:
                f.write(res.content)
            time.sleep(3)
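One fragile spot above is os.mkdir: it raises FileExistsError when the script is re-run and FileNotFoundError when a parent folder is missing. A small helper using pathlib avoids both (ensure_dir is a hypothetical name, not from the post):

```python
from pathlib import Path

def ensure_dir(base, name):
    # create base/name, creating missing parents and ignoring an existing dir
    target = Path(base) / name
    target.mkdir(parents=True, exist_ok=True)
    return target
```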

# open an image
from PIL import Image
Image.open("test.jpg")

import re

for i in soup.find_all(class_="ListPicM"):
    # replace characters not allowed in Windows file names with a space
    img_name = re.sub(r'[?:\\/\n]', ' ', i.text)
    img_url = i.img.get('src')
    # img_name = img_url.split('/')[-2]
    headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/86.0.4240.111 Safari/537.36'}
    try:
        res = requests.get(img_url, headers=headers, timeout=500)
    except requests.exceptions.RequestException:
        print('Image {} could not be downloaded (timed out); URL: {}'.format(img_name, img_url))
        continue
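The re.sub pattern used for file names above can be checked in isolation. A sketch of the sanitizer as a standalone helper (safe_name is a hypothetical name), extended with the remaining characters Windows forbids:

```python
import re

def safe_name(text):
    # replace characters not allowed in Windows file names with a space
    return re.sub(r'[?:*"<>|\\/\n]', ' ', text).strip()

print(safe_name('Vol. 1: Romance/Dawn?'))
```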


from splinter import Browser

# if you don't pass a driver name to Browser(), Firefox is used by default
with Browser('chrome') as browser:
    # visit URL
    url = "http://www.google.com"
    browser.visit(url)
    browser.fill('q', 'splinter - python acceptance testing for web applications')
    # find and click the 'search' button
    button = browser.find_by_name('btnG')
    # interact with elements
    button.click()
    if browser.is_text_present('splinter.readthedocs.io'):
        print("Yes, the official website was found!")
    else:
        print("No, it wasn't found... We need to improve our SEO techniques")


# imports the Browser library for Splinter
from splinter import Browser

# takes the email address from the user as input to log in to his/her Facebook account
user_email = input("enter users email address:")

# takes the password from the user as input to log in to his/her Facebook account
user_pass = input("enter users password:")

# loads the Chrome browser
browser = Browser('chrome')

# stores the URL for Facebook in the url variable
url = "https://www.facebook.com/"

# navigates to the Facebook website and loads it in the Chrome browser
browser.visit(url)

# checks whether the Facebook web page loaded, else prints an error message
if browser.is_text_present('www.facebook.com'):

    # fills the user's email ID and password into the email and password fields of the Facebook login section
    # browser.fill uses the name attribute of the Email and Password input boxes, i.e. 'email' and 'pass' respectively
    browser.fill('email', user_email)
    browser.fill('pass', user_pass)

    # selects the login button by its id on the Facebook page and clicks it to log in with the given details
    button = browser.find_by_id('u_0_d')
    button.click()

else:
    print("Facebook web application NOT FOUND")
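One thing worth changing in the Facebook script: input() echoes the password on screen. The stdlib getpass module hides it; a sketch with injectable prompts (read_credentials is a hypothetical helper, written this way so it can be exercised without a terminal):

```python
from getpass import getpass

def read_credentials(ask=input, ask_secret=getpass):
    # ask_secret does not echo what the user types
    email = ask("enter users email address:")
    password = ask_secret("enter users password:")
    return email, password
```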
