Author: LP9527 ()
Board: Python
Title: Re: [Question] Post office web crawler CSV output problem
Time: Sun Aug 1 14:16:41 2021
I've finished my nap too. No idea why that junk post wasn't self-deleted.
The following is for the original poster's reference.
---
import bs4
import urllib.request as req
import pandas as pd
from pathlib import Path
url = "
https://www.post.gov.tw/post/internet/I_location/index_all.html"
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)
AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
}
temp_file = 'tmp.html'
if not Path(temp_file).is_file():
    request = req.Request(url, headers=headers)
    with req.urlopen(request) as response:
        data = response.read().decode("utf-8")
    # write the cache with an explicit encoding so this also works on Windows,
    # where the default locale encoding may not be UTF-8
    with open(temp_file, mode='w', encoding='utf-8') as f:
        f.write(data)
else:
    with open(temp_file, encoding='utf-8') as f:
        data = f.read()
root = bs4.BeautifulSoup(data, "html.parser")
rows = root.find(id="table").find_all('tr')[1:] # 表头不取
output = []
for row in rows:
    addr = row.find('td', class_="detail").string
    row_output = {
        '地址': addr,                                        # address
        '县市': addr[:3],                                    # county/city: first 3 chars of the address
        '局名': row.find('a', class_="rwd-close").string,    # branch name
    }
    output.append(row_output)
# Have a look at what output looks like at this point.
# Best to get comfortable with basic operations on Python's list and dict
# types before reaching for a DataFrame.
output_df = pd.DataFrame(output)
csv_file = "test.csv"
output_df.to_csv(csv_file, encoding="utf-8-sig")
※ Quoting 《Leo33012 (羊圈里的狼)》:
: I'll give it a try and let you know,
: but I need a nap first,
: so you'll have to wait a bit.
: Keep an eye on the replies to this post in the meantime.
--
※ Posted from: PTT (ptt.cc), from: 223.140.234.255 (Taiwan)
※ 文章网址: https://webptt.com/cn.aspx?n=bbs/Python/M.1627798606.A.2F3.html
# If you want to convert the whole table in one go, you can do it like this:
root = bs4.BeautifulSoup(data, "html.parser")
table = root.find(id="table").prettify()
output_df = pd.read_html(table, flavor='html5lib')[0]  # pip install html5lib
csv_file = "test.csv"
output_df.to_csv(csv_file, encoding="utf-8-sig")
※ Edited: LP9527 (223.140.234.255 Taiwan), 08/01/2021 14:31:38
1F:→ jerrycurry: Thanks a lot 08/01 23:13
2F:推 jerrycurry: Adding an upvote 08/01 23:16