In the previous post we saw how to use Scrapy's Selector object to parse a web page and extract target data. In this post we use the Selector object's xpath() and css() methods to pull exchange-rate data from the Bank of Taiwan posted-rate site.
Earlier notes in this series:
8. Parsing the Bank of Taiwan posted exchange-rate page with Selector objects:
The Bank of Taiwan posted-rate site URL is as follows:
For an analysis of the page structure, see these two posts:
The target data are the 1st column (currency) and the 3rd column (bank selling rate) of the posted-rate table:
Searching in the Elements tab of Chrome DevTools (press F12) shows that the target data sit inside a table's tbody element, with one td per column. The currency td in the first column contains a two-level div structure; the second level holds three div elements, and the one we want is the second div at that level. The rate data sit in the td element of the third column:
In the Elements tab, click the table element, right-click, choose "Copy/Copy element", then paste the copied HTML into a text editor and save it as e.g. test3.htm for later use:
Next, in the Elements tab click the currency field (the second div inside the first td's inner div), right-click and choose "Copy/Copy XPath":
Pasted into Notepad it looks like this:
//*[@id="ie11andabove"]/div/table/tbody/tr[1]/td[1]/div/div[2]
Don't use this XPath string as-is; it needs editing. The indices in it pin the expression to one specific cell, so they must be removed. Also, since this page has only one table element and therefore only one tbody, the string can be shortened by starting from //tbody (which matches the tbody anywhere in the document). The second div can be expressed with the position()=2 predicate, so the simplified XPath for the currency field is:
xpath='//tbody/tr/td/div/div[position()=2]/text()'
Then there is the bank's cash selling rate in the third column, whose XPath is:
xpath='//tbody/tr/td[position()=3]/text()'
Note that both XPath strings end with a call to the text() function, which returns the text content of each Selector in the resulting SelectorList (i.e. the content of each matched element) as a list.
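The key idea above — dropping the row index so the expression matches every row, not just one — can be sketched with the standard library's xml.etree.ElementTree on a minimal, made-up table (the real page's markup is richer than this, and ElementTree supports only a subset of XPath):

```python
import xml.etree.ElementTree as ET

# A made-up miniature of the rate table: each row has a currency cell
# (two-level div, second inner div holds the name) and a rate cell.
html = """
<table><tbody>
  <tr><td><div><div>flag</div><div>USD</div><div>x</div></div></td><td>buy</td><td>32.9</td></tr>
  <tr><td><div><div>flag</div><div>JPY</div><div>x</div></div></td><td>buy</td><td>0.21</td></tr>
</tbody></table>
"""
root = ET.fromstring(html)

# Without an index on tr, the path matches every row, not just the first.
names = [d.text for d in root.findall('.//tr/td[1]/div/div[2]')]
rates = [td.text for td in root.findall('.//tr/td[3]')]
print(names, rates)
```

ElementTree's XPath subset writes numeric indices ([2]) rather than position()=2, but the matching behavior illustrated here is the same as in the full XPath used with Scrapy.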
Now we can use the test3.htm file saved above to check whether these two XPath strings extract the target data correctly:
>>> from scrapy.selector import Selector
>>> with open('test3.htm', 'r', encoding='utf-8') as f:
        text=f.read()
selector=Selector(text=text)
xpath='//tbody/tr/td/div/div[position()=2]/text()'   # XPath for the currency field
currency=selector.xpath(xpath).getall()
print(currency)
currency=[c.strip() for c in currency]   # strip surrounding spaces and newlines
print(currency)
xpath='//tbody/tr/td[position()=3]/text()'   # XPath for the rate field
rate=selector.xpath(xpath).getall()
print(rate)
['\n 美金 (USD)\n ', '\n 港幣 (HKD)\n ', '\n 英鎊 (GBP)\n ', '\n 澳幣 (AUD)\n ', '\n 加拿大幣 (CAD)\n ', '\n 新加坡幣 (SGD)\n ', '\n 瑞士法郎 (CHF)\n ', '\n 日圓 (JPY)\n ', '\n 南非幣 (ZAR)\n ', '\n 瑞典幣 (SEK)\n ', '\n 紐元 (NZD)\n ', '\n 泰幣 (THB)\n ', '\n 菲國比索 (PHP)\n ', '\n 印尼幣 (IDR)\n ', '\n 歐元 (EUR)\n ', '\n 韓元 (KRW)\n ', '\n 越南盾 (VND)\n ', '\n 馬來幣 (MYR)\n ', '\n 人民幣 (CNY)\n ']
['美金 (USD)', '港幣 (HKD)', '英鎊 (GBP)', '澳幣 (AUD)', '加拿大幣 (CAD)', '新加坡幣 (SGD)', '瑞士法郎 (CHF)', '日圓 (JPY)', '南非幣 (ZAR)', '瑞典幣 (SEK)', '紐元 (NZD)', '泰幣 (THB)', '菲國比索 (PHP)', '印尼幣 (IDR)', '歐元 (EUR)', '韓元 (KRW)', '越南盾 (VND)', '馬來幣 (MYR)', '人民幣 (CNY)']
['32.895', '4.229', '43.24', '22.35', '24.27', '24.61', '36.8', '0.2093', '-', '3.21', '20.13', '0.9657', '0.6246', '0.00238', '36.09', '0.02573', '0.00145', '7.477', '4.542']
This confirms the XPath strings extract the target data correctly. Note that because the currency field in the original page also contains a flag image and the text wraps to a new line, the text() call returns strings padded with newline characters ('\n'); the list comprehension above uses strip() to remove the surrounding spaces and newlines.
With the currency and rate lists in hand, we can pair them into a dict:
>>> with open('test3.htm', 'r', encoding='utf-8') as f:
        text=f.read()
selector=Selector(text=text)
xpath='//tbody/tr/td/div/div[position()=2]/text()'   # XPath for the currency field
currency=selector.xpath(xpath).getall()
print(currency)
currency=[c.strip() for c in currency]   # strip surrounding spaces and newlines
print(currency)
xpath='//tbody/tr/td[position()=3]/text()'   # XPath for the rate field
rate=selector.xpath(xpath).getall()
print(rate)
result={c: r for c, r in zip(currency, rate)}   # pair currencies with rates via zip()
print(result)
['美金 (USD)', '港幣 (HKD)', '英鎊 (GBP)', '澳幣 (AUD)', '加拿大幣 (CAD)', '新加坡幣 (SGD)', '瑞士法郎 (CHF)', '日圓 (JPY)', '南非幣 (ZAR)', '瑞典幣 (SEK)', '紐元 (NZD)', '泰幣 (THB)', '菲國比索 (PHP)', '印尼幣 (IDR)', '歐元 (EUR)', '韓元 (KRW)', '越南盾 (VND)', '馬來幣 (MYR)', '人民幣 (CNY)']
['32.895', '4.229', '43.24', '22.35', '24.27', '24.61', '36.8', '0.2093', '-', '3.21', '20.13', '0.9657', '0.6246', '0.00238', '36.09', '0.02573', '0.00145', '7.477', '4.542']
{'美金 (USD)': '32.895', '港幣 (HKD)': '4.229', '英鎊 (GBP)': '43.24', '澳幣 (AUD)': '22.35', '加拿大幣 (CAD)': '24.27', '新加坡幣 (SGD)': '24.61', '瑞士法郎 (CHF)': '36.8', '日圓 (JPY)': '0.2093', '南非幣 (ZAR)': '-', '瑞典幣 (SEK)': '3.21', '紐元 (NZD)': '20.13', '泰幣 (THB)': '0.9657', '菲國比索 (PHP)': '0.6246', '印尼幣 (IDR)': '0.00238', '歐元 (EUR)': '36.09', '韓元 (KRW)': '0.02573', '越南盾 (VND)': '0.00145', '馬來幣 (MYR)': '7.477', '人民幣 (CNY)': '4.542'}
Here the dict comprehension uses zip() to pair the currency list with the rate list, producing a dict keyed by currency with the rate as the value.
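As a side note, zip() pairs items positionally and stops at the shorter list, and the bank lists '-' when no cash rate is posted (e.g. ZAR above). A small sketch (illustrative values taken from the output above, not a live fetch) that pairs the lists and also converts the rates to float, mapping '-' to None:

```python
# Illustrative subset of the scraped lists; '-' marks a currency
# with no posted cash selling rate.
currency = ['美金 (USD)', '南非幣 (ZAR)', '日圓 (JPY)']
rate = ['32.895', '-', '0.2093']

# zip() pairs items positionally and stops at the shorter list;
# convert numeric strings to float, keep None where no rate is posted.
result = {c: (float(r) if r != '-' else None) for c, r in zip(currency, rate)}
print(result)
```

Converting to float at this stage is optional; the spider below keeps the rates as strings, as scraped.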
With that verified, we can write the spider. Create a project named project2 with scrapy startproject project2:
D:\python\test>cd scrapy_projects
D:\python\test\scrapy_projects>scrapy startproject project2
New Scrapy project 'project2', using template directory 'C:\Users\tony1\AppData\Local\Programs\Thonny\Lib\site-packages\scrapy\templates\project', created in:
D:\python\test\scrapy_projects\project2
You can start your first spider with:
cd project2
scrapy genspider example example.com
Switch to the top-level project directory and create the spider with the following command:
scrapy genspider <spider module name> <target domain>
The first argument is the file name (without .py) of the spider to generate under the spiders directory from the built-in basic template; it is up to you. The second argument is the domain of the target site, which for the Bank of Taiwan rate site is rate.bot.com.tw. For example:
D:\python\test\scrapy_projects>cd project2
D:\python\test\scrapy_projects\project2>scrapy genspider bot_rate_spider rate.bot.com.tw
Created spider 'bot_rate_spider' using template 'basic' in module:
project2.spiders.bot_rate_spider
The genspider command automatically creates the spider bot_rate_spider.py under the spiders directory. The generated file is only a template, however, and must be rewritten before it can do anything useful. Of course the file can also be created by hand; in the previous post we created bot_rate_spider.py in the spiders folder ourselves.
Use the tree command to display the project's directory tree:
D:\python\test\scrapy_projects\project2>tree project2 /f
Folder PATH listing for volume New Volume
Volume serial number is 1258-16B8
D:\PYTHON\TEST\SCRAPY_PROJECTS\PROJECT2\PROJECT2
│ items.py
│ middlewares.py
│ pipelines.py
│ settings.py
│ __init__.py
│
├─spiders
│ │ bot_rate_spider.py
│ │ __init__.py
│ │
│ └─__pycache__
│ __init__.cpython-310.pyc
│
└─__pycache__
settings.cpython-310.pyc
__init__.cpython-310.pyc
Then open bot_rate_spider.py; its default content is as follows:
import scrapy

class BotRateSpiderSpider(scrapy.Spider):
    name = "bot_rate_spider"
    allowed_domains = ["rate.bot.com.tw"]
    start_urls = ["https://rate.bot.com.tw"]

    def parse(self, response):
        pass
Three things need to be edited by hand:
- Class name:
  can be changed to RateSpider, the same as in the previous post.
- name attribute:
  a custom spider name, e.g. 'bot_rate_spider'; this name is used when running the crawl command, for example:
  scrapy crawl bot_rate_spider
- start_urls attribute:
  must be changed to the URL of the target page we want to crawl.
The allowed_domains attribute restricts which domains may be crawled, but it is optional.
The edited result:
import scrapy

class RateSpider(scrapy.Spider):
    name='bot_rate_spider'
    allowed_domains=['rate.bot.com.tw']
    start_urls=['https://rate.bot.com.tw/xrt?Lang=zh-TW']

    def parse(self, response):
        pass
Next, use the test results above to rewrite the parse() method as follows:
# bot_rate_spider.py
import scrapy

class RateSpider(scrapy.Spider):
    name='bot_rate_spider'
    allowed_domains=['rate.bot.com.tw']
    start_urls=['https://rate.bot.com.tw/xrt?Lang=zh-TW']

    def parse(self, response):
        xpath='//tbody/tr/td/div/div[position()=2]/text()'
        currency=response.xpath(xpath).getall()
        currency=[c.strip() for c in currency]
        xpath='//tbody/tr/td[position()=3]/text()'
        rate=response.xpath(xpath).getall()
        result={c: r for c, r in zip(currency, rate)}
        yield result
Unlike the earlier tests, which called a Selector object directly, here we call the xpath() or css() method of the second parameter passed to parse(): response, a Response object representing the HTTP response. Scrapy gives the Response object the same selection interface as Selector, so it likewise offers xpath() and css() methods.
The run result is as follows:
scrapy crawl bot_rate_spider
D:\python\test\scrapy_projects\project2>scrapy crawl bot_rate_spider
2024-07-18 15:03:12 [scrapy.utils.log] INFO: Scrapy 2.11.2 started (bot: project2)
2024-07-18 15:03:12 [scrapy.utils.log] INFO: Versions: lxml 4.9.3.0, libxml2 2.10.3, cssselect 1.2.0, parsel 1.9.1, w3lib 2.2.1, Twisted 24.3.0, Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)], pyOpenSSL 24.1.0 (OpenSSL 3.2.2 4 Jun 2024), cryptography 42.0.8, Platform Windows-10-10.0.22631-SP0
2024-07-18 15:03:12 [scrapy.addons] INFO: Enabled addons:
[]
2024-07-18 15:03:12 [asyncio] DEBUG: Using selector: SelectSelector
2024-07-18 15:03:12 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2024-07-18 15:03:12 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2024-07-18 15:03:12 [scrapy.extensions.telnet] INFO: Telnet Password: f5e3ef96e7107731
2024-07-18 15:03:12 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2024-07-18 15:03:12 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'project2',
'FEED_EXPORT_ENCODING': 'utf-8',
'NEWSPIDER_MODULE': 'project2.spiders',
'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
'ROBOTSTXT_OBEY': True,
'SPIDER_MODULES': ['project2.spiders'],
'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2024-07-18 15:03:12 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.offsite.OffsiteMiddleware',
'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2024-07-18 15:03:12 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2024-07-18 15:03:12 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2024-07-18 15:03:12 [scrapy.core.engine] INFO: Spider opened
2024-07-18 15:03:12 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2024-07-18 15:03:12 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2024-07-18 15:03:12 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://rate.bot.com.tw/robots.txt> (referer: None)
2024-07-18 15:03:12 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://rate.bot.com.tw/xrt?Lang=zh-TW> (referer: None)
2024-07-18 15:03:12 [scrapy.core.scraper] DEBUG: Scraped from <200 https://rate.bot.com.tw/xrt?Lang=zh-TW>
{'美金 (USD)': '32.865', '港幣 (HKD)': '4.223', '英鎊 (GBP)': '43.27', '澳幣 (AUD)': '22.35', '加拿大幣 (CAD)': '24.25', '新加坡幣 (SGD)': '24.66', '瑞士法郎 (CHF)': '37.26', '日圓 (JPY)': '0.2124', '南非幣 (ZAR)': '-', '瑞典幣 (SEK)': '3.22', '紐元 (NZD)': '20.16', '泰幣 (THB)': '0.969', '菲國比索 (PHP)': '0.626', '印尼幣 (IDR)': '0.00238', '歐元 (EUR)': '36.15', '韓元 (KRW)': '0.02577', '越南盾 (VND)': '0.00147', '馬來幣 (MYR)': '7.486', '人民幣 (CNY)': '4.55'}
2024-07-18 15:03:12 [scrapy.core.engine] INFO: Closing spider (finished)
2024-07-18 15:03:12 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 745,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 137689,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'elapsed_time_seconds': 0.525864,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2024, 7, 18, 7, 3, 12, 844747, tzinfo=datetime.timezone.utc),
'item_scraped_count': 1,
'log_count/DEBUG': 6,
'log_count/INFO': 10,
'response_received_count': 2,
'robotstxt/request_count': 1,
'robotstxt/response_count': 1,
'robotstxt/response_status_count/200': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2024, 7, 18, 7, 3, 12, 318883, tzinfo=datetime.timezone.utc)}
2024-07-18 15:03:12 [scrapy.core.engine] INFO: Spider closed (finished)
The target data were captured. Use the -o option to save the scraped data to a data.json file:
scrapy crawl bot_rate_spider -o data.json
D:\python\test\scrapy_projects\project2>scrapy crawl bot_rate_spider -o data.json
2024-07-18 15:26:23 [scrapy.utils.log] INFO: Scrapy 2.11.2 started (bot: project2)
2024-07-18 15:26:23 [scrapy.utils.log] INFO: Versions: lxml 4.9.3.0, libxml2 2.10.3, cssselect 1.2.0, parsel 1.9.1, w3lib 2.2.1, Twisted 24.3.0, Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)], pyOpenSSL 24.1.0 (OpenSSL 3.2.2 4 Jun 2024), cryptography 42.0.8, Platform Windows-10-10.0.22631-SP0
2024-07-18 15:26:23 [scrapy.addons] INFO: Enabled addons:
[]
2024-07-18 15:26:23 [asyncio] DEBUG: Using selector: SelectSelector
2024-07-18 15:26:23 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2024-07-18 15:26:23 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2024-07-18 15:26:23 [scrapy.extensions.telnet] INFO: Telnet Password: 74fb77af1ba91f2e
2024-07-18 15:26:23 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.logstats.LogStats']
2024-07-18 15:26:23 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'project2',
'FEED_EXPORT_ENCODING': 'utf-8',
'NEWSPIDER_MODULE': 'project2.spiders',
'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
'ROBOTSTXT_OBEY': True,
'SPIDER_MODULES': ['project2.spiders'],
'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2024-07-18 15:26:23 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.offsite.OffsiteMiddleware',
'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2024-07-18 15:26:23 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2024-07-18 15:26:23 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2024-07-18 15:26:23 [scrapy.core.engine] INFO: Spider opened
2024-07-18 15:26:23 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2024-07-18 15:26:23 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2024-07-18 15:26:23 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://rate.bot.com.tw/robots.txt> (referer: None)
2024-07-18 15:26:24 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://rate.bot.com.tw/xrt?Lang=zh-TW> (referer: None)
2024-07-18 15:26:24 [scrapy.core.scraper] DEBUG: Scraped from <200 https://rate.bot.com.tw/xrt?Lang=zh-TW>
{'美金 (USD)': '32.88', '港幣 (HKD)': '4.225', '英鎊 (GBP)': '43.31', '澳幣 (AUD)': '22.36', '加拿大幣 (CAD)': '24.26', '新加坡幣 (SGD)': '24.67', '瑞士法郎 (CHF)': '37.3', '日圓 (JPY)': '0.2124', '南非幣 (ZAR)': '-', '瑞典幣 (SEK)': '3.22', '紐元 (NZD)': '20.17', '泰幣 (THB)': '0.9697', '菲國比索 (PHP)': '0.6264', '印尼幣 (IDR)': '0.00238', '歐元 (EUR)': '36.17', '韓元 (KRW)': '0.02577', '越南盾 (VND)': '0.00147', '馬來幣 (MYR)': '7.491', '人民幣 (CNY)': '4.553'}
2024-07-18 15:26:24 [scrapy.core.engine] INFO: Closing spider (finished)
2024-07-18 15:26:24 [scrapy.extensions.feedexport] INFO: Stored json feed (1 items) in: data.json
2024-07-18 15:26:24 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 745,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 137683,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'elapsed_time_seconds': 0.535213,
'feedexport/success_count/FileFeedStorage': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2024, 7, 18, 7, 26, 24, 160255, tzinfo=datetime.timezone.utc),
'item_scraped_count': 1,
'log_count/DEBUG': 6,
'log_count/INFO': 11,
'response_received_count': 2,
'robotstxt/request_count': 1,
'robotstxt/response_count': 1,
'robotstxt/response_status_count/200': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2024, 7, 18, 7, 26, 23, 625042, tzinfo=datetime.timezone.utc)}
2024-07-18 15:26:24 [scrapy.core.engine] INFO: Spider closed (finished)
The data.json file is written to the top-level project directory project2; opening it shows the following content:
[
{"美金 (USD)": "32.88", "港幣 (HKD)": "4.225", "英鎊 (GBP)": "43.31", "澳幣 (AUD)": "22.36", "加拿大幣 (CAD)": "24.26", "新加坡幣 (SGD)": "24.67", "瑞士法郎 (CHF)": "37.3", "日圓 (JPY)": "0.2124", "南非幣 (ZAR)": "-", "瑞典幣 (SEK)": "3.22", "紐元 (NZD)": "20.17", "泰幣 (THB)": "0.9697", "菲國比索 (PHP)": "0.6264", "印尼幣 (IDR)": "0.00238", "歐元 (EUR)": "36.17", "韓元 (KRW)": "0.02577", "越南盾 (VND)": "0.00147", "馬來幣 (MYR)": "7.491", "人民幣 (CNY)": "4.553"}
]
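Since -o exports a JSON array with one object per yielded item, the file can be read back with the standard json module. A minimal sketch, using a shortened inline sample instead of the real file:

```python
import json

# Shortened stand-in for the exported data.json content
# (a JSON array with one dict per item the spider yielded).
sample = '[{"美金 (USD)": "32.88", "日圓 (JPY)": "0.2124"}]'

items = json.loads(sample)   # with the real file: json.load(open('data.json', encoding='utf-8'))
rates = items[0]             # the single item our spider yields
print(rates['美金 (USD)'])
```

Note that running the spider with -o again appends another array to the same file (producing invalid JSON); delete the file first, or use -O to overwrite it.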
A zip archive of this XPath version of the project2 project can be downloaded from GitHub:
We can also locate and parse the page with CSS selectors instead. In the Elements tab, click the currency field (the second div inside the first td's inner div), right-click and choose "Copy/Copy selector":
This copies the following CSS selector:
#ie11andabove > div > table > tbody > tr:nth-child(1) > td.currency.phone-small-font > div > div.visible-phone.print_hide
It does not need to be this long: since the page has only one tbody element, the selector can start from tbody, and the nth-child(1) after tr, which pins it to the first row, can be dropped. With ::text appended, this gives the CSS selector string for the first column:
css='tbody > tr > td:first-child > div > div.visible-phone.print_hide::text'
Doing the same for the rate field (the third td) copies the following CSS selector:
#ie11andabove > div > table > tbody > tr:nth-child(1) > td:nth-child(3)
Again it does not need to be this long: start from tbody, drop the nth-child(1) after tr that pins it to the first row, and append ::text, giving the CSS selector string for the third column:
css='tbody > tr > td:nth-child(3)::text'
Verify against the test page test3.htm that both selectors work correctly:
>>> with open('test3.htm', 'r', encoding='utf-8') as f:
        text=f.read()
selector=Selector(text=text)
css='tbody > tr > td:first-child > div > div.visible-phone.print_hide::text'
currency=selector.css(css).getall()
currency=[c.strip() for c in currency]
print(currency)
css='tbody > tr > td:nth-child(3)::text'
rate=selector.css(css).getall()
print(rate)
['美金 (USD)', '港幣 (HKD)', '英鎊 (GBP)', '澳幣 (AUD)', '加拿大幣 (CAD)', '新加坡幣 (SGD)', '瑞士法郎 (CHF)', '日圓 (JPY)', '南非幣 (ZAR)', '瑞典幣 (SEK)', '紐元 (NZD)', '泰幣 (THB)', '菲國比索 (PHP)', '印尼幣 (IDR)', '歐元 (EUR)', '韓元 (KRW)', '越南盾 (VND)', '馬來幣 (MYR)', '人民幣 (CNY)']
['32.895', '4.229', '43.24', '22.35', '24.27', '24.61', '36.8', '0.2093', '-', '3.21', '20.13', '0.9657', '0.6246', '0.00238', '36.09', '0.02573', '0.00145', '7.477', '4.542']
Next, replace the XPath expressions with the CSS selectors and modify parse() as follows:
# bot_rate_spider.py
import scrapy

class RateSpider(scrapy.Spider):
    name='bot_rate_spider'
    allowed_domains=['rate.bot.com.tw']
    start_urls=['https://rate.bot.com.tw/xrt?Lang=zh-TW']

    def parse(self, response):
        css='tbody > tr > td:first-child > div > ' +\
            'div.visible-phone.print_hide::text'
        currency=response.css(css).getall()
        currency=[c.strip() for c in currency]
        css='tbody > tr > td:nth-child(3)::text'
        rate=response.css(css).getall()
        result={c: r for c, r in zip(currency, rate)}
        yield result
Re-running the spider gives the same result as the XPath version:
scrapy crawl bot_rate_spider
D:\python\test\scrapy_projects\project2>scrapy crawl bot_rate_spider
2024-07-18 17:29:53 [scrapy.utils.log] INFO: Scrapy 2.11.2 started (bot: project2)
2024-07-18 17:29:53 [scrapy.utils.log] INFO: Versions: lxml 4.9.3.0, libxml2 2.10.3, cssselect 1.2.0, parsel 1.9.1, w3lib 2.2.1, Twisted 24.3.0, Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)], pyOpenSSL 24.1.0 (OpenSSL 3.2.2 4 Jun 2024), cryptography 42.0.8, Platform Windows-10-10.0.22631-SP0
2024-07-18 17:29:53 [scrapy.addons] INFO: Enabled addons:
[]
2024-07-18 17:29:53 [asyncio] DEBUG: Using selector: SelectSelector
2024-07-18 17:29:53 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2024-07-18 17:29:53 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2024-07-18 17:29:53 [scrapy.extensions.telnet] INFO: Telnet Password: a17254ea89aa610b
2024-07-18 17:29:53 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2024-07-18 17:29:53 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'project2',
'FEED_EXPORT_ENCODING': 'utf-8',
'NEWSPIDER_MODULE': 'project2.spiders',
'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
'ROBOTSTXT_OBEY': True,
'SPIDER_MODULES': ['project2.spiders'],
'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2024-07-18 17:29:53 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.offsite.OffsiteMiddleware',
'scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2024-07-18 17:29:53 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2024-07-18 17:29:53 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2024-07-18 17:29:53 [scrapy.core.engine] INFO: Spider opened
2024-07-18 17:29:53 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2024-07-18 17:29:53 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2024-07-18 17:29:54 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://rate.bot.com.tw/robots.txt> (referer: None)
2024-07-18 17:29:54 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://rate.bot.com.tw/xrt?Lang=zh-TW> (referer: None)
2024-07-18 17:29:54 [scrapy.core.scraper] DEBUG: Scraped from <200 https://rate.bot.com.tw/xrt?Lang=zh-TW>
{'美金 (USD)': '32.875', '港幣 (HKD)': '4.225', '英鎊 (GBP)': '43.28', '澳幣 (AUD)': '22.35', '加拿大幣 (CAD)': '24.27', '新加坡幣 (SGD)': '24.66', '瑞士法郎 (CHF)': '37.27', '日圓 (JPY)': '0.2122', '南非幣 (ZAR)': '-', '瑞典幣 (SEK)': '3.22', '紐元 (NZD)': '20.17', '泰幣 (THB)': '0.9683', '菲國比索 (PHP)': '0.6255', '印尼幣 (IDR)': '0.00238', '歐元 (EUR)': '36.17', '韓元 (KRW)': '0.02574', '越南盾 (VND)': '0.00145', '馬來幣 (MYR)': '7.485', '人民幣 (CNY)': '4.549'}
2024-07-18 17:29:54 [scrapy.core.engine] INFO: Closing spider (finished)
2024-07-18 17:29:54 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 745,
'downloader/request_count': 2,
'downloader/request_method_count/GET': 2,
'downloader/response_bytes': 137752,
'downloader/response_count': 2,
'downloader/response_status_count/200': 2,
'elapsed_time_seconds': 0.556616,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2024, 7, 18, 9, 29, 54, 357538, tzinfo=datetime.timezone.utc),
'item_scraped_count': 1,
'log_count/DEBUG': 6,
'log_count/INFO': 10,
'response_received_count': 2,
'robotstxt/request_count': 1,
'robotstxt/response_count': 1,
'robotstxt/response_status_count/200': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2024, 7, 18, 9, 29, 53, 800922, tzinfo=datetime.timezone.utc)}
2024-07-18 17:29:54 [scrapy.core.engine] INFO: Spider closed (finished)
A zip archive of this CSS version of the project2 project can be downloaded from GitHub: