I'm looking for a public list of volunteering services in Europe: I don't need full addresses, just names and websites. I'm thinking of data in XML or CSV, using these fields: name, country, and a few additional fields per record would be nice. Btw: European Voluntary Service is a great choice for young people.
I found a great page that is very comprehensive; I want to collect data on the European volunteering services hosted on the EU site:
请参阅:https://youth.europa.eu/go-abroad/volunteering/opportunities_en
@hedgehog showed me the right approach and how to find the right selectors in this post: BeautifulSoup iterate over 10k pages & fetch data, parse: European Volunteering-Services: a tiny scraper that collects opportunities from the EU site
# Extracting relevant data
title = soup.h1.get_text(', ',strip=True)
location = soup.select_one('p:has(i.fa-location-arrow)').get_text(', ',strip=True)
start_date,end_date = (e.get_text(strip=True)for e in soup.select('span.extra strong')[-2:])
But there are hundreds of volunteering opportunities there, stored on pages like these:
https://youth.europa.eu/solidarity/placement/39020_en
https://youth.europa.eu/solidarity/placement/38993_en
https://youth.europa.eu/solidarity/placement/38973_en
https://youth.europa.eu/solidarity/placement/38972_en
https://youth.europa.eu/solidarity/placement/38850_en
https://youth.europa.eu/solidarity/placement/38633_en
idea:
I think it would be great to collect the data, i.e. use a scraper based on BS4 and requests, parse the data, and then print it as a dataframe.
Well, I think we can iterate over all the URLs:
placement/39020_en
placement/38993_en
placement/38973_en
placement/38850_en
idea: I think we could count from 0 to 100000 to fetch all the results stored under placement. But this idea isn't backed by code yet; in other words, at the moment I don't know how to iterate over such a large range:
At the moment I think this is a basic approach to start with:
import requests
from bs4 import BeautifulSoup
import pandas as pd

# Function to generate placement URLs based on a range of IDs
def generate_urls(start_id, end_id):
    base_url = "https://youth.europa.eu/solidarity/placement/"
    urls = [base_url + str(placement_id) + "_en" for placement_id in range(start_id, end_id + 1)]
    return urls

# Function to scrape data from a single URL
def scrape_data(url):
    response = requests.get(url)
    if response.status_code == 200:
        soup = BeautifulSoup(response.content, 'html.parser')
        title = soup.h1.get_text(', ', strip=True)
        location = soup.select_one('p:has(i.fa-location-arrow)').get_text(', ', strip=True)
        start_date, end_date = (e.get_text(strip=True) for e in soup.select('span.extra strong')[-2:])
        website_tag = soup.find("a", class_="btn__link--website")
        website = website_tag.get("href") if website_tag else None
        return {
            "Title": title,
            "Location": location,
            "Start Date": start_date,
            "End Date": end_date,
            "Website": website,
            "URL": url
        }
    else:
        print(f"Failed to fetch data from {url}. Status code: {response.status_code}")
        return None

# Set the range of placement IDs we want to scrape
start_id = 1
end_id = 100000

# Generate placement URLs
urls = generate_urls(start_id, end_id)

# Scrape data from all URLs
data = []
for url in urls:
    placement_data = scrape_data(url)
    if placement_data:
        data.append(placement_data)

# Convert data to DataFrame
df = pd.DataFrame(data)

# Print DataFrame
print(df)
This gives me the following:
Failed to fetch data from https://youth.europa.eu/solidarity/placement/154_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/156_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/157_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/159_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/161_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/162_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/163_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/165_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/166_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/169_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/170_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/171_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/173_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/174_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/176_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/177_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/178_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/179_en. Status code: 404
Failed to fetch data from https://youth.europa.eu/solidarity/placement/180_en. Status code: 404
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-5-d6272ee535ef> in <cell line: 42>()
41 data = []
42 for url in urls:
---> 43 placement_data = scrape_data(url)
44 if placement_data:
45 data.append(placement_data)
<ipython-input-5-d6272ee535ef> in scrape_data(url)
16 title = soup.h1.get_text(', ', strip=True)
17 location = soup.select_one('p:has(i.fa-location-arrow)').get_text(', ', strip=True)
---> 18 start_date, end_date = (e.get_text(strip=True) for e in soup.select('span.extra strong')[-2:])
19 website_tag = soup.find("a", class_="btn__link--website")
20 website = website_tag.get("href") if website_tag else None
ValueError: not enough values to unpack (expected 2, got 0)
Any ideas?
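My guess is that the ValueError comes from pages that return 200 but don't contain two `span.extra strong` elements, so the unpack gets an empty sequence. A guarded version of the parsing might look like this (a sketch only — `parse_placement` is my own name, and I haven't verified it against the live site):

```python
from bs4 import BeautifulSoup

def parse_placement(html, url):
    """Parse one placement page, returning None for missing fields
    instead of raising ValueError on pages with a different layout."""
    soup = BeautifulSoup(html, "html.parser")

    title = soup.h1.get_text(", ", strip=True) if soup.h1 else None

    loc_tag = soup.select_one("p:has(i.fa-location-arrow)")
    location = loc_tag.get_text(", ", strip=True) if loc_tag else None

    # Guard the unpack: the traceback shows this selector can match nothing.
    dates = [e.get_text(strip=True) for e in soup.select("span.extra strong")[-2:]]
    start_date, end_date = dates if len(dates) == 2 else (None, None)

    website_tag = soup.find("a", class_="btn__link--website")
    website = website_tag.get("href") if website_tag else None

    return {
        "Title": title,
        "Location": location,
        "Start Date": start_date,
        "End Date": end_date,
        "Website": website,
        "URL": url,
    }
```

With this, `scrape_data` could just call `parse_placement(response.text, url)` and never crash mid-loop, and rows with all-None fields could be filtered out afterwards.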
See the base URL: https://youth.europa.eu/go-abroad/volunteering/opportunities_en
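Alternatively, instead of brute-forcing IDs from 1 to 100000 (which produced all those 404s), maybe the placement links can be collected from the listing page itself. This is a sketch based on the assumption that the anchors' href contains `/solidarity/placement/`, as in the URLs above — I haven't checked how that listing page paginates:

```python
from urllib.parse import urljoin

from bs4 import BeautifulSoup

def extract_placement_links(html, base_url="https://youth.europa.eu"):
    """Collect absolute, de-duplicated placement URLs from a listing page."""
    soup = BeautifulSoup(html, "html.parser")
    links = {
        urljoin(base_url, a["href"])
        for a in soup.select('a[href*="/solidarity/placement/"]')
    }
    return sorted(links)
```

Feeding `requests.get(...).text` of the opportunities page (and any paginated follow-ups) into this function and then scraping only those URLs would avoid guessing IDs that don't exist.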