I am looking for a public list of volunteering services in Europe: I don't need full addresses, just names and websites. I'm thinking of data as XML or CSV, using these fields: name, country; it would be nice to have a few additional fields for each country's records. BTW: European volunteering services are a great option for young people.

I found a great page, very, very comprehensive; I would like to collect data on the european volunteering services hosted on the EU site:

See: https://youth.europa.eu/go-abroad/volunteering/opportunities_en

@hedgehog showed me the right approach and how to find the correct selectors in this post: "BeautifulSoup iterate over 10k pages & fetch data, parse: European Volunteering-Services: a tiny scraper that collects opportunities from EU-Site"

# Extracting relevant data
title = soup.h1.get_text(', ', strip=True)
location = soup.select_one('p:has(i.fa-location-arrow)').get_text(', ', strip=True)
start_date, end_date = (e.get_text(strip=True) for e in soup.select('span.extra strong')[-2:])

But there are hundreds of volunteering opportunities there, each stored on its own page, like these:

 https://youth.europa.eu/solidarity/placement/39020_en 

https://youth.europa.eu/solidarity/placement/38993_en 

https://youth.europa.eu/solidarity/placement/38973_en 

https://youth.europa.eu/solidarity/placement/38972_en 

https://youth.europa.eu/solidarity/placement/38850_en 

https://youth.europa.eu/solidarity/placement/38633_en

Idea:

I think it would be great to collect the data, i.e. use a scraper based on BS4 and requests, parse the data, and subsequently print it in a DataFrame.

Well, I think we could iterate over all the URLs:

placement/39020_en 
placement/38993_en 
placement/38973_en 
placement/38850_en 

Idea: I think we could iterate from 0 to 100000 to fetch all the results stored under placement. But this idea isn't backed by code yet. In other words, at the moment I don't know how to implement this particular idea of iterating over such a large range:

For now I think this is a basic approach to start with:

import requests
from bs4 import BeautifulSoup
import pandas as pd

# Function to generate placement URLs based on a range of IDs
def generate_urls(start_id, end_id):
    base_url = "https://youth.europa.eu/solidarity/placement/"
    urls = [base_url + str(placement_id) + "_en" for placement_id in range(start_id, end_id + 1)]
    return urls

# Function to scrape data from a single URL
def scrape_data(url):
    response = requests.get(url)
    if response.status_code == 200:
        soup = BeautifulSoup(response.content, 'html.parser')
        title = soup.h1.get_text(', ', strip=True)
        location = soup.select_one('p:has(i.fa-location-arrow)').get_text(', ', strip=True)
        start_date, end_date = (e.get_text(strip=True) for e in soup.select('span.extra strong')[-2:])
        website_tag = soup.find("a", class_="btn__link--website")
        website = website_tag.get("href") if website_tag else None
        return {
            "Title": title,
            "Location": location,
            "Start Date": start_date,
            "End Date": end_date,
            "Website": website,
            "URL": url
        }
    else:
        print(f"Failed to fetch data from {url}. Status code: {response.status_code}")
        return None

# Set the range of placement IDs we want to scrape
start_id = 1
end_id = 100000

# Generate placement URLs
urls = generate_urls(start_id, end_id)

# Scrape data from all URLs
data = []
for url in urls:
    placement_data = scrape_data(url)
    if placement_data:
        data.append(placement_data)

# Convert data to DataFrame
df = pd.DataFrame(data)

# Print DataFrame
print(df)

This gives me the following:

    Failed to fetch data from https://youth.europa.eu/solidarity/placement/154_en. Status code: 404
    Failed to fetch data from https://youth.europa.eu/solidarity/placement/156_en. Status code: 404
    Failed to fetch data from https://youth.europa.eu/solidarity/placement/157_en. Status code: 404
    Failed to fetch data from https://youth.europa.eu/solidarity/placement/159_en. Status code: 404
    Failed to fetch data from https://youth.europa.eu/solidarity/placement/161_en. Status code: 404
    Failed to fetch data from https://youth.europa.eu/solidarity/placement/162_en. Status code: 404
    Failed to fetch data from https://youth.europa.eu/solidarity/placement/163_en. Status code: 404
    Failed to fetch data from https://youth.europa.eu/solidarity/placement/165_en. Status code: 404
    Failed to fetch data from https://youth.europa.eu/solidarity/placement/166_en. Status code: 404
    Failed to fetch data from https://youth.europa.eu/solidarity/placement/169_en. Status code: 404
    Failed to fetch data from https://youth.europa.eu/solidarity/placement/170_en. Status code: 404
    Failed to fetch data from https://youth.europa.eu/solidarity/placement/171_en. Status code: 404
    Failed to fetch data from https://youth.europa.eu/solidarity/placement/173_en. Status code: 404
    Failed to fetch data from https://youth.europa.eu/solidarity/placement/174_en. Status code: 404
    Failed to fetch data from https://youth.europa.eu/solidarity/placement/176_en. Status code: 404
    Failed to fetch data from https://youth.europa.eu/solidarity/placement/177_en. Status code: 404
    Failed to fetch data from https://youth.europa.eu/solidarity/placement/178_en. Status code: 404
    Failed to fetch data from https://youth.europa.eu/solidarity/placement/179_en. Status code: 404
    Failed to fetch data from https://youth.europa.eu/solidarity/placement/180_en. Status code: 404
    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    <ipython-input-5-d6272ee535ef> in <cell line: 42>()
         41 data = []
         42 for url in urls:
    ---> 43     placement_data = scrape_data(url)
         44     if placement_data:
         45         data.append(placement_data)
    
    <ipython-input-5-d6272ee535ef> in scrape_data(url)
         16         title = soup.h1.get_text(', ', strip=True)
         17         location = soup.select_one('p:has(i.fa-location-arrow)').get_text(', ', strip=True)
    ---> 18         start_date, end_date = (e.get_text(strip=True) for e in soup.select('span.extra strong')[-2:])
         19         website_tag = soup.find("a", class_="btn__link--website")
         20         website = website_tag.get("href") if website_tag else None
    
    ValueError: not enough values to unpack (expected 2, got 0)

Any ideas?

See the base URL: https://youth.europa.eu/go-abroad/volunteering/opportunities_en

Recommended answer

Rather than generating the IDs yourself, I would choose the API approach and retrieve the information already structured as JSON. (That also explains your crash: presumably some pages return 200 without the expected markup, so soup.select('span.extra strong') comes back empty and the two-value unpack raises the ValueError.) The JSON can then be converted into a DataFrame with pandas.json_normalize().

Example
import requests
import pandas as pd

data = requests.get('https://youth.europa.eu/d8/api/rest/eyp/v1/search_en?type=Opportunity&size=100&from=0&filters%5Bstatus%5D=open&filters%5Bdate_end%5D%5Boperator%5D=%3E%3D&filters%5Bdate_end%5D%5Bvalue%5D=2024-03-14&filters%5Bdate_end%5D%5Btype%5D=must').json().get('hits').get('hits')
pd.json_normalize(data)
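
The response has an Elasticsearch-style shape, which is why the snippet drills into hits.hits; pd.json_normalize() then flattens each hit's nested fields (such as the _source object) into one column per field. Inspecting df.columns shows which columns carry the name, country, and website you asked for.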

Just check the network tab in your browser's dev tools to see where the data comes from and how to filter the results via the payload:

type: Opportunity
size: 1000
from: 0
filters[status]: open
filters[date_end][operator]: >=
filters[date_end][value]: 2024-03-14
filters[date_end][type]: must
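
For readability, the same request can be expressed with a params dict instead of a hand-built query string, and the `from` offset lends itself to simple paging. A minimal sketch, with the endpoint and filter names taken from above; the page size and the empty-page stop condition are my assumptions, not something the API documents:

import requests
import pandas as pd

BASE = "https://youth.europa.eu/d8/api/rest/eyp/v1/search_en"

def fetch_all(page_size=100):
    """Page through the search API via the `from` offset and return one DataFrame."""
    frames = []
    offset = 0
    while True:
        params = {
            "type": "Opportunity",
            "size": page_size,
            "from": offset,
            "filters[status]": "open",
            "filters[date_end][operator]": ">=",
            "filters[date_end][value]": "2024-03-14",
            "filters[date_end][type]": "must",
        }
        hits = requests.get(BASE, params=params).json().get("hits", {}).get("hits", [])
        if not hits:
            break  # assumed stop condition: an empty page means we are past the last result
        frames.append(pd.json_normalize(hits))
        offset += page_size
    return pd.concat(frames, ignore_index=True) if frames else pd.DataFrame()

df = fetch_all()
print(df.shape)
print(df.columns.tolist())  # inspect the flattened _source.* columns for name, country, website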
