Problem Description:
Each product on https://www.asos.com/us/women/dresses/cat/?cid=8799 has several images. For example, here is the product URL of a black dress: https://www.asos.com/us/asos-design/asos-design-super-soft-volume-sleeve-turtle-neck-mini-sweater-dress-in-black/prd/204910824#colourWayId-204910828. If you click it, you can see that this black dress has 4 images. In addition, there are two other color versions of this dress (camel and pink), and each color has another 3-4 images. I want to collect all of these images (every image of the black, camel, and pink versions of this product).
What I tried (code below): So far, I have managed to collect all of the product URLs from the main page, such as the second link provided above. However, once I visit each product URL, I don't know how to access all of the images on that page. I would appreciate any guidance on implementing this next step.
Code from Google Colab:
# Upload google drive files
from google.colab import drive
drive.mount('/content/drive')
# Import libraries
import urllib
import urllib.request
from bs4 import BeautifulSoup
import re
import requests
import matplotlib.pyplot as plt
from io import BytesIO
# Make Soup function
user_agent = 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.7) Gecko/2009021910 Firefox/3.0.7'
headers={'User-Agent':user_agent,}
def make_soup(url):
    request = urllib.request.Request(url, None, headers)
    thepage = urllib.request.urlopen(request)
    soupdata = BeautifulSoup(thepage, "html.parser")
    return soupdata
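(Aside: sites like ASOS often reject requests that look automated, and a 2009-era Firefox user-agent string is more likely to be blocked than a current one. A sketch of the same helper using the `requests` library with a modern UA string and basic error handling; the exact UA value here is an assumption, and any current browser's string would do:)

```python
import requests
from bs4 import BeautifulSoup

# Assumed modern desktop user-agent string; swap in any current browser's UA.
HEADERS = {
    'User-Agent': ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                   'AppleWebKit/537.36 (KHTML, like Gecko) '
                   'Chrome/120.0.0.0 Safari/537.36'),
}

def make_soup(url, timeout=10):
    """Fetch url and return a parsed BeautifulSoup document."""
    resp = requests.get(url, headers=HEADERS, timeout=timeout)
    resp.raise_for_status()  # fail loudly on 403/404 instead of parsing an error page
    return BeautifulSoup(resp.text, "html.parser")
```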
# Find total page #s
site = 'https://www.asos.com/us/women/dresses/cat/?cid=8799'
soup = make_soup(site)
element = soup.find('p', class_='label_Ph1fi')
element = element.text
numbers = re.findall(r'\d{1,3}(?:,\d{3})*', element)
if len(numbers) >= 2:
    offset = int(numbers[0].replace(',', ''))        # products shown per page
    num_products = int(numbers[1].replace(',', ''))  # total products in the category
    num_pages = -(-num_products // offset)           # ceiling division, so a final partial page is counted
    print(f"Products Per Page: {offset}")
    print(f"Total Products: {num_products}")
    print(f"Total Pages: {num_pages}")
else:
    print("Numbers not found")
# Get all product urls
product_urls = []
for i in range(1, num_pages + 1):  # ASOS pagination starts at page=1
    site = 'https://www.asos.com/us/women/dresses/cat/?cid=8799&page=' + str(i)
    soup = make_soup(site)
    for link in soup.find_all('a', class_='productLink_E9Lfb', href=True):
        href = link.get('href')
        if href:
            product_urls.append(href)
    print('Page', i, 'done')
print(product_urls)
# Get all images per product url
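For the step above, one approach that works when the page is served statically: fetch the product page, collect its `<img>` tags, and keep only sources hosted on ASOS's image CDN. Two caveats, both assumptions rather than confirmed behavior: the CDN host filter (`images.asos-media.com`) should be verified against the actual page source, and ASOS renders its gallery with JavaScript, so the plain HTML may expose only a subset of the images; if so, a browser-automation tool such as Selenium or Playwright would be needed instead. A minimal sketch:

```python
from urllib.parse import urljoin
from bs4 import BeautifulSoup

# Assumed CDN host for ASOS product imagery; verify against the page source.
IMAGE_HOST = 'images.asos-media.com'

def extract_image_urls(soup, host=IMAGE_HOST):
    """Collect unique image URLs from <img> tags whose source points at `host`.

    Checks both `src` and the lazy-loading `data-src` attribute, since
    galleries often defer image loading until scroll.
    """
    urls = []
    for img in soup.find_all('img'):
        src = img.get('src') or img.get('data-src')
        if not src:
            continue
        src = urljoin('https:', src)  # normalise protocol-relative '//...' sources
        if host in src and src not in urls:
            urls.append(src)
    return urls

# Usage against the product URLs collected above:
# for url in product_urls:
#     soup = make_soup(url)
#     print(url, extract_image_urls(soup))
```

For the other color versions (camel, pink): each colorway has its own product URL (differing `colourWayId`), so collecting every variant's images would mean finding those variant links on the page and visiting each one with the same extraction step.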