The async example is useful, but being new to Rust and Tokio, I'm struggling to work out how to process N requests at a time, using the URLs from a vector, and creating for each URL an iterator of the response HTML as a string.
How could this be done?
As of reqwest 0.10:
use futures::{stream, StreamExt}; // 0.3.5
use reqwest::Client; // 0.10.6
use tokio; // 0.2.21, features = ["macros"]

const CONCURRENT_REQUESTS: usize = 2;

#[tokio::main]
async fn main() {
    let client = Client::new();

    let urls = vec!["https://api.ipify.org"; 2];

    let bodies = stream::iter(urls)
        .map(|url| {
            let client = &client;
            async move {
                let resp = client.get(url).send().await?;
                resp.bytes().await
            }
        })
        .buffer_unordered(CONCURRENT_REQUESTS);

    bodies
        .for_each(|b| async {
            match b {
                Ok(b) => println!("Got {} bytes", b.len()),
                Err(e) => eprintln!("Got an error: {}", e),
            }
        })
        .await;
}
stream::iter(urls)
Takes a collection of strings and converts it into a Stream.
.map(|url| {
Runs an async function on every element in the stream, transforming each element into a new type.
let client = &client;
async move {
Takes an explicit reference to the Client and moves the reference (not the original Client) into an anonymous async block.
let resp = client.get(url).send().await?;
Starts an async GET request using the Client's connection pool and awaits the request.
resp.bytes().await
Requests and awaits the bytes of the response.
.buffer_unordered(N);
Converts the stream of futures into a stream of those futures' values, executing the futures concurrently (up to N at a time).
bodies
    .for_each(|b| async {
        match b {
            Ok(b) => println!("Got {} bytes", b.len()),
            Err(e) => eprintln!("Got an error: {}", e),
        }
    })
    .await;
Converts the stream back into a single future, printing out the amount of data received along the way, and then awaits the future's completion.
See also:
If you wanted, you could also convert the iterator into an iterator of futures and use future::join_all:
use futures::future; // 0.3.4
use reqwest::Client; // 0.10.1
use tokio; // 0.2.11

#[tokio::main]
async fn main() {
    let client = Client::new();

    let urls = vec!["https://api.ipify.org"; 2];

    let bodies = future::join_all(urls.into_iter().map(|url| {
        let client = &client;
        async move {
            let resp = client.get(url).send().await?;
            resp.bytes().await
        }
    }))
    .await;

    for b in bodies {
        match b {
            Ok(b) => println!("Got {} bytes", b.len()),
            Err(e) => eprintln!("Got an error: {}", e),
        }
    }
}
I'd encourage you to use the first example, though, since you usually want to limit concurrency, which buffer and buffer_unordered help with.
Concurrent requests are generally good enough, but there are times when you need parallel requests. In that case, you need to spawn a task.
use futures::{stream, StreamExt}; // 0.3.8
use reqwest::Client; // 0.10.9
use tokio; // 0.2.24, features = ["macros"]

const PARALLEL_REQUESTS: usize = 2;

#[tokio::main]
async fn main() {
    let urls = vec!["https://api.ipify.org"; 2];

    let client = Client::new();

    let bodies = stream::iter(urls)
        .map(|url| {
            let client = client.clone();
            tokio::spawn(async move {
                let resp = client.get(url).send().await?;
                resp.bytes().await
            })
        })
        .buffer_unordered(PARALLEL_REQUESTS);

    bodies
        .for_each(|b| async {
            match b {
                Ok(Ok(b)) => println!("Got {} bytes", b.len()),
                Ok(Err(e)) => eprintln!("Got a reqwest::Error: {}", e),
                Err(e) => eprintln!("Got a tokio::JoinError: {}", e),
            }
        })
        .await;
}
The major differences are:

tokio::spawn performs the work in separate tasks, so each task needs its own reqwest::Client. As recommended, we clone a shared Client to make use of the connection pool.

See also: