How to read subprocess output asynchronously

I want to implement a futures::Stream for reading and parsing the standard output of a child subprocess.
What I'm doing at the moment:
spawn the subprocess and obtain its stdout via std::process methods: let child = Command::new(...).stdout(Stdio::piped()).spawn().expect(...)
wrap stdout so that it gets AsyncRead and BufRead:
let stdout = BufReader::new(tokio_io::io::AllowStdIo::new(
    child.stdout.expect("Failed to open stdout"),
));
declare a wrapper struct for stdout:
struct MyStream<Io: AsyncRead + BufRead> {
    io: Io,
}
implement Stream:
impl<Io: AsyncRead + BufRead> Stream for MyStream<Io> {
    type Item = Message;
    type Error = Error;

    fn poll(&mut self) -> Poll<Option<Message>, Error> {
        let mut line = String::new();
        let n = try_nb!(self.io.read_line(&mut line));
        if n == 0 {
            return Ok(None.into());
        }
        //...read & parse further
    }
}
The problem is that AllowStdIo doesn't make ChildStdout magically asynchronous and the self.io.read_line call still blocks.
I guess I need to pass something other than Stdio::piped() to make it asynchronous, but what? Or is there a different solution for that?
This question is different from What is the best approach to encapsulate blocking I/O in future-rs? because I want to get asynchronous I/O for the specific case of a subprocess and not solve the problem of encapsulation of synchronous I/O.
Update: I'm using tokio = "0.1.3" to leverage its runtime feature and using tokio-process is not an option at the moment (https://github.com/alexcrichton/tokio-process/issues/27)

The tokio-process crate provides you with a CommandExt trait that allows you to spawn a command asynchronously.
The resulting Child has a getter for ChildStdout which implements Read and is non-blocking.
Wrapping tokio_process::ChildStdout into AllowStdIo as you did in your example should make it work!
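For illustration, here is a minimal sketch of that approach (assuming tokio 0.1 and tokio-process 0.2; printing lines stands in for the Message parsing). Note that tokio-process's ChildStdout also implements AsyncRead directly, so a plain BufReader plus tokio::io::lines works without AllowStdIo:
extern crate tokio;
extern crate tokio_process;

use std::io::BufReader;
use std::process::{Command, Stdio};
use tokio::prelude::*;
use tokio_process::CommandExt;

fn main() {
    let mut child = Command::new("cat")
        .arg("/etc/hosts")
        .stdout(Stdio::piped())
        .spawn_async() // from CommandExt
        .expect("failed to spawn");
    let stdout = child.stdout().take().expect("no stdout handle");

    // ChildStdout implements AsyncRead, so the usual line-reading combinators apply.
    let print_lines = tokio::io::lines(BufReader::new(stdout)).for_each(|line| {
        println!("line: {}", line);
        Ok(())
    });

    // Drive the line reading and the child's exit future together.
    let future = print_lines
        .join(child)
        .map(|(_, _status)| ())
        .map_err(|e| eprintln!("error: {}", e));
    tokio::run(future);
}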

Here is my version, using tokio::process:
// Assumed imports for this snippet (tokio 1.x):
// use std::process::Stdio;
// use tokio::io::{AsyncBufReadExt, BufReader};
// use tokio::process::Command;
let mut child = match Command::new(&args.run[0])
    .args(parameters)
    .stdout(Stdio::piped())
    .stderr(Stdio::piped())
    .kill_on_drop(true)
    .spawn()
{
    Ok(c) => c,
    Err(e) => panic!("Unable to start process `{}`. {}", args.run[0], e),
};

let stdout = child.stdout.take().expect("child did not have a handle to stdout");
let stderr = child.stderr.take().expect("child did not have a handle to stderr");
let mut stdout_reader = BufReader::new(stdout).lines();
let mut stderr_reader = BufReader::new(stderr).lines();

loop {
    tokio::select! {
        result = stdout_reader.next_line() => {
            match result {
                Ok(Some(line)) => println!("Stdout: {}", line),
                Err(_) => break,
                _ => (),
            }
        }
        result = stderr_reader.next_line() => {
            match result {
                Ok(Some(line)) => println!("Stderr: {}", line),
                Err(_) => break,
                _ => (),
            }
        }
        result = child.wait() => {
            match result {
                Ok(exit_code) => println!("Child process exited with {}", exit_code),
                _ => (),
            }
            break // child process exited
        }
    }
}
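For reference, a sketch of the Cargo.toml this snippet assumes (tokio 1.x; the exact feature set is an assumption, adjust to your project):
[dependencies]
tokio = { version = "1", features = ["process", "io-util", "macros", "rt-multi-thread"] }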

Related

How to run multiple Tokio async tasks in a loop without using tokio::spawn?

I built a LED clock that also displays weather. My program does a couple of different things in a loop, each thing with a different interval:
updates the LEDs every 50ms,
checks the light level (to adjust the brightness) every 1 second,
fetches weather every 10 minutes,
actually some more, but that's irrelevant.
Updating the LEDs is the most critical: I don't want this to be delayed when e.g. weather is being fetched. This should not be a problem as fetching weather is mostly an async HTTP call.
Here's the code that I have:
let mut measure_light_stream = tokio::time::interval(Duration::from_secs(1));
let mut update_weather_stream = tokio::time::interval(WEATHER_FETCH_INTERVAL);
let mut update_leds_stream = tokio::time::interval(UPDATE_LEDS_INTERVAL);
loop {
    tokio::select! {
        _ = measure_light_stream.tick() => {
            let light = lm.get_light();
            light_smooth.sp = light;
        },
        _ = update_weather_stream.tick() => {
            let fetched_weather = weather_service.get(&config).await;
            // Store the fetched weather for later access from the displaying function.
            weather_clock.weather = fetched_weather.clone();
        },
        _ = update_leds_stream.tick() => {
            // Some code here that actually sets the LEDs.
            // This code accesses the weather_clock, the light level etc.
        },
    }
}
I realised the code doesn't do what I wanted it to do - fetching the weather blocks the execution of the loop. I see why - the docs of tokio::select! say the other branches are cancelled as soon as the update_weather_stream.tick() expression completes.
How do I do this in such a way that while fetching the weather is waiting on network, the LEDs are still updated? I figured out I could use tokio::spawn to start a separate non-blocking "thread" for fetching weather, but then I have problems with weather_service not being Send, let alone weather_clock not being shareable between threads. I don't want this complication, I'm fine with everything running in a single thread, just like what select! does.
Reproducible example
use std::time::Duration;
use tokio::time::{interval, sleep};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut slow_stream = interval(Duration::from_secs(3));
    let mut fast_stream = interval(Duration::from_millis(200));
    // Note how access to this data is straightforward, I do not want
    // this to get more complicated, e.g. care about threads and Send.
    let mut val = 1;
    loop {
        tokio::select! {
            _ = fast_stream.tick() => {
                println!(".{}", val);
            },
            _ = slow_stream.tick() => {
                println!("Starting slow operation...");
                // The problem: During this await the dots are not printed.
                sleep(Duration::from_secs(1)).await;
                val += 1;
                println!("...done");
            },
        }
    }
}
You can use tokio::join! to run multiple async operations concurrently within the same task.
Here's an example:
// Sketch: needs a runtime (e.g. #[tokio::main]) and the question's
// lm, weather_service, config and interval constants.
async fn measure_light(halt: &Cell<bool>) {
    while !halt.get() {
        let light = lm.get_light();
        // ....
        tokio::time::sleep(Duration::from_secs(1)).await;
    }
}

async fn blink_led(halt: &Cell<bool>) {
    while !halt.get() {
        // LED blinking code
        tokio::time::sleep(UPDATE_LEDS_INTERVAL).await;
    }
}

async fn poll_weather(halt: &Cell<bool>) {
    while !halt.get() {
        let weather = weather_service.get(&config).await;
        // ...
        tokio::time::sleep(WEATHER_FETCH_INTERVAL).await;
    }
}

// example on how to terminate execution
async fn terminate(halt: &Cell<bool>) {
    tokio::time::sleep(Duration::from_secs(10)).await;
    halt.set(true);
}

async fn main() {
    let halt = Cell::new(false);
    tokio::join!(
        measure_light(&halt),
        blink_led(&halt),
        poll_weather(&halt),
        terminate(&halt),
    );
}
If you're using tokio::net::TcpStream or other non-blocking IO, it should allow for concurrent execution.
I've added a Cell flag for halting execution as an example. You can use the same technique to share any mutable state between join branches.
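For instance, here is a hedged sketch of sharing a counter the same way (the names are illustrative; a plain Cell works because everything runs on one task):
use std::cell::Cell;
use std::time::Duration;

async fn producer(count: &Cell<u32>) {
    loop {
        count.set(count.get() + 1);
        tokio::time::sleep(Duration::from_millis(100)).await;
    }
}

async fn reporter(count: &Cell<u32>) {
    loop {
        println!("count so far: {}", count.get());
        tokio::time::sleep(Duration::from_secs(1)).await;
    }
}

#[tokio::main]
async fn main() {
    let count = Cell::new(0);
    // Both branches are polled on the same task, so no Send or Mutex is needed.
    tokio::join!(producer(&count), reporter(&count));
}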
EDIT: The same thing can be done with tokio::select!. The main difference from your code is that the actual "business logic" is inside the futures awaited by select.
select! allows you to drop unfinished futures instead of waiting for them to exit on their own (so the halt flag is not necessary).
async fn main() {
    tokio::select! {
        _ = measure_light() => {},
        _ = blink_led() => {},
        _ = poll_weather() => {},
    }
}
Here's a concrete solution, based on the second part of stepan's answer:
use std::time::Duration;
use tokio::time::sleep;
#[tokio::main]
async fn main() {
    // Cell is an acceptable complication when accessing the data.
    let val = std::cell::Cell::new(1);
    tokio::select! {
        _ = async {
            loop {
                println!(".{}", val.get());
                sleep(Duration::from_millis(200)).await;
            }
        } => {},
        _ = async {
            loop {
                println!("Starting slow operation...");
                // The dots keep printing during this await now.
                sleep(Duration::from_secs(1)).await;
                val.set(val.get() + 1);
                println!("...done");
                sleep(Duration::from_secs(3)).await;
            }
        } => {},
    }
}
Playground link

Not noticing any performance improvement when using async

I have a small program that executes aws s3 CLI commands with different arguments. I'm using std::process::Command, and the command makes a network call and returns some response. At first I had this synchronous & single-threaded implementation:
fn make_call<'a>(_name: &'a str, _bucket_poll: &mut BucketPoll<'a>) -> Option<BucketDetails<'a>> {
    let invoke_result = invoke_network_call(_name);
    let mut bucket = BucketDetails::new(_name);
    match invoke_result {
        Ok(invoke_str) => {
            bucket.output = invoke_str;
            _bucket_poll.insert_bucket(bucket.clone());
            _bucket_poll.successful_count += 1;
            Some(bucket)
        }
        Err(_) => {
            _bucket_poll.insert_bucket(bucket);
            None
        }
    }
}
// I invoke this function in sequential order, something like:
make_call("name_1");
make_call("name_2");
make_call("name_3");
Because I don't really care in which order these calls are executed, I decided to learn Tokio to help with performance. I changed the make_call function to be async:
async fn make_call_race() -> ExecutionResult {
    let bucket_poll = BucketPoll::new();
    let bucket_poll_guard = Arc::new(Mutex::new(bucket_poll));
    loop {
        let bucket_details = tokio::select! {
            Some(bucket_details) = make_call_async("name_1", &bucket_poll_guard) => bucket_details,
            Some(bucket_details) = make_call_async("name_2", &bucket_poll_guard) => bucket_details,
            Some(bucket_details) = make_call_async("name_3", &bucket_poll_guard) => bucket_details,
            Some(bucket_details) = make_call_async("name_4", &bucket_poll_guard) => bucket_details,
            else => { break }
        };
        success_printer(bucket_details);
    }
    // more printing, no more network calls
    ExecutionResult::Success
}
make_call_async is essentially the same as make_call:
async fn make_call_async<'a>(
    _name: &'a str,
    _bucket_poll_guard: &'a Arc<Mutex<BucketPoll<'a>>>,
) -> Option<BucketDetails<'a>> {
    {
        if let Ok(bucket_poll_guard) = _bucket_poll_guard.lock() {
            if bucket_poll_guard.has_polled(_name) {
                return None;
            }
        }
    }
    let invoke_result = invoke_network_call(_name);
    let mut bucket = BucketDetails::new(_name);
    match invoke_result {
        Ok(invoke_str) => {
            bucket.output = invoke_str;
            {
                if let Ok(mut bucket_poll_guard) = _bucket_poll_guard.lock() {
                    bucket_poll_guard.insert_bucket(bucket.clone());
                    bucket_poll_guard.successful_count += 1;
                }
            }
            Some(bucket)
        }
        Err(_) => {
            {
                if let Ok(mut bucket_poll_guard) = _bucket_poll_guard.lock() {
                    bucket_poll_guard.insert_bucket(bucket);
                }
            }
            None
        }
    }
}
When I run the async version, I do see that my network calls are made in a random order, but I do not notice any speedup. I increased the number of network calls to ~50 invocations, but the runtime is nearly the same, if not slightly worse. As I am new to async programming and Rust in general, I would like to understand why my async implementation does not seem to offer any improvement.
Extra:
Here is the invoke_network_call method:
fn invoke_network_call(_name: &str) -> core::result::Result<String, AwsCliError> {
    let output = Command::new("aws")
        .arg("s3")
        .arg("ls")
        .arg(_name)
        .output()
        .expect("Could not list s3 objects");
    if !output.status.success() {
        err_printer(format!("Failed to list s3 objects for bucket {}.", _name));
        return Err(AwsCliError);
    }
    let output_str = get_stdout_string_from_output(&output);
    Ok(output_str)
}
EDIT: yorodm's comment makes sense. What I did was use Tokio's Command instead of std::process's Command and made the invoke_network_call async. This reduced my runtime by half. Thank you!
You could rewrite invoke_network_call using an async version of Command.
async fn invoke_network_call(_name: &str) -> core::result::Result<String, AwsCliError> {
    let output = tokio::process::Command::new("aws")
        .arg("s3")
        .arg("ls")
        .arg(_name)
        .output()
        .await
        .expect("Could not list s3 objects");
    if !output.status.success() {
        err_printer(format!("Failed to list s3 objects for bucket {}.", _name));
        return Err(AwsCliError);
    }
    let output_str = get_stdout_string_from_output(&output);
    Ok(output_str)
}
This removes the blocking std::process::Command call. However, I would say that if you're going to access AWS services, you should go with rusoto.
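For the calls to actually overlap, they also have to be driven concurrently rather than raced in select! (each select! pass keeps the winning branch and drops the in-flight ones). A hedged sketch using futures::future::join_all, assuming the async invoke_network_call above (the function name make_calls_concurrently is illustrative):
use futures::future::join_all; // futures = "0.3"

async fn make_calls_concurrently() {
    let names = ["name_1", "name_2", "name_3", "name_4"];
    // join_all polls all futures concurrently and collects their results.
    let results = join_all(names.iter().map(|name| invoke_network_call(name))).await;
    for result in results {
        match result {
            Ok(output) => println!("{}", output),
            Err(_) => eprintln!("a call failed"),
        }
    }
}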

Why does tokio::spawn have a delay when called next to crossbeam_channel::select?

I'm creating a task which will spawn other tasks. Some of them will take some time, so they cannot be awaited, but they can run in parallel:
src/main.rs
use crossbeam::crossbeam_channel::{bounded, select};
#[tokio::main]
async fn main() {
    let (s, r) = bounded::<usize>(1);
    tokio::spawn(async move {
        let mut counter = 0;
        loop {
            let loop_id = counter.clone();
            tokio::spawn(async move { // why this one was not fired?
                println!("inner task {}", loop_id);
            }); // .await.unwrap(); - solves issue, but this is long task which cannot be awaited
            println!("loop {}", loop_id);
            select! {
                recv(r) -> rr => {
                    // match rr {
                    //     Ok(ee) => {
                    //         println!("received from channel {}", loop_id);
                    //         tokio::spawn(async move {
                    //             println!("received from channel task {}", loop_id);
                    //         });
                    //     },
                    //     Err(e) => println!("{}", e),
                    // };
                },
                // more recv(some_channel) ->
            }
            counter = counter + 1;
        }
    });
    // let s_clone = s.clone();
    // tokio::spawn(async move {
    //     s_clone.send(2).unwrap();
    // });
    loop {
        // rest of the program
    }
}
I've noticed strange behavior. This outputs:
loop 0
I was expecting it to also output inner task 0.
If I send a value to the channel, the output will be:
loop 0
inner task 0
loop 1
This is missing inner task 1.
Why is inner task spawned with one loop of delay?
I first noticed this behavior with 'received from channel task' being delayed by one loop, but when I reduced the code to prepare this sample, it started happening with 'inner task'. It might be worth mentioning that if I write a second tokio::spawn right after the first, only the last one has this issue. Is there something I should be aware of when calling tokio::spawn together with select!? What causes this one-loop delay?
Cargo.toml dependencies
[dependencies]
tokio = { version = "0.2", features = ["full"] }
crossbeam = "0.7"
Rust 1.46, Windows 10
select! is blocking, and the docs for tokio::spawn say:
The spawned task may execute on the current thread, or it may be sent to a different thread to be executed.
In this case, the select! "future" is actually a blocking function, and spawn doesn't use a new thread (either in the first invocation or the one inside the loop).
Because you don't tell tokio that you are going to block, tokio doesn't think another thread is needed (from tokio's perspective, you only have 3 futures which should never block, so why would you need another thread anyway?).
The solution is to use the tokio::task::spawn_blocking for the select!-ing closure (which will no longer be a future, so async move {} is now move || {}).
Now tokio will know that this function actually blocks, and will move it to another thread (while keeping all the actual futures in other execution threads).
use crossbeam::crossbeam_channel::{bounded, select};
#[tokio::main]
async fn main() {
    let (s, r) = bounded::<usize>(1);
    tokio::task::spawn_blocking(move || {
        // ...
    });
    loop {
        // rest of the program
    }
}
Link to playground
Another possible solution is to use a non-blocking channel like tokio::sync::mpsc, on which you can use await and get the expected behavior, like this playground example with direct recv().await or with tokio::select!, like this:
use tokio::sync::mpsc;
#[tokio::main]
async fn main() {
    let (mut s, mut r) = mpsc::channel::<usize>(1);
    tokio::spawn(async move {
        loop {
            // ...
            tokio::select! {
                Some(i) = r.recv() => {
                    println!("got = {}", i);
                }
            }
        }
    });
    loop {
        // rest of the program
    }
}
Link to playground

How can I mutate the HTML inside a hyper::Response? [duplicate]

I want to write a server using the current master branch of Hyper that saves a message that is delivered by a POST request and sends this message to every incoming GET request.
I have this, mostly copied from the Hyper examples directory:
extern crate futures;
extern crate hyper;
extern crate pretty_env_logger;
use futures::future::FutureResult;
use hyper::{Get, Post, StatusCode};
use hyper::header::{ContentLength};
use hyper::server::{Http, Service, Request, Response};
use futures::Stream;
struct Echo {
    data: Vec<u8>,
}

impl Echo {
    fn new() -> Self {
        Echo {
            data: "text".into(),
        }
    }
}

impl Service for Echo {
    type Request = Request;
    type Response = Response;
    type Error = hyper::Error;
    type Future = FutureResult<Response, hyper::Error>;

    fn call(&self, req: Self::Request) -> Self::Future {
        let resp = match (req.method(), req.path()) {
            (&Get, "/") | (&Get, "/echo") => {
                Response::new()
                    .with_header(ContentLength(self.data.len() as u64))
                    .with_body(self.data.clone())
            },
            (&Post, "/") => {
                //self.data.clear(); // argh. &self is not mutable :(
                // even if it was mutable... how to put the entire body into it?
                //req.body().fold(...) ?
                let mut res = Response::new();
                if let Some(len) = req.headers().get::<ContentLength>() {
                    res.headers_mut().set(ContentLength(0));
                }
                res.with_body(req.body())
            },
            _ => {
                Response::new()
                    .with_status(StatusCode::NotFound)
            }
        };
        futures::future::ok(resp)
    }
}

fn main() {
    pretty_env_logger::init().unwrap();
    let addr = "127.0.0.1:12346".parse().unwrap();
    let server = Http::new().bind(&addr, || Ok(Echo::new())).unwrap();
    println!("Listening on http://{} with 1 thread.", server.local_addr().unwrap());
    server.run().unwrap();
}
How do I turn the req.body() (which seems to be a Stream of Chunks) into a Vec<u8>? I assume I must somehow return a Future that consumes the Stream and turns it into a single Vec<u8>, maybe with fold(). But I have no clue how to do that.
Hyper 0.13 provides a body::to_bytes function for this purpose.
use hyper::body;
use hyper::{Body, Response};
pub async fn read_response_body(res: Response<Body>) -> Result<String, hyper::Error> {
    let bytes = body::to_bytes(res.into_body()).await?;
    Ok(String::from_utf8(bytes.to_vec()).expect("response was not valid utf-8"))
}
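A hypothetical usage, just for illustration (on a tokio 0.2 runtime, matching hyper 0.13):
#[tokio::main]
async fn main() -> Result<(), hyper::Error> {
    let res = Response::new(Body::from("hello"));
    let text = read_response_body(res).await?;
    assert_eq!(text, "hello");
    Ok(())
}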
I'm going to simplify the problem to just return the total number of bytes, instead of echoing the entire stream.
Futures 0.3
Hyper 0.13 + TryStreamExt::try_fold
See euclio's answer about hyper::body::to_bytes if you just want all the data as one giant blob.
Accessing the stream allows for more fine-grained control:
use futures::TryStreamExt; // 0.3.7
use hyper::{server::Server, service, Body, Method, Request, Response}; // 0.13.9
use std::convert::Infallible;
use tokio; // 0.2.22
#[tokio::main]
async fn main() {
    let addr = "127.0.0.1:12346".parse().expect("Unable to parse address");
    let server = Server::bind(&addr).serve(service::make_service_fn(|_conn| async {
        Ok::<_, Infallible>(service::service_fn(echo))
    }));
    println!("Listening on http://{}.", server.local_addr());
    if let Err(e) = server.await {
        eprintln!("Error: {}", e);
    }
}

async fn echo(req: Request<Body>) -> Result<Response<Body>, hyper::Error> {
    let (parts, body) = req.into_parts();
    match (parts.method, parts.uri.path()) {
        (Method::POST, "/") => {
            let entire_body = body
                .try_fold(Vec::new(), |mut data, chunk| async move {
                    data.extend_from_slice(&chunk);
                    Ok(data)
                })
                .await;
            entire_body.map(|body| {
                let body = Body::from(format!("Read {} bytes", body.len()));
                Response::new(body)
            })
        }
        _ => {
            let body = Body::from("Can only POST to /");
            Ok(Response::new(body))
        }
    }
}
Unfortunately, the current implementation of Bytes is no longer compatible with TryStreamExt::try_concat, so we have to switch back to a fold.
Futures 0.1
hyper 0.12 + Stream::concat2
Since futures 0.1.14, you can use Stream::concat2 to stick together all the data into one:
fn concat2(self) -> Concat2<Self>
where
    Self: Sized,
    Self::Item: Extend<<Self::Item as IntoIterator>::Item> + IntoIterator + Default,
use futures::{
    future::{self, Either},
    Future, Stream,
}; // 0.1.25
use hyper::{server::Server, service, Body, Method, Request, Response}; // 0.12.20
use tokio; // 0.1.14

fn main() {
    let addr = "127.0.0.1:12346".parse().expect("Unable to parse address");
    let server = Server::bind(&addr).serve(|| service::service_fn(echo));
    println!("Listening on http://{}.", server.local_addr());
    let server = server.map_err(|e| eprintln!("Error: {}", e));
    tokio::run(server);
}

fn echo(req: Request<Body>) -> impl Future<Item = Response<Body>, Error = hyper::Error> {
    let (parts, body) = req.into_parts();
    match (parts.method, parts.uri.path()) {
        (Method::POST, "/") => {
            let entire_body = body.concat2();
            let resp = entire_body.map(|body| {
                let body = Body::from(format!("Read {} bytes", body.len()));
                Response::new(body)
            });
            Either::A(resp)
        }
        _ => {
            let body = Body::from("Can only POST to /");
            let resp = future::ok(Response::new(body));
            Either::B(resp)
        }
    }
}
You could also convert the Bytes into a Vec<u8> via entire_body.to_vec() and then convert that to a String.
See also:
How do I convert a Vector of bytes (u8) to a string
hyper 0.11 + Stream::fold
Similar to Iterator::fold, Stream::fold takes an accumulator (called init) and a function that operates on the accumulator and an item from the stream. The result of the function must be another future with the same error type as the original. The total result is itself a future.
fn fold<F, T, Fut>(self, init: T, f: F) -> Fold<Self, F, Fut, T>
where
    F: FnMut(T, Self::Item) -> Fut,
    Fut: IntoFuture<Item = T>,
    Self::Error: From<Fut::Error>,
    Self: Sized,
We can use a Vec<u8> as the accumulator. Body's Stream implementation returns a Chunk. This implements Deref<Target = [u8]>, so we can use that to append each chunk's data to the Vec.
extern crate futures; // 0.1.23
extern crate hyper; // 0.11.27
use futures::{Future, Stream};
use hyper::{
    server::{Http, Request, Response, Service},
    Post,
};

fn main() {
    let addr = "127.0.0.1:12346".parse().unwrap();
    let server = Http::new().bind(&addr, || Ok(Echo)).unwrap();
    println!(
        "Listening on http://{} with 1 thread.",
        server.local_addr().unwrap()
    );
    server.run().unwrap();
}

struct Echo;

impl Service for Echo {
    type Request = Request;
    type Response = Response;
    type Error = hyper::Error;
    type Future = Box<futures::Future<Item = Response, Error = Self::Error>>;

    fn call(&self, req: Self::Request) -> Self::Future {
        match (req.method(), req.path()) {
            (&Post, "/") => {
                let f = req.body()
                    .fold(Vec::new(), |mut acc, chunk| {
                        acc.extend_from_slice(&*chunk);
                        futures::future::ok::<_, Self::Error>(acc)
                    })
                    .map(|body| Response::new().with_body(format!("Read {} bytes", body.len())));
                Box::new(f)
            }
            _ => panic!("Nope"),
        }
    }
}
You could also convert the Vec<u8> body to a String.
See also:
How do I convert a Vector of bytes (u8) to a string
Output
When called from the command line, we can see the result:
$ curl -X POST --data hello http://127.0.0.1:12346/
Read 5 bytes
Warning
All of these solutions allow a malicious end user to POST an infinitely sized file, which would cause the machine to run out of memory. Depending on the intended use, you may wish to establish some kind of cap on the number of bytes read, potentially writing to the filesystem at some breakpoint.
See also:
How do I apply a limit to the number of bytes read by futures::Stream::concat2?
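A hedged sketch of such a cap using the futures 0.3 combinators from above (the ReadError type and the limit are illustrative, not part of hyper):
use futures::TryStreamExt; // 0.3
use hyper::Body;

// Illustrative error type: either the body was too big, or hyper failed.
#[derive(Debug)]
enum ReadError {
    TooLarge,
    Hyper(hyper::Error),
}

const MAX_BODY_BYTES: usize = 64 * 1024; // illustrative cap

async fn read_capped(body: Body) -> Result<Vec<u8>, ReadError> {
    body.map_err(ReadError::Hyper)
        .try_fold(Vec::new(), |mut data, chunk| async move {
            // Stop before buffering more than the cap.
            if data.len() + chunk.len() > MAX_BODY_BYTES {
                return Err(ReadError::TooLarge);
            }
            data.extend_from_slice(&chunk);
            Ok(data)
        })
        .await
}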
Most of the answers on this topic are outdated or overly complicated. The solution is pretty simple:
/*
WARNING for beginners!!! This use statement
is important so we can later use the .data() method!!!
*/
use hyper::body::HttpBody;

// Note: data() yields only the *next* chunk of the body, so this assumes
// the whole body arrives in a single chunk (fine for small requests).
let my_vector: Vec<u8> = request.into_body().data().await.unwrap().unwrap().to_vec();
let my_string = String::from_utf8(my_vector).unwrap();
You can also use body::to_bytes as @euclio answered. Both approaches are straightforward! Don't forget to handle the unwraps properly.

Unable to use SQLite3 in Rust via FFI due to a misused prepared statement [duplicate]

This question already has answers here:
Raw pointer turns null passing from Rust to C
(1 answer)
How to stop memory leaks when using `as_ptr()`?
(1 answer)
Closed 5 years ago.
I am porting some code to Rust that reads a database path from stdin, opens the database, and loops for queries. I have done something similar in C, so I am pretty sure the problem is my incomplete understanding of Rust FFI.
I am using the sqlite3 binding provided by libsqlite3-sys, the one from rusqlite. The whole code is here.
open_connection initializes a pointer and passes it to sqlite3_open_v2 and checks if everything went well.
// Line 54 of complete code
fn open_connection(s: String) -> Result<RawConnection, SQLite3Error> {
    unsafe {
        let mut db: *mut sqlite3::sqlite3 = mem::uninitialized();
        let r = sqlite3::sqlite3_open_v2(CString::new(s).unwrap().as_ptr(),
                                         &mut db,
                                         sqlite3::SQLITE_OPEN_CREATE |
                                         sqlite3::SQLITE_OPEN_READWRITE,
                                         ptr::null());
        match r {
            sqlite3::SQLITE_OK => Ok(RawConnection { db: db }),
            _ => return Err(SQLite3Error::OpenError),
        }
    }
}
I create an SQL statement by converting the query from a Rust String to a C string and creating another pointer for the statement itself; then I prepare the statement and check the output:
// Line 35 of complete code
fn create_statement(conn: &RawConnection, query: String) -> Result<Statement, SQLite3Error> {
    let len = query.len();
    let raw_query = CString::new(query).unwrap().as_ptr();
    unsafe {
        let mut stmt: *mut sqlite3::sqlite3_stmt = mem::uninitialized();
        if stmt.is_null() {
            println!("Now it is null!");
        }
        match sqlite3::sqlite3_prepare_v2(conn.db,
                                          raw_query,
                                          len as i32,
                                          &mut stmt,
                                          ptr::null_mut()) {
            sqlite3::SQLITE_OK => Ok(Statement { stmt: stmt }),
            _ => Err(SQLite3Error::StatementError),
        }
    }
}
I try to execute the statement:
// Line 81 of complete code
fn execute_statement(conn: &RawConnection, stmt: Statement) -> Result<Cursor, SQLite3Error> {
    match unsafe { sqlite3::sqlite3_step(stmt.stmt) } {
        sqlite3::SQLITE_OK => Ok(Cursor::OKCursor),
        sqlite3::SQLITE_DONE => Ok(Cursor::DONECursor),
        sqlite3::SQLITE_ROW => {
            let n_columns = unsafe { sqlite3::sqlite3_column_count(stmt.stmt) } as i32;
            let mut types: Vec<EntityType> = Vec::new();
            for i in 0..n_columns {
                types.push(match unsafe { sqlite3::sqlite3_column_type(stmt.stmt, i) } {
                    sqlite3::SQLITE_INTEGER => EntityType::Integer,
                    sqlite3::SQLITE_FLOAT => EntityType::Float,
                    sqlite3::SQLITE_TEXT => EntityType::Text,
                    sqlite3::SQLITE_BLOB => EntityType::Blob,
                    sqlite3::SQLITE_NULL => EntityType::Null,
                    _ => EntityType::Null,
                })
            }
            Ok(Cursor::RowsCursor {
                stmt: stmt,
                num_columns: n_columns,
                types: types,
                previous_status: sqlite3::SQLITE_ROW,
            })
        }
        x => {
            println!("{}", x);
            return Err(SQLite3Error::ExecuteError);
        }
    }
}
This always fails with error code 21 (MISUSE). Usually this error happens when you try to execute a NULL statement, but I have no idea how to track it down here.
Do you see any other problem that may cause a code 21 MISUSE?
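For what it's worth, the linked duplicates point at a likely culprit here: in create_statement, CString::new(query).unwrap().as_ptr() takes a pointer into a temporary CString that is dropped at the end of that statement, so sqlite3_prepare_v2 can receive a dangling (or already reused) buffer. A minimal sketch of the usual fix is to keep the CString alive in a binding for as long as the pointer is used:
// Sketch: bind the CString so it outlives every use of the raw pointer.
let query_c = CString::new(query).unwrap(); // owns the buffer
let raw_query = query_c.as_ptr();           // borrows from query_c
// ... call sqlite3_prepare_v2 with raw_query while query_c is still in scope ...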
