Unable to use SQLite3 in Rust via FFI due to a misused prepared statement [duplicate]

This question already has answers here:
Raw pointer turns null passing from Rust to C
(1 answer)
How to stop memory leaks when using `as_ptr()`?
(1 answer)
Closed 5 years ago.
I am porting some code to Rust that reads a database path from stdin, opens the database, and then loops reading queries. I have done something similar in C, so I am pretty sure the problem is my lack of understanding of Rust FFI.
I am using the sqlite3 binding provided by libsqlite3-sys, the one from rusqlite. The whole code is here.
open_connection initializes a pointer, passes it to sqlite3_open_v2, and checks that everything went well:
// Line 54 of complete code
fn open_connection(s: String) -> Result<RawConnection, SQLite3Error> {
    unsafe {
        let mut db: *mut sqlite3::sqlite3 = mem::uninitialized();
        let r = sqlite3::sqlite3_open_v2(CString::new(s).unwrap().as_ptr(),
                                         &mut db,
                                         sqlite3::SQLITE_OPEN_CREATE |
                                         sqlite3::SQLITE_OPEN_READWRITE,
                                         ptr::null());
        match r {
            sqlite3::SQLITE_OK => Ok(RawConnection { db: db }),
            _ => return Err(SQLite3Error::OpenError),
        }
    }
}
To create an SQL statement, I convert the query from a Rust String to a C string, create another pointer for the statement itself, and then prepare the statement and check the result:
// Line 35 of complete code
fn create_statement(conn: &RawConnection, query: String) -> Result<Statement, SQLite3Error> {
    let len = query.len();
    let raw_query = CString::new(query).unwrap().as_ptr();
    unsafe {
        let mut stmt: *mut sqlite3::sqlite3_stmt = mem::uninitialized();
        if stmt.is_null() {
            println!("Now it is null!");
        }
        match sqlite3::sqlite3_prepare_v2(conn.db,
                                          raw_query,
                                          len as i32,
                                          &mut stmt,
                                          ptr::null_mut()) {
            sqlite3::SQLITE_OK => Ok(Statement { stmt: stmt }),
            _ => Err(SQLite3Error::StatementError),
        }
    }
}
Then I try to execute the statement:
// Line 81 of complete code
fn execute_statement(conn: &RawConnection, stmt: Statement) -> Result<Cursor, SQLite3Error> {
    match unsafe { sqlite3::sqlite3_step(stmt.stmt) } {
        sqlite3::SQLITE_OK => Ok(Cursor::OKCursor),
        sqlite3::SQLITE_DONE => Ok(Cursor::DONECursor),
        sqlite3::SQLITE_ROW => {
            let n_columns = unsafe { sqlite3::sqlite3_column_count(stmt.stmt) } as i32;
            let mut types: Vec<EntityType> = Vec::new();
            for i in 0..n_columns {
                types.push(match unsafe { sqlite3::sqlite3_column_type(stmt.stmt, i) } {
                    sqlite3::SQLITE_INTEGER => EntityType::Integer,
                    sqlite3::SQLITE_FLOAT => EntityType::Float,
                    sqlite3::SQLITE_TEXT => EntityType::Text,
                    sqlite3::SQLITE_BLOB => EntityType::Blob,
                    sqlite3::SQLITE_NULL => EntityType::Null,
                    _ => EntityType::Null,
                })
            }
            Ok(Cursor::RowsCursor {
                stmt: stmt,
                num_columns: n_columns,
                types: types,
                previous_status: sqlite3::SQLITE_ROW,
            })
        }
        x => {
            println!("{}", x);
            return Err(SQLite3Error::ExecuteError);
        }
    }
}
This always fails with error code 21, SQLITE_MISUSE. Usually this error happens when you try to execute a NULL statement, but I have no idea how to track it down.
Do you see any other problem that may cause a code 21 MISUSE?
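As the linked duplicates point out, the likely culprit is this line in create_statement:

let raw_query = CString::new(query).unwrap().as_ptr();

The CString here is a temporary that is dropped at the end of the statement, so raw_query dangles and sqlite3_prepare_v2 reads freed memory, producing an invalid statement that later steps report as misuse. A minimal sketch of the fix is to keep the CString alive in its own binding for the duration of the call:

let raw_query = CString::new(query).unwrap(); // keep the CString alive
unsafe {
    let mut stmt: *mut sqlite3::sqlite3_stmt = ptr::null_mut();
    match sqlite3::sqlite3_prepare_v2(conn.db,
                                      raw_query.as_ptr(), // pointer valid: raw_query is still in scope
                                      len as i32,
                                      &mut stmt,
                                      ptr::null_mut()) {
        sqlite3::SQLITE_OK => Ok(Statement { stmt: stmt }),
        _ => Err(SQLite3Error::StatementError),
    }
}

The same as_ptr() pattern in open_connection happens to be safe only because that temporary lives until the end of the full sqlite3_open_v2 call expression.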

Related

How do I call an async function in a match statement under a non-async main function in Rust? [duplicate]

This question already has answers here:
How to use async/await in Rust when you can't make main function async
(4 answers)
How do I synchronously return a value calculated in an asynchronous Future?
(3 answers)
Closed 3 months ago.
I have a program that does various simple things based on user selection.
fn main() {
    let mut user_input = String::new(); // Initialize variable to store user input
    println!("Select an option:\r\n[1] Get sysinfo\r\n[2] Read/Write File\r\n[3] Download file\r\n[4] Exit"); // Print options
    io::stdin().read_line(&mut user_input).expect("You entered something weird you donkey!"); // Get user input and store in variable
    let int_input = user_input.trim().parse::<i32>().unwrap(); // Convert user input to int (i32 means signed integer 32 bits)
    match int_input { // If Else statement
        1 => getsysinfo(), // If int_input == 1, call getsysinfo()
        2 => readwritefile(),
        3 => downloadfile(),
        4 => process::exit(1), // end program
        _ => println!("You didn't choose one of the given options, you donkey!") // input validation
    }
}
My function downloadfile() looks like this, referenced from the Rust cookbook on downloading files.
error_chain! {
    foreign_links {
        Io(std::io::Error);
        HttpRequest(reqwest::Error);
    }
}

async fn downloadfile() -> Result<()> {
    let tmp_dir = Builder::new().prefix("example").tempdir()?;
    let target = "localhost:8000/downloaded.txt";
    let response = reqwest::get(target).await?;
    let mut dest = {
        let fname = response
            .url()
            .path_segments()
            .and_then(|segments| segments.last())
            .and_then(|name| if name.is_empty() { None } else { Some(name) })
            .unwrap_or("tmp.bin");
        println!("File to download: {}", fname);
        let fname = tmp_dir.path().join(fname);
        println!("Will be located under: {:?}", fname);
        File::create(fname)?
    };
    let content = response.text().await?;
    copy(&mut content.as_bytes(), &mut dest)?;
    Ok(())
}
I get the following error:
`match` arms have incompatible types
expected unit type `()`
found opaque type `impl Future<Output = std::result::Result<(), Error>>`
I presume it's because the async function returns a Future type, so how can I make this code work?
You need to use the block_on function.
Add futures as a dependency in your Cargo.toml for the following example to work.
use futures::executor::block_on;

async fn hello() -> String {
    return String::from("Hello world!");
}

fn main() {
    let output = block_on(hello());
    println!("{output}");
}
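Applied to the original program, the match arm would drive the future to completion in place; a minimal sketch (the expect-based error handling is just for illustration):

3 => block_on(downloadfile()).expect("download failed"), // run the future to completion here

One caveat: downloadfile uses reqwest, which expects a Tokio runtime, so if block_on panics about a missing reactor, annotating main with #[tokio::main] and writing downloadfile().await in the arm is the usual alternative.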

Not noticing any performance improvement when using async

I have a small program that executes the aws s3 CLI command, just with different arguments. I'm using std::process::Command; each invocation makes a network call and returns some response. At first I had this synchronous, single-threaded implementation:
fn make_call<'a>(_name: &'a str, _bucket_poll: &mut BucketPoll<'a>) -> Option<BucketDetails<'a>> {
    let invoke_result = invoke_network_call(_name);
    let mut bucket = BucketDetails::new(_name);
    match invoke_result {
        Ok(invoke_str) => {
            bucket.output = invoke_str;
            _bucket_poll.insert_bucket(bucket.clone());
            _bucket_poll.successful_count += 1;
            Some(bucket)
        }
        Err(_) => {
            _bucket_poll.insert_bucket(bucket);
            None
        }
    }
}
// I invoke this function in sequential order, something like:
make_call("name_1", &mut bucket_poll);
make_call("name_2", &mut bucket_poll);
make_call("name_3", &mut bucket_poll);
Because I don't really care in which order these calls execute, I decided to learn Tokio to help with performance. I changed the make_call function to be async:
async fn make_call_race() -> ExecutionResult {
    let bucket_poll = BucketPoll::new();
    let bucket_poll_guard = Arc::new(Mutex::new(bucket_poll));
    loop {
        let bucket_details = tokio::select! {
            Some(bucket_details) = make_call_async("name_1", &bucket_poll_guard) => bucket_details,
            Some(bucket_details) = make_call_async("name_2", &bucket_poll_guard) => bucket_details,
            Some(bucket_details) = make_call_async("name_3", &bucket_poll_guard) => bucket_details,
            Some(bucket_details) = make_call_async("name_4", &bucket_poll_guard) => bucket_details,
            else => { break }
        };
        success_printer(bucket_details);
    }
    // more printing, no more network calls
    ExecutionResult::Success
}
make_call_async is essentially the same as make_call:
async fn make_call_async<'a>(
    _name: &'a str,
    _bucket_poll_guard: &'a Arc<Mutex<BucketPoll<'a>>>,
) -> Option<BucketDetails<'a>> {
    {
        if let Ok(bucket_poll_guard) = _bucket_poll_guard.lock() {
            if bucket_poll_guard.has_polled(_name) {
                return None;
            }
        }
    }
    let invoke_result = invoke_network_call(_name);
    let mut bucket = BucketDetails::new(_name);
    match invoke_result {
        Ok(invoke_str) => {
            bucket.output = invoke_str;
            {
                if let Ok(mut bucket_poll_guard) = _bucket_poll_guard.lock() {
                    bucket_poll_guard.insert_bucket(bucket.clone());
                    bucket_poll_guard.successful_count += 1;
                }
            }
            Some(bucket)
        }
        Err(_) => {
            {
                if let Ok(mut bucket_poll_guard) = _bucket_poll_guard.lock() {
                    bucket_poll_guard.insert_bucket(bucket);
                }
            }
            None
        }
    }
}
When I run the async version, I do see that the network calls are made in a random order, but I don't notice any speedup. I increased the number of network calls to ~50 invocations, but the runtime is nearly the same, if not slightly worse. As I'm new to async programming and to Rust in general, I'd like to understand why my async implementation doesn't seem to offer any improvement.
Extra:
Here is the invoke_network_call method:
fn invoke_network_call(_name: &str) -> core::result::Result<String, AwsCliError> {
    let output = Command::new("aws")
        .arg("s3")
        .arg("ls")
        .arg(_name)
        .output()
        .expect("Could not list s3 objects");
    if !output.status.success() {
        err_printer(format!("Failed to list s3 objects for bucket {}.", _name));
        return Err(AwsCliError);
    }
    let output_str = get_stdout_string_from_output(&output);
    Ok(output_str)
}
EDIT: yorodm's comment makes sense. What I did was use Tokio's Command instead of std::process's Command and made the invoke_network_call async. This reduced my runtime by half. Thank you!
You could rewrite invoke_network_call using an async version of Command.
async fn invoke_network_call(_name: &str) -> core::result::Result<String, AwsCliError> {
    let output = tokio::process::Command::new("aws")
        .arg("s3")
        .arg("ls")
        .arg(_name)
        .output()
        .await
        .expect("Could not list s3 objects");
    if !output.status.success() {
        err_printer(format!("Failed to list s3 objects for bucket {}.", _name));
        return Err(AwsCliError);
    }
    let output_str = get_stdout_string_from_output(&output);
    Ok(output_str)
}
This removes the blocking std::process::Command call. However, I would say that if you're going to access AWS services, you should go with rusoto.
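For what it's worth, this also explains the missing speedup: the blocking Command::output call never yielded to the executor, so the select! branches effectively ran one at a time. With the async version above, independent calls can also be launched truly concurrently; a minimal sketch, assuming the async invoke_network_call:

// Start all three calls at once and wait for all of them to finish.
let (r1, r2, r3) = tokio::join!(
    invoke_network_call("name_1"),
    invoke_network_call("name_2"),
    invoke_network_call("name_3"),
);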

Getting deadlock inside match of async function

I'm getting a deadlock on the following example:
use tokio::net::TcpListener;
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use futures::lock::Mutex;
use std::sync::Arc;

struct A {}

impl A {
    pub async fn do_something(&self) -> std::result::Result<(), ()> {
        Err(())
    }
}

async fn lock_and_use(a: Arc<Mutex<A>>) {
    match a.clone().lock().await.do_something().await {
        Ok(()) => {},
        Err(()) => {
            // try again on error
            println!("trying again");
            a.clone().lock().await.do_something().await.unwrap();
        }
    }
}

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    println!("begin");
    let a = Arc::new(Mutex::new(A {}));
    lock_and_use(a.clone()).await;
    println!("end");
    Ok(())
}
Playground
If I did this:

a.clone().lock().await.do_something().await;
a.clone().lock().await.do_something().await;

there would be no problem: the lock dies on the same line it's created. I thought the same principle would apply to match. If you think about match, it locks the value, calls do_something, awaits it, and then compares the value. It's true that do_something returns a future that captures &self, but once we await it, it should release self. Why does it still hold self? How can I solve this without cloning the result?
Yes: temporaries live for the entire statement, never shorter, so the guard returned by lock() stays alive across every match arm. The rule exists because code like this must keep the lock held while the matched value is used:

{
    match self.cache.read() { // <-- direct pattern matching
        Ok(ref data) => Some(data),
        _ => None,
    }
}.map(|data| {
    // use `data` - the lock better be held
})

Read this issue for more detail.
So you need to take the lock outside your match statement:

let x = a.clone().lock().await.do_something().await;
match x {
    Ok(()) => {}
    Err(()) => {
        a.clone().lock().await.do_something().await.unwrap();
    }
}
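An explicit scope makes the same fix visible; a sketch equivalent to the let binding above:

// The guard from lock() is dropped at the end of the inner block,
// so no lock is held while matching or retrying.
let result = {
    let guard = a.lock().await;
    guard.do_something().await
};
match result {
    Ok(()) => {}
    Err(()) => {
        println!("trying again");
        a.lock().await.do_something().await.unwrap();
    }
}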

How to read subprocess output asynchronously

I want to implement a futures::Stream for reading and parsing the standard output of a child subprocess.
What I'm doing at the moment:
spawn the subprocess and obtain its stdout via std::process methods: let child = Command::new(...).stdout(Stdio::piped()).spawn().expect(...)
add AsyncRead and BufRead to stdout:
let stdout = BufReader::new(tokio_io::io::AllowStdIo::new(
    child.stdout.expect("Failed to open stdout"),
));
declare a wrapper struct for stdout:
struct MyStream<Io: AsyncRead + BufRead> {
    io: Io,
}
implement Stream:
impl<Io: AsyncRead + BufRead> Stream for MyStream<Io> {
    type Item = Message;
    type Error = Error;

    fn poll(&mut self) -> Poll<Option<Message>, Error> {
        let mut line = String::new();
        let n = try_nb!(self.io.read_line(&mut line));
        if n == 0 {
            return Ok(None.into());
        }
        //...read & parse further
    }
}
The problem is that AllowStdIo doesn't make ChildStdout magically asynchronous and the self.io.read_line call still blocks.
I guess I need to pass something different instead of Stdio::pipe() to have it asynchronous, but what? Or is there a different solution for that?
This question is different from What is the best approach to encapsulate blocking I/O in future-rs? because I want to get asynchronous I/O for the specific case of a subprocess and not solve the problem of encapsulation of synchronous I/O.
Update: I'm using tokio = "0.1.3" to leverage its runtime feature and using tokio-process is not an option at the moment (https://github.com/alexcrichton/tokio-process/issues/27)
The tokio-process crate provides a CommandExt trait that allows you to spawn a command asynchronously.
The resulting Child has a getter for ChildStdout, which implements Read and is non-blocking.
Wrapping tokio_process::ChildStdout in AllowStdIo as you did in your example should make it work!
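A rough sketch of that approach, assuming tokio-process 0.2 alongside tokio 0.1 (the spawn_async and stdout accessors are from memory; check the crate docs for exact signatures):

use std::process::{Command, Stdio};
use tokio_process::CommandExt; // brings spawn_async into scope

// "my-prog" is a placeholder for the actual command.
let mut child = Command::new("my-prog")
    .stdout(Stdio::piped())
    .spawn_async()
    .expect("failed to spawn child");

// tokio_process::ChildStdout implements Read without blocking the reactor,
// so it can feed BufReader/AllowStdIo exactly as in the question.
let stdout = child.stdout().take().expect("no stdout handle");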
Here is my version using tokio::process:

use std::process::Stdio;
use tokio::io::{AsyncBufReadExt, BufReader};
use tokio::process::Command;

let mut child = match Command::new(&args.run[0])
    .args(parameters)
    .stdout(Stdio::piped())
    .stderr(Stdio::piped())
    .kill_on_drop(true)
    .spawn()
{
    Ok(c) => c,
    Err(e) => panic!("Unable to start process `{}`. {}", args.run[0], e),
};

let stdout = child.stdout.take().expect("child did not have a handle to stdout");
let stderr = child.stderr.take().expect("child did not have a handle to stderr");
let mut stdout_reader = BufReader::new(stdout).lines();
let mut stderr_reader = BufReader::new(stderr).lines();

loop {
    tokio::select! {
        result = stdout_reader.next_line() => {
            match result {
                Ok(Some(line)) => println!("Stdout: {}", line),
                Err(_) => break,
                _ => (),
            }
        }
        result = stderr_reader.next_line() => {
            match result {
                Ok(Some(line)) => println!("Stderr: {}", line),
                Err(_) => break,
                _ => (),
            }
        }
        result = child.wait() => {
            match result {
                Ok(exit_code) => println!("Child process exited with {}", exit_code),
                _ => (),
            }
            break // child process exited
        }
    }
}

Unable to return a vector of string slices: borrowed value does not live long enough

I'm new to Rust and I'm having some trouble with the borrow checker. I don't understand why this code won't compile. Sorry if this is close to a previously answered question but I can't seem to find a solution in the other questions I've looked at.
I understand the similarity to Return local String as a slice (&str), but in that case only one string is returned, which isn't enough for me to reason about my code, where I'm trying to return a vector. From what I understand, I'm trying to return references to str values that will go out of scope at the end of the function block, so should I map the vector of &str into a vector of String? I'm not concerned about the performance cost of converting &str to String; first I'd just like to get it working.
This is the code, the error is in the lex function.
use std::io::prelude::*;
use std::fs::File;
use std::env;

fn open(mut s: &mut String, filename: &String) {
    let mut f = match File::open(&filename) {
        Err(_) => panic!("Couldn't open file"),
        Ok(file) => file,
    };
    match f.read_to_string(&mut s) {
        Err(_) => panic!("Couldn't read file"),
        Ok(_) => println!("File read successfully"),
    };
}

fn lex(s: &String) -> Vec<&str> {
    let token_string: String = s.replace("(", " ( ")
                                .replace(")", " ) ");
    let token_list: Vec<&str> = token_string.split_whitespace()
                                            .collect();
    token_list
}

fn main() {
    let args: Vec<_> = env::args().collect();
    if args.len() < 2 {
        panic!("Please provide a filename");
    } else {
        let ref filename = args[1];
        let mut s = String::new();
        open(&mut s, filename);
        let token_list: Vec<&str> = lex(&s);
        println!("{:?}", token_list);
    }
}
Here is the error message
error: borrowed value does not live long enough
self.0.borrow().values.get(idx)
^~~~~~~~~~~~~~~
reference must be valid for the anonymous lifetime #1 defined on the block at 23:54...
pub fn value(&self, idx: usize) -> Option<&Value> {
^
note: ...but borrowed value is only valid for the block at 23:54
pub fn value(&self, idx: usize) -> Option<&Value> {
^
I'm finding it hard to reason with this code because with my level of experience with Rust I can't visualise the lifetimes of these variables. Any help would be appreciated as I've spent an hour or two trying to figure this out.
The problem is that you're allocating a new String (token_string) inside the lex function and then returning an array of references to it, but token_string will get dropped (and the memory freed) as soon as it falls out of scope at the end of the function.
fn lex(s: &String) -> Vec<&str> {
    let token_string: String = s.replace("(", " ( ") // <-- new String allocated
                                .replace(")", " ) ");
    let token_list: Vec<&str> = token_string.split_whitespace()
                                            .collect();
    token_list // <-- this is just an array of wide pointers into token_string
} // <-- token_string gets freed here, so the returned pointers
  //     would be pointing to memory that's already been dropped!
There are a couple of ways to address this. One is to force the caller of lex to pass in the buffer you want to collect into. This changes the signature to fn lex<'a>(input: &String, buffer: &'a mut String) -> Vec<&'a str>, which specifies that the lifetimes of the returned &strs will be at least as long as the lifetime of the buffer that's passed in.
Another way is to just return a Vec<String> instead of a Vec<&str>, if you can tolerate the extra allocations.
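A minimal sketch of the buffer-passing approach (using &str for the input, which is slightly more idiomatic than &String):

fn lex<'a>(input: &str, buffer: &'a mut String) -> Vec<&'a str> {
    // The caller owns `buffer`, so slices of it may outlive this function.
    *buffer = input.replace("(", " ( ").replace(")", " ) ");
    buffer.split_whitespace().collect()
}

fn main() {
    let source = String::from("(+ 1 2)");
    let mut buffer = String::new();
    let tokens = lex(&source, &mut buffer);
    println!("{:?}", tokens); // ["(", "+", "1", "2", ")"]
}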
