Iterating over a Dictionary in Swift 3

I am trying to iterate along a Dictionary in order to prune unconfirmed entries. The Swift 3 translation of the following Objective-C code does not work:
[[self sharingDictionary] enumerateKeysAndObjectsUsingBlock:^(id key, id obj, BOOL *stop) {
    SharingElement *element = [[self sharingDictionary] objectForKey:key];
    if (!element.confirmed) {
        dispatch_async(dispatch_get_main_queue(), ^{
            [element deleteMe];
        });
        [[self sharingDictionary] performSelector:@selector(removeObjectForKey:)
                                       withObject:key
                                       afterDelay:.2];
    } else {
        element.confirmed = NO;
    }
}];
So I tried using the compact enumerated() method in this way:
for (key, element) in self.sharingDictionary.enumerated() {
    if !element.confirmed {
        element.deleteMe()
        self.perform(#selector(self.removeSharingInArray(key:)), with: key, afterDelay: 0.2)
    } else {
        element.confirmed = false
    }
}
Yet the compiler reports the following error where the variable 'element' is used:
Value of tuple type '(key: Int, value: SharingElement)' has no member
'confirmed'
It is as if 'element' were bound to the whole tuple rather than just the value part.
Is the problem in the use of enumerated() or in how the dictionary is processed, and how can I fix it?

Use element.value.confirmed. element is a tuple that contains both key and value.
But you probably just want to remove enumerated():
for (key, element) in self.sharingDictionary {
    ...
}
enumerated() wraps the iteration and pairs each element with a numeric index starting at zero; that is rarely what you want with dictionaries.
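To illustrate the difference, here is a minimal sketch (the scores dictionary is made up for this example) of what each loop yields:
let scores = ["alice": 3, "bob": 5]

// Plain iteration: each element is a (key, value) tuple.
for (name, score) in scores {
    print(name, score)                     // e.g. "alice 3"
}

// enumerated(): each element is (offset, (key, value)), so the dictionary
// entry has to be unwrapped through .key and .value.
for (offset, entry) in scores.enumerated() {
    print(offset, entry.key, entry.value)  // e.g. "0 alice 3"
}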

This should do the trick,
localDictionary.enumerateKeysAndObjects({ (key, value, stop) -> Void in
})

I ended up implementing it as:
DispatchQueue.global(qos: .background).async {
    for (key, element) in self.sharingDictionary {
        if !element.confirmed {
            DispatchQueue.main.async {
                element.deleteMe()
                self.removeSharingInArray(key: key)
            }
        } else {
            element.confirmed = false
        }
    }
}
This hopefully deletes the objects without mutating the Dictionary while it is being iterated, which is what used to crash the app, although I do not know whether that is still the case.
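If mutating the dictionary during iteration is still a concern, a minimal alternative sketch (assuming sharingDictionary is a Swift dictionary with String keys and SharingElement is a class) is to collect the keys first and remove them only after the loop:
var keysToRemove: [String] = []

for (key, element) in self.sharingDictionary {
    if !element.confirmed {
        element.deleteMe()
        keysToRemove.append(key)
    } else {
        element.confirmed = false
    }
}

// Mutate the dictionary only after the iteration has finished.
for key in keysToRemove {
    self.sharingDictionary.removeValue(forKey: key)
}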

Related

TryGetInt32 throws System.InvalidOperationException

Call me crazy, but I was under the impression that there was a convention of try which meant give it a go, but if you can't then get back to me and let me know that it was a "no-go".
I have recently started a new project and I have decided to use System.Text.Json instead of Newtonsoft since it has been out for a while now and I like to play with new things.
I have the following bit of code in a JsonConverter:
using (var jsonDoc = JsonDocument.ParseValue(ref reader))
{
    if (jsonDoc.RootElement.TryGetInt32(out int number))
    {
    }
}
When it is a number, it works awesomely, but when it is anything but a number, it throws as if I was calling GetInt32().
In the custom converter, I sometimes get a number back, but I also can get an object back which contains the number that I am expecting as well as a string. I thought that I would be able to test for this using the TryGetInt32 method.
I have two questions:
How could I test if I am getting the number back, or getting the number AND the string?, and
What is the difference between TryGetInt32(out int number) and GetInt32()?
TryGetInt32 throws an exception if the value is not a JSON number at all.
It does not throw, and instead returns false, if the value is a JSON number but not of a kind that can be represented as an Int32.
I hope the following additional check helps:
using (var jsonDoc = JsonDocument.ParseValue(ref reader))
{
    if (jsonDoc.RootElement.ValueKind == JsonValueKind.Number &&
        jsonDoc.RootElement.TryGetInt32(out int number))
    {
    }
}
First question:
Use int.TryParse(variable, out result): this returns a bool and, when the string is an integer, stores the parsed value in result.
Example:
string json = "5";
int result;
if (int.TryParse(json, out result))
{
    Console.WriteLine(result);
}
Second question:
TryGetInt32(out Int32): attempts to represent the current JSON number as an Int32 and returns a bool.
GetInt32(): gets the value as an Int32 directly; in this case you must be sure that the value really is an integer.
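As for telling a bare number apart from an object that carries the number and a string, one possible sketch inside the converter's Read method is to branch on ValueKind first (the "count" and "label" property names below are invented for illustration):
using (var jsonDoc = JsonDocument.ParseValue(ref reader))
{
    var root = jsonDoc.RootElement;

    if (root.ValueKind == JsonValueKind.Number && root.TryGetInt32(out int number))
    {
        // The payload was a bare number.
    }
    else if (root.ValueKind == JsonValueKind.Object &&
             root.TryGetProperty("count", out JsonElement countElement) &&
             countElement.TryGetInt32(out int count))
    {
        // The payload was an object: the number plus whatever else it carries,
        // e.g. root.GetProperty("label").GetString().
    }
}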

Is there a better functional way to process a vector with error checking?

I'm learning Rust and would like to know how I can improve the code below.
I have a vector of tuples of the form (u32, String). The u32 values represent line numbers and the Strings are the text on the corresponding lines. As long as all the String values can be successfully parsed as integers, I want to return an Ok(Vec<i32>) containing the parsed values, but if not I want to return an error of some form (just an Err(String) in the example below).
I'm trying to learn to avoid mutability and to use functional styles where appropriate, and this is straightforward to do functionally if that were all that was needed. Here's what I came up with in this case:
fn data_vals(sv: &Vec<(u32, String)>) -> Result<Vec<i32>, String> {
    sv.iter()
        .map(|s| s.1.parse::<i32>()
            .map_err(|_e| "*** Invalid data.".to_string()))
        .collect()
}
However, the small catch is that I want to print an error message for every invalid value (and not just the first one), and the error messages should contain both the line number and the string values in the offending tuple.
I've managed to do it with the following code:
fn data_vals(sv: &Vec<(u32, String)>) -> Result<Vec<i32>, String> {
    sv.iter()
        .map(|s| (s.0, s.1.parse::<i32>()
            .or_else(|e| {
                eprintln!("ERROR: Invalid data value at line {}: '{}'",
                          s.0, s.1);
                Err(e)
            })))
        .collect::<Vec<(u32, Result<i32, _>)>>() // Collect here to avoid short-circuit
        .iter()
        .map(|i| i.1
            .clone()
            .map_err(|_e| "*** Invalid data.".to_string()))
        .collect()
}
This works, but seems rather messy and cumbersome - especially the typed collect() in the middle to avoid short-circuiting so all the errors are printed. The clone() call is also annoying, and I'm not really sure why it's needed - the compiler says I'm moving out of borrowed content otherwise, but I'm not really sure what's being moved. Is there a way it can be done more cleanly? Or should I go back to a more procedural style? When I tried, I ended up with mutable variables and a flag to indicate success and failure, which seems less elegant:
fn data_vals(sv: &Vec<(u32, String)>) -> Result<Vec<i32>, String> {
    let mut datavals = Vec::new();
    let mut success = true;
    for s in sv {
        match s.1.parse::<i32>() {
            Ok(v) => datavals.push(v),
            Err(_e) => {
                eprintln!("ERROR: Invalid data value at line {}: '{}'",
                          s.0, s.1);
                success = false;
            },
        }
    }
    if success {
        return Ok(datavals);
    } else {
        return Err("*** Invalid data.".to_string());
    }
}
Can someone advise me on the best way to do this? Should I stick to the procedural style here, and if so can that be improved? Or is there a cleaner functional way to do it? Or a blend of the two? Any advice appreciated.
I think that's what partition_map() from itertools is for:
use itertools::{Either, Itertools};

fn data_vals<'a>(sv: &[&'a str]) -> Result<Vec<i32>, Vec<(&'a str, std::num::ParseIntError)>> {
    let (successes, failures): (Vec<_>, Vec<_>) =
        sv.iter().partition_map(|s| match s.parse::<i32>() {
            Ok(v) => Either::Left(v),
            Err(e) => Either::Right((*s, e)),
        });
    if failures.len() != 0 {
        Err(failures)
    } else {
        Ok(successes)
    }
}

fn main() {
    let numbers = vec!["42", "aaaezrgggtht", "..4rez41eza", "55"];
    println!("{:#?}", data_vals(&numbers));
}
In a purely functional style, you have to avoid side effects.
Printing errors is a side effect. The preferred style would be to return a value of the form:
Result<Vec<i32>, Vec<String>>
and print the list after the data_vals function returns.
So, essentially, you want your processing to collect a list of integers, and a list of strings:
fn data_vals(sv: &Vec<(u32, String)>) -> Result<Vec<i32>, Vec<String>> {
    let (ok, err): (Vec<_>, Vec<_>) = sv
        .iter()
        .map(|(i, s)| {
            s.parse()
                .map_err(|_e| format!("ERROR: Invalid data value at line {}: '{}'", i, s))
        })
        .partition(|e| e.is_ok());
    if err.len() > 0 {
        Err(err.iter().filter_map(|e| e.clone().err()).collect())
    } else {
        Ok(ok.iter().filter_map(|e| e.clone().ok()).collect())
    }
}

fn main() {
    let input = vec![(1, "0".to_string())];
    let r = data_vals(&input);
    assert_eq!(r, Ok(vec![0]));

    let input = vec![(1, "zzz".to_string())];
    let r = data_vals(&input);
    assert_eq!(r, Err(vec!["ERROR: Invalid data value at line 1: 'zzz'".to_string()]));
}
Playground Link
This uses partition which does not depend on an external crate.
Side effects (eprintln!) in an iterator adapter are definitely not "functional". You should accumulate and return the errors and let the caller deal with them.
I would use fold here. The goal of fold is to reduce a list to a single value, starting from an initial value and augmenting the result with every item. This "single value" can very well be a list. Here, there are two possible lists we might want to return: a list of i32 if all values are valid, or a list of errors if there are any errors (I've chosen to return Strings for errors here, for simplicity).
fn data_vals(sv: &[(u32, String)]) -> Result<Vec<i32>, Vec<String>> {
    sv.iter().fold(
        Ok(Vec::with_capacity(sv.len())),
        |acc, (line_number, data)| {
            let data = data
                .parse::<i32>()
                .map_err(|_| format!("Invalid data value at line {}: '{}'", line_number, data));
            match (acc, data) {
                (Ok(mut acc_data), Ok(this_data)) => {
                    // No errors yet; push the parsed value to the values vector.
                    acc_data.push(this_data);
                    Ok(acc_data)
                }
                (Ok(..), Err(this_error)) => {
                    // First error: replace the accumulator with an `Err` containing the first error.
                    Err(vec![this_error])
                }
                (Err(acc_errors), Ok(..)) => {
                    // There have been errors, but this item is valid; ignore it.
                    Err(acc_errors)
                }
                (Err(mut acc_errors), Err(this_error)) => {
                    // One more error: push it to the error vector.
                    acc_errors.push(this_error);
                    Err(acc_errors)
                }
            }
        },
    )
}

fn main() {
    println!("{:?}", data_vals(&[]));
    println!("{:?}", data_vals(&[(1, "123".into())]));
    println!("{:?}", data_vals(&[(1, "123a".into())]));
    println!("{:?}", data_vals(&[(1, "123".into()), (2, "123a".into())]));
    println!("{:?}", data_vals(&[(1, "123a".into()), (2, "123".into())]));
    println!("{:?}", data_vals(&[(1, "123a".into()), (2, "123b".into())]));
}
The initial value is Ok(Vec::with_capacity(sv.len())) (this is an optimization to avoid reallocating the vector as we push items to it; a simpler version would be Ok(vec![])). If the slice is empty, this will be fold's result; the closure will never be called.
For each item, the closure checks 1) whether there were any errors so far (indicated by the accumulator value being an Err) or not and 2) whether the current item is valid or not. I'm matching on two Result values simultaneously (by combining them in a tuple) to handle all 4 cases. The closure then returns an Ok if there are no errors so far (with all the parsed values so far) or an Err if there are any errors so far (with every invalid value found so far).
You'll notice I used the push method to add an item to a Vec. This is, strictly speaking, mutation, which is not considered "functional", but because we are moving the Vecs here, we know there are no other references to them, so we know we aren't affecting any other use of these Vecs.

Removing an object from a dictionary in Lua

I am using Aerospike for storage and its UDFs in Lua. While executing the UDFs through both NodeJS and Python, I need to delete a key-value pair from the dictionary being passed as a parameter.
Below are the code snippets:
function deleteProduct(rec, prod_id, isodate)
    map.remove(rec, prod_id)
    aerospike:update(rec)
    return 0
end
And the rec structure is:
{
    meta.num_prod: 4
    s.10000006: {
        prod_id: "10000006"
        qty: "4"
    }
}
I do understand that a Python dictionary is not the same as a Lua map, but I am stuck with this. The error message I get is:
/opt/aerospike/usr/udf/lua/update.lua:14: bad argument #1 to 'remove' (Map expected, got userdata)
rec is the Aerospike record; the UDF is invoked in the following manner:
var udf = { module: 'update', funcname: 'deleteProductFromCart', args: [prod_key, isoDate] }
sails.aerospike.execute(cart_key, udf, function(err, result) {
    if (err.code != status.AEROSPIKE_OK) {
        console.log(err)
        defer.resolve(false)
    }
    else {
        defer.resolve(true)
    }
});
According to the provided error message, you should call it this way (with the colon syntax):
map:remove(rec, prod_id)
I am sure you know what is the difference.
Below works just fine!
map[key] = nil

Specman e: How to constrain 'all_different' to list of structs?

I have my_list, which is defined this way:
struct my_struct {
    comparator[2]     : list of int(bits:16);
    something_else[2] : list of uint(bits:16);
};
...
my_list[10] : list of my_struct;
Comparators at the same index (0 or 1) are forbidden from being equal anywhere in the list. When I constrain it this way (e.g. for index 0):
keep my_list.all_different(it.comparator[0]);
I get a compilation error:
*** Error: GEN_NO_GENERATABLE_NOTIF:
Constraint without any generatable element.
...
keep my_list.all_different(it.comparator[0]);
How can I generate them all different? I appreciate any help.
It also works in one go:
keep for each (elem) in my_list {
    elem.comparator[0] not in my_list[0..max(0, index-1)].apply(.comparator[0]);
    elem.comparator[1] not in my_list[0..max(0, index-1)].apply(.comparator[1]);
};
When you reference my_list.comparator it doesn't do what you think it does. What happens is that it concatenates all comparator lists into one big 20-element list. Try it out by removing your constraint and printing it:
extend sys {
    my_list[10] : list of my_struct;

    run() is also {
        print my_list.comparator;
    };
};
What you can do in this case is construct your own list of comparator[0] elements:
extend sys {
    comparators0 : list of int;
    keep comparators0.size() == my_list.size();
    keep for each (comp) in comparators0 {
        comp == my_list.comparator[index * 2];
    };
    keep comparators0.all_different(it);

    // just to make sure that we've sliced the appropriate elements
    run() is also {
        print my_list[0].comparator[0], comparators0[0];
        print my_list[1].comparator[0], comparators0[1];
        print my_list[2].comparator[0], comparators0[2];
    };
};
You can apply an all_different() constraint on this new list. To make sure it's working, adding the following constraint should cause a contradiction:
extend sys {
    // this constraint should cause a contradiction
    keep my_list[0].comparator[0] == my_list[1].comparator[0];
};

D: Strange behaviour from std.container.BinaryHeap with custom function for comparison

I've written the following code for a heap of Node*s, which are found in module node:
import std.exception, std.container;
public import node;

alias NodeArray = Array!(const (Node)*);
alias NodeHeap = BinaryHeap!(NodeArray, cmp_node_ptr);

auto make_heap() {
    return new NodeHeap(NodeArray(cast(const(Node)*)[]));
}

void insert(NodeHeap* heap, in Node* u) {
    enforce(heap && u);
    heap.insert(u);
}

pure bool cmp_node_ptr(in Node* a, in Node* b) {
    enforce(a && b);
    return (a.val > b.val);
}
I then tried running the following unit tests on it, where make_leaf returns a Node* initialized with the argument given:
unittest {
    auto u = make_leaf(10);
    auto heap = make_heap();
    insert(heap, u); // bad things happen here
    assert(heap.front == u);
    auto v = make_leaf(20);
    insert(heap, v);
    assert(heap.front == u); // assures heap property
}
The tests make it to the line I comment-marked, and then throw an enforcement error on the line enforce(a && b) in cmp_node_ptr. I'm totally lost as to why this is happening.
You are doing the wrong thing in this expression:
NodeArray(cast(const(Node)*)[])
You obviously want to create an empty NodeArray, but what you really get is a NodeArray with one null item. The NodeArray constructor takes the list of values for the new array as its arguments, and you are passing one "empty array" (which is essentially null), thus creating a NodeArray with one null element.
The correct way is just:
NodeArray()
i.e.:
auto make_heap() {
    return new NodeHeap();
}
Make this change and everything will be fine.
P.S. It seems that D's notation for multiple arguments of type U (U[] values...) made you think that the constructor accepts another array as an initialiser.
P.P.S. Sorry, I fixed the make_heap() code: I had accidentally forgotten to write "NodeArray()" in it. I then edited it again, as the empty NodeArray() call is not necessary there. Double fault!

Resources