Failure in executeAsync with SQLite

I've created this code to send some insert statements to a SQLite DB:

    new_db.executeAsync(stmts, stmts.length, {
      handleResult: function(aResultSet) {
        Firebug.Console.log("insert_data -> handleResult (" + aResultSet + ")");
      },
      handleError: function(aError) {
        Firebug.Console.log("insert_data -> handleError (" + aError.result + "," + aError.message + ")");
      },
      handleCompletion: function(aReason) {
        Firebug.Console.log("insert_data -> completed");
        Firebug.Console.log(aReason);
      }
    });
In the output, I find:

    insert_data -> completed
    65535

I cannot figure out what this 65535 means. It shouldn't be an error (otherwise I would have seen insert_data -> handleError), but why isn't the returned value zero? Is there a way to obtain a description of the code?
In any case, no value is being inserted by my statements, so it should be a failure code.
Thanks,
Livio

I've found the explanation: the stmts array was empty (stmts.length = 0). That's the meaning of this strange return code. In my opinion, that should be documented somewhere.
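A simple guard avoids the surprise. Here is a minimal sketch, assuming a mozIStorage-style db object; the helper name executeStatements is hypothetical, and the 65535 reason code is simply what was observed above, not a documented constant:

```javascript
// Skip the call entirely when the statement array is empty: executeAsync
// then completes immediately with an undocumented reason code (65535
// observed here) instead of the usual REASON_FINISHED.
function executeStatements(db, stmts, callbacks) {
  if (!stmts || stmts.length === 0) {
    console.log("insert_data -> nothing to execute");
    return false; // nothing was dispatched to the database
  }
  db.executeAsync(stmts, stmts.length, callbacks);
  return true;
}
```

The boolean return value lets the caller distinguish "nothing to do" from "statements dispatched".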


Changing an Elmish.WPF model from inside an async function

What is the accepted way in Elmish.WPF to have a Binding.cmd invoke an async computation expression (CE), and to let the delayed result of that async CE update the shared top-level model?
I want to do this without causing the UI thread to hang or starve (though having it somehow show busy is fine).
I tried making a part of the Model mutable and mutating just that part of the record inside the async CE. This failed, probably because the async CE is operating on its own copy of the Model.
Is there a way to pass a message with the delayed new value up to the top-level update function and update the global shared Model?
The functioning test code is in a repo here; make `FandCo_SingleCounter` the Startup Project to run and test in VS2022.
The important bits of my code are:
MainWindow.XAML fragment:

    ...begin of <Window>...
    <StackPanel Orientation="Horizontal" HorizontalAlignment="Center" VerticalAlignment="Top" Margin="0,10,0,0">
      <TextBlock Text="{Binding DISPLAY_state}" Width="110" Margin="0,5,10,5" />
      <Button Command="{Binding CMD_get_state}" Content="DO IT!" Width="50" Margin="0,5,10,5" />
    </StackPanel>
    ...to end of </Window>...
Program.fs fragment(s) (field and message names normalized to DISPLAY_state / CMD_get_state, matching the XAML bindings):

    type Model =
        { mutable DISPLAY_state: string
        }

    type Msg =
        | CMD_get_state

    let init =
        { DISPLAY_state = "foo"
        }

    let ASYNC_get_state (m: Model) : Model =
        async {
            Console.WriteLine( "async 0: "
                + System.DateTime.Now.Millisecond.ToString()
                + " - "
                + m.DISPLAY_state)
            m.DISPLAY_state <- "bar"
            Console.WriteLine( "async +: "
                + System.DateTime.Now.Millisecond.ToString()
                + " - "
                + m.DISPLAY_state)
            Threading.Thread.Sleep(5000)
            Console.WriteLine( "async ++: "
                + System.DateTime.Now.Millisecond.ToString()
                + " - "
                + m.DISPLAY_state)
        }
        |> Async.StartImmediate
        Console.WriteLine( "m.DISPLAY_state 0: "
            + System.DateTime.Now.Millisecond.ToString()
            + " - "
            + m.DISPLAY_state)
        Console.WriteLine( "m.DISPLAY_state 1: "
            + System.DateTime.Now.Millisecond.ToString()
            + " - "
            + m.DISPLAY_state)
        { m with DISPLAY_state = "baz" }

    let update msg m =
        match msg with
        | CMD_get_state -> ASYNC_get_state m

    let bindings () : Binding<Model, Msg> list = [
        "DISPLAY_state" |> Binding.oneWay (fun m -> m.DISPLAY_state)
        "CMD_get_state" |> Binding.cmd CMD_get_state
    ]

    ...etc...to end of Elmish.WPF Core F# file.
The result of this is:

    async 0: 275 - foo
    async +: 591 - bar
    async ++: 160 - True
    m.DISPLAY_state 0: 164 - True
    m.DISPLAY_state 1: 167 - True
    [13:26:18 VRB] New message: CMD_get_state
    Updated state:
    { DISPLAY_state = "baz" }
    [13:26:18 VRB] [main] PropertyChanged DISPLAY_state
    [13:26:18 VRB] [main] TryGetMember DISPLAY_state
At the end, the DISPLAY_state binding in MainWindow.XAML updates to the value baz, not the desired value bar.
How is this supposed to be done?
I am the maintainer of Elmish.WPF.
Is there a way to pass a message with the delayed new value up to the top-level update function and update the global shared Model?
This is not exactly a question specific to Elmish.WPF: all derivatives of Elmish (and in fact all MVU architectures) have to provide a way to appropriately execute async code that returns a message.
There are two ways to do this with Elmish (and thus Elmish.WPF):
1. A subscription. See the SubModel sample, and especially this line.
2. An (Elmish) command. See the FileDialogs sample, and especially these lines.
The Elmish.WPF binding named cmd has nothing to do with this. This binding is named after the WPF interface ICommand.
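The underlying idea in both cases is that async work reports its result back as a message, which then flows through update like any other message. A generic sketch of that message flow in plain JavaScript (not the Elmish.WPF API; the names update, run, GetState, and GotState are hypothetical, and the command here dispatches synchronously where a real one would be async):

```javascript
// update returns [newModel, command]; a command is a function that may
// later dispatch a new message carrying the delayed result.
function update(msg, model) {
  switch (msg.type) {
    case "GetState":
      // Model unchanged; the command delivers the result as a message.
      return [model, (dispatch) => dispatch({ type: "GotState", value: "bar" })];
    case "GotState":
      // The delayed value reaches the top-level model through update.
      return [{ ...model, DISPLAY_state: msg.value }, null];
  }
}

// A minimal message loop: process messages until the queue drains.
function run(initModel, initMsg) {
  let model = initModel;
  const queue = [initMsg];
  while (queue.length > 0) {
    const [next, cmd] = update(queue.shift(), model);
    model = next;
    if (cmd) cmd((msg) => queue.push(msg)); // command dispatches back in
  }
  return model;
}
```

The key point is that the command never mutates the model directly; it only dispatches a message, and only update produces new model values.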

Extracting sql data in ionic 5 without getting a ZoneAwarePromise

Using the Ionic 5 framework, I am trying to extract data from my SQLite database using a function:

    getGlobal(name, db) {
      console.log('... getting global');
      var result;
      const sqlQuery = 'SELECT `value` FROM globals WHERE `name` = "' + name + '"';
      console.log('sql query: ' + sqlQuery);
      result = db.executeSql(sqlQuery, []).then(value => {
        return JSON.parse(value.rows.item(0).value);
      });
      console.log('return: ', result);
      return result;
    }
I have tried chaining further .then(data => {}) calls to extract the correct result, but without success. It always produces a ZoneAwarePromise.
Not sure where I am going wrong.

The problem is not with the function itself. Even converting it to an async function with await will still return a (fulfilled) promise.
At the receiving end, e.g.

    getGlobal('variable', db)

extract the value by adding

    .then(value => console.log(value));
This is me learning more about promises. Maybe this answer is helpful to others.
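A self-contained sketch of both sides follows. The parameterized query and the names showGlobal and fakeDb are my additions, not part of the original code; the db object is assumed to expose a promise-returning executeSql as in the SQLite plugin above:

```javascript
// getGlobal returns a Promise; the resolved value must be consumed
// with await (or .then), never read synchronously.
function getGlobal(name, db) {
  // Parameterized query (safer than string concatenation).
  const sqlQuery = 'SELECT `value` FROM globals WHERE `name` = ?';
  return db
    .executeSql(sqlQuery, [name])
    .then((res) => JSON.parse(res.rows.item(0).value));
}

// At the receiving end, await the promise instead of logging it directly.
async function showGlobal(name, db) {
  const value = await getGlobal(name, db);
  console.log(value); // the actual value, not a ZoneAwarePromise
  return value;
}
```

Logging result inside getGlobal always shows a pending promise, because executeSql has not resolved yet at that point; only the awaiting caller sees the value.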

How to use setSortOrderProvider in Grid Vaadin 8?

I'm trying to use the Grid component. I need to define the sort order of a column. I'm using this project:
https://github.com/vaadin/tutorial/tree/v8-step4
And I add this code:
    Column name = grid.addColumn(customer -> customer.getFirstName() + " " + customer.getLastName())
        .setCaption("Name")
        .setSortOrderProvider(direction -> Stream.of(
            new QuerySortOrder("lastName", direction)
        ));
    grid.setSortOrder(GridSortOrder.asc(name));
But I'm not getting the expected results: I'm getting rows ordered by firstName and then by lastName, but I need the results ordered by lastName.
Have you had the same problem? How have you solved it?
Thank you.
I dug into the code and found out that you need to call setComparator instead of setSortOrderProvider. The former is intended for in-memory data providers. Unfortunately, it's a little bit confusing and not really well documented.
I use this implementation of setComparator and it's working. : )
    Column name = grid.addColumn(customer -> customer.getFirstName() + " " + customer.getLastName())
        .setCaption("Name")
        .setComparator(new SerializableComparator<Customer>() {
            @Override
            public int compare(Customer arg0, Customer arg1) {
                return arg0.getLastName().compareTo(arg1.getLastName());
            }
        });
With a lambda:

    .setComparator((customer0, customer1) -> {
        return customer0.getLastName().compareTo(customer1.getLastName());
    });
and this other option:

    Column name = grid.addColumn(customer -> customer.getFirstName() + " " + customer.getLastName())
        .setCaption("Name")
        .setComparator(grid.getColumn("lastName").getComparator(SortDirection.ASCENDING));

Display result of a query that calculates the total of a field as the value of a label in a list

I have two Datasource tables, Projects and tt_records, with an hours number field. There is a one-to-many relation between Projects and tt_records. I would like to display the total number of hours per project in a table. I am able to compute the total hours in a server-side function; how do I bind the total to a label on the UI? I am attempting to use the following in the binding on the field. I see the function is called (through info statements in the console logs), but the value does not display on the UI:

    clientgetProjHours(#datasource.item._key);

Following is the client script:
    function clientgetProjHours(key) {
      return (google.script.run.withSuccessHandler(function (key) {
        console.info("received result");
      }).getProjHours(key));
    }
Following is the server-side script:

    function getProjHours(key) {
      console.log("In the function getProjHours (" + key + ")");
      var pRecords = app.models.Projects.getRecord(key);
      console.log("Contents of " + pRecords);
      var tRecords = pRecords.tt_record;
      console.log("Contents of t Records" + tRecords);
      var total = 0;
      tRecords.forEach(function (item) {
        total += item.actuals;
      });
      console.log("The result is: " + total);
      return total;
    }
Could you please suggest the best way to achieve this functionality?
Thank you very much for your help.
The key parameter in function (key) { is the result of the server script.
So you just need to replace:

    function (key) {

with:

    function (result) {

Also replace:

    console.info("received result");

with:

    app.pages.[PageName].descendants.[LabelName].text = result;

But as mentioned already, a Calculated Model should fit such a use case better.
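Putting both replacements together, the corrected client script would look like the sketch below. ProjectList and TotalHoursLabel are hypothetical page and widget names; substitute your own:

```javascript
// The success handler receives the server function's return value,
// not the original key, so name the parameter accordingly.
function clientgetProjHours(key) {
  google.script.run
    .withSuccessHandler(function (result) {
      // Bind the computed total to the label (hypothetical names).
      app.pages.ProjectList.descendants.TotalHoursLabel.text = result;
    })
    .getProjHours(key);
}
```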

Failing to write offset data to zookeeper in kafka-storm

I was setting up a Storm cluster to calculate real-time trending and other statistics, but I have some problems introducing the "recovery" feature into this project: allowing the offset last read by the kafka-spout (the source code for kafka-spout comes from https://github.com/apache/incubator-storm/tree/master/external/storm-kafka) to be remembered. I start my kafka-spout this way:
    BrokerHosts zkHost = new ZkHosts("localhost:2181");
    SpoutConfig kafkaConfig = new SpoutConfig(zkHost, "test", "", "test");
    kafkaConfig.forceFromStart = false;
    KafkaSpout kafkaSpout = new KafkaSpout(kafkaConfig);
    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("test" + "spout", kafkaSpout, ESConfig.spoutParallelism);
The default settings should be doing this, but it is not working in my case. Every time I start my project, the PartitionManager tries to look for the file with the offsets, and nothing is found:

    2014-06-25 11:57:08 INFO PartitionManager:73 - Read partition information from: /storm/partition_1 --> null
    2014-06-25 11:57:08 INFO PartitionManager:86 - No partition information found, using configuration to determine offset

Then it starts reading from the latest possible offset, which is okay if my project never fails, but not exactly what I wanted.
I also looked a bit more into the PartitionManager class, which uses the ZkState class to write the offsets. From these code snippets:
PartitionManager:
    public void commit() {
        long lastCompletedOffset = lastCompletedOffset();
        if (_committedTo != lastCompletedOffset) {
            LOG.debug("Writing last completed offset (" + lastCompletedOffset + ") to ZK for " + _partition + " for topology: " + _topologyInstanceId);
            Map<Object, Object> data = (Map<Object, Object>) ImmutableMap.builder()
                    .put("topology", ImmutableMap.of("id", _topologyInstanceId,
                            "name", _stormConf.get(Config.TOPOLOGY_NAME)))
                    .put("offset", lastCompletedOffset)
                    .put("partition", _partition.partition)
                    .put("broker", ImmutableMap.of("host", _partition.host.host,
                            "port", _partition.host.port))
                    .put("topic", _spoutConfig.topic).build();
            _state.writeJSON(committedPath(), data);
            _committedTo = lastCompletedOffset;
            LOG.debug("Wrote last completed offset (" + lastCompletedOffset + ") to ZK for " + _partition + " for topology: " + _topologyInstanceId);
        } else {
            LOG.debug("No new offset for " + _partition + " for topology: " + _topologyInstanceId);
        }
    }
ZkState:

    public void writeBytes(String path, byte[] bytes) {
        try {
            if (_curator.checkExists().forPath(path) == null) {
                _curator.create()
                        .creatingParentsIfNeeded()
                        .withMode(CreateMode.PERSISTENT)
                        .forPath(path, bytes);
            } else {
                _curator.setData().forPath(path, bytes);
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
I could see that for the first message the writeBytes method gets into the if block and tries to create a path, and for the second message it goes into the else block, which seems to be okay. But when I start the project again, the same message as mentioned above shows up: no partition information can be found.
I had the same problem. It turned out I was running in local mode, which uses an in-memory ZooKeeper, not the ZooKeeper that Kafka is using.
To make sure that KafkaSpout doesn't use Storm's ZooKeeper for the ZkState that stores the offset, you need to set SpoutConfig.zkServers, SpoutConfig.zkPort, and SpoutConfig.zkRoot in addition to the ZkHosts. For example:
    import org.apache.zookeeper.client.ConnectStringParser;
    import storm.kafka.SpoutConfig;
    import storm.kafka.ZkHosts;
    import storm.kafka.KeyValueSchemeAsMultiScheme;

    ...

    final ConnectStringParser connectStringParser = new ConnectStringParser(zkConnectStr);
    final List<InetSocketAddress> serverInetAddresses = connectStringParser.getServerAddresses();
    final List<String> serverAddresses = new ArrayList<>(serverInetAddresses.size());
    final Integer zkPort = serverInetAddresses.get(0).getPort();
    for (InetSocketAddress serverInetAddress : serverInetAddresses) {
        serverAddresses.add(serverInetAddress.getHostName());
    }

    final ZkHosts zkHosts = new ZkHosts(zkConnectStr);
    zkHosts.brokerZkPath = kafkaZnode + zkHosts.brokerZkPath;

    final SpoutConfig spoutConfig = new SpoutConfig(zkHosts, inputTopic, kafkaZnode, kafkaConsumerGroup);
    spoutConfig.scheme = new KeyValueSchemeAsMultiScheme(inputKafkaKeyValueScheme);
    spoutConfig.zkServers = serverAddresses;
    spoutConfig.zkPort = zkPort;
    spoutConfig.zkRoot = kafkaZnode;
I think you are hitting this bug:
https://community.hortonworks.com/questions/66524/closedchannelexception-kafka-spout-cannot-read-kaf.html
The comment from the colleague above fixed my issue. I also added some newer libraries.
