I call JSON-RPC from the terminal and get the data below. The extrinsics are raw bytes, and I'm trying to use polkadot-js (the SCALE codec?) to decode them, but I don't know which method to call.
"block": {
"extrinsics": [
"0x280402000be1da78d37e01","0xd91f..(too long haha)..580"
],
"header": { "digest": { ... }
@polkadot/crypto-util? @polkadot/util? Which module and which method should I use? I want to pass in a string (the raw data) and get a string back (JSON or human-readable data).
Please help.
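The usual route (an assumption on my part, not something confirmed by your output) is `@polkadot/types`: create a `TypeRegistry`, then call `registry.createType('Extrinsic', rawHex)` and `.toHuman()` or `.toJSON()` on the result. As a dependency-free illustration of the SCALE framing those libraries handle for you, here is a sketch that decodes just the compact length prefix of your first extrinsic:

```javascript
// Sketch: decode the SCALE compact-length prefix of an extrinsic hex blob.
// The real decoding should be done with @polkadot/types, roughly:
//   registry.createType('Extrinsic', hex).toHuman()
// (hedged -- check the polkadot-js docs for your version).

function compactLength(hex) {
  const bytes = Buffer.from(hex.replace(/^0x/, ''), 'hex');
  const mode = bytes[0] & 0b11; // low 2 bits select the compact mode
  if (mode === 0) {
    return { length: bytes[0] >> 2, prefixBytes: 1 }; // single-byte mode
  }
  if (mode === 1) {
    return { length: (bytes[0] | (bytes[1] << 8)) >> 2, prefixBytes: 2 }; // two-byte mode
  }
  if (mode === 2) {
    const v = bytes[0] | (bytes[1] << 8) | (bytes[2] << 16) | (bytes[3] << 24);
    return { length: v >>> 2, prefixBytes: 4 }; // four-byte mode
  }
  throw new Error('big-integer compact mode not handled in this sketch');
}

const extrinsic = '0x280402000be1da78d37e01';
const { length, prefixBytes } = compactLength(extrinsic);
console.log(length); // declared payload length in bytes
console.log((extrinsic.length - 2) / 2 - prefixBytes); // actual bytes after the prefix
```

For your first extrinsic the prefix `0x28` declares 10 bytes, which matches the 10 bytes that follow; everything after that is the (unsigned, in this case) call data that `createType('Extrinsic', …)` would break into section/method/args.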
My Watson Conversation bots typically have a node where I load some data into context. This usually contains all possible answers, strings, and various other data.
So one of my first nodes in any bot looks like this:
{
    "type": "standard",
    "title": "Load Messages",
    "output": {
        "text": {
            "values": [
                ""
            ],
            "selection_policy": "sequential"
        }
    },
    "context": {
        // A whole bunch of data here
    }
    ...
Is there a limit on how much data I can put there? Currently I have around 70 kilobytes, but I could potentially put a few megabytes there just for the convenience of running the logic inside Conversation. (Yes, I am aware that all of this data will be sent back to the client, which is not very efficient.)
There is no documented limit. You are more likely to hit network issues before Watson Assistant has any issues.
But storing your whole application's logic in the context object is considered an anti-pattern.
Your context object should only store what is required in Watson Assistant, and then if possible only for the related portion of the conversation.
For one-time context values, you can store them in the output object.
{
    "context": {
    },
    "output": {
        ...
        "one_time_var": "abc"
    }
}
This will be discarded on your next call.
If you have a large volume of data that could be used at different times, then one pattern to use is a context request object.
For example:
"context": {
"request": "name,address,id"
}
Your next response from the application layer would send this:
"context": {
"name" : "Bob",
"address": "123 street",
"id": "1234"
}
Have your returning response update those variables, then clear the context variables again. If you have other context variables that need to stay, group the transient values in their own object and erase just that object.
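A sketch of what that application-layer handling might look like (the store and helper names here are my own invention, not part of any Watson SDK):

```javascript
// Sketch: application layer answering a Watson Assistant "context request".
// `userStore` stands in for whatever datastore you actually query.

const userStore = {
  name: 'Bob',
  address: '123 street',
  id: '1234'
};

function lookupUserData(fields) {
  const result = {};
  for (const field of fields) {
    if (field in userStore) result[field] = userStore[field];
  }
  return result;
}

function handleWatsonResponse(watsonResponse) {
  const context = Object.assign({}, watsonResponse.context);
  if (typeof context.request === 'string') {
    // Fill in the requested values and clear the request marker.
    Object.assign(context, lookupUserData(context.request.split(',')));
    delete context.request;
  }
  return context; // send this back on the next message() call
}

const next = handleWatsonResponse({ context: { request: 'name,address,id' } });
console.log(next); // { name: 'Bob', address: '123 street', id: '1234' }
```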
I have a Spring Cloud Contract DSL that looks like this:
package contracts.someconsumer.messaging
import org.springframework.cloud.contract.spec.Contract
Contract.make {
    label 'my_label'
    // input to the contract
    input {
        // the contract will be triggered by a method
        triggeredBy('someMethodThatSendsMessage()')
    }
    // output message of the contract
    outputMessage {
        // destination to which the output message will be sent
        sentTo 'Consumer.contractTest.VirtualTopic.some_destination'
        // the body of the output message
        body([
            id: value(consumer('11111111-2222-3333-4444-555555555555'), producer(regex(uuid()))),
            correlationId: value(producer(regex(uuid()))),
            service: 'MY_SERVICE',
            payload: [
                email: 'test@example.com'
            ]
        ])
    }
}
Without the "payload" part everything works great. With the payload, I encounter this exception:
com.jayway.jsonpath.InvalidPathException: Filter: [?] can not be applied to primitives. Current context is: {"email":"test@example.com","legalName":"ACME Inc"}
at com.jayway.jsonpath.internal.path.PredicatePathToken.evaluate(PredicatePathToken.java:66) ~[json-path-2.2.0.jar:2.2.0]
at com.jayway.jsonpath.internal.path.PathToken.handleObjectProperty(PathToken.java:81) ~[json-path-2.2.0.jar:2.2.0]
at com.jayway.jsonpath.internal.path.PropertyPathToken.evaluate(PropertyPathToken.java:79) ~[json-path-2.2.0.jar:2.2.0]
at com.jayway.jsonpath.internal.path.RootPathToken.evaluate(RootPathToken.java:62) ~[json-path-2.2.0.jar:2.2.0]
The relevant line from the generated test:
assertThatJson(parsedJson).field("['payload']").field("['email']").isEqualTo("test@example.com");
Just a little more info, this is what the serialized message looks like:
2017-09-21 08:32:03.721 INFO 10716 --- [ main] c.v.sccdemo.producer.InviteServiceImpl : Event: {"id":"e63de44e-6e1a-4c4e-b98b-3c49a49efc9c","destination":"VirtualTopic.some_destination","correlationId":"8efb9740-5651-4068-8a6e-574ae7759552","service":"MY_SERVICE","payload":"{\"email\":\"test@example.com\",\"legalName\":\"ACME Inc\"}","timestamp":1505997123576,"version":"v1"}
Am I doing something wrong in the DSL? Is the 'payload' part of the body expressed correctly?
The payload looks wrong: notice that in the serialized message, payload is a String value (a blob of escaped JSON) instead of a Map, which is why json-path trips over a primitive. Change the producer so that payload is serialized as a nested object and things should work again!
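To make the distinction concrete (in plain JavaScript rather than your Java producer, purely for illustration): a payload serialized as a String sits one JSON level deeper than a payload serialized as a Map, which is why the generated `field("['payload']").field("['email']")` assertion lands on a primitive:

```javascript
// What your producer actually emits: payload is a STRING containing JSON.
const asString = JSON.parse(
  '{"service":"MY_SERVICE","payload":"{\\"email\\":\\"test@example.com\\"}"}'
);
console.log(typeof asString.payload); // 'string' -- json-path sees a primitive

// What the contract expects: payload is a nested object (a Map).
const asMap = JSON.parse(
  '{"service":"MY_SERVICE","payload":{"email":"test@example.com"}}'
);
console.log(typeof asMap.payload); // 'object'
console.log(asMap.payload.email);  // 'test@example.com'
```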
Let's say I have an object in the test bucket in my Riak installation with the following structure:
{
    "animals": {
        "dog": "woof",
        "cat": "miaow",
        "cow": "moo"
    }
}
When performing a search request for this object, the structure of the search results is as follows:
{
    "responseHeader": {
        "status": 0,
        "QTime": 3,
        "params": {
            "q": "animals_cow:moo",
            "q.op": "or",
            "filter": "",
            "wt": "json"
        }
    },
    "response": {
        "numFound": 1,
        "start": 0,
        "maxScore": "0.353553",
        "docs": [
            {
                "id": "test",
                "index": "test",
                "fields": {
                    "animals_cat": "miaow",
                    "animals_cow": "moo",
                    "animals_dog": "woof"
                },
                "props": {}
            }
        ]
    }
}
As you can see, the way the object is stored, the cat, cow and dog keys are nested within animals. However, when the search results come back, none of the keys are nested, and are simply separated by _.
My question is this: Is there any way provided by Riak to "reverse format" the search, and return the fields of the object in the correct (nested) format? This becomes a problem when storing and returning user data that might possibly contain _.
I do see that the latest version of Riak (beta release) provides a search schema, but I can't seem to see whether my question would be answered by this.
What you receive back in the search result is what the object looked like after passing through the json analyzer. If you need the data formatted differently, you can use a custom analyzer. However, this will only affect newly put data.
For existing data, you can use the id field and issue a get request for the original object, or use the solr query as input to a MapReduce job.
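If a client-side workaround is acceptable, you can also rebuild the nesting yourself from the flattened field names. Note that this sketch splits on the first `_` only, so it has exactly the ambiguity raised in the question when keys themselves contain underscores; treat it as a workaround, not a general solution:

```javascript
// Client-side sketch: rebuild one level of nesting from flattened
// Riak search fields. Only safe when top-level keys contain no '_'.
function unflatten(fields) {
  const nested = {};
  for (const [key, value] of Object.entries(fields)) {
    const i = key.indexOf('_');
    if (i === -1) {
      nested[key] = value;
    } else {
      const outer = key.slice(0, i);
      const inner = key.slice(i + 1);
      if (!nested[outer]) nested[outer] = {};
      nested[outer][inner] = value;
    }
  }
  return nested;
}

const doc = unflatten({ animals_cat: 'miaow', animals_cow: 'moo', animals_dog: 'woof' });
console.log(doc); // { animals: { cat: 'miaow', cow: 'moo', dog: 'woof' } }
```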
I'm using a cheap SIM900 GPRS shield with an Arduino and, hopefully, Xively. I'm able to connect to Xively over TCP, but when I send in the data I get this response: "status":400,"body":"Syntax Error: parse error".
I'm using the sample from http://www.seeedstudio.com/wiki/GPRS_Shield_V1.0#SoftwareSerial_library_Notes and I've tried some modifications without any luck. I can't find documentation on what this string should look like.
My serial string looks like this:
{"method": "put","resource": "/feeds/feednumber-removed/","params": {},"headers": {"X-PachubeApiKey":"device key removed"},"body": {"version": "1.0.0","datastreams": {"id": "Sensor1","current_value": "1031"}]},"token": "123"}
Can someone please help me on this subject?
Looks like your JSON is missing some brackets. You also have some arguments that you don't need. Try something like this instead:
{
    "method": "put",
    "resource": "/feeds/FEED_ID_HERE",
    "params": {},
    "headers": { "X-ApiKey": "API_KEY_HERE" },
    "body": {
        "version": "1.0.0",
        "datastreams": [
            {
                "id": "Sensor1",
                "current_value": "1031"
            }
        ]
    }
}
I have tried to make the bracketing as verbose and aligned as possible so you can see where the brackets need to be in order to conform to the Xively JSON format, and just correct JSON in general. I also updated the header name which has changed since the Pachube days.
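A quick way to catch this class of error before sending anything over GPRS is to run the string through any JSON parser on your desk first. Your original string fails because `datastreams` opens an object `{` but closes an array `]` (a minimal reproduction, not your full payload):

```javascript
// The shape of the bug: object opened with '{' but closed with '}]'.
const broken = '{"datastreams": {"id": "Sensor1","current_value": "1031"}]}';
let ok = true;
try { JSON.parse(broken); } catch (e) { ok = false; }
console.log(ok); // false -- mismatched bracket

// With datastreams as an array of objects, it parses cleanly.
const fixed = '{"datastreams": [{"id": "Sensor1","current_value": "1031"}]}';
console.log(JSON.parse(fixed).datastreams[0].current_value); // '1031'
```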
I'm building a dynamic ExtJS form based on JSON data loaded from an ASP.NET web service. The problem I find is that ExtJS expects the JSON in a specific format, ie.
{ "metaData": { "title": "Testing" }, "data": [], "success": true }
When using an ASP.NET web service to return an object as JSON it returns with the first element "d", ie.
{ "d": { "metaData": { "title": "Testing" }, "data": [], "success": true } }
Is it possible to tell the ExtJS form to use "d" as the root node?
After some more testing I've found that the ExtJS form does load the JSON from my web service, but because the response text doesn't have "success": true at the root, it is handled by the 'failure' handler. Fortunately this handler accepts the same parameters as the 'success' handler, so it can be manipulated the same way.
Here's an example of my form load handler:
this.form.load({
    url: "myservice.asmx/getUser",
    headers: { 'Content-Type': 'application/json' },
    success: function(form, action) {
        // not fired
    },
    failure: function(form, action) {
        if (action.result.d) {
            // process data
        }
    }
});
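A small helper can normalize the ASP.NET wrapper before you hand the data to ExtJS (plain JavaScript, no ExtJS APIs involved; the function name is my own):

```javascript
// Unwrap ASP.NET's "d" envelope if present; pass other responses through.
function unwrapD(response) {
  return response && Object.prototype.hasOwnProperty.call(response, 'd')
    ? response.d
    : response;
}

const wrapped = { d: { metaData: { title: 'Testing' }, data: [], success: true } };
console.log(unwrapD(wrapped).success);          // true
console.log(unwrapD({ success: false }).success); // false -- untouched
```

Calling this at the top of the `failure` handler (on `action.response.responseText` after a `JSON.parse`, or on whatever object your transport gives you) lets the rest of your code ignore whether the 'd' wrapper was present.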
You don't have to call form.load() of course. I bypass it and simply call my ASMX web method directly, via the AJAX function that the ScriptManager generates for the web method. MS AJAX does all the JSON decoding, factors out the 'd' property, etc.
Your web method doesn't even have to return an object with the 'success' and 'data' objects, as required by form.load(), although it's a useful format and I stick to it.
With the 'data' object returned by the web method (with name/value pairs, where name == field name) you can now call ExtJs's form.setValues(data); to write the values into the fields.
This is a perfectly valid case for bypassing ExtJS's code.
--
As with loading, so with submitting. To get around the 'd' property problem in the object returned by the submission web method, handle the click event of the Submit button and push the data to the server by calling your web method directly. Your web method can/should return an object in the same format ExtJS requires. When you get that object back, if the submission was not successful, call form.markInvalid() yourself, passing in the 'errors' property. Easy peasy, and it works well.
Again, since ExtJs doesn't play nice with the 'd' property it's perfectly valid to bypass it and do things yourself.
--
I tend to use the ScriptManager-provided functions for calling my web methods more and more, bypassing ExtJS's AJAX invocation code. The former are much simpler to use, know about the 'd' property, and also know how to deserialize Microsoft's JSON format for serialized DateTime objects.