I have set up an IoT Hub that receives messages from a device. The hub is getting the messages, and I can see the information reaching and being processed in Azure Time Series Insights (TSI).
[Image: Metrics from TSI Azure]
However, when trying to view the data in the TSI environment, I get an error message saying there is no data.
I think the problem might have to do with setting up the model. I have created a hierarchy, types, and an instance.
[Image: model view - instance]
As I understand it, the instance fields are what is needed to reference the set of data. In my case, the JSON message being pushed through the IoT Hub has a field called dvcid, in which "1" is the name of the only device sending values.
Am I doing something wrong?
How can I check the data being stored in TSI, like the rows and columns?
Is there a tutorial or example online where I can see the raw data going in and the model creation based on that data?
Thanks in advance
I also had a similar issue when I first tried using TSI. My problem was that the timestamp I sent was not in a proper format (the formatter produced values like "/Date(1547048015593+0100)/", which is not a typical way of encoding dates). When I specified the 'o' (round-trip) date-to-string format, it worked fine afterwards:
message.Timestamp = DateTime.UtcNow.ToString("o");
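For comparison, here is the same idea as a minimal sketch in TypeScript/JavaScript, in case the device code isn't C# (the dvcid and temperature fields are illustrative, based on the question's description; only the Timestamp format is the point):

    // Build a telemetry message with an ISO 8601 UTC timestamp, which TSI can parse.
    // toISOString() always returns UTC in ISO 8601 form, e.g. "2019-01-09T15:33:35.593Z",
    // the JavaScript equivalent of C#'s DateTime.UtcNow.ToString("o").
    const message = {
      dvcid: "1",                          // device id field referenced by the TSI instance
      temperature: 21.5,                   // hypothetical telemetry value
      Timestamp: new Date().toISOString(), // ISO 8601 UTC timestamp
    };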
Hope this helps
I have a Firestore database with data like this:
Now, I access this data with doc('mydoc').get().data() and it returns the data. But even without the data changing, if I make the same call again and again, I get a different response. I mean, the data is the same, but the order of the fields is different each time.
Here are my logs from two calls; see how the field order is random? Not just between objects in the same request, but between the same object in different requests.
I'm accessing this data in a Cloud Function and serving it as an API endpoint. I want to cache the response if the data (in the database) hasn't changed, but I can't, because the data (as returned by doc.get().data()) is constantly changing.
From what I could find, this might stem from ProtoBuf encoding.
My question: is there any way to get a consistent response to a firebase query when the underlying data isn't changing?
And if not, is my only option to JSON.stringify() the whole object before putting it into Firestore? (I don't need to query within document objects.)
Edit for clarity: I am not expecting to know in advance the order of the fields being returned. I am expecting (hoping) that the order will be the same each time.
JSON object fields are unordered as per the JSON spec. Individual implementations of JSON are free to rearrange order however they see fit, and there's no surefire way to guarantee an order. See e.g. this answer.
This isn't a Firestore-specific problem, this is just generally how JSON objects work. You cannot and should not depend on the order of fields for any parsing or representation.
If display order is extremely important to you, you might want to investigate libraries like ordered-json.
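If the goal is just a stable representation for caching, as in the question, one approach is to serialize with the keys sorted, so the same data always yields the same string. A minimal sketch in TypeScript (sortedStringify is a hypothetical helper, not a Firestore API):

    // Recursively sort object keys so the serialized output is deterministic,
    // regardless of the order in which the SDK returned the fields.
    function sortedStringify(value: unknown): string {
      if (Array.isArray(value)) {
        return "[" + value.map(sortedStringify).join(",") + "]";
      }
      if (value !== null && typeof value === "object") {
        const obj = value as Record<string, unknown>;
        const body = Object.keys(obj)
          .sort()
          .map((k) => JSON.stringify(k) + ":" + sortedStringify(obj[k]))
          .join(",");
        return "{" + body + "}";
      }
      return JSON.stringify(value); // primitives: string, number, boolean, null
    }

Comparing sortedStringify(data) between calls then gives the same string whenever the underlying data is the same, whatever order the fields arrive in.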
I am trying to use a calculated model in Google App Maker to store data from an external API. I am able to load the data into a model and render it in a table. But now I want to filter the data in the table without calling the external API again.
For example, if I use the Weather (Call REST services) sample code, after the weather is rendered on screen I want to click a button to show only the days where the temperature is below 32 °F. How would I do that without calling the external API again to reload the model?
After talking it out loud, I believe I have answered my own question. I hope this helps others, and please correct me if I am wrong.
What I am asking isn't possible with Calculated Models. A Calculated Model works like this: the server script (query script) calls the external database, receives the data, and formats (cleans up) the data following a Datasource. The Datasource cleans up the data to fit within the Model before returning the data as Records to the Client. Once the data is on the Client, everything on the server is forgotten.
So to search the data, there are two options I can think of:
Create different Datasources, Models, or Parameters that call the external database every time and return the filtered data to the Client.
Or use the JavaScript filter() method on the records already loaded on the client, with some extra code and UI to show the filtered results. The filter() method doesn't modify the records on the client, but the results can be shown in another table (see the sketch below).
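As a rough sketch of the second option in TypeScript (the record shape and field names loosely follow the Weather sample and are illustrative, not App Maker APIs):

    // Filter the records already loaded on the client; the model's records
    // are left untouched, so no new call to the external API is needed.
    interface WeatherRecord {
      date: string;
      temperature: number; // °F
    }

    function freezingDays(records: WeatherRecord[]): WeatherRecord[] {
      // filter() returns a new array rather than modifying the input.
      return records.filter((r) => r.temperature < 32);
    }

    // e.g. in the button's click handler, bind freezingDays(loadedRecords)
    // to a second table (or swap it into the first table's rows).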
We want to perform data analysis on IoT data which is stored in our SQL Server database. The data itself is generated by IoT devices, and some of them use hysteresis-based logging for data compression, which means a value is only logged when the data for that particular property has changed.
As an example, here's how it looks inside the database:
The Float and Timestamp columns are actually the interesting values we're looking for; the rest is metadata. AssetTypePropertyId is linked to the name of a certain property, which describes what the value is actually about.
We can reshape this data into a 2D matrix, making it already more usable. However, since the data is compressed with hysteresis logging, we need to 'recreate' the missing values.
To give an example, we want to go from a sparse 2D dataset to a set which has all the gaps filled in.
This is generated under the assumption that the previous value is valid as long as no new value has been logged for it.
My question: How can this transformation be done in R?
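The core of the transformation is a last-observation-carried-forward (LOCF) fill; in R this is commonly done with zoo::na.locf or tidyr::fill. For illustration, here is the carry-forward rule as a sketch in TypeScript (kept in the same language as the other examples here; the wide-table shape is assumed from the description above):

    // Last observation carried forward over a wide table: each row is a
    // timestamp, each column a property; null marks "no new value logged".
    type Row = Record<string, number | null>;

    function fillForward(rows: Row[], columns: string[]): Row[] {
      const last: Record<string, number> = {};
      return rows.map((row) => {
        const filled: Row = { ...row };
        for (const col of columns) {
          const v = filled[col];
          if (v == null) {
            filled[col] = last[col] ?? null; // carry the previous value forward
          } else {
            last[col] = v;                   // remember the newest logged value
          }
        }
        return filled;
      });
    }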
Most examples of Flux use a todo or chat example. In all those examples, the data set you are storing is somewhat small and can be kept locally, so I'm not exactly sure if my planned use of stores falls in line with the Flux "way".
The way I intend to use stores is somewhat like ORM repositories: a way to access data in multiple ways and persist data to the data service, whatever that might be.
Let's say I am building a project management system. I would probably have methods like these for data retrieval:
getIssueById
getIssuesByProject
getIssuesByAssignedUser
getIssueComments
getIssueCommentById
etc...
I would also have methods like this for persisting data to the data service:
addIssue
updateIssue
removeIssue
addIssueComment
etc...
The one main thing I would not do is locally store any issue data (and, for that matter, most data related to a data store). Most of the data needs to be fresh, because the issue status may have been updated since I last retrieved that issue. All my data retrieval methods would probably always make an API request to get the latest data.
Is this against the Flux "way"? Are there any issues with going about Flux in this way?
I wouldn't get too hung up on the term "store". You need to create application state in some way if you want your components to render something. If you need to clear that state every time a different request is made, no problem. Here's how things would flow with getIssueById(), as an example:
component calls store.getIssueById(id)
the store returns an empty object, since the issue isn't in its cache
the store calls action.fetchIssue(id)
component renders empty state
server responds with issue data and calls action.receiveIssue(data)
store caches that data and dispatches a change event
component responds to event by calling store.getIssueById(id)
the issue data is returned
component renders data
Persisting changes would be similar, with only the most recent server response being held in the store.
user interaction in component triggers action.updateIssue(modifiedIssue)
store handles action, sending changes to server
server responds with updated issue and calls action.receiveIssue(data)
...and so on with the last 4 steps from above.
As you can see, it's not really about modeling your data, just controlling how it comes and goes.
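A minimal sketch of that flow in TypeScript (IssueStore, the action helpers, and the /api/issues URL are illustrative; a real Flux app would route actions through a dispatcher, which is elided here):

    // A store that caches only the most recent server response per issue id.
    type Issue = { id: string; title?: string; status?: string };
    type Listener = () => void;

    const cache = new Map<string, Issue>();
    const listeners = new Set<Listener>();

    const IssueStore = {
      // Called by components; triggers a fetch on a cache miss.
      getIssueById(id: string): Issue {
        const cached = cache.get(id);
        if (cached) return cached;
        actions.fetchIssue(id); // kick off the server round trip
        return { id };          // empty placeholder for the first render
      },
      subscribe(fn: Listener) { listeners.add(fn); },
    };

    const actions = {
      fetchIssue(id: string) {
        fetch(`/api/issues/${id}`)           // stand-in for your API layer
          .then((res) => res.json())
          .then((data: Issue) => actions.receiveIssue(data));
      },
      receiveIssue(data: Issue) {
        cache.set(data.id, data);        // only the latest response is kept
        listeners.forEach((fn) => fn()); // change event; components re-read the store
      },
    };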
I have an orchestration which is receiving messages of type XmlDocument from the MessageBox. The messages have promoted properties, and I am including the property schema in my project so that I can filter on them (a separate application creates these messages). I am then assigning the untyped message to a typed message (I am not altering the namespace) via a standard message assignment shape, e.g.
MsgAgressoNewStarters = MsgXmldoc;
I am then outputting the message to a file location. However, when I do this, the property schema is also output.
How can I prevent this? I have tried filters etc.
Thanks
Update (10th May 2012):
I think I am possibly going about this the wrong way. Perhaps if I describe the full scenario you may be able to spot my deliberate mistake ;)
We are using BizTalk 2010.
I have a BizTalk application which talks to a 3rd-party generic web service that returns reports from one of our systems. This application is activated via the scheduled adapter, which sends an XML document containing two values: the report name and the interface it is for. The web service returns the report as a string on a single XML node; this string is itself an XML document. I then load this string into a message of type System.Xml.XmlDocument. There is no way of telling from the format of the data which report, or which interface, this message is for.

I need to send this message to the MessageBox for it to be picked up by any number of related BizTalk applications. So far I have tried creating a correlation set with the two values (from a property schema) and used that as the initialising correlation set on the send shape. I have then used the same property schema in another BizTalk application to filter the message. This works, but for some reason I get two messages, one being the XML which activates the orchestration, which has the same fields as the property schema and correlation set. BizTalk doesn't seem to be able to tell the difference between them, although they are structurally different, and this is where my problem starts.
I am now thinking of creating a multipart message in the report application, one part being the XmlDocument and the other being a header with the values I wish to route on.
Hope that makes some kind of sense.
I've actually now answered my own question: because both messages have the same promoted properties, I was inadvertently subscribing to both. D'oh!