Telegraf MQTT consumer with multiple topics and json data - telegraf

We use Telegraf to connect to an MQTT broker and subscribe to several topics. The data sent through is all JSON, but with different structures.
[[inputs.mqtt_consumer]]
  name_override = "devices"
  topics = [
    "devices/+/control",
  ]
  servers = ["${MQTT_SERVER_URL}"]
  tagexclude = ["host", "topic"]
  data_format = "json"
  json_name_key = ""
  json_time_key = "ts"
  json_time_format = "unix_ms"
  tag_keys = ["site"]
  json_string_fields = ["mode", "is_online"]
Do we need multiple mqtt_consumer input plugins for different JSON structures, or can that be handled with the topic parser somehow? I'm struggling to find real-world examples of this kind of setup.

Do we need multiple different mqtt_consumer input plugins for different json structures,
It depends on just how different the structure is. If the JSON is relatively flat, then you may not, but if it has different objects defined, I would suggest you use different inputs. Generally, if you have different structures then you probably have different tags and your ultimate time series metric will be different.
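As a sketch of the two-input approach (the second topic, its measurement name, and its "readings" field are made up for illustration):

# Flat control messages, exactly as in the question.
[[inputs.mqtt_consumer]]
  name_override = "devices"
  servers = ["${MQTT_SERVER_URL}"]
  topics = ["devices/+/control"]
  tagexclude = ["host", "topic"]
  data_format = "json"
  json_time_key = "ts"
  json_time_format = "unix_ms"
  tag_keys = ["site"]
  json_string_fields = ["mode", "is_online"]

# Hypothetical second input for a differently structured payload.
[[inputs.mqtt_consumer]]
  name_override = "device_telemetry"
  servers = ["${MQTT_SERVER_URL}"]
  topics = ["devices/+/telemetry"]
  tagexclude = ["host", "topic"]
  data_format = "json"
  json_query = "readings"    # parse only this nested object/array
  tag_keys = ["site"]

Since each input subscribes to its own topic filter, every payload is parsed with exactly the settings that match its structure, and each input can emit its own measurement name.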

Related

Most efficient method of pulling in weather data for multiple locations

I'm working on a Meteor mobile app that displays information about local places of interest, and one of the things I want to show is the weather in each location. I currently have my locations stored with lat/lng coordinates, and they're searchable by radius. I'd like to use the OpenWeatherMap API to pull in some useful 'current conditions' information so that when a user looks at an entry they can see basic weather data. Ideally I'd like to limit the number of outgoing requests to keep the pages snappy (and API requests down).
I'm wondering if I can create a server collection of weather data that I update regularly, server-side (hourly?), that my clients then query (perhaps using a Mongo $near lookup?). That way all of my data is handled within Meteor, rather than each client going out to grab the latest data from the API. I don't want to have to iterate through all of the locations in my list and make a separate call out to the API for each, as I have approx. 400 locations(!). I'm afraid I'm new to API requests (and Meteor itself), so apologies if this is a poorly phrased question.
I'm not entirely sure if this is doable, or if it's even the best approach - any advice (and links to any useful code snippets!) would be greatly appreciated!
EDIT / UPDATE!
OK, I haven't managed to get this working yet, but I have some more useful details on the data!
If I make a request to the OpenWeatherMap API, I can get data back for all of their locations (which I would like to add to, and update in, a collection). I could then do a regular local lookup, instead of making a client request straight out to them every time a user looks at a location. The JSON data looks like this:
{
  "message": "accurate",
  "cod": "200",
  "count": 50,
  "list": [
    {
      "id": 2643076,
      "name": "Marazion",
      "coord": {
        "lon": -5.47505,
        "lat": 50.125561
      },
      "main": {
        "temp": 292.15,
        "pressure": 1016,
        "humidity": 68,
        "temp_min": 292.15,
        "temp_max": 292.15
      },
      "dt": 1403707800,
      "wind": {
        "speed": 8.7,
        "deg": 110,
        "gust": 13.9
      },
      "sys": {
        "country": ""
      },
      "clouds": {
        "all": 75
      },
      "weather": [
        {
          "id": 721,
          "main": "Haze",
          "description": "haze",
          "icon": "50d"
        }
      ]
    }, ...
Ideally I'd like to build my own local 'weather' collection that I can search using Mongo's $near (to keep outbound requests down and speed things up), but I don't know if this will be possible because of the format the data comes back in - I think I'd need to structure my location data like this in order to use a geo search:
"location": {
"type": "Point",
"coordinates": [-5.47505,50.125561]
}
My questions are:
How can I build that collection (I've seen this - could I do something similar and update existing entries in the collection on a regular basis?)
Does it just need to live on the server, or client too?
Do I need to manipulate the data in order to get a geo search to work?
Is this even the right way to approach it??
EDIT/UPDATE2
Is this question too long/much? It feels like it. Maybe I should split it out.
Yes, this is easily possible. Because your question is so large, I'll give you a high-level explanation of what I think you need to do.
You need to create a collection in which to save the weather data.
A request worker that requests new data and updates the collection on a set interval. Use something like cron-tick for scheduling the interval.
Requesting data should only happen server-side, and I can recommend the request npm package for that.
Meteor.publish the weather collection and have the client subscribe to that, optionally with a filter for its location.
You should now be getting the weather data on your client and should be able to get freaky with it.
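As a rough server-side sketch of those steps (the collection name, publication name, interval, and the exact OpenWeatherMap endpoint are placeholders, and this uses Meteor's core HTTP package, meteor add http, rather than the request npm package):

// server/weather.js
Weather = new Mongo.Collection('weather');

Meteor.startup(function () {
  // Geo index so $near queries work. This all lives on the server;
  // clients just subscribe (which answers question 2).
  Weather._ensureIndex({ location: '2dsphere' });

  var refresh = function () {
    // One bulk request instead of ~400 per-location calls.
    var res = HTTP.get('http://api.openweathermap.org/data/2.5/...'); // your endpoint + key
    (res.data.list || []).forEach(function (w) {
      Weather.upsert({ owmId: w.id }, { $set: {
        name: w.name,
        // Reshape "coord" into GeoJSON so a geo search works (question 3: yes).
        location: { type: 'Point', coordinates: [w.coord.lon, w.coord.lat] },
        main: w.main,
        weather: w.weather,
        updatedAt: new Date()
      }});
    });
  };

  refresh();
  Meteor.setInterval(refresh, 60 * 60 * 1000); // hourly
});

// Publish only the weather documents near a given point.
Meteor.publish('weatherNear', function (lng, lat) {
  return Weather.find({ location: { $near: {
    $geometry: { type: 'Point', coordinates: [lng, lat] },
    $maxDistance: 50000 // metres
  }}});
});

On the client you would then Meteor.subscribe('weatherNear', lng, lat) and read Weather.find() as usual.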

Which optional parameters improve the accuracy of a Google geolocation request result?

I am using GSM cell data to get the current device position. To do this I use the Google Maps Geolocation API. All fields seem to be optional in the first part of the needed JSON parameters (URL: https://www.googleapis.com/geolocation/v1/geolocate?key=API_key):
{
  "homeMobileCountryCode": 310,
  "homeMobileNetworkCode": 410,
  "radioType": "gsm",
  "carrier": "Vodafone",
  "cellTowers": [
    // See the Cell Tower Objects section below.
  ],
  "wifiAccessPoints": [
    // See the WiFi Access Point Objects section below.
  ]
}
Do the first four parameters (homeMobileCountryCode, homeMobileNetworkCode, radioType and carrier) have any influence on the accuracy, or on the response time? I could not make out any differences.
I can believe that the Google database of cell IDs is organised by carrier and network type, so in theory there might be a quicker response if you supply these parameters, but it would surely be negligible. I can't think of a technical reason why it would need to know your home operator details either. The only information these parameters would give them is (1) whether you're roaming or not, and (2) any stored information they might hold about individual operators. Does Google have special agreements with any operators?
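One way to check is to fire the same request with and without the optional fields and compare the returned accuracy. A minimal sketch (the tower values here are made up):

import json
import requests

URL = "https://www.googleapis.com/geolocation/v1/geolocate?key=API_key"

# A single (made-up) GSM cell; all of the optional home/carrier fields omitted.
body = {
    "cellTowers": [{
        "cellId": 42,
        "locationAreaCode": 415,
        "mobileCountryCode": 310,
        "mobileNetworkCode": 410
    }]
}

resp = requests.post(URL, data=json.dumps(body),
                     headers={"Content-Type": "application/json"})
print(resp.json())  # {"location": {"lat": ..., "lng": ...}, "accuracy": ...}
# Now add homeMobileCountryCode, radioType, carrier etc. and compare "accuracy".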

How to send two HTTP requests at the same time

I created a method which gets data from Foursquare (for venues) and Google (for streets), based on a latitude/longitude I provide. It is intended to show locations around you. The names of the results from the two APIs are then put together in a list.
It looks something like this:
var httpFoursquareResponse = (HttpWebResponse)httpFoursquareWebRequest.GetResponse();
var httpGoogleResponse = (HttpWebResponse)httpGoogleWebRequest.GetResponse();
List<string> foursquarePlacesNames = getNames(httpFoursquareResponse);
List<string> googleStreetsNames = getStreets(httpGoogleResponse);
var result = foursquarePlacesNames.Union(googleStreetsNames);
return result;
Problem:
1) I want the requests to be sent at the same time (so I don't have to wait for the first to finish before starting the second). But if I use async callbacks, how can I do that union, given that the two responses will be handled by two different threads?
2) If more than 5 seconds pass and I have no response from one of the services, I want to return whatever data I have and not care about the late response anymore.
How would you go about implementing something like this? I think it is a quite common scenario, but I didn't find any solutions on the web.
Thank you
You can use AsyncManager for this. These two links should help:
1. http://www.devproconnections.com/article/aspnet2/calling-web-services-asynchronously
2. http://www.c-sharpcorner.com/UploadFile/rmcochran/multithreadasyncwebservice05262007094719AM/multithreadasyncwebservice.aspx
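Alternatively, here is a sketch using the Task Parallel Library instead of AsyncManager, assuming the getNames/getStreets helpers from the question (the wrapper method name is made up). It starts both requests concurrently and returns whatever has arrived after five seconds:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net;
using System.Threading.Tasks;

public IEnumerable<string> GetPlacesAndStreets(
    HttpWebRequest httpFoursquareWebRequest, HttpWebRequest httpGoogleWebRequest)
{
    // Start both requests concurrently on thread-pool threads.
    var foursquareTask = Task.Run(() => getNames((HttpWebResponse)httpFoursquareWebRequest.GetResponse()));
    var googleTask = Task.Run(() => getStreets((HttpWebResponse)httpGoogleWebRequest.GetResponse()));
    var tasks = new[] { foursquareTask, googleTask };

    try
    {
        // Block for at most 5 seconds; after that, take whatever has finished.
        Task.WaitAll(tasks, TimeSpan.FromSeconds(5));
    }
    catch (AggregateException)
    {
        // One of the services failed; fall through and use what we have.
    }

    var result = new List<string>();
    foreach (var task in tasks)
        if (task.Status == TaskStatus.RanToCompletion)
            result.AddRange(task.Result);
    return result.Distinct();
}

A late task keeps running in the background, but its result is simply ignored, which matches requirement 2.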

What is the best method to measure site visits and page views in real time?

I currently use Adobe Omniture SiteCatalyst, Google Analytics, and New Relic. All three offer visit and page view metrics. SiteCatalyst has no API that I'm aware of, and their data is often hours behind. Google Analytics and New Relic both offer realtime APIs, but I find that the metrics offered differ wildly across vendors.
What's the best method (API) for measuring realtime visits (page views, unique visitors, etc.)?
Ultimately, I intend to use this data to present realtime conversion rates to my business customers.
Adobe SiteCatalyst does have a realtime API that you can use. It functions in a similar way to regular SiteCatalyst reports.
Here is a Python example request (the credentials are placeholders):
import binascii
import json
import sha  # Python 2-era module, as in the original
import time

import requests

your_report_suite = "ReportSuiteId"  # the name of the report suite
what_you_are_looking = "someValue"   # value of the prop to find in the realtime stream
sc_user = "your_api_username"        # Web Services username (placeholder)
sc_key = "your_shared_secret"        # Web Services shared secret (placeholder)

def generateHeader():
    # Generates the X-WSSE header for the request
    nonce = str(time.time())
    base64nonce = binascii.b2a_base64(binascii.a2b_qp(nonce))
    created_date = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.localtime())
    sha_object = sha.new(nonce + created_date + sc_key)
    password_64 = binascii.b2a_base64(sha_object.digest())
    return 'UsernameToken Username="%s", PasswordDigest="%s", Nonce="%s", Created="%s"' % (
        sc_user, password_64.strip(), base64nonce.strip(), created_date)

def getRealTimeUsers():
    url = 'https://api.omniture.com/admin/1.3/rest/?method='
    method = 'Report.GetRealTimeReport'
    report_url = url + method
    headers = {'X-WSSE': generateHeader()}
    payload = {
        "reportDescription": {
            "reportSuiteID": your_report_suite,
            "metrics": [
                {"id": "instances"}
            ],
            "elements": [
                {
                    "id": "prop4",
                    "search": {
                        "type": "string",
                        "keywords": what_you_are_looking
                    }
                }
            ]
        }
    }
    response = requests.post(url=report_url, headers=headers, data=json.dumps(payload))
    return response.json().get('report').get('data')
Note: Realtime reporting requires that the realtime feature is turned on in your report suite. Also the realtime reports are limited in their dimensionality. There is not a whole lot of documentation on the particular requests required but there is this: https://marketing.adobe.com/developer/documentation/sitecatalyst-reporting/c-real-time
Also I highly recommend experimentation by using the api explorer: https://marketing.adobe.com/developer/api-explorer#Report.GetRealTimeReport
What kind of delay is acceptable? What about accuracy and detail? Script-based systems like Google Analytics require JavaScript to be enabled and provide plenty of detail about each visitor's demographics and technology, but raw webserver logfiles give you details about every single request (which is better for technical insight, as you get details on requested images, hotlinking, referrers and other files).
Personally, I'd just use Google Analytics because I'm familiar with it, and also because their CDN servers mean that my site won't load slowly; otherwise I'd run typical logfile-analysis software on my raw webserver logs, though depending on your software that analysis can take time to generate a report.

Dynamics GP Web Service -- Returning list of sales order based on specific criteria

For a web application, I need to get a list or collection of all SalesOrders that meet the following criteria:
Have a WarehouseKey.ID equal to "test", "lucmo" or "Inno"
Have Lines that have a QuantityToBackorder greater than 0
Have Lines that have a RequestedShipDate greater than current day.
I've successfully used these two methods to retrieve documents, but I can't figure out how to return only the ones that meet the above criteria:
http://msdn.microsoft.com/en-us/library/cc508527.aspx
http://msdn.microsoft.com/en-us/library/cc508537.aspx
Please help!
Short answer: your query isn't possible through the GP Web Services. Even your warehouse key isn't an accepted criterion for GetSalesOrderList. To do what you want, you'll need to drop to eConnect or direct table access. eConnect has come a long way in .Net if you use the Microsoft.Dynamics.GP.eConnect and Microsoft.Dynamics.GP.eConnect.Serialization libraries (which I highly recommend). Even in eConnect, though, you're stuck with querying based on the document header rather than line item values, so direct table access may be the only way you're going to make it work.
In eConnect, the key piece you'll need is generating a valid RQeConnectOutType. Note the "FORLIST = 1" part. That's important. Since I've done something similar, here's what it might start out as (you'd need to experiment with the capabilities of the WhereClause; I've never done more than a straightforward equality):
private RQeConnectOutType getRequest(string warehouseId)
{
    eConnectOut outDoc = new eConnectOut()
    {
        DOCTYPE = "Sales_Transaction",
        OUTPUTTYPE = 1,
        FORLIST = 1,
        INDEX1FROM = "A001",
        INDEX1TO = "Z001",
        WhereClause = string.Format("WarehouseId = '{0}'", warehouseId)
    };
    RQeConnectOutType outType = new RQeConnectOutType()
    {
        eConnectOut = outDoc
    };
    return outType;
}
If you have to drop to direct table access, I recommend going through one of the built-in views. In this case, it looks like ReqSOLineView has the fields you need (LOCNCODE for the warehouseIds, QTYBAOR for backordered quantity, and ReqShipDate for requested ship date). Pull the SOPNUMBE and use them in a call to GetSalesOrderByKey.
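For the direct-table half, a sketch of that query against the view (the method name and connection string are placeholders, and you should verify the exact column types in your install):

using System.Collections.Generic;
using System.Data.SqlClient;

public List<string> GetBackorderedSopNumbers(string connectionString)
{
    // Backordered lines at the three warehouses with a requested ship date
    // after today; returns the de-duplicated SOP numbers.
    const string sql = @"
        SELECT DISTINCT SOPNUMBE
        FROM ReqSOLineView
        WHERE LOCNCODE IN ('test', 'lucmo', 'Inno')
          AND QTYBAOR > 0
          AND ReqShipDate > GETDATE()";

    var sopNumbers = new List<string>();
    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        connection.Open();
        using (var reader = command.ExecuteReader())
            while (reader.Read())
                sopNumbers.Add(reader.GetString(0).TrimEnd()); // GP char columns are space-padded
    }
    return sopNumbers;
}

Each SOPNUMBE then goes into a call to GetSalesOrderByKey, as described above.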
And yes, hybrid solutions kinda suck rocks, but I've found you really have to adapt if you're going to use GP Web Services for anything with any complexity to it. Personally, I isolate my libraries by access type and then use libraries specific to whatever process I'm using to coordinate them. So I have Integration.GPWebServices, Integration.eConnect, and Integration.Data libraries that I use practically everywhere and then my individual process libraries coordinate on top of those.
