Microsoft Band Web Tile Not Refreshing

This post is similar to Microsoft Band Web Tile not Updating, but the response marked as an answer to that question didn't really solve my issue, so I thought I'd start a new post.
I recently purchased a Band 2 and am trying to set up a web tile that pulls data from a service that returns JSON (not an RSS feed). So, I created a single-page non-feed tile using the 5-step authoring tool. When I first deployed the tile to my band, it successfully polled the service and displayed data; since that point, however, the data displayed on the web tile has not updated, even though the refresh interval is set (to the default of 30 minutes).
The service being called is an ASP.NET Web API service. It sets the following cache-related headers:
Cache-Control: no-cache
Pragma: no-cache
Expires: -1
Last-Modified:
ETag:
If I review the HTTP logs for the site, I can see where my service endpoint is getting called from my band/phone, roughly every 30 minutes, and the server responds with a 200 OK response on every call - I'm not seeing a 304 Not Modified response on the server side of the transaction.
My band is paired with an Android device (Samsung GS5). I've also tried pairing with an iPhone 6, with the same result. Other tiles on the band work fine (i.e., the standard ones that come with the MS Health app). As part of pairing/re-pairing, I've done a factory reset twice, which didn't seem to help. I've also tried restarting both phones while they were paired; that doesn't help, either.
What am I missing?
For reference, here is what the web tile's manifest.json file contains (with placeholders for some data points):
{
  "manifestVersion": 1,
  "name": "<Name Here>",
  "description": "<Description here>",
  "version": 1,
  "versionString": "1",
  "author": "<Author Here>",
  "organization": "",
  "contactEmail": "",
  "tileIcon": {
    "46": "icons/tileIcon.png"
  },
  "icons": {},
  "refreshIntervalMinutes": 30,
  "resources": [
    {
      "url": "<URL Here>",
      "style": "Simple",
      "content": {
        "_1_bg": "BG",
        "_1_datestring": "DateString",
        "_1_trend": "Trend",
        "_1_direction": "Direction"
      }
    }
  ],
  "pages": [
    {
      "layout": "MSBand_MetricsWithIcons",
      "condition": "true",
      "textBindings": [
        {
          "elementId": "12",
          "value": "BG: {{_1_bg}}"
        },
        {
          "elementId": "22",
          "value": "{{_1_datestring}}"
        },
        {
          "elementId": "32",
          "value": "Trend: {{_1_trend}}, {{_1_direction}}"
        }
      ]
    }
  ],
  "notifications": [
    {
      "condition": "{{_1_bg}} >= 250",
      "title": "HIGH BG: {{_1_bg}}",
      "body": "{{_1_datestring}}"
    },
    {
      "condition": "{{_1_bg}} <= 80",
      "title": "Low BG: {{_1_bg}}",
      "body": "{{_1_datestring}}"
    },
    {
      "condition": "{{_1_bg}} <= 55",
      "title": "REALLY LOW: {{_1_bg}}",
      "body": "{{_1_datestring}}"
    }
  ]
}

Can you supply the URL for the resource? If so I can take a look at your server responses and see why the tile is not refreshing.
Better yet, can you share the webtile and I can try that to see why it is not refreshing. You can build your WebTile at https://developer.microsofthealth.com/WebTile/ and choose to submit it. Reply here with the name of it and I will take a look.
By the way, here is how we handle refresh on a simple tile:
- If an ETag was in the last response, we send it with the next request (as If-None-Match) to let the server decide whether there is something new to provide.
- If no ETag was supplied, we look for Last-Modified and send that (as If-Modified-Since) when available.
- Otherwise, we process the downloaded data and send it to the tile.
So, if you have ETag or Last-Modified in your server responses, we will send them back in future requests, and that may be causing your problem. In that case, you would want to make sure that ETag and Last-Modified are not being sent in your server responses.
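For what it's worth, that refresh logic can be sketched as a small Python helper (the function name and dict-based headers are illustrative, not actual Band client code):

```python
def build_refresh_headers(previous_response_headers):
    """Return conditional-request headers for the next poll, following
    the refresh rules described above: prefer the ETag (echoed as
    If-None-Match), fall back to Last-Modified (echoed as
    If-Modified-Since), otherwise send no validators so the full body
    is re-downloaded. Empty header values are treated as absent.
    """
    headers = {}
    etag = previous_response_headers.get("ETag")
    last_modified = previous_response_headers.get("Last-Modified")
    if etag:
        headers["If-None-Match"] = etag
    elif last_modified:
        headers["If-Modified-Since"] = last_modified
    return headers
```

With the empty ETag/Last-Modified values from the question, logic like this would send no validators at all, so a 200 with a fresh body on every poll is exactly what the server logs should show.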

Some things I can think of:
- Are you keeping the tile open on your Band while the updates are happening? Tiles on some firmware versions of the Band do not update while open when new data comes in; close the tile and open it again after the sync.
- You can test your tile syncing more often than every 30 minutes by tapping the sync icon at the top of the left nav bar inside the Microsoft Health app.
- After that, if you are still having problems, please send feedback from inside the Microsoft Health app: in the left nav, at the bottom under Settings, use "Help and Feedback".
- When reporting feedback, attaching the web tile will help us test the one you are having problems with.

I share the frustration here. I have exactly the same issue, it seems, and I have been a developer for 20 years. My conclusion at this point is that there is a bug, perhaps when JSON is used, and/or with Android phones. I've tried to get answers and discussions with Microsoft but haven't had any luck. My issue is described at "Web Tile works once but never refreshes".

Related

Windchill REST API endpoint to fill BOM from file

We are developing an internal project that uses the Windchill OData REST API to fill the eBOM for a given part. We read BOM data from another piece of software we use and want to send it to the part in Windchill, but we cannot find an endpoint under servlet/odata to do it.
We assume the idea is to replicate the manual process, and we already know how to create, check out, and check in a part. However, we still cannot find an endpoint to modify the part and add the eBOM.
We know about PartList, PartListItem, and GetPartStructure in the PTC Product Management Domain, but these are GET endpoints and are only useful for retrieving data, including the BOM; we cannot use them to modify the content.
I've found the solution.
The endpoint to use is:
POST /ProdMgmt/Parts('VR:wt.part.WTPart:xxxxxxxxx')/Uses
The body of the request must contain:
{
  "Quantity": 1,
  "Unit": {
    "Value": "ea",
    "Display": "Each"
  },
  "TraceCode": {
    "Value": "0",
    "Display": "Untraced"
  },
  "Uses#odata.bind": "Parts('OR:wt.part.WTPart:yyyyyyyyy')"
}
Here, Uses#odata.bind contains the ID of the part we want to link.
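As a sketch, the request body above can be generated with a small Python helper before POSTing it with your HTTP client of choice (the function name and default unit/trace-code values are illustrative; the property names follow the body shown in this answer):

```python
def build_uses_body(child_part_oid, quantity=1,
                    unit=("ea", "Each"), trace_code=("0", "Untraced")):
    """Build the body for POST /ProdMgmt/Parts('<parent OID>')/Uses.

    child_part_oid is the OID of the child part to link, e.g.
    'OR:wt.part.WTPart:yyyyyyyyy'. Field names mirror the request
    body shown above.
    """
    return {
        "Quantity": quantity,
        "Unit": {"Value": unit[0], "Display": unit[1]},
        "TraceCode": {"Value": trace_code[0], "Display": trace_code[1]},
        "Uses#odata.bind": "Parts('{}')".format(child_part_oid),
    }
```

You would still need to send this with whatever authentication and CSRF-token handling your Windchill session requires.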

Firebase negative topic condition stopped working

Today I received a customer report: push notifications are not arriving on devices.
After some research, I figured out that the negative topic condition I use to send notifications to all devices has stopped working.
One week ago, sending worked fine with the same condition.
I use Postman for developing the requests, against the REST API's "send" endpoint:
https://fcm.googleapis.com/fcm/send
Here is my payload:
{
  "condition": "!('nonExistingTopic' in topics)",
  "data": {
    "notification_foreground": true,
    "link": "https://www.google.com"
  },
  "notification": {
    "click_action": "FCM_PLUGIN_ACTIVITY",
    "title": "notification title",
    "body": "notification message"
  }
}
I received an "ok" status and a "message_id" from Firebase, but no message was sent, so obviously the condition did not match any of the devices.
When I use the "registration_ids" field with the FCM token of my device, I receive the notification.
I already tried to find some update in the Firebase changelogs that might have changed the behavior of the condition field, but I did not find anything.
Does anybody have the same problem? Any ideas for a workaround?
Thank you!
I have the same problem here. As a temporary solution I have to send notifications through the Firebase Console (because I use it just for communication with users).
I will also launch a new update that registers each device to a topic called "general" on startup.
I didn't figure out how to send to all users using a negative topic condition; it stopped working about 10 days ago.
Meanwhile I found a solution to this problem: I only use the condition field for topics that really exist, e.g.
"condition": "'sport' in topics"
To send a message to all devices, you can use the "to" parameter with the value "/topics/all" instead of the negative condition:
{
  "notification": {
    "title": "myTitle",
    "body": "myTeaser"
  },
  "to": "/topics/all",
  "data": {
    "myCustomDataField": "myFieldValue"
  }
}
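The workaround can be sketched as a small Python helper that builds such a payload (the helper is illustrative; you would still POST the result to https://fcm.googleapis.com/fcm/send with your server key in the Authorization header):

```python
def build_topic_message(topic, title, body, data=None):
    """Build a payload for the legacy FCM /fcm/send endpoint that
    targets a single real topic via the 'to' field, as described
    above, instead of a negative condition."""
    message = {
        "to": "/topics/{}".format(topic),
        "notification": {"title": title, "body": body},
    }
    if data is not None:
        message["data"] = data
    return message
```

This assumes every client subscribes to the "all" (or "general") topic on startup, as suggested above.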

How to reverse engineer POST request's body generation

I'm trying to scrape reviews from Google Play. Google Play loads reviews dynamically after the page has been scrolled to the end. I intercepted the POST requests the browser sends to retrieve reviews and noticed that the only thing that changes per request is the request body. What I'm struggling to understand is how the request body is generated.
The first request's body looked like this:
f.req: [[["UsvDTd","[null,null,[2,null,[40,null,\"CpUBCpIBKm0KOfc7ms0D_z7jKJielp7Fz8_Pz8_Pms3OzpuZyJvMnMXOxYmSxc3MyczPz8vIycjMysbHxszPysb__hAoITbZQaENmbWoMU2VCwWZPGwZOdccwQD8MmXEUABaCwlwT4zmNQBa2BADYMm1lu0EMiEKHwodYW5kcm9pZF9oZWxwZnVsbmVzc19xc2NvcmVfdjI\"],null,[]],[\"com.feelingtouch.zf3d\",7]]",null,"generic"]]]
and this's is the second request:
f.req: [[["UsvDTd","[null,null,[2,null,[40,null,\"CpUBCpIBKm0KOfc7msyg_28-Rpielp7Fz8_Pz8_Pm56eypyZzcycm8XOxYmSxc3MyczPz8vIycjMysbHxszPysb__hB4ITbZQaENmbWoMZI5V7V-7g3BObnBkABfM2XEUABaCwli2aizD1W9ExADYMm1lu0EMiEKHwodYW5kcm9pZF9oZWxwZnVsbmVzc19xc2NvcmVfdjI\"],null,[]],[\"com.feelingtouch.zf3d\",7]]",null,"generic"]]]
Can I somehow reverse engineer how the request body is generated?
I tried to use Selenium, but after scrolling down a few dozen times, RAM usage climbs and Selenium becomes unresponsive.
The main thing that changes is the pagination token, though a couple of other values change as well.
Here is the full encoded request body, with the changing parameters wrapped in #{} (number_of_results, pagination_token, and product_id):
f.req=%5B%5B%5B%22UsvDTd%22%2C%22%5Bnull%2Cnull%2C%5B2%2Cnull%2C%5B#{number_of_results}%2Cnull%2C#{pagination_token}%5D%2Cnull%2C%5B%5D%5D%2C%5B%5C%22#{product_id}%5C%22%2C7%5D%5D%22%2Cnull%2C%22generic%22%5D%5D%5D
So each time you scroll the page the pagination_token would change. They use it to retrieve the next page results.
You don't need to reverse engineer the token itself. You can find the first one when inspecting the page source, and then, for each subsequent request you make to retrieve results, the next_page_token will be included in the response. So you just keep replacing the token until you reach the last page, and retrieve all the reviews.
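Assuming you already have a token from the previous response, the substitution can be sketched in Python (string templating only; actually sending the request and decoding the response are separate problems):

```python
# Template from the answer above, with Python format fields in place of
# the #{...} placeholders. Note: the pagination token must be passed
# with its surrounding escaped quotes (%5C%22...%5C%22), exactly as it
# appears in the captured request bodies.
FREQ_TEMPLATE = (
    "f.req=%5B%5B%5B%22UsvDTd%22%2C%22%5Bnull%2Cnull%2C%5B2%2Cnull%2C%5B"
    "{number_of_results}%2Cnull%2C{pagination_token}%5D%2Cnull%2C%5B%5D%5D"
    "%2C%5B%5C%22{product_id}%5C%22%2C7%5D%5D%22%2Cnull%2C%22generic%22%5D%5D%5D"
)

def build_review_request_body(number_of_results, pagination_token, product_id):
    """Fill the URL-encoded request-body template with the three
    values that change between requests."""
    return FREQ_TEMPLATE.format(
        number_of_results=number_of_results,
        pagination_token=pagination_token,
        product_id=product_id,
    )
```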
Alternatively, you could use a third-party solution like SerpApi. We handle proxies, solve captchas, and parse all rich structured data for you.
Example Python code for retrieving YouTube reviews (also available in other languages):
from serpapi import GoogleSearch

params = {
    "api_key": "SECRET_API_KEY",
    "engine": "google_play_product",
    "store": "apps",
    "gl": "us",
    "product_id": "com.google.android.youtube",
    "all_reviews": "true"
}

search = GoogleSearch(params)
results = search.get_dict()
Example JSON output:
"reviews": [
  {
    "title": "Qwerty Jones",
    "avatar": "https://play-lh.googleusercontent.com/a/AATXAJwSQC_a0OIQqkAkzuw8nAxt4vrVBgvkmwoSiEZ3=mo",
    "rating": 3,
    "snippet": "Overall a great app. Lots of videos to see, look at shorts, learn hacks, etc. However, every time I want to go on the app, it says I need to update the game and that it's \"not the current version\". I've done it about 3 times now, and it's starting to get ridiculous. It could just be my device, but try to update me if you have any clue how to fix this. Thanks :)",
    "likes": 586,
    "date": "November 26, 2021"
  },
  {
    "title": "matthew baxter",
    "avatar": "https://play-lh.googleusercontent.com/a/AATXAJy9NbOSrGscHXhJu8wmwBvR4iD-BiApImKfD2RN=mo",
    "rating": 1,
    "snippet": "App is broken, every video shows no dislikes even after I hit the button. I've tested this with multiple videos and now my recommended is all messed up because of it. The ads are longer than the videos that I'm trying to watch and there is always a second ad after the first one. This app seriously sucks. I would not recommend this app to anyone.",
    "likes": 352,
    "date": "November 28, 2021"
  },
  {
    "title": "Operation Blackout",
    "avatar": "https://play-lh.googleusercontent.com/a-/AOh14GjMRxVZafTAmwYA5xtamcfQbp0-rUWFRx_JzQML",
    "rating": 2,
    "snippet": "YouTube used to be great, but now theyve made questionable and arguably stupid decisions that have effectively ruined the platform. For instance, you now have the grand chance of getting 30 seconds of unskipable ad time before the start of a video (or even in the middle of it)! This happens so frequently that its actually a feasible option to buy an ad blocker just for YouTube itself... In correlation with this, YouTube is so sensitive twords the public they decided to remove dislikes. Why????",
    "likes": 370,
    "date": "November 24, 2021"
  },
  ...
],
"serpapi_pagination": {
  "next": "https://serpapi.com/search.json?all_reviews=true&engine=google_play_product&gl=us&hl=en&next_page_token=CpEBCo4BKmgKR_8AwEEujFG0VLQA___-9zuazVT_jmsbmJ6WnsXPz8_Pz8_PxsfJx5vJns3Gxc7FiZLFxsrLysnHx8rIx87Mx8nNzsnLyv_-ECghlTCOpBLShpdQAFoLCZiJujt_EovhEANgmOjCATIiCiAKHmFuZHJvaWRfaGVscGZ1bG5lc3NfcXNjb3JlX3YyYQ&product_id=com.google.android.youtube&store=apps",
  "next_page_token": "CpEBCo4BKmgKR_8AwEEujFG0VLQA___-9zuazVT_jmsbmJ6WnsXPz8_Pz8_PxsfJx5vJns3Gxc7FiZLFxsrLysnHx8rIx87Mx8nNzsnLyv_-ECghlTCOpBLShpdQAFoLCZiJujt_EovhEANgmOjCATIiCiAKHmFuZHJvaWRfaGVscGZ1bG5lc3NfcXNjb3JlX3YyYQ"
}
Check out the documentation for more details.
Test the search live on the playground.
Disclaimer: I work at SerpApi.

No large images in shares posted using LinkedIn API

During the last couple of weeks, shares made using the LinkedIn sharing API don't display large images, even though we provide all the required information, including the image URL. The same happens when we use the REST Console. Below you can see a sample request and what the resulting share looks like.
{
  "comment": "How Triggre achieves its simplicity",
  "content": {
    "title": "Triggre / Blog / Design Philosophy - Part 3",
    "description": "In the previous two posts about our design philosophy you could read how we decided to build Triggre and why we chose simplicity as the core of our desi...",
    "submitted-url": "https://www.triggre.com/en/blog/the-triggre-design-philosophy-part-3/",
    "submitted-image-url": "https://www.triggre.com/media/1105/sagrada-familia.jpg?width=800"
  },
  "visibility": {
    "code": "anyone"
  }
}
[Screenshot: a share without a large image]
What is happening and how could we workaround it?

Using webhooks with Google Analytics

I'm trying to integrate my CRM with Google Analytics to monitor lead changes (from lead to sale) and so on. As I understand it, I need to use the Google Measurement Protocol: receive webhooks from the CRM and translate them into Analytics conversions.
In fact, I don't really understand how to do it. I need to write some script that translates the webhook payload for Analytics, but where do I host that script? Are there templates?
So if you know some tutorials/courses/freelancers that could help me with integrating webhooks with Analytics, I'd appreciate your advice.
Example of webhook from CRM:
{
  "leads": {
    "status": {
      "id": "25399013",
      "name": "Lead title",
      "old_status_id": "7039101",
      "status_id": "142",
      "price": "0",
      "responsible_user_id": "102525",
      "last_modified": "1413554372",
      "modified_user_id": "102525",
      "created_user_id": "102525",
      "date_create": "1413554349",
      "account_id": "7039099",
      "custom_fields": [
        {
          "id": "427183",
          "name": "Checkbox custom field",
          "values": ["1"]
        },
        {
          "id": "427271",
          "name": "Date custom field",
          "values": ["1412380800"]
        },
        {
          "id": "1069602",
          "name": "Checkbox custom field",
          "values": ["0"]
        },
        {
          "id": "427661",
          "name": "Text custom field",
          "values": ["Валера"]
        },
        {
          "id": "1075272",
          "name": "Date custom field",
          "values": ["1413331200"]
        }
      ]
    }
  }
}
"Webhook" is a fancy way of saying that your CRM can call a web based service whenever something interesting happens (i.e. the CRM can "hook" into a web based application). E.g. if a new lead is created you can call an url with the lead details as parameters.
Specifics depend on your CRM, but when you set up a webhook there should be a field to set a url; the script that evaluates the CRM data is located at the URL.
You have that big JSON structure as your example - no real way to tell without knowing your system, but I assume it is sent as the request body. So in your script you evaluate the request body, extract the parameters you want to send to Analytics (be mindful that you are not allowed to store personally identifiable information), and send them via the Measurement Protocol as described in the documentation linked in the other answer.
Depending on the system you might even be able to call the measurement protocol without having a custom script in between (after all the measurement protocol is an url with a few parameters).
This is an awfully generic answer, but then the question is really broad.
I've done just this in my line of work.
You first need to decide on a data model for how you would like the CRM data to look within Google Analytics. This could be as simple as mapping Google Analytics' event category, event action, and event label to your data, or perhaps using custom dimensions and metrics.
Then to make it most useful, you would like to be able to link the CRM activity of a customer to their online activity. You can do this if they login online. In that case, you can set the cid and/or uid of the user to your CRM id.
Then, if you send in a GA hit with the same cid/uid in your Measurement Protocol hit, you will link the online sessions with your offline CRM activity.
To make the actual record hit Google Analytics, you will need to program something that takes the CRM data and turns it into a Measurement Protocol hit, which is essentially just a URL with the correct parameters. Look here for reference: https://developers.google.com/analytics/devguides/collection/protocol/v1/reference
An example could be: http://www.google-analytics.com/collect?v=1&tid=UA-123456-1&cid=5555&t=pageview&dp=%2FpageA
We usually have this as a separate process that fires when the CRM data is written to its database (the webhook in your example). If it's a lot of data, you should probably implement checks to see whether each hit was successful, and add caching in case the service is not online - there is an optional parameter (queue time) that gives you 4 hours of leeway in sending data.
Hope this gets you at least started.
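To make the "URL with the correct parameters" part concrete, here is a minimal Python sketch of building a Measurement Protocol hit; the helper name and the extra keyword arguments are illustrative, while the parameter names (v, tid, cid, t, dp) come from the MP v1 reference linked above:

```python
from urllib.parse import urlencode

def build_mp_hit(tracking_id, client_id, hit_type, **params):
    """Build a Measurement Protocol v1 collect URL.

    Required fields: protocol version (v), tracking ID (tid),
    client ID (cid), and hit type (t). Any extra keyword arguments
    (e.g. dp for a page path, or ec/ea for event fields) are
    appended to the query string.
    """
    query = {"v": 1, "tid": tracking_id, "cid": client_id, "t": hit_type}
    query.update(params)
    return "https://www.google-analytics.com/collect?" + urlencode(query)
```

For example, build_mp_hit("UA-123456-1", "5555", "pageview", dp="/pageA") reproduces the example hit shown earlier; your webhook handler would issue an HTTP GET or POST to that URL.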
