$lastn support in Google Healthcare API (FHIR store) - google-cloud-healthcare

We are using the Google Healthcare FHIR store v1 with the Node.js client (googleapis).
The Google Cloud Platform API docs mention $lastn for v1beta1 (but not v1), and the npm package (googleapis) only exposes the ObservationLastn() function when initialized with version="v1beta1", not with "v1".
Is Google Healthcare dropping $lastn support in v1 (and later)?
We were able to make $lastn work against a v1 FHIR store with the googleapis client initialized with "v1beta1", but the limit of 1000 matched resources (not unique codes) is too strict. Is this configurable?

APIs are promoted to v1 based on the maturity of the feature, as well as customer demand. I recommend you use the "Send Feedback" link on the Observation-lastn documentation page so that the team knows you are interested in seeing this feature go to v1.
As for the 1000-resource limit, this is not configurable, as it is required to keep performance acceptable. Feel free to mention your use case if you file feedback. In the meantime, if you need to process more than 1000 search results, I suggest you use the Search API and do the paging and post-processing yourself.
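If it helps, here's a rough sketch of that approach in Node.js/TypeScript: page through a standard FHIR search on the store and keep the most recent Observation per code. The project/location placeholders in the base URL, the search parameters, and the way the access token is obtained are assumptions for illustration, not the client library's own API.

```typescript
// Sketch: page through standard FHIR search results and keep the latest
// Observation per code, as a stand-in for $lastn. Assumes an OAuth2 access
// token is already available and the FHIR store's base URL is known.
const FHIR_BASE =
  "https://healthcare.googleapis.com/v1/projects/PROJECT/locations/LOCATION" +
  "/datasets/DATASET/fhirStores/FHIR_STORE/fhir";

async function latestObservationsByCode(patientId: string, accessToken: string) {
  const latestByCode = new Map<string, any>();
  // Sort newest-first so the first resource seen for each code is the latest.
  let url: string | null =
    `${FHIR_BASE}/Observation?subject=Patient/${patientId}&_sort=-date&_count=100`;

  while (url) {
    const res = await fetch(url, {
      headers: { Authorization: `Bearer ${accessToken}` },
    });
    if (!res.ok) throw new Error(`Search failed: ${res.status}`);
    const bundle = await res.json();

    for (const entry of bundle.entry ?? []) {
      const obs = entry.resource;
      const code = obs?.code?.coding?.[0]?.code;
      if (code && !latestByCode.has(code)) latestByCode.set(code, obs);
    }

    // Standard FHIR paging: follow the bundle's "next" link until it is absent.
    const next = (bundle.link ?? []).find((l: any) => l.relation === "next");
    url = next ? next.url : null;
  }

  return latestByCode;
}
```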

Related

How do I get the name of the creative through the adcreativev2 API?

I am trying to find the API that lets me query for the creative name through the creative ID. When I used this API: https://api.linkedin.com/v2/adCreativesV2/{creative id}
It returned the creative's information, but the name was missing.
So basically, after endlessly exploring the docs and available forums, it seems the name can be found using the API https://api.linkedin.com/v2/adDirectSponsoredContents/{URN}
where the URN can be found in the original https://api.linkedin.com/v2/adCreativesV2/{creative id} response; it is the value of the reference field.
It seems unintuitive to have it in a separate endpoint instead of the original GET API, but I hope this helps anyone who stumbles on this answer.
P.S. Sometimes the API returns a 404 for an ad that definitely exists. I don't know the exact cause, but when you view the ad in Campaign Manager you will see that the creative name just has the value of the creative ID.
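For anyone who wants it in code, here's a rough sketch of that two-step lookup, based only on the endpoints mentioned above; the response field names and the auth handling are assumptions on my part.

```typescript
// Sketch of the workaround described above: fetch the creative, then use its
// "reference" URN to look up the name via adDirectSponsoredContents.
// The response shapes and the "name" field are assumptions based on the answer.
async function getCreativeName(creativeId: string, accessToken: string) {
  const headers = { Authorization: `Bearer ${accessToken}` };

  const creativeRes = await fetch(
    `https://api.linkedin.com/v2/adCreativesV2/${creativeId}`,
    { headers }
  );
  const creative = await creativeRes.json();

  // The "reference" field holds the URN of the direct sponsored content.
  const urn: string = creative.reference;

  const dscRes = await fetch(
    `https://api.linkedin.com/v2/adDirectSponsoredContents/${encodeURIComponent(urn)}`,
    { headers }
  );
  if (!dscRes.ok) throw new Error(`Lookup failed: ${dscRes.status}`); // may 404 for non-DSC ads
  const dsc = await dscRes.json();
  return dsc.name;
}
```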
You are correct on how to get the creative names through the adCreativeV2 API. HOWEVER, this is literally being deprecated in about a week (2/28).
By the 28th, you have to migrate to the "versioned" LinkedIn API. https://learn.microsoft.com/en-us/linkedin/marketing/versioning?view=li-lm
The adCreativesV2 endpoint will fail. I still have no idea how to grab the creative names on the versioned API. I've reached out to LinkedIn.
Here are some of the changes: https://learn.microsoft.com/en-us/linkedin/marketing/versioning?view=li-lms-2023-02
I haven't been able to use the new creatives endpoint to get the creative name. It's even less precise than their old API, and since the share endpoints are being deprecated, I'm not sure where to go.
I'll update you if I hear back. Their API is frustrating and very poorly documented.
EDIT: You're getting 404s on some ads probably because those ads are NOT DSC. There are unfortunately multiple different endpoints you have to call to expose the creative contents, depending on the ad type and whether it's DSC or not. But as mentioned, this is all being deprecated in about a week, so I don't think it's worth implementing since it'll all break on the 28th when the old V2 endpoints are deprecated.

How to cache an api response?

I'm using the api http://exchangeratesapi.io/ to get exchange rates.
Their site asks:
Please cache results whenever possible this will allow us to keep the service without any rate limits or api key requirements.
-source
Then I found this:
By default, the responses all of the requests to the exchangeratesapi.io API are cached. This allows for significant performance improvements and reduced bandwidth from your server.
-somebody's project on github, not sure if accurate
I've never cached anything before, and these two statements confuse me. When the API's site says to "please cache the results", it sounds like caching is something I can do in a fetch request, or somehow on the frontend; for example, some way to store the results in local storage. But I couldn't find anything about how to do this. I only found resources on how to force a response NOT to cache.
The second quote makes it sound like caching is something the API does itself on their servers, since they set the response to cache automatically.
How can I cache the results like the api site asks?
To clear up your confusion about the two conflicting statements you're referencing:
Caching just means storing the data. Examples of where the data can be stored are in memory, in some persistence layer (like Redis), or in the browser's local storage (as you mentioned). The intent behind caching can be to serve the data faster on future requests/fetches (compared to getting it from the primary data source), and/or to save on the cost of fetching the same data repeatedly, among other things.
For your case, the http://exchangeratesapi.io/ API is advising consumers to cache the results on their side (as you mentioned in your question, this can be in the browser's local storage if you're calling the API from front-end code, or in memory or another caching mechanism/structure in the server-side application code calling the API) so that they can avoid the need to introduce rate limiting.
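For example, here is a minimal front-end sketch of that idea, caching the response in localStorage with an expiry; the URL and the one-hour TTL are just illustrative.

```typescript
// Sketch: cache an API response in the browser's localStorage for an hour,
// so repeated page loads don't hit the API again. URL and TTL are illustrative.
const CACHE_KEY = "exchange-rates";
const ONE_HOUR_MS = 60 * 60 * 1000;

async function getRates(): Promise<any> {
  const cached = localStorage.getItem(CACHE_KEY);
  if (cached) {
    const { storedAt, data } = JSON.parse(cached);
    if (Date.now() - storedAt < ONE_HOUR_MS) return data; // still fresh, skip the network
  }

  const res = await fetch("https://api.exchangeratesapi.io/latest");
  const data = await res.json();
  localStorage.setItem(CACHE_KEY, JSON.stringify({ storedAt: Date.now(), data }));
  return data;
}
```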
The project from GitHub you're referencing, Laravel Exchange Rates, appears to be a PHP wrapper around the original API, so it acts as a middleman between the API and a developer's PHP code. The intent is to make it easier to use the API from within PHP code, without having to make raw HTTP requests to the API or process the responses yourself; the Laravel Exchange Rates library handles that for the developer.
In regards to the
By default, the responses all of the requests to the exchangeratesapi.io API are cached
statement you're asking about, it seems the library follows the advice of the API, and caches the results from the source API.
So, to sum up:
http://exchangeratesapi.io/ is the source API, and it advises consumers to cache results. If your code is going to be calling this API, you can cache the results in your own code.
The Laravel Exchange Rates PHP library is a wrapper around that source API, and it caches the results from the source API for you. If you're using this library, you don't need to add further caching yourself.

how accurate is microsoft cognitive speaker identification

I am trying to build an application with the Microsoft Cognitive Speaker Identification Service. But when I test it using its API, some audio is not recognized correctly. I would like to know what the accuracy level of the service is. Is there any way to improve it?
There are various things that can affect the accuracy of the identification, e.g. noise level, microphone quality, echo, etc.
To improve performance in your case, make sure the enrollment audio is recorded under the same conditions as the test audio (e.g. the same microphone), and try to ensure that recording is done in a quiet environment.
It does work across multiple users; I have tried it on different PCs/microphones.
I'd make sure that:
It is in a quiet room/environment
You are sending the audio correctly... (it is just byte array data, no additional encoding.)
Also check the MediaTypeHeaderValue/content-type header; all requests seem to be 'application/json' even though we send WAV files.
Take care when mapping your users to the Azure GUIDs, and make sure you are using the correct ones. If you are using the SDK rather than the API for profile creation and enrollment, there is no retrieval of a profile by ID at the moment. My workaround is to recreate the profile and update the ID in a database just before enrollment. (The API doesn't need this, though.)
Also make sure you are using the latest API (URLs ending .../speaker/verification/v2.0/ etc.). Some of the text-independent features in the SDK are V2-only, and verification can fail because V2 stores profiles in 3 separate locations depending on the verification method.
Also check that the profile was created/enrolled using the same verification method you are using to verify. Try with a new profile if unsure.
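To illustrate the "raw byte array" point above, here's a rough sketch of what an enrollment request might look like. The region, the path under the v2.0 URL mentioned above, and the content type are assumptions on my part and should be checked against the current docs.

```typescript
// Rough sketch of sending raw WAV bytes to the speaker verification service.
// The base URL follows the v2.0 pattern mentioned above; the exact region,
// path, and content type are assumptions and should be verified in the docs.
import { readFileSync } from "fs";

async function enrollProfile(profileId: string, wavPath: string, key: string) {
  const audio = readFileSync(wavPath); // raw byte array, no additional encoding

  const res = await fetch(
    "https://westus.api.cognitive.microsoft.com/speaker/verification/v2.0/" +
      `text-independent/profiles/${profileId}/enrollments`,
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "audio/wav", // some clients reportedly send application/json; verify
      },
      body: audio,
    }
  );
  if (!res.ok) throw new Error(`Enrollment failed: ${res.status} ${await res.text()}`);
  return res.json();
}
```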

Google Maps API V3 error: 403 (Forbidden access for too many pageviews)

I have a published layer in Google Maps Engine that I am attempting to display using the Google Maps V3 API. In GME, the layer's "Shared with" access list includes my user, and the "API access" access list also includes my user.
I am making the Maps V3 API call using MapsEngineLayer from the visualization library, and setting the appropriate auth token and layer asset id as part of the layerOptions.
The API call is failing with the error message:
Google Maps API V3 error: 403 (Forbidden access for too many pageviews)
The URL looks like:
https://earthbuilder.googleapis.com/my_gme_layer_asset_id-4/maproot/json?
output=jsonp&access_token=my_auth_token&callback=xdc._tsel5i
I have found some discussion threads related to "403" and "forbidden", but am having difficulty figuring out the meaning of (and solution to) the "too many pageviews" issue.
Any suggestions would be appreciated.
If you tried David's solution and it doesn't work, it is worth checking that you are not caching (or storing locally) the Google Maps JS script. Google doesn't allow that; if you serve that file yourself, it will work for about 3 days and then stop working.
How many requests have you made so far today? There are usage limits on the Maps API that may be preventing you from requesting any further data.
https://developers.google.com/maps/faq#usagelimits
Although it does seem unlikely that you have hit their hard limit of 25,000 requests, you may want to make sure that you aren't accidentally DoSing them with HTTP requests. That sort of thing will invariably burn through your limit, and potentially place your IP on a blacklist.
Also, you should check the Maps API reference materials; I think you may be trying to use a deprecated API.
https://developers.google.com/maps/documentation/webservices
Your http request should look more like this:
http://maps.googleapis.com/maps/api/service/output?parameters
Where output is either json or xml.
edit: The Maps API Help page is located here.
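For reference, here's a hedged sketch of a web-service call of that shape, using the Geocoding service as an example; the address is illustrative, and current usage of the API generally requires a key parameter as well.

```typescript
// Sketch of a web-service request following the pattern above, using the
// Geocoding service with JSON output. Address and key handling are illustrative.
async function geocode(address: string, apiKey?: string) {
  const params = new URLSearchParams({ address });
  if (apiKey) params.set("key", apiKey);

  const res = await fetch(
    `https://maps.googleapis.com/maps/api/geocode/json?${params}`
  );
  const body = await res.json();
  if (body.status !== "OK") throw new Error(`Geocoding failed: ${body.status}`);
  return body.results[0].geometry.location; // { lat, lng }
}
```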

GSM system description

We want to launch a vehicle tracking service: remote monitoring of assets over GPRS/SMS. It involves the development, integration, and maintenance of GPS tracking / remote monitoring software (GSM/GPRS based) with the Google Maps API, MapInfo (.img), or the possibility to integrate any other map service, plus geofencing, geocoding, reverse geocoding, alerts on events, a user-friendly GUI, a dashboard, per-user billing, scrolling, fuel meter display, etc. For reference, have a look at gpsgate.com (a tracking server solution).
How should we develop this, and how much time is needed? Any ideas?
First of all you will need some sort of gateway. It must handle TCP connections from devices (use async sockets! =)), parse their data, and send it to storage.
The next big thing is storage itself. If you want to support different devices, I would suggest using something like Apache Cassandra with keys based on the date (only the date, not the time) and the device UID.
The third part of the puzzle is how you are going to present the data to users. This is pretty simple; I'd suggest REST services.
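As a rough sketch of the gateway piece (the newline-delimited message format, the port, and the storeReading placeholder are assumptions for illustration, not any particular device protocol):

```typescript
// Sketch of a TCP gateway: accept connections from devices, parse
// newline-delimited messages, and hand them to a storage writer.
import * as net from "net";

function storeReading(deviceUid: string, reading: Record<string, string>) {
  // Placeholder: write to Cassandra (or another store) keyed by (date, deviceUid),
  // as suggested above.
  console.log(new Date().toISOString().slice(0, 10), deviceUid, reading);
}

const server = net.createServer((socket) => {
  let buffer = "";
  socket.on("data", (chunk) => {
    buffer += chunk.toString("utf8");
    let newline: number;
    while ((newline = buffer.indexOf("\n")) >= 0) {
      const line = buffer.slice(0, newline).trim();
      buffer = buffer.slice(newline + 1);
      if (!line) continue;
      // Hypothetical message format: "UID;lat;lon;speed"
      const [uid, lat, lon, speed] = line.split(";");
      storeReading(uid, { lat, lon, speed });
    }
  });
  socket.on("error", (err) => console.error("device connection error", err));
});

server.listen(5000, () => console.log("TCP gateway listening on :5000"));
```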
This is from my own experience: at my last job I was the architect/lead on much the same project.
It is now live and successfully handling 30k+ devices online, with 1 server for apps (IIS), 2 for data, and 2 for TCP gateways.
If you want more specific info, feel free to ask =)
Honestly, it all depends on your skills and expertise.
A team that is well versed in designing complex systems like that could finish the task in 4-6 months.
Given that you are asking such a question rather than already having a ballpark estimate, you would probably be learning as you go. This could easily stretch to over a year, especially without prior experience managing such a far-reaching project.
