Authenticating for findsequence service requests (HERE Maps API)

I'm attempting to make a GET request to the HERE Maps API FindSequence service. I noticed that the docs list three authentication parameters: app_id, app_code, and apiKey. The docs imply that there is an option to use an app_id and app_code combination or an apiKey alone. This makes sense because elsewhere in the HERE Maps docs, it's noted that the old pattern was to use app_id and app_code, but that has recently been deprecated and one is now supposed to use apiKey alone. In fact, you cannot even generate an app_code anymore in the HERE developer projects dashboard.
So I attempted to make a request with the apiKey but I got an authentication error that demanded the app_id and app_code:
`curl --location --request GET "https://wse.api.here.com/2/findsequence.json?apiKey=[apiKey]"`
{"faultCode":"s74149e0f-5b37-41b1-bf25-0d5f93e06938","responseCode":"400","message":"The request is missing the app_id and app_code parameters. They must both be passed as query parameters. If you do not have app_id and app_code, please obtain them through your customer representative or at http://developer.here.com/myapps."}
It's my understanding that freemium accounts do not have customer reps. I've asked for technical help and they sent me to Stack Overflow. I followed the URL http://developer.here.com/myapps, but it redirects to https://developer.here.com/projects. As far as I can see, there is no way to obtain an app_code from there.
My question is:
1) Do I need to supply an app_code? If not, how do I make a request without one? If I do need an app_code, how do I obtain one?
2) If app_codes can no longer be obtained, is there another service or another version of this service I should be using to calculate the optimal route sequence with given waypoint locations?

With the API key, you need to make sure to query the newer endpoints, in *.hereapi.com.
So the following request should work better:
curl --location --request GET "https://wse.ls.hereapi.com/2/findsequence.json?apiKey=[apiKey]&param1=value1&...
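If it helps, here is a minimal sketch of the same call from Python. The waypoint parameters (start, destination1, end) and the mode value are illustrative placeholders; adjust them to your actual stops and options per the Waypoints Sequence docs.

import requests

# Placeholder apiKey and waypoints; replace with your own values.
params = {
    "apiKey": "[apiKey]",
    "start": "50.0715,8.2434",
    "destination1": "50.1073,8.6647",
    "destination2": "49.9930,8.2669",
    "end": "50.0021,8.2590",
    "mode": "fastest;car",
}

resp = requests.get("https://wse.ls.hereapi.com/2/findsequence.json", params=params)
resp.raise_for_status()
print(resp.json())

The key point is simply that the host is wse.ls.hereapi.com rather than wse.api.here.com, and that apiKey is the only credential passed.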

Related

Is there an endpoint to batch get urn:li:digitalmediaAsset in the LinkedIn API?

We are doing a rest/posts?author={MY_ORG} request against the LinkedIn API (version 202211). Some of the posts returned contain content referenced with urn:li:digitalmediaAsset, for which we need the download URL.
When I encounter urn:li:image or urn:li:video I can do a batch GET to fetch additional details about the assets. I'd like to do the same thing for urn:li:digitalmediaAsset. I haven't seen an endpoint for that - does it exist?
I understand that I can use a projection here, but I'd like to align with the code that I have for images and videos if the endpoint exists. In other words, I am looking for an alternative to using projections.
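For reference, the batch pattern we already use for urn:li:image looks roughly like this (a sketch with a placeholder token and placeholder URNs, assuming the versioned /rest/images batch endpoint); we are after an equivalent for urn:li:digitalmediaAsset:

import requests
from urllib.parse import quote

ACCESS_TOKEN = "PLACEHOLDER_TOKEN"
urns = ["urn:li:image:PLACEHOLDER1", "urn:li:image:PLACEHOLDER2"]

# Rest.li 2.0 batch get: the URNs are URL-encoded inside ids=List(...)
ids = "List(" + ",".join(quote(u, safe="") for u in urns) + ")"

resp = requests.get(
    "https://api.linkedin.com/rest/images?ids=" + ids,
    headers={
        "Authorization": "Bearer " + ACCESS_TOKEN,
        "LinkedIn-Version": "202211",
        "X-Restli-Protocol-Version": "2.0.0",
    },
)
print(resp.json())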

Extending artifactory's pypi repos with plugins

I am trying to migrate a legacy system to use Artifactory. However, I have two blockers:
the old scripts require PyPI's XML-RPC interface, which Artifactory doesn't support
they also make use of upload_docs, which is not supported by Artifactory's PyPI implementation either
a smaller issue: the old scripts call register and expect a 200 instead of a 204 HTTP status code.
Would it be possible for me to write a plugin to implement this?
Looking at https://www.jfrog.com/confluence/display/RTF/User+Plugins I couldn't find a callback for when POST /api/pypi/<index-name> is requested.
If I can make that endpoint work for the methods we actually use, pretend it deployed docs, and respond with the correct status code, I will be happy enough.
As you say, there is no plugin hook for the PyPI API endpoints. It would be possible to use the altResponse endpoint to customize artifact downloads, but then you would be restricted to GET requests with no request body, which is also not a good option for you.
I think the most viable approach would be to define a custom executions endpoint. With this, you can specify the acceptable method, read the body, and set your own response code and body. The main shortcoming with this is that you can't customize the path (it's always /api/plugins/execute/[execution_name]), but this can be worked around.
Execution endpoints can take params in the following form:
/api/plugins/execute/[execution_name]?params=[param_name]=[param_val]
Say your plugin takes a param path, which represents the API path your old scripts are going to call. Then you can set your base URL to /api/plugins/execute/[execution_name]?params=path=/, so that the API path is appended to the param. Alternatively, you can use nginx or another reverse proxy to rewrite the original API path to this form.
(Since you'll be using XML-RPC, I don't suppose you'll need to worry about any of this path stuff, but I'm including it anyway for completeness.)
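To make that concrete, here is a rough Python sketch of how an old script might call such an execution endpoint directly. The plugin name pypiShim, its path param, and the repo name pypi-local are hypothetical; use whatever your execution actually defines.

import requests

# Hypothetical plugin name and param; adjust to your own execution definition.
ARTIFACTORY = "https://artifactory.example.com/artifactory"
EXECUTE_URL = ARTIFACTORY + "/api/plugins/execute/pypiShim"

with open("payload.xml", "rb") as body:          # e.g. an XML-RPC request body
    resp = requests.post(
        EXECUTE_URL + "?params=path=/api/pypi/pypi-local",  # stands in for the original API path
        data=body,
        auth=("admin", "password"),
    )

print(resp.status_code)   # the plugin decides this, e.g. 200 instead of 204
print(resp.text)          # String response body set by the plugin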
Some issues with this approach:
Execution endpoints only allow String responses, so sending binary data in the response body might be finicky. However, no such limitation exists with the request body.
If you need more than one request method, you'll need more than one execution endpoint. This means you'll need to use a reverse proxy to rewrite each method to a separate endpoint. Again, since XML-RPC just uses POST, this probably won't be an issue for you.
Execution endpoints can't customize response headers. Therefore, if your scripts expect a particular Content-Type or other header, you'll need to use a reverse proxy to insert it into the response.

Apigee Service Callout

I have a service callout in Apigee where, instead of hardcoding the URL for the HTTPTargetConnection, I want to use a variable for the value of the URL.
Example:
http://{request.queryparam.url}
This gives me a 404 Not Found error, but if I hardcode the same value that is passed as a query param, it works fine, calls the target service, and returns a response. I am not able to find any details on this in the docs. Please help me out. Thanks.
This is potentially a bug.
Please attach a policy of your own in the request part of the target.xml which assigns the target.url variable from your variable of choice. You can use an assign-variable, JavaScript, or Python policy for this, as you wish.
EDIT:
What I answered above applies to target callouts, not service callouts. To better understand the issue, I'd ask you to post the debug.xml in a pastebin or something similar.
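For target callouts, where the above applies, a minimal sketch of that variable-assignment policy written as a Python script policy might look like this (assuming the flow object that Apigee exposes to Python scripts; the variable names come from the question):

# Python script policy attached to the target endpoint's request flow.
# Copies the incoming query parameter into target.url so the target
# connection uses the dynamic URL instead of a hardcoded one.
url = flow.getVariable("request.queryparam.url")
if url:
    flow.setVariable("target.url", url)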

Detect and rewrite HTTP Basic user/password headers into custom headers with Nginx/Lua

I am working with a historic API which grants access via a key/secret combo, which the original API designer specified should be passed as the user name & password in an HTTP Basic auth header, e.g.:
curl -u api_key:api_secret http://api.example.com/....
Now that our API client base is going to be growing, we're looking to use 3scale to handle authentication, rate limiting, and other functions. As per 3scale's instructions and advice, we'll be using an Nginx proxy in front of our API server, which authenticates against 3scale's services to handle all the access control systems.
We'll be exporting our existing clients' keys and secrets into 3scale and keeping the two systems in sync. We need our existing app to continue to receive the key & secret in the existing manner, as some of the returned data is client-specific. However, I need to find a way of converting that HTTP basic auth request, which 3scale doesn't natively support as an authentication method, into rewritten custom headers which they do.
I've been able to set up the proxy using the Nginx and Lua configs that 3scale configures for you. This allows the -u key:secret to be passed through to our server, and correctly processed. At the moment, though, I need to additionally add the same authentication information either as query params or custom headers, so that 3scale can manage the access.
I want my Nginx proxy to handle that for me, so that users provide one set of auth details, in the pre-existing manner, and 3scale can also pick it up.
In a language I know, e.g., Ruby, I can decode the HTTP_AUTHORIZATION header, pick out the Base64-encoded portion, and decode it to find the key & secret components that have been supplied. But I'm an Nginx newbie, and don't know how to achieve the same within Nginx (I also don't know if 3scale's supplied Lua script can/will be part of a solution)...
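For reference, this is the logic I'm after, shown in Python with a hypothetical header value; I just don't know how to do the equivalent inside the Nginx/Lua layer:

import base64

# What curl -u api_key:api_secret actually sends:
auth_header = "Basic YXBpX2tleTphcGlfc2VjcmV0"

encoded = auth_header.split(" ", 1)[1]
api_key, api_secret = base64.b64decode(encoded).decode("utf-8").split(":", 1)
print(api_key, api_secret)   # -> api_key api_secret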
Reusing the HTTP Authorization header for the 3scale keys can be supported with a small tweak in your Nginx configuration files. As you were rightly pointing out, the Lua script that you download is the place to do this.
However, I would suggest a slightly different approach regarding the keys that you import to 3scale. Instead of using the app_id/app_key authentication pattern, you could use the user_key mode (which is a single key). Then what you would import to 3scale for each application would be the base64 string of api_key+api_secret combined.
This way the changes you will need to do to the configuration files will be fewer and simpler.
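For example, the user_key you would import for each application is simply the value curl -u already puts on the wire (a quick sketch, assuming the usual user:password form of Basic auth):

import base64

# Illustrative credentials; in practice these come from your existing export.
api_key, api_secret = "api_key", "api_secret"
user_key = base64.b64encode((api_key + ":" + api_secret).encode("utf-8")).decode("ascii")
print(user_key)   # import this string into 3scale as the application's user_key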
The steps you will need to follow are:
in your 3scale admin portal, set the authentication mode to API key (https://support.3scale.net/howtos/api-configuration/authentication-patterns)
go to the proxy configuration screen (where you set your API backend, mappings and where you download the Nginx files).
under "Authentication Settings", set the location of the credentials to HTTP headers.
download the Nginx config files and open the Lua script
find the following line (should be towards the end of the file):
local parameters = get_auth_params("headers", string.split(ngx.var.request, " ")[1] )
replace it with:
local parameters = get_auth_params("basicauth", string.split(ngx.var.request, " ")[1] )
finally, within the same file, replace the entire function named "get_auth_params" with the one in this gist: https://gist.github.com/vdel26/9050170
I hope this approach suits your needs. You can also get in touch at support#3scale.net if you need more help.

How do you use .pem files to authenticate a WCF request?

I'm trying to utilize the Amazon Product Advertising API. They provided me with a .wsdl file which I consumed and generated wrapper classes for via Visual Studio 2008's "Add Service Reference" option. This wrapper class works just fine as is and I've been successfully sending requests and receiving responses from Amazon.
However, they are now requiring that all partners start authenticating their requests. They have provided me with two .pem files (one which they call my X.509 certificate file, and one which they call my private key file). I'm not entirely sure what to do with these files. Amazon states the following:
Each SOAP request must be signed with the private key associated with the X.509 certificate. To create the signature, you sign the Timestamp element, and if you're using WS-Addressing, we recommend you also sign the Action header element. In addition, you can optionally sign the Body and the To header element
I realize that much more information may need to be provided here, so please let me know if I need to provide further detail in order to get an answer to this question.
Check out this article --> http://www.byteblocks.com/post/2009/06/15/Secure-Amazon-Web-Service-Request.aspx
Looks like it should help you out.
Other links that might help:
1) http://developer.amazonwebservices.com/connect/thread.jspa?messageID=132705
