Wrong S3 Authentication with Paw App

End device: EMC ECS,
Protocol: AWS S3
I'm trying to authenticate with my Python script and then construct the same request in Paw.
Python with boto works just fine.
The minimal code:
from boto.s3.connection import S3Connection

accessKeyId = 'objuser'
secretKey = 'spl4vDHl11H7uW/683WZCoYrle03Bn1hd42gy8bd'
host = '10.10.10.10'
port = 9020
conn = S3Connection(aws_access_key_id=accessKeyId,
                    aws_secret_access_key=secretKey,
                    host=host,
                    port=port,
                    calling_format='boto.s3.connection.ProtocolIndependentOrdinaryCallingFormat',
                    is_secure=False)
print(conn.get_all_buckets())
These headers are correct and accepted by the S3 server:
Date: Fri, 08 Apr 2016 07:38:34 GMT
Authorization: AWS obtuser:Gi/qcdbyYcVMdI9EkdORPMx2wbo=
Next I re-create the same request with Paw, but it produces a wrong header:
Date: Fri, 08 Apr 2016 07:38:34 GMT
Authorization: AWS obtuser:/znFNFviqD5fw3t1oWUwBQ8B5M4=
Naturally, it is rejected by the S3 server.
In Paw I set the Authorization header with the standard "Amazon S3 Authorization Header" dynamic value. The AWS Access Key ID and Secret Access Key are the same as in the script (triple-checked).
According to the ECS documentation, S3 authentication follows "Signing and Authenticating REST Requests", so the signature is based on standard HMAC-SHA1. I expect Paw to use the same method.
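For reference, here is a minimal sketch of that signing scheme (AWS Signature Version 2), assuming an empty payload and using the illustrative values from above:

import base64
import hashlib
import hmac

secret_key = 'spl4vDHl11H7uW/683WZCoYrle03Bn1hd42gy8bd'
# String-to-sign: VERB, Content-MD5, Content-Type, Date, CanonicalizedResource
string_to_sign = ('GET\n'
                  '\n'
                  '\n'
                  'Fri, 08 Apr 2016 07:38:34 GMT\n'
                  '/')
digest = hmac.new(secret_key.encode('utf-8'),
                  string_to_sign.encode('utf-8'),
                  hashlib.sha1).digest()
signature = base64.b64encode(digest).decode('ascii')
print('Authorization: AWS objuser:' + signature)

If two tools produce different signatures for the same Date header, they must be disagreeing on some part of this string-to-sign (most often the CanonicalizedResource).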
Could you please advise what the potential reason is that Paw doesn't create the correct Authorization header, and how to fix it?
Many thanks in advance!

Sorry for the very late answer! I've just tested this again myself using our AWS account, and I was able to list all our S3 buckets without trouble.
It's probably an issue with the way the Authorization header has been configured, or an invalid character inserted in the URL field. (A screenshot would help to see what is wrong.)

Using Paw v3.0.16 and the "Amazon S3 Authorization Header" dynamic value, a SignatureDoesNotMatch error occurred with this path-style URL:
GET https://s3-ap-northeast-1.amazonaws.com/bucket123/sub456/some789.json
But it works well with the equivalent virtual-hosted-style URL:
GET https://bucket123.s3.amazonaws.com/sub456/some789.json
http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html
Presumably the dynamic value derives the bucket name from the hostname when building the CanonicalizedResource, so with a path-style URL the string-to-sign comes out wrong.


Cognitive Services Custom Vision SDK NotFound Error

I'm experiencing strange behavior. I have a Custom Vision service deployed on Azure; it contains a single project with no published models.
Querying for projects via the HTTP REST API correctly returns a list of (one) project, as shown below:
GET https://westeurope.api.cognitive.microsoft.com/customvision/v3.0/training/projects HTTP/1.1
Host: westeurope.api.cognitive.microsoft.com
Training-Key: {MY_TRAINING_APIKEY}

HTTP/1.1 200 OK
apim-request-id: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
x-content-type-options: nosniff
Date: Thu, 02 May 2019 18:57:25 GMT
Content-Length: 605
Content-Type: application/json; charset=utf-8

[{
    PROJECT_DATA
}]
But if I try to use the service via the C# SDK packages:
Microsoft.Azure.CognitiveServices.Vision.CustomVision.Prediction
Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training
(both version 1.0), every time I get:
Microsoft.Azure.CognitiveServices.Vision.CustomVision.Training.Models.CustomVisionErrorException:
'Operation returned an invalid status code 'NotFound''
This is the code snippet using the SDK.
using (CustomVisionTrainingClient client = new CustomVisionTrainingClient())
{
    client.ApiKey = "{MY_TRAINING_APIKEY}";
    client.Endpoint = "https://westeurope.api.cognitive.microsoft.com/customvision/v3.0/Training/";
    var projects = client.GetProjects();
}
Interesting fact: trying to use both the Training and Prediction clients against a currently working Custom Vision project (one with deployed models, too), I keep getting the NotFound error on every SDK method.
Am I missing something?
Thanks in advance.
Fabio.
For Cognitive Services Vision clients, you need to provide the base URI as the Endpoint property, not the entire API endpoint. The client SDK itself appends the remainder of the path (including the version) depending on the method you call.
So in your case you need to do the following:
using (CustomVisionTrainingClient client = new CustomVisionTrainingClient())
{
    client.ApiKey = "{MY_TRAINING_APIKEY}";
    client.Endpoint = "https://westeurope.api.cognitive.microsoft.com";
    var projects = client.GetProjects();
}

AWS SES receipt rules for something@example.com not being triggered as expected

EDIT: My mail client has returned an error message (after a 2-day timeout):
Delivery attempt history for your mail:
Sun, 02 Sep 2018 21:20:35 +0000 (GMT)
TCP active open: Failed connect() to TCP port 25 of 104.27.139.56 104.27.138.56 last Error: Connection timed out
The IPs it mentions belong to Cloudflare. I've posted in their forum, but thought it worth asking here as I do think this is the problem I'm experiencing, and perhaps I need to remove Cloudflare altogether.
My scenario is:
SES is no longer in sandbox mode.
I've verified my domain and my personal email address.
I've added the TXT / name records to Cloudflare (I'm wondering if they are the issue).
I've set up rules such that 'something@example.com' and 'anything@example.com' should trigger, delivering the email to an S3 bucket.
S3 isn't my ideal solution but as a proof of concept it'll do.
However, emails sent to 'something@example.com' are not getting through: nothing lands in S3, and I'm receiving (via my personal email) a
Delivery Status Notification (Failure)
3 hours ago at 12:58 PM
From: MAILER-DAEMON@amazonses.com
To: myemail@me.com
An error occurred while trying to deliver the mail to the following recipients:
something@example.com
Can anyone perhaps shed some light on what I may be doing wrong?
The MX record (in Cloudflare) has been updated to this one, which I obtained from SES:
amazonses - 10 feedback-smtp.us-east-1.amazonses.com - proxy is automatic
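For comparison, the SES developer guide's receiving setup uses an MX record pointing at the region's inbound-smtp endpoint, e.g. in zone-file form (domain, TTL and region illustrative):

example.com. 300 IN MX 10 inbound-smtp.us-east-1.amazonaws.com.

feedback-smtp is the SES outbound MAIL FROM endpoint, so it may be worth double-checking which record the receipt-rule documentation actually asked for.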

How to send authorization details in Pact JSON generation or to the Pact verifier in pact-python

How do I send authorization details to the mock service or to the Pact verifier in pact-python?
When I call the API through SoapUI it works fine, but when I run the mock JSON through the Pact verifier it fails, since I am not sending the Authorization details in the request header or adding them to the verifier.
Can you please help me with how to send authorization details through request headers in pact-python?
def test_HappyPath(self):
    mockurl = 'http://localhost:1234'
    expected = {'body': True}
    pact.given(
        'Given there is a valid form'
    ).upon_receiving(
        'fetch all the info'
    ).with_request(
        'get',
        '/',
        headers={'Authorization': 'Bearer 58771381-333e-334f-9604-784'}
    ).will_respond_with(200, body=expected)
    with pact:
        result = callAPI(mockurl)
        self.assertEqual(result, expected)
Request and Authorization Info:
GET https: http....com/ /v1/ forms/83359274-7ad6-4
Accept-Encoding: gzip,deflate
Authorization: Bearer 58771381-333e-334f-9604-ebf977ed7784
Content-Length: 0
Host: company.com
Connection: Keep-Alive
User-Agent: Apache-HttpClient/4.5.2 (Java/1.8.0_162)
OAUTH2.0:
CSClientUser=username
CSClientPassword=pwd
CSClientIdendification=xxxx
CSClientSecret=fffff
CSAccessTokenURI=company.com/oauth2/token
Please read https://docs.pact.io/faq#how-do-i-test-oauth-or-other-security-headers as a starting point.
On the consumer side, what you have done is fine (although you might want a matcher if you are generating unique tokens).
On the provider side, you will want to pass the (as yet undocumented in pact-python) --custom-provider-header option to the verification process [1], which allows you to override one or more headers with dynamic values (in your case, a live auth token).
This is assuming of course that you can't mock out the authentication system during provider verification.
Under the hood, a separate service[2] is spawned to do this verification. There is more info on this feature in the link provided.
[1] https://github.com/pact-foundation/pact-python/#verifying-pacts-against-a-service
[2] https://github.com/pact-foundation/pact-provider-verifier#usage
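For example, invoking the standalone verifier with a live token would look something like this (the pact file path, provider URL, and token are placeholders; see the usage docs above for the exact flags):

pact-provider-verifier ./pacts/consumer-provider.json \
  --provider-base-url http://localhost:5000 \
  --custom-provider-header "Authorization: Bearer <live-token>"

The given header value replaces whatever Authorization header was recorded in the pact at verification time.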

BizTalk 2016: How to use HTTP Send adapter with API token

I need to make calls to a REST API service via a BizTalk send adapter. The API simply uses a token in the header for authentication/authorization. I have tested this in a C# console app using HttpClient, and it works fine:
string apiUrl = "https://api.site.com/endpoint/<method>?";
string dateFormat = "dateFormat = 2017-05-01T00:00:00";
using (var client = new HttpClient())
{
    client.DefaultRequestHeaders.Add("token", "<token>");
    client.DefaultRequestHeaders.Add("Accept", "application/json");
    string finalurl = apiUrl + dateFormat;
    HttpResponseMessage resp = await client.GetAsync(finalurl);
    if (resp.IsSuccessStatusCode)
    {
        string result = await resp.Content.ReadAsStringAsync();
        var rootresult = JsonConvert.DeserializeObject<jobList>(result);
        return rootresult;
    }
    else
    {
        return null;
    }
}
however I want to use BizTalk to make the call and handle the response.
I have tried using the WCF-WebHttp adapter, selecting 'Transport' security (it is an HTTPS site, so security is required(?)) with no credential type specified, and placed the header with the token in the 'Messages' tab of the adapter configuration. This fails, though, with the exception: System.IO.IOException: Authentication failed because the remote party has closed the transport stream.
I have tried googling for this specific scenario and cannot find a solution. I did find this article with suggestions for OAuth handling, but I'm surprised that even with BizTalk 2016 I still have to create a custom assembly for something so simple.
Does anyone know how this might be done in the WCF-WebHttp send adapter?
Yes, you have to write a custom Endpoint Behaviour and add it to the send port. In fact, with the WCF-WebHttp adapter even Basic Auth doesn't work, so I'm currently writing an Endpoint Behaviour to address this.
One of the issues with OAuth is that there isn't one standard that everyone follows; so far I've had to write two different OAuth behaviours because providers have implemented things differently: one using a secret and timestamp hashed together to get a token, and the other using Basic Auth to get a token. Also, one of them let you get multiple tokens using the same credentials, whereas the other would expire the old token straight away.
Another thing I've had to write a custom behaviour for is the version of TLS the endpoint expects, as by default BizTalk 2013 R2 tries TLS 1.0 and will then fail if the web site does not allow it.
You can feedback to Microsoft that you wish to have this feature by voting on Add support for OAuth 2.0 / OpenID Connect authentication
Maybe someone will open source their solution. See Announcement: BizTalk Server embrace open source!
Figured it out: I should have used 'Certificate' for the client credential type.
I just had to:
1. Add the token in the Outbound HTTP Headers box in the Messages tab, and select 'Transport' security with 'Certificate' as the transport client credential type.
2. Download the certificate from the API's website via the browser (manually) and install it in the local server's certificate store.
3. Select that certificate and thumbprint in the corresponding fields in the adapter via the 'Browse' buttons (I had to scroll through the available certificates and select the API/website certificate I was trying to connect to).
I discovered this by accident when I had Fiddler running and set the adapter proxy setting to the local Fiddler address (http://localhost:8888). I realized that since Fiddler negotiates the TLS connection/certificate with the remote server (I enabled TLS 1.2 in Fiddler), messages were able to get through, but not directly between the adapter and the remote API server (when Fiddler wasn't running).

DocumentDB web API access with R

I have the following issue when trying to connect to the DocumentDB web API with R and Postman.
According to the DocumentDB documentation, the way to query the web API is to compose an Authorization header containing a base64-encoded HMAC hash.
In R I'm trying to compute the signature, then test the header directly with Postman.
But every time I get an HTTP 401.
Here is my R code:
library(digest)   # provides hmac()
library(RCurl)    # provides base64()

toHash <- enc2utf8("get\ncolls\ndbs/toto/colls/testtoto\nsun, 08 may 2016 06:43:05 gmt\n\n")
hash <- hmac(key, toHash, "sha256")
base64(hash)
the "key" is the primary key got from the portal.
And then, following the Azure documentation, my header is:
type=master&ver=1.0&sig=< thebase64(hash) >
I'm pasting that into PostMan with the headers x-ms-version, date and x-ms-date.
But it'is not working..
I'm stuck now, does anyone have an idea? Am I using a wrong R function? A wrong key, is there a way to get more information about the mismatch?
The web API response is:
{
"code": "Unauthorized",
"message": "The input authorization token can't serve the request. Please check that the expected payload is built as per the protocol, and check the key being used. Server used the following payload to sign: 'get\ncolls\ndbs/toto/colls/testtoto\nsun, 08 may 2016 06:43:05 gmt\n\n'\r\nActivityId: fadbfc0b-e298-418a-b56c-8114699fff91"
}
I found what was wrong myself.
The key given in the Azure portal is base64 encoded, so it is mandatory to decode it before passing it to digest::hmac:
rawKey <- RCurl::base64Decode(key, mode = "raw")
hash <- digest::hmac(rawKey, toHash, "sha256", raw = TRUE)
It is also mandatory to specify raw = TRUE in the hmac call, so that the digest comes back as raw bytes (which you then base64-encode) rather than as a hex string.
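As a cross-check, here is a minimal sketch of the same token construction in Python (the key value is a dummy; per the DocumentDB REST docs the final header value must also be URL-encoded):

import base64
import hashlib
import hmac
import urllib.parse

key = 'dG9wU2VjcmV0S2V5'  # dummy; use the base64-encoded primary key from the portal
to_hash = 'get\ncolls\ndbs/toto/colls/testtoto\nsun, 08 may 2016 06:43:05 gmt\n\n'
digest = hmac.new(base64.b64decode(key),      # decode the portal key first
                  to_hash.encode('utf-8'),
                  hashlib.sha256).digest()    # raw bytes, not a hex string
sig = base64.b64encode(digest).decode('ascii')
print(urllib.parse.quote('type=master&ver=1.0&sig=' + sig, safe=''))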
