I am looking for a way to embed a JWT token as a header in the GET/POST requests I send to the server in my PHPUnit tests.
Is there any option to add a header to the GET/POST calls? I am currently using the following method to send a GET request to the server.
$content = $this->get('users/logout')->response->getContent();
You need to prefix the header name with HTTP_. For example:
$server = ['HTTP_X-Requested-With' => 'XMLHttpRequest'];
$this->call('get', '/users/logout', [], [], $server);
Alternative syntax:
$this->client->setServerParameter('HTTP_X-Requested-With', 'XMLHttpRequest');
$this->call('get', '/users/logout');
For a JWT you could pass it the same way, e.g. $server = ['HTTP_Authorization' => 'Bearer ' . $token];. Hope this helps.
I know this question has been asked a bunch of times, but I tried most of the answers and still can't get it to work.
I have a Golang API using the net/http package and a JS frontend. I have a function:
func SetCookie(w *http.ResponseWriter, email string) string {
    val := uuid.NewString()
    http.SetCookie(*w, &http.Cookie{
        Name:  "goCookie",
        Value: val,
        Path:  "/",
    })
    return val
}
This function is called when the user logs in, and I expect it to be sent to all the other endpoints. This works as expected with Postman. However, when it comes to the browser, I can't seem to get it to remember the cookie or even send it to other endpoints.
An example of JS using an endpoint
async function getDataWithQuery(query, schema) {
    let raw = `{"query":"${query}", "schema":"${schema}"}`;
    let requestOptions = {
        method: 'POST',
        body: raw,
        redirect: 'follow',
    };
    let data;
    try {
        let dataJson = await fetch("http://localhost:8080/query/", requestOptions);
        data = await dataJson.json();
    } catch (error) {
        console.log(error);
    }
    return data;
}
I tried answers like setting SameSite attribute in Golang, or using credential: "include" in JS with no luck.
Thanks to the discussion in the comments, I found some hints about the problem.
Saving cookies (both API and frontend on the same host)
I used document.cookie to save the cookie. I set the options by hand, since reading res.cookie on the response of the API fetch only returned the value. An example is document.cookie = `goCookie=${res.cookie}; path=/; domain=localhost;`.
Sending cookies
This has been answered before in previous questions and answered again in the comments. The problem was that I used credential:'include' instead of the correct credentials:'include' (plural).
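As a minimal sketch of the fix (the endpoint payload shape is taken from the snippet above), the request options can be built with the correctly spelled credentials key:

```javascript
// Corrected request options: "credentials" (plural) tells the browser to
// attach cookies to the request; a misspelled "credential" key is silently
// ignored, so no cookie is ever sent.
function buildRequestOptions(query, schema) {
    return {
        method: 'POST',
        credentials: 'include',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ query: query, schema: schema }),
        redirect: 'follow',
    };
}
```

Passing the result of buildRequestOptions(...) as the second argument to fetch is then enough for the browser to include the goCookie on each call.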
CORS and cookies
In case the API and the frontend are not on the same host you will have to modify both the API and the frontend.
frontend
The cookie has to have the domain of the API, since it's the API that requires it, not the frontend. For security reasons, you can't set a cookie for one domain (the API) from another domain (the frontend). A solution would be to redirect the user to an API endpoint that returns a Set-Cookie header in the response; the browser then registers that cookie under the domain that sent it (the API's domain).
Also, you still need to include credentials:'include' in the frontend.
API
You will need to set a few headers. The ones I set are
w.Header().Set("Access-Control-Allow-Origin", frontendOrigin)
w.Header().Set("Access-Control-Allow-Credentials", "true")
w.Header().Set("Access-Control-Allow-Headers", "Content-Type, withCredentials")
w.Header().Set("Access-Control-Allow-Methods", method) // use the endpoint's method: POST, GET, OPTIONS
You need to expose an endpoint to which the frontend redirects the user, and set the cookie in its response. Instead of setting the domain of the API by hand, you can omit it; the browser will fill in the domain automatically.
To handle the CORS and let JS send the cookie successfully, you will have to set the SameSite=None and Secure attributes in the cookie and serve the API over https (I used ngrok to make it simple).
Like so
func SetCookie(w *http.ResponseWriter, email string) string {
    val := uuid.NewString()
    http.SetCookie(*w, &http.Cookie{
        Name:     "goCookie",
        Value:    val,
        SameSite: http.SameSiteNoneMode,
        Secure:   true,
        Path:     "/",
    })
    // rest of the code
}
I recommend you also read the difference between using localStorage and document.cookie, it was one of the problems I had.
Hope this helps.
My system needs to authenticate with an external service (out of my control in any way), and the authentication requires two HTTP requests in very short sequence (the token I get in the first request expires in 2 seconds and I have to use it to finish the auth process).
So I'm basically doing this:
// create client
$client = HttpClient::create();
$baseUrl = "https://example.com/webservice.php";
$username = "foo";
$token = "foobarbaz";

// first request
$tokenResponse = $client->request( Method::GET, $baseUrl . '?operation=getchallenge&username=' . $username );
$decoded = $tokenResponse->toArray();

// second request
$loginResponse = $client->request( Method::POST, $baseUrl, [
    'body' => [
        'operation' => 'login',
        'username'  => $username,
        'accessKey' => md5( $decoded['result']['token'] . $token ),
    ],
] );
$sessionData = $loginResponse->toArray();
Doing this, I get an exception (TransportExceptionInterface):
Failed sending data to the peer for "https://example.com/webservice.php"
But, if instead of using the same $client I instantiate a new one ($client2 = HttpClient::create();) and I use that one to make the second request, without making any other change to the code, it will work flawlessly.
My guess is that this is related to the default request concurrency the HTTP client uses, but I checked the docs and the options available on HttpClientInterface, and I haven't found any way to change the client's behaviour that doesn't require creating a new instance (nor a way to increase error logging verbosity).
I have the cURL extension installed, so the client being created is CurlHttpClient and not NativeHttpClient.
Is there a way around this, without having to create a new client instance?
Here is what the docs say:
Even when doing regular synchronous calls, this design allows keeping
connections to remote hosts open between requests, improving
performance by saving repetitive DNS resolution, SSL negotiation, etc.
To leverage all these design benefits, the cURL extension is needed.
In order to use PHP streams, the example shows that you can use NativeHttpClient as follows:
use Symfony\Component\HttpClient\CurlHttpClient;
use Symfony\Component\HttpClient\NativeHttpClient;
// uses native PHP streams
$client = new NativeHttpClient();
// uses the cURL PHP extension
$client = new CurlHttpClient();
This way you can select a specific client, without the framework making you use the cURL client.
Once you get the token, close the connection and create a new one from the same client; calling $client->reset() should do this if your client implements Symfony's ResetInterface (the cURL client does). This ensures you are using the same client object while initiating a fresh connection, which mimics the behaviour the server requires in order to authenticate.
It is always good to check the server response before parsing the body. Check whether the server sent a 200 response with the token, and branch your logic accordingly; a try/catch or checking the headers is always a bonus and good error-handling practice.
I have an App Engine service with a few methods implemented, where I restrict all routes with the login: admin option in the app.yaml.
Making a POST request to my service works:
fetch('http://localhost:8081/api/foo', {
credentials: 'include'});
But making a PUT request fails
await fetch('http://localhost:8081/api/foo', {
credentials: 'include',
method: 'PUT',
body: 'hi there'});
with the following error:
Response to preflight request doesn't pass access control check:
Redirect is not allowed for a preflight request.
I understand this is because my request is somehow not authenticated, and the server redirects my request to the login page. What I don't understand is how to authenticate it.
I'm using webapp2 to process the requests, and setting the following headers:
self.response.headers['Access-Control-Allow-Credentials'] = 'true'
self.response.headers['Content-Type'] = 'application/json'
# This feels wrong, but I still don't clearly understand what this header's purpose is...
self.response.headers['Access-Control-Allow-Origin'] = self.request.headers['Origin']
I think the deeper problem is that I don't understand how this login feature works (is it cookie-based? Why does it work with GET but not PUT? ...), and I don't truly understand CORS either.
Thanks for any help!
So, after discussing with Dan Cornilescu, here is the solution I came up with (Thanks Dan!)
Instead of having my classes inherit webapp2.RequestHandler, they inherit this custom HandlerWrapper.
The big difference is that when receiving an 'OPTIONS' request (i.e. a preflight), no login is required. This is what was causing my problem: I couldn't get the preflight request to be authenticated, so now it doesn't need to be.
CORS is also handled there, with a list of allowed origins:
import re

import webapp2
from google.appengine.api import users


class HandlerWrapper(webapp2.RequestHandler):
    def __init__(self, request, response):
        super(HandlerWrapper, self).__init__(request, response)
        self.allowed_origins = [
            r'http://localhost(:\d{2,})?$',  # localhost on any port
            r'https://\w+-dot-myproject.appspot.com'  # all services in the app engine project
        ]
        self.allowed_methods = 'GET, PUT, POST, OPTIONS'
        self.content_type = 'application/json'
        # login mode: either 'admin', 'user', or 'public'
        self.login = 'admin'

    def dispatch(self):
        # set the Allow-Origin header.
        if 'Origin' in self.request.headers and match_origin(self.request.headers['Origin'], self.allowed_origins):
            self.response.headers['Access-Control-Allow-Origin'] = self.request.headers['Origin']
        # set other headers
        self.response.headers['Access-Control-Allow-Methods'] = self.allowed_methods
        self.response.headers['Content-Type'] = 'application/json'
        self.response.headers['Access-Control-Allow-Credentials'] = 'true'
        # Handle preflight requests: Never require a login.
        if self.request.method == "OPTIONS":
            # For some reason, the following line raises a '405 (Method Not Allowed)'
            # error, so we just skip the dispatch and it works.
            # super(HandlerWrapper, self).dispatch()
            return
        # Handle regular requests
        user = users.get_current_user()
        if self.login == 'admin' and not users.is_current_user_admin():
            self.abort(403)
        elif self.login == 'user' and not user:
            self.abort(403)
        else:
            super(HandlerWrapper, self).dispatch()


def match_origin(origin, allowed_origins):
    for pattern in allowed_origins:
        if re.match(pattern, origin):
            return True
    return False
The login: admin configuration is based on the Users API, which is available only in the 1st generation standard environment. This is not a CORS problem. From the login row in the handlers element table:
When a URL handler with a login setting other than optional
matches a URL, the handler first checks whether the user has signed in
to the application using its authentication option. If not, by
default, the user is redirected to the sign-in page. You can also use
auth_fail_action to configure the app to simply reject requests
for a handler from users who are not properly authenticated, instead
of redirecting the user to the sign-in page.
To use the Users API, the user must literally log in before the PUT request is made. Make a GET request first (which will redirect you to the login page), perform the login, then make the PUT request.
If that's not something you can achieve then you need to use a different authentication mechanism, not the one based on login: admin.
Update:
The above is true, but rather unrelated, since the Users API authentication is already addressed: you did mention that another request method to the same URL works.
The error you get is indeed CORS-related, see Response to preflight request doesn't pass access control check. But I'd suggest not focusing on the accepted answer (which is only about working around CORS), but rather on this one, which is about doing CORS correctly.
A server I need to integrate with returns its answers encoded as a JWT. Worse, the response body actually is a json, of the form:
{d: token} with token = JWT.encode({id: 123, field: "John", etc.})
I'd like to run a pact verification on the content of the decoded token. I know I can easily have a pact verifying that I get back a {d: string}, but I can't do an exact match on the string (as the JWT contains some varying IDs). What I want is the following, which presumes the addition of a new Pact::JWT functionality.
my_provider.
  upon_receiving('my request')
  .with(method: :post,
        path: '/order',
        headers: {'Content-Type' => 'application/json'}
  ).will_respond_with(
    status: 200,
    headers: {'Content-Type' => 'application/json; charset=utf-8'},
    body: {
      d: Pact::JWT({
        id: Pact.like(123),
        field: Pact.term(generate: "John", matcher: /J.*/)
      }, signing_key, algo)
    })
Short of adding this Pact::JWT, is there a way to achieve this kind of result?
I am already using the pact proxy to run my verification. I know you can modify the request before sending it for verification (How do I verify pacts against an API that requires an auth token?). Can you modify the request once you receive it from the proxy server?
If that's the case, I can plan for the following work around:
a switch in my actual code to sometimes expect the answers decoded instead of encoded in the JWT
run my tests once with the switch off (normal code behaviour, mocks return JWT-encoded data)
run my tests a second time with the switch on (code expects data already decoded, mocks return decoded data)
use the contract json from this second run
hook into the proxy:verify task to decode the JWT on the fly, and use the existing pact mechanisms for verification (the step that I do not know how to do)
My code is in ruby. I do not have access to the provider.
Any suggestions appreciated! Thanks
You can modify the request or response by using (another) proxy app.
class ProxyApp
  def initialize real_provider_app
    @real_provider_app = real_provider_app
  end

  def call env
    response = @real_provider_app.call(env)
    # modify response here
    response
  end
end

Pact.service_provider "My Service Provider" do
  app { ProxyApp.new(RealApp) }
end
I don't expect Pact, as a tool, to give this behaviour out of the box.
In my opinion, the best approach is:
Do not change the source code only for tests.
Make sure your tests verify the encoded json only (generate the encoded expected json in the test and verify that against the actual).
So I completely understand how to use getIceServers via your demo, but what's the best practice for implementing on the server side / compiled client-side?
"This token should only be implemented in a secure environment, such as a server-side application or a compiled client-side application."
Do the list of IceServers expire at some point? Should I request new IceServers on each page request or do I cache the list for X amount of time?
The ICE server credentials expire after about 10 seconds. Because you want to keep your XirSys secret token secure (so no one can hijack your account's connection allotment), you'll want to make a backend/server-side curl request for the ICE servers. It's assumed that your app uses its own authentication, i.e. it'll reject any non-authenticated requests to https://yourdomain.com/ajax/get-ice-servers.
So, whenever you need to create an RTCPeerConnection object, get a list of ICE servers through an Ajax call:
var pc = new RTCPeerConnection(
    getIceServers(),
    {optional: []}
);
where ...
function getIceServers() {
    var result = jQuery.ajax({
        async: false,
        url: "https://" + yourDomain + ".com/ajax/get-ice-servers"
    }).responseText;
    return JSON.parse(result);
}
Note you'll want a synchronous Ajax request here, so that getIceServers() returns the result before RTCPeerConnection is instantiated (keep in mind that synchronous XHR is deprecated in modern browsers).
Also note that if you start a WebRTC connection automatically on page load, then you could probably just use the iceServers result from the server-side curl request.
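Since synchronous XHR is deprecated, a sketch of the same flow with an awaited fetch may be preferable. The endpoint URL mirrors the one above; the injectable fetchImpl parameter is purely an illustration assumption (it lets the function be exercised with a stub):

```javascript
// Fetch the ICE server list asynchronously, then use it to build the
// RTCPeerConnection configuration. fetchImpl defaults to the global fetch.
async function fetchIceServers(fetchImpl) {
    const doFetch = fetchImpl || fetch;
    const res = await doFetch("https://yourdomain.com/ajax/get-ice-servers");
    return res.json();
}

// usage (inside an async function):
// const pc = new RTCPeerConnection({ iceServers: await fetchIceServers() });
```

Because the peer connection is only created after the awaited response arrives, this preserves the ordering the synchronous version relied on without blocking the page.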