I'm setting up a Cypress E2E test, with a basic first step of loading the app and signing in. When Cypress loads the page, however, I see odd behavior: each cookie is being set twice, kind of like this:
xsrftoken - someValue123 - env.app.domain.com // hostOnly: true
xsrftoken - otherValue12 - .env.app.domain.com // hostOnly: false
sessiontoken - someValue - env.app.domain.com // hostOnly: true
sessiontoken - someValue - .env.app.domain.com // hostOnly: false
The sessiontoken value stays consistent if I use cy.session (see code below); otherwise it also differs between the two. The only difference besides the leading . is the hostOnly value.
This appears to cause problems because (from what I can tell) on subsequent requests the wrong xsrftoken gets sent.
When I visit the app outside of Cypress, the cookies are only set once:
xsrftoken - someValue123 - env.app.domain.com
sessiontoken - someValue - env.app.domain.com
The Cypress setup is pretty basic:
// With this setup, the sessiontoken doesn't change, but the xsrftoken does,
// and the "Signs in" test doesn't get an authenticated session.
beforeEach(() => {
cy.session('mySession', () => {
cy.visit('https://env.app.domain.com/')
cy.get('input[type="text"]').type('userName');
cy.get('input[type="password"]').type('passWord');
cy.get('button[type="submit"]').click();
})
})
it('Signs in', () => {
cy.visit('https://env.app.domain.com/')
})
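One thing worth trying, as a sketch rather than a confirmed fix: cy.session also accepts a validate callback, which makes Cypress re-run the setup whenever the cached session no longer passes the check (the 200-status check below is an assumption about this app, not something from my actual setup):

beforeEach(() => {
  cy.session('mySession', () => {
    cy.visit('https://env.app.domain.com/')
    cy.get('input[type="text"]').type('userName');
    cy.get('input[type="password"]').type('passWord');
    cy.get('button[type="submit"]').click();
  }, {
    validate() {
      // Assumption: the root URL answers 200 only for an authenticated session.
      cy.request('https://env.app.domain.com/').its('status').should('eq', 200);
    }
  });
});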
// With just this single setup I can sign in successfully if I clear initial cookies:
it('Signs in', () => {
cy.visit('https://env.app.domain.com/')
cy.get('input[type="text"]').type('userName');
cy.get('input[type="password"]').type('passWord');
cy.clearAllCookies(); // <-- Should not need this...
cy.get('button[type="submit"]').click();
})
I don't really think this is a Cypress issue, but maybe I'm wrong? I believe the server is some kind of nginx; I'm not certain of the exact configuration. I do see this behavior in other environments for the same app, although initially I thought I did not, at least for a few test runs.
I'm looking for either a workaround (prevent certain cookies being set?) or a root cause.
I think it might be a Cypress issue, since I also see it with CSRF cookies: it creates two, one regular and a second with a leading . in the domain.
As I cannot clear all cookies in my circumstances, I have to clear them individually:
cy.clearCookie('_csrf', {domain: 'domain'});
cy.clearCookie('_csrf', {domain: '.domain'});
Update: there is an existing issue on github: https://github.com/cypress-io/cypress/issues/25174
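To avoid repeating those paired calls for every cookie, here is a small sketch of a reusable command built on the same idea (clearDuplicateCookie is a made-up name, not a Cypress API):

// Clears both the host-only cookie and its dot-prefixed, domain-wide twin.
Cypress.Commands.add('clearDuplicateCookie', (name, domain) => {
  cy.clearCookie(name, { domain });
  cy.clearCookie(name, { domain: `.${domain}` });
});

// Usage:
cy.clearDuplicateCookie('_csrf', 'domain');
cy.clearDuplicateCookie('xsrftoken', 'env.app.domain.com');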
While trying to set up some simple end-to-end tests with Jest and Puppeteer, I've found that any test I write will inexplicably fail with a timeout.
Here's a simple example test file, which deviates only slightly from Puppeteer's own example:
import puppeteer from 'puppeteer';
describe('Load Google Puppeteer Test', () => {
test('Load Google', async () => {
const browser = await puppeteer.launch({
headless: false
});
const page = await browser.newPage();
await page.goto('https://google.co.uk');
await expect(page).toMatch("I'm Feeling Lucky");
await browser.close();
});
});
And the response it produces:
TimeoutError: Text not found "I'm Feeling Lucky"
waiting for function failed: timeout 500ms exceeded
I have tried adding custom timeouts to the goto line and the test clause, amongst other things, all with no effect. Any ideas on what might be causing this? Thanks.
What I would say is happening here is that toMatch expects visible page text. In your case, however, the text you want to verify is the value attribute of a button.
You should try something like this:
await expect(page).toMatchElement('input[value="I\'m Feeling Lucky"]');
Update 1:
Another possibility (and it's one you've raised yourself) is that the verification is timing out before the page has a chance to load. This is a common issue, from my experience, with executing code in headless mode. It's very fast. Sometimes too fast. Statements can be executed before everything in the UI is ready.
In this case you're better off adding some waitForSelector statements throughout your code as follows:
await page.waitForSelector('input[value="I\'m Feeling Lucky"]');
This will ensure that the selector you want is displayed before carrying on with the next step in your code. By doing this you will make your scripts much more robust while maintaining efficiency - these waits won't slow down your code. They'll simply pause until puppeteer registers the selector you want to interact with / verify as being displayed. Most of the time you won't even notice the pause as it will be so short (I'm talking milliseconds).
But this will make your scripts rock solid while also ensuring that things won't break if the web page is slower to respond for any reason during test execution.
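Putting the two suggestions together, here is a minimal sketch of the test with the wait added (this assumes expect-puppeteer's matchers are registered, e.g. via jest-puppeteer):

import puppeteer from 'puppeteer';

describe('Load Google Puppeteer Test', () => {
  test('Load Google', async () => {
    const browser = await puppeteer.launch({ headless: false });
    const page = await browser.newPage();
    await page.goto('https://google.co.uk');

    // Pause until the button is actually in the DOM before verifying it.
    await page.waitForSelector('input[value="I\'m Feeling Lucky"]');
    await expect(page).toMatchElement('input[value="I\'m Feeling Lucky"]');

    await browser.close();
  });
});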
You're probably using the 'expect-puppeteer' package, which provides the toMatch matcher. This is not a small deviation. The weird thing is that your default timeout isn't 30 seconds, which is the package's default; check that.
However, to fix your issue:
await expect(page).toMatch("I'm Feeling Lucky", { timeout: 6000 });
Or set the default timeout explicitly using:
page.setDefaultTimeout(timeout)
See here.
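For example (the 30-second value here is arbitrary; note that expect-puppeteer matchers may still apply their own default unless you pass a timeout explicitly):

const page = await browser.newPage();
// Raise the default timeout for this page's waiting methods.
page.setDefaultTimeout(30000);
await page.goto('https://google.co.uk');
await expect(page).toMatch("I'm Feeling Lucky", { timeout: 30000 });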
UPDATE: Google has recently updated their error message with an additional error code possibility: "timeout-or-duplicate".
This new error code seems to cover 99% of our previously mentioned mysterious cases.
We are still left wondering why we get that many validation requests that are either timeouts or duplicates. Determining this with certainty is likely to be impossible, but now I am just hoping that someone else has experienced something like it.
Disclaimer: I cross-posted this to Google Groups, so apologies for spamming the ether for those of you who frequent both sites.
I am currently working on a page, as part of an ASP.NET MVC application, with a form that uses reCAPTCHA validation. The page currently has many daily users.
In my server-side validation** of a reCAPTCHA response, for a while now, I have seen cases where the reCAPTCHA response has its success property set to false, but with an accompanying empty error code array.
Most of the requests pass validation, but some keep exhibiting this pattern.
So after doing some research online, I explored the two possible scenarios I could think of:
The validation has timed out and is no longer valid.
The user has already been validated using the response value, so they are rejected the second time.
After collecting data for a while, I have found that all cases of "Success: false, error codes: []" have either had the validation be rather old (ranging from 5 minutes to 10 days(!)), or it has been a case of a re-used response value, or sometimes a combination of the two.
Even after implementing client side prevention of double-clicking my submit-form button, a lot of double submits still seem to get through to the server side Google reCAPTCHA validation logic.
My data tells me that 1.6% (28) of all requests (1760) have failed with at least one of the above scenarios being true ("timeout" or "double submission").
Meanwhile, not a single request of the 1760 has failed where the error code array was not empty.
I just have a hard time imagining a practical use case where a ChallengeTimeStamp gets issued, and then after 10 days validation is attempted, server side.
My question is:
What could be the reason for a non-negligible percentage of all Google reCAPTCHA server side validation attempts to be either very old or a case of double submission?
**By "server side validation" I mean logic that looks like this:
public bool IsVerifiedUser(string captchaResponse, string endUserIp)
{
string apiUrl = ConfigurationManager.AppSettings["Google_Captcha_API"];
string secret = ConfigurationManager.AppSettings["Google_Captcha_SecretKey"];
using (var client = new HttpClient())
{
var parameters = new Dictionary<string, string>
{
{ "secret", secret },
{ "response", captchaResponse },
{ "remoteip", endUserIp },
};
var content = new FormUrlEncodedContent(parameters);
var response = client.PostAsync(apiUrl, content).Result;
var responseContent = response.Content.ReadAsStringAsync().Result;
GoogleCaptchaResponse googleCaptchaResponse = JsonConvert.DeserializeObject<GoogleCaptchaResponse>(responseContent);
if (googleCaptchaResponse.Success)
{
_dal.LogGoogleRecaptchaResponse(endUserIp, captchaResponse);
return true;
}
else
{
//Actual code omitted
//Try to determine the cause of failure
//Look at googleCaptchaResponse.ErrorCodes array (this has been empty in all of the 28 cases of "success: false")
//Measure time between googleCaptchaResponse.ChallengeTimeStamp (which is UTC) and DateTime.UtcNow
//Check reCAPTCHAresponse against local database of previously used reCAPTCHAresponses to detect cases of double submission
return false;
}
}
}
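For comparison, here is a rough sketch of the same siteverify call in JavaScript (Node 18+, which provides fetch globally); the field names (success, challenge_ts, error-codes) are the documented siteverify response fields:

// Minimal sketch, not the question's actual code.
async function isVerifiedUser(captchaResponse, endUserIp, secret) {
  const params = new URLSearchParams({
    secret,
    response: captchaResponse,
    remoteip: endUserIp,
  });
  const res = await fetch('https://www.google.com/recaptcha/api/siteverify', {
    method: 'POST',
    body: params, // sent as application/x-www-form-urlencoded
  });
  const body = await res.json();
  if (body.success) return true;
  // 'error-codes' may be empty; comparing challenge_ts (UTC) with the current
  // time distinguishes the "very old validation" case from a duplicate.
  console.log(body['error-codes'], body.challenge_ts);
  return false;
}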
Thank you in advance to anyone who has a clue and can perhaps shed some light on the subject.
You will get the timeout-or-duplicate problem if your captcha is validated twice.
Save logs in a file in append mode and check whether you are validating a captcha twice.
Here is an example:
$verifyResponse = file_get_contents('https://www.google.com/recaptcha/api/siteverify?secret='.$secret.'&response='.$_POST['g-recaptcha-response']);
file_put_contents( "logfile", $verifyResponse, FILE_APPEND );
Now read the content of the logfile created above and check whether the captcha is being verified twice.
This is an interesting question, but it's going to be impossible to answer with any sort of certainty. I can give an educated guess about what's occurring.
As far as the old submissions go, that could simply be users leaving the page open in the browser and coming back later to finally submit. You can handle this scenario in a few different ways:
1. Set a meta refresh for the page, such that it will update itself after a defined period of time, and hopefully either get a new reCAPTCHA validation code or at least prompt the user to verify the CAPTCHA again. However, this is less than ideal as it increases requests to your server and will blow out any work the user has done on the form. It's also very brute-force: it will simply refresh after a certain amount of time, regardless of whether the user is currently actively using the page or not.
2. Use a JavaScript timer to notify the user about the page timing out and then refresh. This is like #1, but with much more finesse. You can pop a warning dialog telling the user that they've left the page sitting too long and it will soon need to be refreshed, giving them time to finish up if they're actively using it. You can also check for user activity via events like onmousemove. If the user's not moving the mouse, it's very likely they aren't on the page. (See the sketch after this list.)
3. Handle it server-side, by catching this scenario. I actually prefer this method the most as it's the most fluid, and honestly the easiest to achieve. When you get back success: false with no error codes, simply send the user back to the page, as if they had made a validation error in the form. Provide a message telling them that their CAPTCHA validation expired and they need to verify again. Then, all they have to do is verify and resubmit.
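A rough sketch of the timer idea from #2; the 110-second figure (just under reCAPTCHA's roughly two-minute token lifetime) and the warning text are made up for illustration:

// Warn the user after a period of inactivity, before the CAPTCHA token expires.
let idleTimer;

function resetIdleTimer() {
  clearTimeout(idleTimer);
  idleTimer = setTimeout(warnUser, 110 * 1000);
}

function warnUser() {
  // In a real page this would be a dialog, not an alert.
  alert('This page has been idle for a while; please verify the CAPTCHA again before submitting.');
}

// Mouse movement counts as activity, per the onmousemove suggestion above.
document.addEventListener('mousemove', resetIdleTimer);
resetIdleTimer();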
The double-submit issue is a perennial one that plagues all web developers. User behavior studies have shown that the vast majority occur because users have been trained to double-click icons, and as a result, think they need to double-click submit buttons as well. Some of it is impatience if something doesn't happen immediately on click. Regardless, the best thing you can do is implement JavaScript that disables the button on click, preventing a second click.
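A minimal sketch of that last suggestion (assuming a single form with one submit button):

// Disable the submit button on first submit so a double-click can't fire twice.
document.querySelector('form').addEventListener('submit', function (e) {
  var button = e.target.querySelector('button[type="submit"]');
  if (button) {
    button.disabled = true;
  }
});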
This is giving me quite a headache. I have a page tab application where DB interaction uses the Facebook user id to assign and save data and also to check user permissions. Until a week ago everything was working fine, but now, with the upcoming December changes, this setup doesn't work anymore:
config.php:
$facebook = new Facebook( array(
'appId' => $app_id,
'secret' => $app_secret,
'cookie' => true
));
index.php:
includes config.php and gets the signed request (not important for the question)
javascript.js:
calls the read-user-status.php and handles the data
read-user-status.php:
gives a JSON response; includes config.php and calls the $facebook->getUser() function to get the uid
Even when called from the index.php directly after page load, I sometimes get the uid and sometimes I don't. Strangely enough, I usually have to wait a little until I reload the page, and then it works again. But this isn't always the case. This all is just very strange to me.
EDIT: Should have mentioned that this call:
$uid = $facebook -> getUser();
if ($uid) {
try {
// Proceed knowing you have a logged in user who's authenticated.
$user_profile = $facebook -> api('/me');
} catch (FacebookApiException $e) {
error_log($e);
$uid = FALSE;
echo "EXCEPTION $e";
}
}
gives out "EXCEPTION An active access token must be used to query information about the current user".
I know there are quite a lot of similar questions out there, but none of the answers were helpful for my particular problem (which is probably related to the new breaking changes).
EDIT 2: I now suppose that it is an SDK bug (https://developers.facebook.com/bugs/238039849657148 , thanks to CBroe). Any recommendations for a work-around are of course very welcome.
EDIT 3, TEMPORARY SOLUTION
Every time you make an ajax request, you post the token you get from FB.getLoginStatus or FB.login, read it out in the PHP file, and set it via $facebook->setAccessToken(). Not suitable in all circumstances (you definitely need to use POST), it is slower, and it brings some security issues, but it still works.
Sounds like you are affected by the bug I reported at the beginning of November: https://developers.facebook.com/bugs/238039849657148
They’ve confirmed it and say they’re working on a fix – but since the change is only a few days away now, they should hurry up a little …
I got this working by doing the following...
if(!$user){
$loginUrl = $facebook->getLoginUrl(array(
'scope' => 'email',
'redirect_uri' => $app_url
));
header('Location: ' . $loginUrl);
}
I also added my app to be integrated with:
Website with Facebook login
App on Facebook
Page Tab
Try adding the access token to the request:
$accessToken = $facebook->getAccessToken();
$user_profile = $facebook->api('/me?access_token=' . $accessToken);
I found a work-around to use until this is fixed (which, it seems, won't happen before the changes take place).
Every time you make an ajax request, you post the token you get from FB.getLoginStatus or FB.login, read it out in the PHP file, and set it via $facebook->setAccessToken(). Not suitable in all circumstances (you definitely need to use POST), it is slower, and it brings some security issues, but it still works.
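The client side of that workaround might look like the sketch below (jQuery is used for brevity; /read-user-status.php is the endpoint from the question, and the server side would read $_POST['access_token'] and pass it to $facebook->setAccessToken()):

// Fetch the current access token, then POST it along with the ajax request.
FB.getLoginStatus(function (response) {
  if (response.status === 'connected') {
    var token = response.authResponse.accessToken;
    $.post('/read-user-status.php', { access_token: token }, function (data) {
      // handle the JSON response here
    });
  }
});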
If you are lucky and your version of the PHP SDK still registers session variables, then do the following right after the _graph method declaration:
//find this method below
protected function _graph ($path, $method = 'GET', $params = array ())
{
//paste the code below right after the _graph method declaration:
if (isset($_SESSION["fb_".$this->getAppId()."_access_token"]))
{
$this->setAccessToken($_SESSION["fb_".$this->getAppId()."_access_token"]);
}
//till here
//and you are good to go
//remember: your version of sdk must be registering access token variable in session
//right after ajax call
//i used git to get version before last commit of sdk published on github
I'm creating an application in Silex with unit tests.
Running unit tests works fine against the regular session handler:
$app->register(new Silex\Provider\SessionServiceProvider(), array(
'session.storage.options' => array(
'cookie_lifetime' => 1209600, // 2 weeks
),
));
and setting this flag in my unit tests:
$this->app['session.test'] = true;
If I don't set that session.test flag, my unit tests throw a headers already sent error and all fail. With it on, my tests run well.
The issue is that I am attempting to use the flashBag feature (session info that lasts only until the next request and is then removed):
$foo = $app['session']->getFlashBag()->all();
The flashBag does not seem to respect the session.test flag, and attempts to send headers, which causes all my unit tests to fail:
24)
Yumilicious\UnitTests\Validator\PersonAccountTest::setConstraintsPassesWithMinimumAttributes
RuntimeException: Failed to start the session because headers have
already been sent.
/webroot/yumilicious/vendor/symfony/http-foundation/Symfony/Component/HttpFoundation/Session/Storage/NativeSessionStorage.php:142
/webroot/yumilicious/vendor/symfony/http-foundation/Symfony/Component/HttpFoundation/Session/Storage/NativeSessionStorage.php:262
/webroot/yumilicious/vendor/symfony/http-foundation/Symfony/Component/HttpFoundation/Session/Session.php:240
/webroot/yumilicious/vendor/symfony/http-foundation/Symfony/Component/HttpFoundation/Session/Session.php:250
/webroot/yumilicious/src/app.php:38
/webroot/yumilicious/tests/Yumilicious/UnitTests/Base.php:13
/webroot/yumilicious/vendor/silex/silex/src/Silex/WebTestCase.php:34
/webroot/yumilicious/vendor/EHER/PHPUnit/src/phpunit/phpunit.php:46
/webroot/yumilicious/vendor/EHER/PHPUnit/bin/phpunit:5
I've narrowed it down to this bit of code: https://github.com/symfony/symfony/blob/master/src/Symfony/Component/HttpFoundation/Session/Storage/NativeSessionStorage.php#L259
Specifically, line 262. Commenting out that single line allows my tests to work properly and all pass green.
I've searched quite a bit to get this to work, but am not having any luck. I think it's because the flashBag stuff is new (https://github.com/symfony/symfony/blob/master/src/Symfony/Component/HttpFoundation/Session/Session.php#L305) and the old methods are being deprecated.
Any suggestions on getting my unit tests to work would be awesome.
For testing you need to replace the session.storage service with an instance of MockArraySessionStorage:
use Symfony\Component\HttpFoundation\Session\Storage\MockArraySessionStorage;
$app['session.storage'] = new MockArraySessionStorage();
This is because the native one tries to send a cookie via a header, which of course fails in a test environment.
EDIT: There is now a session.test parameter that you should set to true. That will automatically make the session use a mock storage.
I had this happen too. If I am not mistaken, I fixed it by having my unit tests run via a different environment, which has
framework:
    test: ~
    session:
        storage_id: session.storage.mock_file
set in the config_test.yml
I came across a similar problem today; a temporary fix is to comment out a block of code in
\Symfony\Component\HttpFoundation\Session\Storage\NativeSessionStorage
in the start() method:
/*
if (ini_get('session.use_cookies') && headers_sent()) {
throw new \RuntimeException('Failed to start the session because headers have already been sent.');
}
*/
This solution keeps tests "green" and, from the looks of it, leaves the application session functionality as is.
I am using Meteor 4.2 (Windows) and I am always getting the "update failed: 403 -- Access denied. Can't replace document in restricted collection" when I am trying to update an object in my collection. Strangely I had no problem inserting new ones, only updates are failing.
I tried to "allow" everything on my collection:
Maps.allow({
insert: function () { return true; },
update: function () { return true; },
remove: function () { return true; },
fetch: function () { return true; }
});
But still, this update fails:
Maps.update({
_id: Session.get('current_map')
}, {
name: $('#newMapName').val()
});
Is there something else I can check? Or maybe my code is wrong? The last time I played with my project was with a previous version of Meteor (< 4.0).
Thanks for your help.
PS: Just for information, when I do this update, the local collection is updated and I can see the changes in the UI. Then very quickly it is reverted, along with the error message, as the change has been rejected by the server side.
Alright, the syntax was actually incorrect. I don't really understand why, as it was working well before, but anyway, here is the code that works fine:
Maps.update({
    _id: Session.get('current_map')
}, {
    $set: {
        name: $('#newMapName').val()
    }
});
It seems like it must be related to what you're storing in the 'current_map' session variable. If it's a db object, then it probably looks like {_id:<mongo id here>} which would make the update finder work properly.
I ran into the same issues, and found the following (CoffeeScript) to work:
Blocks.update {_id: block_id}, {$set: params}
where params is a hash of all the bits I'd like to update and block_id is the Mongo object id of the Block I'm trying to update.
The behavior you note on the client side (where the update flashes and then reverts) is expected. If you check out their docs under the Data and Security section:
Meteor has a cute trick, though. When a client issues a write to the server, it also updates its local cache immediately, without waiting for the server's response. This means the screen will redraw right away. If the server accepted the update — what ought to happen most of the time in a properly behaving client — then the client got a jump on the change and didn't have to wait for the round trip to update its own screen. If the server rejects the change, Meteor patches up the client's cache with the server's result.
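For what it's worth, the 403 in the question ("Can't replace document in restricted collection") is the error Meteor raises when a client-side update on an allow/deny-restricted collection replaces the whole document instead of using $-operators, which is why switching to $set fixed it. A minimal sketch of the two forms:

// Rejected from the client on a restricted collection: a whole-document replacement.
Maps.update(Session.get('current_map'), { name: 'New name' });

// Accepted: only $-operators, so the server can validate the individual changes.
Maps.update(Session.get('current_map'), { $set: { name: 'New name' } });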