Setting PhantomJS page settings in grunt-mocha - gruntjs

When using grunt-mocha, is it possible to set PhantomJS Page settings in the Gruntfile?
Specifically, I'd like to modify webSecurityEnabled and localToRemoteUrlAccessEnabled to enable cross-origin requests.
Setting page.settings.(optionname) in the task's target options didn't do the trick. Anyone have any pointers?

It seems this feature is currently open as a pull request on their GitHub repo; I'll just have to wait for them to merge it.
Update:
The pull request was a bit off, so I made my own fork and created a new PR. If anyone is having the same issues, try this fork (you can use a custom GitHub repo in place of a package's version in npm's package.json); a sketch of both pieces follows below.
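Once the fork (or the eventual merged PR) supports it, a Gruntfile sketch could look like this. The page.settings key is an assumption on my part, so check the fork's README for the actual option name:

// Gruntfile.js
module.exports = function (grunt) {
  grunt.initConfig({
    mocha: {
      test: {
        src: ['test/**/*.html'],
        options: {
          // Hypothetical key; the fork may name it differently.
          page: {
            settings: {
              webSecurityEnabled: false,           // allow cross-origin requests
              localToRemoteUrlAccessEnabled: true  // let local pages call remote URLs
            }
          }
        }
      }
    }
  });
  grunt.loadNpmTasks('grunt-mocha');
};

To point npm at the fork, a git URL can stand in for the version in package.json (the repo name here is a placeholder):

// package.json (excerpt)
"dependencies": {
  "grunt-mocha": "git://github.com/someuser/grunt-mocha.git"
}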

Related

Why is my raw source code easily accessible via the Debugger's Network tab?

I have been working on my website for a month now and just realized that there is this extra _N_E server that is providing access to the raw source code used for each page.
I am using Next.js and suspect that Sentry may be responsible, but I cannot find anything in their documentation about it. This is a risk because it happens not only in development but in production as well, and I do not want users to have access to my raw source code.
Has anyone ever seen this before?
Can anything be done about it and still get accurate results from Sentry?
Publishing sourcemaps publicly means anyone (including Sentry) has access.
There are two ways to keep Sentry working without exposing the maps to everyone:
Set up a CDN rule that only allows Sentry's servers to fetch the sourcemaps, a.k.a. IP whitelisting.
Upload the sourcemaps directly to Sentry (a sentry-cli sketch follows) - https://docs.sentry.io/platforms/javascript/guides/react/sourcemaps/uploading/
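For the upload route, a rough sketch of the classic sentry-cli invocation (the release name and ./dist path are placeholders; the linked guide covers auth and framework specifics):

sentry-cli releases new my-app@1.0.0
sentry-cli releases files my-app@1.0.0 upload-sourcemaps ./dist

Once Sentry has the maps, the public copies can be dropped from the deploy.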
Here is a ticket describing this problem and how to resolve it.
Make sure to use @sentry/nextjs >= 6.17.1.
In your Next config file, you want to add the hideSourceMaps flag (it switches the build to webpack's hidden-source-map behavior). This boolean determines whether the served bundles reference the source maps; the maps are still uploaded to Sentry. For instance, you may want to set it conditionally for preview deploys.
// next.config.js
const { withSentryConfig } = require("@sentry/nextjs");

const nextConfig = {
  // ... other options
  sentry: {
    // Strip sourceMappingURL references from production bundles; the
    // maps are still generated and uploaded to Sentry.
    hideSourceMaps: process.env.NEXT_PUBLIC_VERCEL_ENV === "production",
  },
};

// The sentry options only take effect when passed through withSentryConfig.
module.exports = withSentryConfig(nextConfig);
One thing to note: previously I was using v7.6.0 and was able to get the source map files. I have now upgraded to v7.14.1 and am no longer able to get the source files to display on deploys, regardless of the flag's condition. I am not sure if this is a regression or just a partially implemented feature.

How to manually copy a package into node_modules in Angular 2

After copying the package into node_modules I am getting a 404 error, even though I specified it in package.json under dependencies...
Please help me fix this. Thank you.
In the developer tools:
Request URL: http://localhost:3000/ng2-pdf-viewer
Request Method: GET
Status Code: 404 Not Found
So, node modules are libraries that add functionality to your application. Angular has no way to know that you intend to make that functionality available at a given URL unless you code it to do so.
On the ng2-pdf-viewer GitHub page, scroll down to the Usage section to see an example of how they are implementing that in a custom component.
Essentially, you need to create a component in Angular, add that component to the router for the URL you want, and then code in the ng2-pdf-viewer functionality as in their example; a sketch follows below. You can read more about how Angular routing works here: https://angular.io/guide/router
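A minimal sketch of that approach. All names here are illustrative rather than taken from the question, and it assumes PdfViewerModule from ng2-pdf-viewer is imported in the app's NgModule:

// pdf-page.component.ts
import { Component } from '@angular/core';

@Component({
  selector: 'app-pdf-page',
  // <pdf-viewer> comes from ng2-pdf-viewer's PdfViewerModule
  template: '<pdf-viewer [src]="pdfSrc"></pdf-viewer>'
})
export class PdfPageComponent {
  pdfSrc = 'assets/sample.pdf'; // hypothetical path to a PDF asset
}

// app.routes.ts - serve the component at the URL instead of expecting
// node_modules content at http://localhost:3000/ng2-pdf-viewer
import { Routes } from '@angular/router';

export const routes: Routes = [
  { path: 'ng2-pdf-viewer', component: PdfPageComponent }
];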

Phabricator: running over https, doesn't load any images. Firefox reports blocking unencrypted content

If I click that little shield thingy next to 'https' and select "Disable protection for now" via the "Options" button, things seem to work fine.
I added the https:// URL to phabricator.production-uri and phabricator.allowed-uris with no luck.
Found it:
bin/config set phabricator.base-uri https://<your-base-url>
bin/phd restart
I had previously added that https URL to phabricator.production-uri and phabricator.allowed-uris (I don't know if that mattered).
Warning: at one point, I was able to completely mess up the login screen, probably because I didn't run bin/phd restart. If that happens, restore phabricator.base-uri to its previous value.
In addition to setting phabricator.base-uri, you may also need to change security.alternate-file-domain to use HTTPS. Read https://secure.phabricator.com/book/phabricator/article/configuring_file_domain/ to learn more about this setting.
Alternatively, you can simply delete the setting by running bin/config delete security.alternate-file-domain.
The same issue occurred for me after installing a TLS certificate.
Setting the base-uri option did not work for me, nor did the production or allowed URI options.
What solved it was setting the security.alternate-file-domain parameter to the https url, as explained here: https://secure.phabricator.com/book/phabricator/article/configuring_file_domain/
Perhaps this isn't the optimal solution, but it's not clear what else to do.
My setup: a Bitnami pre-configured Phabricator instance on AWS.
It looks like now the way to go is to create a support/preamble.php which contains nothing but
<?php
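// Tell Phabricator every request arrived over HTTPS (useful when SSL
// terminates at a load balancer, so the app itself sees plain HTTP).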
$_SERVER['HTTPS'] = true;
as described here

Download failed. There are no HTTP transports available which can complete the requested request

When I try to install a theme on a WAMP server, it shows the line below:
Download failed. There are no HTTP transports available which can complete the requested request.
How do I fix it?
I think you probably need to activate the php_curl extension to solve this issue.
To activate the php_curl extension:
Left-click wampmanager (that's the W icon in the system tray)
wampmanager -> PHP -> PHP Extensions
If the php_curl extension is not ticked then click it and it will activate the extension and restart Apache for you.
It may also be necessary to activate php_openssl in the same way, depending on the theme's requirements.
Make sure that:
allow_url_fopen = On
php_curl is enabled under your PHP extensions
Then restart your server. The relevant php.ini lines are sketched after this list.
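For reference, once enabled, the relevant php.ini lines look like this (no leading semicolon):

; php.ini (excerpt)
allow_url_fopen = On
extension=php_curl.dll
extension=php_openssl.dll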
WordPress will try to use several transports to make the request. First it will try curl, then streams, then fsock.
If your server is set up with curl and your version of curl supports SSL (required for using the API), then it will use that.
Next it will try to use streams (fopen). If fopen is set up and working on your server, it is set to allow opening from a URL (allow_url_fopen), and openssl is set up and functional, then it will send the request with streams.
Lastly, if your server has fsockopen set up and usable, and openssl is also set up and functional, then it will use fsockopen to make the request.
If none of those work, then it will be unable to send the request at all. This is all built into the WordPress HTTP API.
If your server can't make these requests, it will be unable to make many other requests as well.
You need to get your host to set up a transport method that WordPress can use.
If you face a similar error after trying to update a plugin or WordPress itself from localhost using XAMPP, don't worry. You can avoid this error and update from localhost by enabling the PHP curl extension.
To enable it:
1. Open your php.ini file.
2. Find the line which says ";extension=php_curl.dll".
3. Simply remove the semicolon (";") from the start of the line.
4. That's it. Restart XAMPP and updates should now work without error.
You probably need to activate the php_curl extension to solve this issue, as well as uncomment the extension=php_openssl.dll line in the php.ini file.
To find and edit that file:
wampicon -> php -> php.ini
Then search for those extension lines in the php.ini file.
Hope this will help you to resolve your issues.
Enabling both of these extensions in the php.ini files worked for me (there is one under C:\wamp\bin\php\php5.3.13 and another under C:\wamp\bin\apache\apache2.2.22\bin):
extension=php_openssl.dll
extension=php_curl.dll
You will need to enable php_curl.dll in your php.ini file; just remove the ; from the line mentioned above and the error should be corrected.
Hope this helps.
Cheers!

Googlebot returns "AJAX temporarily unreachable" in a Meteor.js application

I built a Meteor application and installed the spiderable package. I've deployed with the Meteor Up smart package. Everything seems to be working fine, but in Google Webmaster Tools I only get partial page rendering. I have checked that <meta name="fragment" content="!"> is present in the <head> section.
The site can be seen at http://adjustmentjobs.com
The errors listed by Googlebot are as follows:
https://checkout.stripe.com/v2/checkout.js Script Blocked
http://adjustmentjobs.com/sockjs/725/ju0i7bzc/xhr_send AJAX Not found
http://adjustmentjobs.com/sockjs/068/uat6sjkk/xhr_send AJAX Temporarily unreachable
http://adjustmentjobs.com/sockjs/370/u7hz6996/xhr_send AJAX Temporarily unreachable
http://adjustmentjobs.com/sockjs/info?cb=nqf08z0y_h AJAX Temporarily unreachable
http://adjustmentjobs.com/sockjs/info?cb=wqf89krdmo AJAX Temporarily unreachable
I think the problem might be with phantomjs on the server, but I can't determine what exactly the problem is. Any help would be appreciated.
If you see sockjs/info?cb=xxx as an error, spiderable isn't working.
Spiderable is a bit quirky when it comes to debugging. There are some weird issues, so make sure:
If you use select2, make sure you use a version without the accented characters, which spiderable is not compatible with (a bit odd). Use a compatible package from Atmosphere that has these removed.
If you use SSL, check that your certificate isn't being rejected.
Check that each of your publish functions actually returns something; if they return nothing and don't call this.ready(), spiderable will time out waiting to render the page (see the sketch after this list).
Check that the page can render on your own computer using a local version of phantomjs (it prints debug errors too). A script that does this can be found at http://www.meteorpedia.com/read/spiderable/
Check your server logs for output from phantomjs's stderr.
There are a couple of packages on atmospherejs.com that have forked versions of spiderable and provide a bit more debug info; you could use those to see what's going wrong.
Check your web page's raw HTML source to see that the HTML is actually being rendered by appending ?_escaped_fragment_=# to the URL. If it is working, the body will not be empty.
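A minimal sketch of the publish rule above (the 'jobs' publication and Jobs collection are hypothetical):

// server/publications.js
Meteor.publish('jobs', function () {
  if (!this.userId) {
    // Don't just return undefined here: without this.ready() the
    // subscription never completes and spiderable times out waiting
    // for the page to finish rendering.
    this.ready();
    return;
  }
  // Returning a cursor marks the subscription ready automatically.
  return Jobs.find({ active: true });
});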
Also, looking at your site http://adjustmentjobs.com/?_escaped_fragment_=# it looks like all is okay; you might want to check that all pages work.
Note also that Googlebot will 'test' your site without the ?_escaped_fragment_, so those requests can produce errors like the ones above.
