Why does Symfony 2 use app_dev.php instead of one front controller? - symfony

Coming from a ZF background, I was quite surprised to see Symfony's app_dev.php file. It seems like a very bad idea security-wise. It's one more step you need to take to make sure your dev front controller is not accessible in production. For example, you can't have a simple Git deploy unless you don't mind having the dev version in production, or you add a post-checkout hook to remove the file, or a specific vhost setup that disables access to it.
What is the idea behind it? Why is it better than an IP-triggered or ENV-based dev mode?

First of all, there are no real "production" and "dev" modes per se. They are just named that way. You could easily use other modes, each with its own set of configuration loaded through config_<env>.yml, for instance. Actually, there is a third standard mode called "test", which is reserved for automated tests.
Second, it's very easy to compare things in "production" mode and "development" mode by simply adding app_dev.php into your URL (e.g. http://example.com/app_dev.php/some/path instead of http://example.com/some/path). This makes it easy to see how your app actually behaves in production, without losing all the nice things like verbose logging and the web debug toolbar during development.
And as stated: by default the app_dev.php front controller uses an IP whitelist, so even if you happen to push this file onto a production server, visitors would not be able to use it.
There are other reasons for multiple entry points: a Symfony project can contain multiple applications at the same time. By default, app.php and app_dev.php are the front controllers for the default application, but it is possible (quite easily, actually) to have an admin.php and admin_dev.php as well, bootstrapping a whole different application while still being part of the same repository (whether this is a wise thing to do is up for debate).
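To make the "environments are just names" point concrete, here is roughly what the two front controllers in the Symfony 2 Standard Edition boil down to (a simplified sketch; the exact bootstrap lines vary by Symfony version, and a hypothetical admin.php would do the same with its own kernel class):
// web/app.php -- boots the "prod" environment with debug disabled
use Symfony\Component\HttpFoundation\Request;

require_once __DIR__.'/../app/autoload.php';   // or bootstrap.php.cache, depending on the version
require_once __DIR__.'/../app/AppKernel.php';

$kernel = new AppKernel('prod', false);        // environment name + debug flag
$request = Request::createFromGlobals();
$response = $kernel->handle($request);
$response->send();
$kernel->terminate($request, $response);

// web/app_dev.php is identical except for the two constructor arguments
// (plus the IP check shown in the answer below):
// $kernel = new AppKernel('dev', true);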

There is an issue #677 discussing why it's this way. There are also some interesting comments in related issue #11310.
The most reasonable argument for me is that it just works out of the box without the need to set up environment variables or anything else. Although the counterargument is that you still need to set up the .htaccess rewrite anyway.

If you look into the app_dev.php file, you can see that only whitelisted IP addresses have access to it:
// This check prevents access to debug front controllers that are deployed by accident to production servers.
// Feel free to remove this, extend it, or make something more sophisticated.
if (isset($_SERVER['HTTP_CLIENT_IP'])
    || isset($_SERVER['HTTP_X_FORWARDED_FOR'])
    || !(in_array(@$_SERVER['REMOTE_ADDR'], array('127.0.0.1', 'fe80::1', '::1')) || php_sapi_name() === 'cli-server')
) {
    header('HTTP/1.0 403 Forbidden');
    exit('You are not allowed to access this file. Check '.basename(__FILE__).' for more information.');
}
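If you do need to reach app_dev.php from somewhere other than localhost (a fixed office IP, say), the usual approach is to extend that whitelist rather than remove the check. A sketch, where 203.0.113.7 is just a placeholder for your own address:
if (isset($_SERVER['HTTP_CLIENT_IP'])
    || isset($_SERVER['HTTP_X_FORWARDED_FOR'])
    || !(in_array(@$_SERVER['REMOTE_ADDR'], array('127.0.0.1', 'fe80::1', '::1', '203.0.113.7')) // placeholder IP added
        || php_sapi_name() === 'cli-server')
) {
    header('HTTP/1.0 403 Forbidden');
    exit('You are not allowed to access this file. Check '.basename(__FILE__).' for more information.');
}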

Related

Why is my raw source code easily accessible via the Debugger's Network tab?

I have been working on my website for a month now and just realized that there is this extra _N_E server that is providing access to the raw source code used for each page.
I am using Next.js and suspect that Sentry may be responsible here, but I cannot find anything in their documentation about it. This is a risk because not only does this happen in development but in production as well, and I do not want users to have access to my raw source code.
Has anyone ever seen this before?
Can anything be done about it while still getting accurate results from Sentry?
Publishing source maps publicly means anyone (including Sentry) has access.
There are two ways you can deal with this:
Set up a CDN rule that only allows Sentry's servers to fetch the source maps, a.k.a. IP whitelisting.
Upload the source maps to Sentry directly - https://docs.sentry.io/platforms/javascript/guides/react/sourcemaps/uploading/
Here is a ticket describing this problem and how to resolve it.
Make sure to use @sentry/nextjs >= 6.17.1.
In your Next config file, you want to set the hideSourceMaps option (which switches the client build to hidden-source-map). This boolean determines whether the source maps are referenced from the browser bundles or not. For instance, you may want to set it conditionally so that preview deploys still expose their source maps.
// next.config.js
const nextConfig = {
  // ... other options
  sentry: {
    hideSourceMaps: process.env.NEXT_PUBLIC_VERCEL_ENV === "production",
  },
};

module.exports = nextConfig;
One thing to note: previously I was using v7.6.0 and was able to get the source map files. I have now upgraded to v7.14.1 and am no longer able to get the source files to display on deploys, regardless of the flag's condition. Not sure if this is a regression or just a partially implemented feature.

Drupal 8 Redirects to external URLs are not allowed by default

I am setting up a website with Drupal; the website is deployed to the live server through Bitbucket Pipelines. Normally, when I browse to myurl.com/user it redirects me to myurl.com/user/login, however now I get this error:
Redirects to external URLs are not allowed by default, use \Drupal\Core\Routing\TrustedRedirectResponse for it.
I have already set up the trusted_host_patterns, however this doesn't seem to fix the problem.
trusted host patterns:
$settings['trusted_host_patterns'] = array(
'^myurl\.com$',
);
Just in case somebody else comes here: it is also possible that you have migrated a multilanguage site to a different server / localhost, and your database still contains the old redirect domains, which will no longer work.
To fix this, you need to manually change the following value in the database. Go to the dr_config table and search for language.negotiation.
In the cryptic blob, have a look for something like
{s:6:"source";s:6:"domain";
and change it to
{s:6:"source";s:11:"path_prefix";
(the s:11 has to match the length of the new string, since this is PHP-serialized data). Afterwards, empty all cache_* tables (to force a reprocessing of the configuration) and there is a good chance it will work then.
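If you still have drush or code access, a less fragile alternative to hand-editing serialized data is to change the same setting through Drupal's config API; a sketch, assuming a standard Drupal 8 install:
// Switch URL language negotiation from per-domain to path-prefix detection,
// e.g. run via `drush php:eval` or from a one-off script that bootstraps Drupal.
\Drupal::configFactory()
  ->getEditable('language.negotiation')
  ->set('url.source', 'path_prefix')
  ->save();

// Rebuild caches afterwards (equivalent to emptying the cache_* tables).
drupal_flush_all_caches();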
The pattern seems OK to me. However, here is what I'd check:
Confirm in your browser that the redirect to /user/login goes to exactly the myurl.com domain, not www.myurl.com for example.
Try clearing the cache as well.
Then look through this post on DO.
And then this one. It's a D8 issue that isn't fixed yet.
I remember having a similar issue on one of my websites during development in a local environment, and the issue really was in the pattern.
In case you have this because a multilingual site is not finding its domains on localhost, put this in settings.php:
$config['language.negotiation']['url']['domains']['en'] = 'my-en-url.localhost';
$config['language.negotiation']['url']['domains']['de'] = 'my-de-url.localhost';
$config['language.negotiation']['url']['domains']['es'] = 'my-es-url.localhost';
$config['language.negotiation']['url']['domains']['fr'] = 'my-fr-url.localhost';

Load Drupal Site on Any URL

I'm setting up access to a Drupal 7 site. The site sits alone on a box that answers to a number of domains and that number is likely to grow. What I'd like to do is to tell Drupal to load the site regardless of which actual domain brought us to the box (the rest of the URL will always be the same, of course). Currently most of those domains send me to the install page.
The problem is the lack of a directory (symlink) in the sites/ directory.
I can probably rewrite requests coming through alternate domains in Nginx, but I'm wondering whether there's an application level answer. As it stands right now, accessing the box/site by any domain other than the canonical domain sends me to the install page.
Is there anything I can do?
It looks to me like you didn't configure your Drupal site as the "default" one.
The file sites/default/settings.php is loaded if no better (more specific to the current request) settings file can be found in the sites/ folder. This is in fact a "wildcard" config, so the best solution would be to move the site files to the default folder. See the multi-site documentation for more details.
If you can't do that, then you can use sites.php for the rewriting, but you will need to update it to add any new URL you want to match. There's a little shortcut though: you can add a bunch of rewrites such as
$sites['com'] = 'default';
$sites['net'] = 'default';
$sites['org'] = 'default';
...
which will act as catch-all rewrites for sites ending in .com, .net, .org and so on, saving you a lot of (but not all) the manual rewrites.
Altering the conf_path() function should really be your last solution, since it will make updating Drupal a slower process (and if you forget to re-apply the changes after an update, your setup won't work any more).

ASP.Net custom errors on specific hosts

Is there a way to display custom errors on specific host(s) (eg: www.example.com) and display vanilla errors on others (eg: beta.example.com)?
I'm thinking along the lines of configuration syntax that can be added to the customErrors section of the web.config.
It's actually for MVC 3, if that makes any difference.
The sites are hosted on separate servers. http://beta.yogaloft.co.uk/ is built and deployed automatically by appharbor and promoted to http://www.yogaloft.co.uk/ whenever it's ready for the wild.
What I would do is use a customized HandleErrorAttribute to detect the request and show the custom error on www.example.com.
Basically, extend HandleErrorAttribute (HandleCustomErrorAttribute : HandleErrorAttribute) and put in the logic to detect whether the request is coming from www.example.com and, if so, show a specified view.
I have not tried it this way, but it shouldn't be impossible.
If these hosts are served from the same directory, you can't.
All you can do is use the customErrors="RemoteOnly" setting and beta test locally.
You really should use two different sites for production and testing.

app_offline alternative

I usually place an app_offline.htm in my root directory when I am releasing a website to a production environment. However, sometimes when there have been a few big changes to the site, I would like to click around first to make sure it's stable, without allowing access to anyone other than me.
As far as I am aware this isn't possible, but I'm hoping someone has a neat solution...
The solution has to cover the case where someone has a deep link into the site, so using a default.htm/asp page in the root won't do the trick, unfortunately.
I agree with the staging environment answer, but otherwise here's one possible approach: temporarily block all IP addresses besides your own. This can be achieved through the IIS Directory Security configuration, or programmatically in any number of ways.
You can redirect all non-authorized users to an Under Construction page of some sort. Meanwhile, you can happily browse the site from your IP. When the site is vetted, you remove that IP restriction and the site becomes available to the world at large.
It's a difficult thing to achieve. That's why you should have a staging environment where everything is validated before shipping to production. Then, during the deployment process (if it takes long, but it shouldn't), you could use an App_Offline file. This staging environment should be as close as possible to your production environment (in terms of software, patches and configuration installed, not in terms of hardware power, of course).
Another quick suggestion that would allow you to control things from the web.config might be a custom module that redirects all requests to a static page except those matching a filter (e.g. hostname or URL sniffing) that could be configured via the web.config.
