How can I access files outside of my webroot when using Vagrant?

I have a directory that contains the following structure:
phpinfo.php
adminer.php
drupal/
Vagrantfile
bootstrap.sh (config file)
index.html (on-boarding information for site-builder, etc.)
My synced folder is drupal (mapped to /var/www/html), but I also want to access phpinfo.php and adminer.php.
A hostname, webapp.dev, is also set up and mapped to this new Vagrant guest.
I could make the overall directory the synced folder, but I don't want to create clutter or have to access the site at webapp.dev/drupal.
How can I access both the drupal site as web root but still run the various tools? Is it possible to create an additional virtual host and synced directory that maps to the containing folder structure?

You can configure another synced folder. I'm doing this for certs that should be kept above the webroot. Here's an excerpt from my Vagrantfile.
config.vm.synced_folder "./public_html", "/vagrant",
id: "vagrant-root",
owner: "vagrant",
group: "www-data",
mount_options: ["dmode=775,fmode=664"]
config.vm.synced_folder "./certs", "/certs",
id: "certs"
Note that you have to use a unique id for each synced folder.
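Applied to the layout in the question, a minimal sketch (the tools.webapp.dev hostname and /var/www/tools path are assumptions, not from the question) would sync the containing folder to a second location on the guest:
config.vm.synced_folder ".", "/var/www/tools",
  id: "tools"
and then serve it from an extra virtual host on the guest, e.g. /etc/apache2/sites-available/tools.conf:
<VirtualHost *:80>
    ServerName tools.webapp.dev
    DocumentRoot /var/www/tools
</VirtualHost>
After a2ensite tools.conf and an Apache reload, the Drupal site stays at webapp.dev while phpinfo.php and adminer.php are reachable at tools.webapp.dev/phpinfo.php and tools.webapp.dev/adminer.php.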

Related

Publish profile copy file to remote directory outside of destination root VS2019

I'm trying to accomplish what seems like a simple task, but I'm having issues with the publish profile (pubxml) syntax to make it work.
I'm using a web publish profile to push changes to our remote staging server, and I'm trying to include a global config file that's shared across our apps into a remote directory on that staging server, in a folder that is 'outside' of the publish destination root folder.
Here's a basic sample folder layout of the source and destination:
Source file:
C:\dev\webapp1\Global.config
This is just a basic web config file in the project root.
The destination folders for our web apps would be:
C:\websites\Config
C:\websites\App1
C:\websites\App2
C:\websites\App3
App1 is the project with the publish profile I'm working with, so when I publish, it needs to place the Global.config file into the websites\Config directory on the remote server.
So far I have:
<ItemGroup>
<ResolvedFileToPublish Include="Global.config">
<RelativePath>../Config/Global.config</RelativePath>
</ResolvedFileToPublish>
</ItemGroup>
I was hoping that ../Config would back up one directory from the destination root and place the file in the remote server's Config folder, but it's being placed into the destination root folder (App1) instead.
Anyone know if this is possible?
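For what it's worth, it appears that RelativePath entries can't escape the publish destination root, as observed above. When the destination is reachable as a plain file path (a file-system publish, or a UNC share to the staging box), one workaround is a custom target in the pubxml that copies the file after publishing. A minimal sketch, where \\staging\websites\Config is a hypothetical share path:
<Target Name="CopyGlobalConfig" AfterTargets="Publish">
  <Copy SourceFiles="$(MSBuildProjectDirectory)\Global.config"
        DestinationFolder="\\staging\websites\Config" />
</Target>
Note that this runs on the build machine, so it only works if that destination folder is directly reachable from there; it is not pushed through Web Deploy itself.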

Sublime text SFTP plugin. How to set automatic selection of a remote directory?

How can I set up automatic selection of a remote directory based on the location of a local file?
Settings: "remote_path" "/site.ua/www/"
For example: the file abs.php is located on the local server at /www/php/abs.php, but the plugin uploads everything to the root directory on the remote server, so it ends up at /site.ua/www/abs.php instead of /site.ua/www/php/abs.php.
I need the files to be loaded into the appropriate directories.
UPD: at the same time, the JS and CSS files are automatically placed into the matching remote directories according to their location on the local server.
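If the plugin in question is the Wbond SFTP package, it resolves remote paths relative to the folder that contains sftp-config.json, so placing that file at the local web root (/www here) should make subdirectories such as php/ map onto remote_path automatically. A minimal sketch of /www/sftp-config.json, with placeholder host and user:
{
    "type": "sftp",
    "host": "example.com",
    "user": "username",
    "remote_path": "/site.ua/www/",
    "upload_on_save": true
}
With this layout, saving /www/php/abs.php would upload to /site.ua/www/php/abs.php.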

Configuring Web Server in Google Cloud Compute Engine

I have a Dash application on a Compute Engine instance that I'm looking to view in my browser. HTTP and HTTPS traffic is enabled and the instance has a static IP address. The Apache server works, and when I first ran an application, the default index page located at /var/www/html showed up at the browser address http://EXTERNAL_IP_OF_VM_INSTANCE.
From what I've seen elsewhere, web application files tend to be stored in the /var/www directory, with index.html present as the default page. However, I have a single app.py file that I want to run, which is located in the /home/user directory, so it isn't in any particular web directory.
I run the app by entering python3 app.py and it does run:
Running on http://127.0.0.1:8050/ (Press CTRL+C to quit)
However, going to the instance's external IP address (34.89.0.xx) in my browser doesn't show the app; instead it shows text from an old 'hello world' application I made previously, which I thought I had deleted but is still showing up.
As for the Apache configuration: the sites-available folder contains two files, 000-default.conf and default-ssl.conf, both with /var/www/html as the DocumentRoot. 000-default.conf is also the only file enabled in sites-enabled.
I tried changing the DocumentRoot in these files to /home/user, where the app.py file is located, which didn't work; then I tried moving the file to the web directory /var/www, which didn't work either.
Does anyone have any suggestions on how to fix this so that I can see my application in the browser?
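For context, Dash's development server binds to 127.0.0.1:8050 by default, so it is only reachable from inside the VM, and changing DocumentRoot can't help because Apache only serves files, not a running Python process. A common arrangement (a sketch, not taken from the question) is to keep the app on port 8050 and have Apache reverse-proxy to it. In app.py:
app.run_server(host="0.0.0.0", port=8050)
then enable the proxy modules (sudo a2enmod proxy proxy_http) and point the default vhost at the app in 000-default.conf:
<VirtualHost *:80>
    ProxyPreserveHost On
    ProxyPass / http://127.0.0.1:8050/
    ProxyPassReverse / http://127.0.0.1:8050/
</VirtualHost>
followed by sudo systemctl reload apache2. The stale 'hello world' page also suggests the old content under the configured DocumentRoot is still being served, so clearing or overriding it is worth checking.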

Nginx: private folder not accessible from web

I created a default nginx webserver in this location: /var/www/domapi/html
The root in the default site file is root /var/www/domapi/html;
I want to store certificates and other private files in a location where my $USER (in group www-data) can upload, delete, and create files (with the right permissions).
Is it secure to use /var/www/domapi/private?
Of course I will assign the right permissions to that /private folder.
Because I did not get any answer, I'll re-ask it differently...
With the root folder in the default file set as root /var/www/domapi/html;, is the folder /var/www/domapi/private accessible from the internet or not? I think not, but I'm looking for an expert answer.
Thank you
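For context: nginx only serves what a root or alias directive maps into the URL space, so with a minimal configuration like the sketch below (the server name is hypothetical), /var/www/domapi/private sits outside the served tree and is not reachable over HTTP unless some other server block or location exposes it:
server {
    listen 80;
    server_name domapi.example;
    root /var/www/domapi/html;    # only paths under this root are served

    location / {
        try_files $uri $uri/ =404;
    }
}
Requests that try to traverse upwards (e.g. /../private/key.pem) are rejected, since nginx normalizes the URI before mapping it to the filesystem.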

Change the path of a symlinked directory in NGINX

Let's say that I have a directory /var/www/assets/, and within that directory I have a symlink which points to a folder containing all the latest asset files for my website.
/var/www/assets/assets -> /var/www/website/releases/xxxxxxx/public/assets
If I configure NGINX to serve asset files from /var/www/assets on the domain assetfilesdomain.com, with asset files prefixed by the directory /assets/, then when that symlink is changed, the update is not reflected in NGINX. The way I see it, NGINX grabs the resolved path for that asset folder when it is started.
Is there any way to get around this?
Reloading nginx (sending a HUP signal to the master process) seems to solve this issue, probably because it starts new workers and shuts down the old ones gracefully.
It seems like you're using Capistrano. You can override deploy:restart and put the nginx reload there.
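A minimal sketch of that override, in Capistrano 2 style (the exact reload command depends on the server setup and is an assumption):
namespace :deploy do
  task :restart, :roles => :app do
    # reload so new nginx workers re-resolve the symlink target
    run "sudo service nginx reload"
  end
end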
