How to manage multiple Symfony projects on a development computer

I've seen some posts, including How to manage multiple backend stacks for development?, but nothing about using LXC for a stable, safe, and separate development environment that matches production, regardless of the desktop and/or Linux distribution.
Before the Symfony CLI was released, there was a feature that let you specify a socket as ip:port, which made it possible to use different names in /etc/hosts on the 127.0.0.0/8 loopback network. I could always run "bin/console server:start -p:myproject:8000" and know that browsing to http://myproject:8000 (defined in /etc/hosts) would reach my project and keep its sessions, etc.
As far as I've tried, the Symfony CLI doesn't allow this. Reading the docs, the CLI has a built-in proxy, and I've set a couple of projects in the container to use it, but clicking an entry in the project list doesn't open the project (with the .wip suffix); it raises an error about proxy redirections. If I browse directly to the container's IP and port, it works perfectly, but the port can change with every reboot of the container.
If nothing can be set on the proxy side to handle this scenario, I'd ask for the previous socket feature to be brought back, so I can manage this situation the way I used to.
Thanks in advance.

I think I've finally found a good solution. I had opened an issue about the part that didn't seem to work, so I'll try to explain it here for whoever might be interested.
I've set up the proxy server built into the Symfony CLI, but instead of letting it run with the defaults I had to specify --host=proxyhost (resolvable from the host) and add proxy exceptions for .com, .org, .net, .tv, etc. Together with attaching a name to every project (running symfony proxy:domain:attach myproject from inside the project directory), I can now go to http://myproject.wip just as I would to http://proxyhost:portX, no matter which port portX happens to be.
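In case it is useful, the sequence boils down to roughly this (proxyhost and myproject are placeholders, the flags are the ones described above, and the browser or system proxy still has to point at the CLI proxy with the exceptions configured in its settings):
# start the built-in proxy, bound to a name the host can resolve (placeholder)
symfony proxy:start --host=proxyhost
# from inside each project directory, attach a local domain to the project
cd ~/projects/myproject          # path is an example
symfony proxy:domain:attach myproject
# the project is then reachable at http://myproject.wip via the proxy,
# regardless of which port the local web server got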

Related

How to proxy subdomains to other servers with dokku?

I want my dokku host to run the main nginx for my domain (let's say cooldok.ku).
On cooldok.ku, for various reasons, I have other virtual machines serving content. I want to expose this content on a subdomain (say vm.cooldok.ku, served by a VM at 10.0.0.7 on the cooldok.ku host).
I figure the technique involved is called a reverse proxy.
Ideally, there would be a dokku-only way to register and 'link'/proxy the subdomains. As an added bonus, the cooldok.ku host would handle the SSL side of HTTPS itself (like ssltunnel), so I could leverage existing certificates and/or use the awesome letsencrypt on the same machine, and secure applications in the VM that were never meant to be served over HTTPS.
How can this scenario be realised with dokku? How difficult would it be to write a plugin doing that?
Update
So, basically dokku (0.8) ships with everything it needs for this. The question is how much of what dokku itself wants to do (fire up those yummy Docker containers) gets in the way. To hack together a setup that does what I want, the following can be done:
# create folder where we want it
dokku apps:create vm
Now, the following files have to be created/present (vanilla 0.8 dokku installation):
#/home/dokku/vm/DOCKER_OPTIONS_DEPLOY
--restart=on-failure:10
#/home/dokku/vm/IP.web.1
10.0.0.7
#/home/dokku/vm/PORT.web.1
80
#/home/dokku/vm/URLS
# THIS FILE IS GENERATED BY DOKKU - DO NOT EDIT, YOUR CHANGES WILL BE OVERWRITTEN - I did it nonetheless
http://vm.cooldok.ku
#/home/dokku/vm/VHOST
vm.cooldok.ku
#/home/dokku/vm/nginx.conf
# Just listing changes from another default app
[...]
proxy_pass http://vm-host;
[...]
upstream vm-host {
  server 10.0.0.7:80;
}
Afterwards, nginx needs a manual restart (or maybe dokku can do something for us here).
I am pretty sure some of the (redundant) information can be left out, since dokku should, for example, piece the nginx.conf together itself. I am not sure whether this setup survives a reboot or an nginx restart. Also, in my tests, letsencrypt would not install the certificates/rebuild the nginx configuration because it sees the app vm as not being deployed.
Update2
To overcome the "app not deployed" issue, it suffices to touch /home/dokku/vm/CONTAINER, but this gets messier and messier ...
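Put together, the manual part of the hack boils down to roughly this (same paths and values as above; a dirty sketch, run as root or as the dokku user, not battle-tested beyond my own box):
# rough recap of the hack above (paths and values taken from this post)
dokku apps:create vm
cd /home/dokku/vm
echo '--restart=on-failure:10' > DOCKER_OPTIONS_DEPLOY
echo '10.0.0.7'                > IP.web.1
echo '80'                      > PORT.web.1
echo 'vm.cooldok.ku'           > VHOST
touch CONTAINER                # works around the "app not deployed" check
# hand-edit nginx.conf with the upstream/proxy_pass changes shown above, then:
sudo service nginx reload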
I bundled the information from the updates of my post into a dirty script at https://github.com/econya/scripts/blob/master/scripts/virt-helpers/fake-dokku-app.sh .
I guess the cleanest forward-compatible solution as it stands would be to create a Dockerfile that launches a reverse proxy itself (configured via env/config:set variables) - but I am happy to learn of a smarter and nicer solution, or to get paid to write a proper plugin ;)
A second approach would be to use a "null" Docker image together with a custom nginx template, I guess.
Update 2021
According to the release notes, this works out of the box now (look for "Routing to non-Dokku managed apps"):
https://dokku.github.io/release/dokku-0.25.0
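For anyone on 0.25 or newer, the relevant commands look roughly like this (untested by me; taken from my reading of the dokku docs, so treat the property names as an assumption):
# untested sketch for dokku >= 0.25: route a subdomain to a non-dokku host
dokku apps:create vm
dokku network:set vm static-web-listener 10.0.0.7:80
dokku domains:set vm vm.cooldok.ku
dokku proxy:build-config vm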
I still use an older dokku and the solution written above, though.

OpenNebula VM not persisting network config

I've created a VM with a VNET attached in OpenNebula. After a while I changed the parameters of the VNET, but those changes do not persist in the VM after my (physical) host is restarted.
I've changed the /var/lib/one/vms/{$VM_ID}/context.sh file, but still no luck persisting the changes.
Do you know what it could be?
I'm using OpenNebula with KVM on a Debian 8 host.
After a while I figured out how to do this myself.
It seems that when the VM is started, the file /var/lib/one/datastores/0/$VM_ID/disk.1 is attached as /dev/sr0.
During the boot process, /usr/sbin/one-contextd mounts this unit and uses the variables inside it; they usually look like this:
DISK_ID='1'
ETH0_IP='192.168.168.217'
ETH0_MAC='02:00:c0:a8:a8:d9'
ETH0_DNS='192.168.168.217'
ETH0_GATEWAY='192.168.168.254'
This info is used to export environment variables (the exported variables can be found in /tmp/one_env), which are then used by the script /etc/one-context.d/00-network to set the network configuration.
OpenNebula doesn't provide a simple way of replacing these configs after the VM is created, but you can do the following (rough sketch below):
Edit /var/lib/one/datastores/0/$VM_ID/disk.1 and make the required changes
Restart the opennebula service
Restart the VM
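On a Debian 8 host like mine, those three steps amount to roughly the following (the VM ID is an example and the exact commands may differ on your setup):
# rough sketch of the steps above, run on the physical host (VM_ID is an example)
VM_ID=42
sudo vi /var/lib/one/datastores/0/$VM_ID/disk.1   # adjust the ETH0_* values
sudo service opennebula restart                   # restart the OpenNebula service
onevm reboot --hard $VM_ID                        # reboot the VM so the new context is read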
Hope this is useful to someone :)
Yes, the issue is that this functionality is not supported in current versions of OpenNebula. This will be supported in the upcoming 5.0 version.
You can power off the VM and change most of the parameters (not the network parameters, as they are linked to a VNET) in the Conf tab of the VM.
For a network-specific change only, you can simply log in to the VM and mv the file /etc/one-context.d/00-network somewhere else; your changes to the VM's network configuration then won't be overwritten by the network context script.
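A minimal sketch of that workaround, run inside the guest (the backup location is arbitrary):
# inside the VM: disable the context network script so manual changes survive reboots
sudo mv /etc/one-context.d/00-network /root/00-network.disabled
# then maintain /etc/network/interfaces (or your distro's equivalent) by hand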

How to configure WebSphere administration console for two simultaneous logins?

I have two WebSphere 8.0 environments set up: test and production. I connect to the WebSphere Integrated Solutions Console of either environment via sub.domain.com:port/ibm/console... The only difference between accessing the production and the test console is the port number in the URL.
My problem: if, for example, I log into the production console while already logged into the test console, that works, but when I then switch back to the test console and try to continue working there, I'm prompted to log in again.
I think the problem lies with the cookies and the session. Is there a way to tweak this? I didn't find anything useful in the documentation or on the web... Any reading recommendations? Even just a hint in the right direction :)
Yes, the cookies get confused, since the only difference between the two URLs is the port.
I use one of the following tricks (depending on the environment I'm working with):
Use a different browser for each environment (e.g. Firefox for prod and IE for dev)
Access one environment via hostname and the other via IP
Create a few virtual host names (aliases) in your local /etc/hosts file and access each environment via a different hostname, as sketched below.
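For the third trick, a hypothetical /etc/hosts setup could look like this (the IP and the 9043/9044 console ports are examples; substitute your own values):
# /etc/hosts on the workstation: two aliases for the same WebSphere host
203.0.113.10   was-test.internal
203.0.113.10   was-prod.internal
# then use https://was-test.internal:9043/ibm/console and
# https://was-prod.internal:9044/ibm/console - different hostnames mean the
# browser keeps separate session/LTPA cookies for each console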

WordPress FTP settings for AWS EC2

I have WordPress installed on AWS with EC2. I can connect via SFTP using FileZilla but if I try to update a plugin from within WordPress it asks me for the FTP details and I get the following error message:
ERROR: There was an error connecting to the server, Please verify the settings are correct.
I've read a lot of threads on here and followed a lot of steps to try to rectify it, including:
added 2 new inbound Custom TCP rules to the EC2 Security Group; one for port 21 and one for ports 0-65000
added the following to my wp-config.php:
define('FS_METHOD', 'ftpext');
define('FTP_BASE', '/var/www/');
define('FTP_CONTENT_DIR', '/var/www/wp-content/');
define('FTP_PLUGIN_DIR', '/var/www/wp-content/plugins/');
define('FTP_USER', 'ubuntu');
define('FTP_PASS', 'my_password_obviously');
define('FTP_HOST', 'my.ip.obviously');
define('FTP_SSL', false);
Still no luck. Can anyone help?
Thanks,
Sean
Since you are on EC2, you have full control of your instance, so you can use the 'direct' setting for FS_METHOD as a means of updating core and any plugins.
Keep in mind that this can be somewhat insecure if you do not configure your instance properly (the web server user should be isolated). You would also want to be sure you can trust the plugins you are installing.
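A minimal sketch of that route, assuming an Ubuntu instance where the web server runs as www-data and WordPress lives under /var/www (adjust to your layout):
# give the web server user ownership of the WordPress tree (Ubuntu/Apache assumed)
sudo chown -R www-data:www-data /var/www
# then, in wp-config.php, replace the FTP_* block above with a single line:
#   define('FS_METHOD', 'direct');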
Amazon EC2 has some issues with FTP. See here for a solution to this common issue. However, this may not be your best solution. I go by the philosophy that the fewer ports I can open, the safer I am. Even if you are keeping it open only to your local IP, you are not completely safe from a DoS or some other malicious attack. Multiple checks are better than one, and fewer ports are better than more.
The issue with FTP is that it was designed and implemented before any of today's security concerns existed. While you can make FTP more secure, and there are solutions on the web for this (like the one above), a better - and possibly MUCH easier - solution is to use SFTP over port 22 (FTP over SSH) instead. Evidently, by installing and activating a few packages you may be able to open up a new update option to WordPress.
See here (not tested by me) for the SSH-based solution, which runs over port 22 by binding libssh2-php into PHP on Debian (or these steps on CentOS).
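The gist of that SSH route, as I understand it (package names vary by distro and PHP version; untested by me, and the key paths are placeholders):
# Debian/Ubuntu example: add the libssh2 bindings for PHP, then restart Apache
sudo apt-get install libssh2-php        # on newer releases the package is php-ssh2
sudo service apache2 restart
# in wp-config.php, 'ssh2' then becomes an available filesystem method:
#   define('FS_METHOD', 'ssh2');
#   define('FTP_PUBKEY', '/home/ubuntu/.ssh/wp_rsa.pub');   # placeholder key pair
#   define('FTP_PRIKEY', '/home/ubuntu/.ssh/wp_rsa');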

Finding an HTTP proxy that will intercept static resource requests

Background
I develop a web application that lives on an embedded device. To keep dev times sane, frontend development is done with Apache serving static documents and PHP proxying out to the embedded device for specifically configured dynamic resources. This requires keeping various server-simulation scripts around in source control, and updating those scripts whenever we add a new dynamic resource.
Problem
I'd like to invert the logic: if the requested document is available in the static documents directory, serve it; otherwise, proxy the request to the embedded device.
Ideally, I want a software package that will do this for me (for Windows, or buildable on Cygwin). I can deal with forcing Apache to do it with PHP, but I'm unsure how to configure it to make that happen. I've looked at Squid and Privoxy, but neither of them seems to do what I want.
Any ideas? I'd rather not have to roll my own.
Varnish is now available in Cygwin; see the installation instructions: http://varnish-cache.org/trac/wiki/VarnishOnCygwinWindows
I think what you want is Varnish.
Now that I've looked at Varnish, I understand that what I actually want is a special case of a reverse proxy, and that Squid can be configured to do what I need. (With the added bonus that it's available as a Cygwin package.)
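For reference, the "serve it if it exists locally, otherwise proxy it" logic can also be sketched directly in the existing Apache setup with mod_rewrite and mod_proxy, no PHP involved (the device address and document root are placeholders):
# httpd.conf / vhost sketch - needs mod_rewrite, mod_proxy and mod_proxy_http
DocumentRoot "C:/dev/static-docs"
RewriteEngine On
# if the requested file is not in the static tree, proxy the request to the device
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
RewriteRule ^/(.*)$ http://192.0.2.50/$1 [P]
ProxyPassReverse / http://192.0.2.50/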
