I want to develop with Symfony 4 on my local machine, but somehow I can't get a working configuration. At the moment, I believe I've set it up the way Symfony suggests.
I don't want to use the built-in web server component; I want to run the project on a real server with Redis, MySQL, etc.
I've installed Homestead following this guide: https://symfony.com/doc/current/setup/homestead.html
In the Homestead.yaml file I also set the nfs type on the mapped folders to speed up file access. See my Homestead.yaml below.
When I open the website, the server often throws a 502 Bad Gateway. If I hit refresh, it might show the default post-install Symfony page instead. Refreshing again, I might be lucky and get the page once more, but often the 502 Bad Gateway comes back. So every refresh is a surprise: either a 502 or the page.
Oh, and when I'm not getting a 502, I sometimes get a "Whoops, looks like something went wrong." error instead.
I don't understand what's going on here; hopefully someone can help me out.
Homestead.yaml
---
ip: "192.168.10.10"
memory: 2048
cpus: 1
provider: virtualbox
authorize: ~/.ssh/id_rsa.pub
keys:
    - ~/.ssh/id_rsa
folders:
    - map: ~/Code
      to: /home/vagrant/code
      type: "nfs"
sites:
    - map: blog.symfony.test
      to: /home/vagrant/code/symfony/blog.symfony.nl/public
      type: "symfony4"
databases:
    - homestead
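In case it helps with diagnosing, the underlying error for the 502 should show up in the logs inside the VM. The paths below are assumptions based on Homestead's default per-site nginx error log and the box's PHP-FPM version, so adjust them as needed:
vagrant ssh
# per-site nginx error log (name assumed from Homestead's defaults)
sudo tail -n 50 /var/log/nginx/blog.symfony.test-error.log
# PHP-FPM log (version depends on the box; adjust the path)
sudo tail -n 50 /var/log/php7.2-fpm.log
# the "Whoops" page details end up in Symfony's own log
tail -n 50 /home/vagrant/code/symfony/blog.symfony.nl/var/log/dev.log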
Background:
I am only able to get past the ansible console install/config tasks by adding --region localhost in /usr/share/eucalyptus-ansible/roles/cloud-post/tasks/console.yml, wherever it calls tools that accept that argument.
Otherwise each sub task fails like this: ["euca-describe-images: error: connection error (('Connection aborted.', gaierror(-2, 'Name or service not known')))"]
Running the commands from that playbook directly on the euca server being configured gives the same result unless I specify --region localhost.
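For example, on the cloud controller itself (same command as in the failing task):
# fails with the gaierror shown above
euca-describe-images
# succeeds when the region is given explicitly
euca-describe-images --region localhost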
Problem:
I'm stuck here: [cloud-post : update console route53 system domain for eucalyptus-cloud authentication]
Error: "euform-update-stack: error (ValidationError): No updates are to be performed.", "stderr_lines": ["euform-update-stack: error (ValidationError): No updates are to be performed."]
All services are running, except that ImagingBackend is Not Ready.
No instances are running according to euca-describe-instances
Images are available:
IMAGE ami-5be483c81cf8bd65c eucalyptus-console-image-5-0-823/eucalyptus-console-image-5-0-823.raw.manifest.xml 000216594841 available private x86_64 machine instance-store hvm
TAG image ami-5be483c81cf8bd65c type eucalyptus-console-image
TAG image ami-5be483c81cf8bd65c version 5.0.823
IMAGE ami-f31092ddb73e29af9 eucalyptus-service-image-v5.0.100/eucalyptus-service-image.raw.manifest.xml 000216594841 available private x86_64 machine instance-store hvm
TAG image ami-f31092ddb73e29af9 provides imaging,loadbalancing
TAG image ami-f31092ddb73e29af9 type eucalyptus-service-image
TAG image ami-f31092ddb73e29af9 version 5.0.100
---
all:
  hosts:
    exp-euca.lan.com:
    exp-enc-[01:02].lan.com:
  vars:
    vpcmido_public_ip_range: "192.168.100.5-192.168.100.254"
    vpcmido_public_ip_cidr: "192.168.100.1/24"
    cloud_system_dns_dnsdomain: "cloud.lan.com"
    cloud_public_port: 443
    eucalyptus_console_cloud_deploy: yes
    cloud_service_image_rpm: no
    cloud_properties:
      services.imaging.worker.ntp_server: "x.x.x.x"
      services.loadbalancing.worker.ntp_server: "x.x.x.x"
  children:
    cloud:
      hosts:
        exp-euca.lan.com:
    console:
      hosts:
        exp-euca.lan.com:
    node:
      hosts:
        exp-enc-[01:02].lan.com:
EDIT:
Solved. Details are in the comments of the marked answer.
The name resolution error most likely means that DNS for the domain cloud.lan.com is not being correctly delegated to your deployment. To test this, check if the nameserver is found:
dig +short NS cloud.lan.com
You should see "ns1.cloud.lan.com", and you should then be able to use that nameserver to resolve services, e.g.
dig +short ec2.cloud.lan.com @ns1.cloud.lan.com
which should be the IP of the host for the compute service.
The second item is a bug in the ansible playbook that occurs when the stack is already present and up to date. To work around it, you can either update your playbook or delete the stack before running the playbook. Depending on how far the playbook progressed, you may already have a script to do this:
/usr/local/bin/console-manage-stack -a delete
The related playbook change is https://github.com/AppScale/ats-deploy/pull/36
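For illustration only (this is a sketch of the general pattern, not the actual contents of that pull request): an Ansible task can treat the "No updates are to be performed" validation error as success rather than a failure, e.g.
- name: update console route53 system domain for eucalyptus-cloud authentication
  # illustrative sketch -- arguments elided, not the real task definition
  command: euform-update-stack --region localhost ...
  register: stack_update
  failed_when:
    - stack_update.rc != 0
    - "'No updates are to be performed' not in stack_update.stderr"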
I am trying to get an old project (not made by me) up and running, and I see that the routes are configured in some peculiar format. This is a typical route config:
customer_home:
    path: /customer
    host: "web.{domain}"
    defaults:
        _controller: "BackendBundle:Customer:index"
        domain: "%domain%"
    methods: [get]
    options:
        expose: true
    requirements:
        domain: '%domain%'
Now, I grepped the source code and found the domain parameter in the config files. It was null by default, and by setting it to localhost:8000 I could at least load the root without complaints about %domain%. Now it complains about not finding a matching route, which makes sense, as none is configured for that host. What is configured (which I found by running console debug:router) is a root route for admin.{domain} and web.{domain}. I assume this means that if the domain is myapp.com, there should be routes for admin.myapp.com/ and web.myapp.com/.
This is a local development setup, running on 127.0.0.1:8000, so I tried adding this to /etc/hosts:
127.0.0.1 web.localhost admin.localhost
I was now hoping that going to web.localhost:8000 would load a route, but none was matched. I still get NotFoundHttpException, but now I no longer understand why ... How can I configure this setup so that I can load the web and admin subdomains on my development machine? Other routes, like /api/1/doc, work fine.
I was not far off. The answer was simply to drop the port from the domain setting, so domain: localhost did the trick. The dev server runs on port 8000 regardless of that setting, so the port was not needed there. I can now access web.localhost and admin.localhost (after adding them as host aliases for the loopback device in /etc/hosts).
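For anyone landing on the same issue, the working combination is just the host aliases from the question plus a port-less domain parameter. The parameters.yml path below is an assumption, since the parameter could be defined elsewhere in this project:
# e.g. app/config/parameters.yml (assumed location)
parameters:
    domain: localhost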
Background
We run a Kubernetes cluster that handles several PHP/Lumen microservices. We started seeing the apps' php-fpm/nginx report a 499 status code in their logs, and it seems to correspond with the client getting a blank response (curl returns curl: (52) Empty reply from server) while the applications log 499.
10.10.x.x - - [09/Mar/2020:18:26:46 +0000] "POST /some/path/ HTTP/1.1" 499 0 "-" "curl/7.65.3"
My understanding is that nginx logs the 499 code when the client socket is no longer open/available to return content to. In this situation, that appears to mean something in front of the nginx/application layer is terminating the connection. Our configuration currently is:
ELB -> k8s nginx ingress -> application
So my suspects are either the ELB or the ingress, since the application is the one left with no socket to return to. So I started digging through the ingress logs.
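(The names here are placeholders for our actual namespace and pod; this is just how I pulled the controller logs:)
kubectl -n <ingress-namespace> logs <ingress-controller-pod> | grep -E '499|ngx_slab_alloc'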
Potential core problem?
While looking through the ingress logs, I'm seeing quite a few of these:
2020/03/06 17:40:01 [crit] 11006#11006: ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone "vhost_traffic_status"
Potential Solution
I imagine that if I gave vhost_traffic_status_zone some more memory, at least that error would go away and I could move on to finding the next one, but I can't seem to find any configmap value or annotation that would allow me to control this. I've checked the docs:
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/
Thanks in advance for any insight / suggestions / documentation I might be missing!
Here is the standard way to look up how to modify the nginx.conf in the ingress controller. After that, I'll link some info with suggestions on how much memory to give the zone.
First, get the ingress controller version by checking the image on the deployment:
kubectl -n <namespace> get deployment <deployment-name> -o yaml | grep 'image:'
From there, you can retrieve the code for your version from the following URL. In this example, I will be using version 0.10.2.
https://github.com/kubernetes/ingress-nginx/releases/tag/nginx-0.10.2
The nginx.conf template can be found at rootfs/etc/nginx/template/nginx.tmpl in the code, or at /etc/nginx/template/nginx.tmpl on a pod. This can be grepped for the line of interest. In the example case, we find the following line in nginx.tmpl:
vhost_traffic_status_zone shared:vhost_traffic_status:{{ $cfg.VtsStatusZoneSize }};
This gives us the config variable to look up in the code. Our next grep, for VtsStatusZoneSize, leads us to these lines in internal/ingress/controller/config/config.go:
// Description: Sets parameters for a shared memory zone that will keep states for various keys. The cache is shared between all worker processes
// https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone
// Default value is 10m
VtsStatusZoneSize string `json:"vts-status-zone-size,omitempty"`
This gives us the key "vts-status-zone-size" to be added to the configmap "ingress-nginx-ingress-controller". The current value can be found in the rendered nginx.conf template on a pod at /etc/nginx/nginx.conf.
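As a sketch (the namespace is a placeholder and the 20m value is an assumption; pick a size using the guidance below), setting the key could look like:
kubectl -n <ingress-namespace> patch configmap ingress-nginx-ingress-controller \
  --type merge -p '{"data":{"vts-status-zone-size":"20m"}}'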
As for what size to set the zone to, the module's docs suggest increasing it to more than 2 * usedSize:
If the message("ngx_slab_alloc() failed: no memory in vhost_traffic_status_zone") printed in error_log, increase to more than (usedSize * 2).
https://github.com/vozlt/nginx-module-vts#vhost_traffic_status_zone
"usedSize" can be found by hitting the stats page for nginx or through the JSON endpoint. Here is the request to get the JSON version of the stats and if you have jq the path to the value: curl http://localhost:18080/nginx_status/format/json 2> /dev/null | jq .sharedZones.usedSize
Hope this helps.
Scenario:
salt-minion original version: salt-minion 2015.8.8.2 (Beryllium)
salt-minion updated version: salt-minion 2016.11.2 (Carbon)
running a state which uses grains.host breaks
Checks:
salt 'minion' grains.item host originally returned the hostname configured in /etc/hostname (e.g. minion)
After the update, it returns localhost
I tried restarting the minion (as I had to change the master URL anyway), and also tried the undocumented sanitized=True, which only hides the value away.
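For reference, the host/fqdn grains are ultimately derived from what the OS and resolver report, so it's worth comparing these on the minion before and after the upgrade (the python call mirrors the kind of lookup the grain code performs):
# on the minion
hostname
hostname -f
cat /etc/hostname
python -c 'import socket; print(socket.getfqdn())'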
Any help is appreciated, couldn't find anything on the site.
I recently found that your settings in /etc/hosts sometimes do matter.
I had a set of hosts with FQDNs like host01.company.com. Despite having the same settings, one of them had the same issue you describe. I went to /etc/hosts and removed a record like
127.0.0.1 host01 host01.company.com
After this and a minion restart, everything was immediately fine.
Looks like there is some kind of reverse-lookup happening to set some grains.
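A sketch of an /etc/hosts layout that avoids pointing the FQDN at 127.0.0.1 (the 192.168.1.10 address is a placeholder for the host's real IP):
127.0.0.1       localhost
192.168.1.10    host01.company.com host01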
Hope this helps.
My vhost configuration: http://pastebin.com/ZyXUmQtx (only one domain on this installation)
I've been racking my brain and Google for a solution for the last two days and can't quite seem to come up with one that works.
My setup (from the above configuration):
IP.Board 3.4 installation in %root_domain%/forums/
IP.Content 2.3 installation in %root_domain%/forums/ (with its external-access index.php at the top level)
Redmine 2.2.2 installed at /usr/share/redmine (this part is working: Thin is running and there are no errors in either log file)
A stale phpMyAdmin configuration at /usr/share/phpmyadmin/ that also doesn't load its HTML/CSS properly
A symlink at /srv/www/tiberian-genesis.net/public_html/redmine pointing to /usr/share/redmine/public
I'm trying to get redmine setup to run under %root_domain%/redmine/, but I keep getting a 404 page from my IP.Content installation.
Accessing it takes me to the URL /redmine/login?back_url=http://redmine_thin_servers/redmine/ (which, now that I notice it, seems to be using my upstream name as the host...)
In case someone requests the Thin configuration file:
---
pid: /var/run/thin/redmine.pid
group: tgmod
prefix: /redmine
timeout: 30
log: /var/log/thin/redmine.log
max_conns: 1024
require: []
max_persistent_conns: 512
environment: production
user: tgmod
servers: 1
daemonize: true
chdir: /usr/share/redmine
socket: /var/run/thin/redmine.sock
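(As an aside, Thin can be tested directly against its socket, bypassing nginx entirely. This assumes curl was built with unix-socket support, and note that depending on how Thin starts its servers the socket file may be numbered, e.g. redmine.0.sock:)
curl --unix-socket /var/run/thin/redmine.sock http://localhost/redmine/login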
I'm out of ideas here.
Thanks in advance!
I just ended up setting it up on a subdomain. I wanted to proxy it under a sub-directory, but my main website's rules kept interfering.