I know that we use Jetty as the Java servlet container on our staging/production servers, but what is the Java servlet container within laradock when it comes to Solr? Overall I'm only familiar with Jetty/Tomcat, but I can't find either of those. We had to adjust the servlet configuration on staging, and I would need those settings in the dev environment too.
And in case someone has already had this problem within laradock: on staging we adjusted the
requestHeaderSize
from the default of 8k to 64k bytes, so the length of the URI is no longer an issue. Now we also need that setting within laradock/solr.
According to the Dockerfile for laradock/solr, it builds on the regular Solr 5.5 image with minimal changes.
Solr has used a bundled, internal Jetty since dropping support for other servlet containers in Solr 5.
In general there should be no reason to change the requestHeaderSize for a Solr installation: any Solr request whose query would be too long for a GET URI (which usually happens when you have many boolean arguments) can be sent as a POST request with the parameters in the body instead.
Your Solr client should default to using POST instead of GET for requests.
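That said, if you do need the larger limit in the dev environment too, the setting lives in the bundled Jetty's configuration rather than anywhere Solr-specific. A sketch of one way to apply it in laradock, assuming the official solr:5.5 image layout (the config path, container name, and XML shape may differ between versions, so verify against your container):

# copy the bundled config out of the running container, then edit it
docker cp laradock_solr_1:/opt/solr/server/etc/jetty.xml ./solr/jetty.xml

# in jetty.xml, raise the limit on Jetty's HttpConfiguration element:
#   <New id="httpConfig" class="org.eclipse.jetty.server.HttpConfiguration">
#     ...
#     <Set name="requestHeaderSize">65536</Set>
#   </New>

# then mount the edited file back in via the solr service in docker-compose.yml:
#   volumes:
#     - ./solr/jetty.xml:/opt/solr/server/etc/jetty.xml

Here laradock_solr_1 is just a placeholder for whatever your compose project names the Solr container.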
I've seen some posts, including How to manage multiple backend stacks for development?, but nothing about using LXC for a stable, safe, and separate development environment that matches the production environment, regardless of the desktop and/or Linux distribution.
Previous to the symfony CLI release there was a feature that allowed specifying a socket via ip:port, which made it possible to use different names in /etc/hosts on the 127.0.0.0/8 loopback network. I could always use "bin/console server:start -p:myproject:8000", and I knew that by using http://myproject:8000 (specified in /etc/hosts) I could access my project and keep the sessions, etc.
The symfony CLI, as far as I've tried, doesn't allow this. Reading the docs, there's a built-in proxy in the symfony CLI, but although I've set up a couple of projects to use it in the container, clicking on the entry in the list doesn't open the project (with the .wip suffix) and issues an error about proxy redirections. If I browse to the container's IP and port it works perfectly, but the port is something that can change with every reboot of the container.
If there's nothing that can be set on the proxy side to solve this scenario, I'd ask for the socket feature that existed previously to be brought back, so I can manage this situation as I used to.
Thanks in advance.
I think I've finally found a good solution. I've created an issue to improve the part that didn't seem to work, so I'll try to explain for whoever might be interested.
I've set up the proxy server built into the symfony CLI, but instead of letting it run with the defaults I had to specify --host=proxyhost (resolvable from the host) and set proxy exceptions for .com, .org, .net, .tv, etc. Together with attaching a name to each project (issuing symfony proxy:domain:attach myproject from inside the project dir), I can now go to http://myproject.wip just like http://proxyhost:portX, no matter which port portX is.
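In other words, the working sequence looks roughly like this (proxyhost and myproject are placeholders for your own names; check symfony proxy:start --help for the exact flags your CLI version supports):

symfony proxy:start --host=proxyhost
cd /path/to/myproject
symfony proxy:domain:attach myproject
# then browse to http://myproject.wip through the proxy

After that, the .wip name stays stable even when the underlying port changes across container reboots.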
I am trying to use nginx for serving static content (images/CSS, etc.).
I need to spin up multiple instances of nginx to scale with the incoming load.
So I am looking at a Mongo+GridFS solution to store the static files, since it provides replication and sharding.
I see I can serve content from GridFS using either of these modules:
Direct nginx module -
https://github.com/mdirolf/nginx-gridfs
Using Lua scripting language
https://github.com/bigplum/lua-resty-mongol
The question is: can I create an UploadImage API in nginx itself to store files in GridFS when the user calls a POST method passing the file?
It looks to me like this is possible using the lua-resty module, but I'm not sure. Any ideas?
You can use the lua-resty-upload module to handle user uploads, and then pass the data over to lua-resty-mongol for writing to Mongo.
For large files you may be able to write the chunks directly as they are read, to avoid buffering all of the data in memory; there's a good example on the module's page using a file.
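A rough sketch of how the two modules could be combined in a content_by_lua handler; the Mongo address, database name, and the gridfs insert signature are assumptions based on the modules' READMEs, so verify them against the versions you install:

-- buffer the upload to a temp file, then store it in GridFS
local upload = require "resty.upload"
local mongol = require "resty.mongol"

local form, err = upload:new(4096)        -- read the request in 4 KB chunks
if not form then
    ngx.say("failed to init upload: ", err)
    return
end

local tmp = io.tmpfile()
while true do
    local typ, res, err = form:read()
    if not typ then
        ngx.say("upload failed: ", err)
        return
    end
    if typ == "body" then
        tmp:write(res)                    -- file content chunk
    elseif typ == "eof" then
        break                             -- form headers ("header") are skipped here
    end
end
tmp:seek("set", 0)

local conn = mongol:new()
local ok, err = conn:connect("127.0.0.1", 27017)   -- assumed Mongo address
if not ok then
    ngx.say("mongo connect failed: ", err)
    return
end

local db = conn:new_db_handle("static")            -- assumed database name
local fs = db:get_gridfs("fs")
local gf, err = fs:insert(tmp, {filename = ngx.var.arg_name}, true)
tmp:close()
ngx.say(gf and "stored" or "insert failed: " .. tostring(err))

Note this version still buffers to disk; streaming the chunks straight into GridFS (as suggested above) would avoid even that, at the cost of managing the chunk writes yourself.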
I have used the resty upload module along with the lua mongol module, and it works well.
Now I've got a suggestion from people around me to see if we can use Java instead of Lua for the DB connections, primarily to store and retrieve the static file content.
I see there is a Java module as well that could do the job, and PHP or Python could be used in nginx too.
The question is: what would be the difference between using these languages (Lua vs Java vs PHP), and what factors should I consider when picking one: performance, solution usage, packaging, etc.?
I have a long-running process in my CloudBees (Roo + Spring MVC) app that results in a timeout. According to this previous question, a solution would be to change the configuration of nginx (in particular the send_timeout directive).
My problem is that I'm not sure how I can change this, given that I'm not self-hosting the application but using CloudBees for that.
Is this something that I can somehow indicate in the cloudbees-web.xml configuration file? (I haven't been able to find a complete list of configuration parameters I can include in this file either.)
Yes you can do this.
You need to change your application's settings to have
proxyBuffering=false
when you deploy. This will allow long-running connections. You only need to do this once, when you deploy.
e.g.
bees app:deploy (etc) proxyBuffering=false
You can also use app:update to change an existing app's config (you only need to do this once; it will be remembered) using the Bees SDK; look for the section on app:deploy and app:update.
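For an already-deployed app that would look something like the following (the account/app id is a placeholder, and the exact flag names are worth checking against the SDK docs):

bees app:update -a myaccount/myapp proxyBuffering=false

With buffering off, the nginx front end streams slow, long-running responses to the client instead of cutting them off at the proxy.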
Does Mercurial have an HTTP protocol we could use to browse files/folders/branches instead of cloning/pulling changesets?
I've seen something using the TortoiseHg web server and accessing http://localhost:8080/ in a browser, but completely different HTML is served when the project is on https://bitbucket.org/ (at least I could not find the same representation).
Update: the HttpCommandProtocol document describes only changesets, not files/folders. The task is to download just a few files for a particular revision (for example the one tagged 'stable'), plus a list of files, without downloading the complete repository.
Non-HTTP protocols are welcome, but the condition is the same: do not download the complete repository.
Update 2: hgweb serves static HTML and files. Is the HTML format always the same across hgweb versions? What about bitbucket.org? Is there any common protocol?
As you noticed already, the HttpCommandProtocol defines the exchange of repository information and changesets; it ensures that you can clone/push/pull from/to any repo served over HTTP. But AFAIK there's no standard for browsing a repo (e.g. getting a single file at a certain revision).
You'll have to adapt to whatever URL scheme your hosting system of choice uses (as you also noticed, hgweb and Bitbucket have different schemes). Depending on your use case you could define your own file access protocol and feed it to a converter.
For instance you might want to access files with this scheme:
<repo-url>/<rev>/<path>
Where <repo-url> is the URL you use to clone/push/pull. In practice you would then use URLs like these:
https://bitbucket.org/user/repo/<rev>/<path>
https://hgwebhost.org/.../repo/<rev>/<path>
Obviously these are virtual URLs which do not exist. That's where your converter comes in: check the hosting system type and convert the URLs accordingly:
https://bitbucket.org/user/repo/raw/<rev>/<path>
https://hgwebhost.org/.../repo/raw-file/<rev>/<path>
If your converter knows bitbucket and hgweb, then it already works with a good deal of repositories out there.
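A tiny sketch of such a converter (the host detection and URL patterns here are assumptions matching the examples above, not a complete mapping):

-- map a virtual <repo-url>/<rev>/<path> onto the raw scheme a host understands
local patterns = {
    ["bitbucket.org"] = "%s/raw/%s/%s",       -- bitbucket's raw scheme
    ["hgweb"]         = "%s/raw-file/%s/%s",  -- hgweb's raw-file scheme
}

local function raw_url(repo_url, rev, path)
    local kind = repo_url:find("bitbucket%.org") and "bitbucket.org" or "hgweb"
    return string.format(patterns[kind], repo_url, rev, path)
end

print(raw_url("https://bitbucket.org/user/repo", "stable", "README"))
-- https://bitbucket.org/user/repo/raw/stable/README

New hosting systems would just add a pattern (and a detection rule) to the table.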
Mercurial has hgweb. It can be deployed via any WSGI container, and it also has CGI support (hgweb.cgi).
If you just go to any hg repo and type
hg serve
you will have a web server listening at a URL that you can point a browser at. The formatting of the web pages generated by hg can be changed via templates. It is very likely bitbucket.org has its own fancier templates, hence its prettier web pages.
Furthermore, the listening URL can also be used to push and pull from with hg. This is in fact the same website that is channeled via hgweb.cgi, and it is also the underlying mechanism for doing push/pull over SSH.
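For example, with the default hgweb templates a single file at a given revision can be fetched directly (the port, revision, and path here are placeholders):

hg serve --port 8080
curl http://localhost:8080/raw-file/stable/path/to/file

which is exactly the raw-file scheme mentioned in the other answer, without cloning anything.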
Background
I develop a web application that lives on an embedded device. In order to make dev times sane, frontend development is done using apache serving static documents, with PHP proxying out to the embedded device for specifically configured dynamic resources. This requires that we keep various server-simulation scripts hanging around in source control, and it requires updating those scripts whenever we add a new dynamic resource.
Problem
I'd like to invert the logic: if the requested document is available in the static documents directory, serve it; otherwise, proxy the request to the embedded device.
Ideally, I want a software package that will do this for me (for Windows, or buildable on Cygwin). I can deal with forcing Apache to do it with PHP, but I'm unsure how to configure it to make that happen. I've looked at Squid and Privoxy, but neither of them seems to do what I want.
Any ideas? I'd rather not have to roll my own.
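For what it's worth, the Apache-only version of this inversion can be sketched with mod_rewrite's file-existence check plus its proxy flag (requires mod_proxy; the device address is made up):

# serve the file from the static docroot if it exists...
RewriteEngine On
RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f
# ...otherwise proxy the request to the embedded device
RewriteRule ^(.*)$ http://192.168.1.50$1 [P,L]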
I think what you want is Varnish.
Update: Varnish is now available in Cygwin; installation instructions: http://varnish-cache.org/trac/wiki/VarnishOnCygwinWindows
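A rough idea of what the fallback logic looks like in VCL (Varnish 3-style syntax; the backends and ports are placeholders):

# try the static-files backend first, fall back to the device on a miss
backend static { .host = "127.0.0.1"; .port = "8080"; }
backend device { .host = "192.168.1.50"; .port = "80"; }

sub vcl_recv {
    if (req.restarts == 0) {
        set req.backend = static;
    } else {
        set req.backend = device;   # second pass after the restart below
    }
}

sub vcl_fetch {
    if (beresp.status == 404 && req.restarts == 0) {
        return (restart);           # static doc missing: retry against the device
    }
}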
Now that I've looked at Varnish, I understand that what I actually want is a special case of a reverse proxy, and that Squid can be configured to do what I need (with the added bonus of being available as a Cygwin package).