How do I receive and parse an incoming HTTP request using Python?
You can do:
python -m SimpleHTTPServer 8080
By default it serves the current working directory.
To get an idea of how this is put together and how it parses requests, or to work out how to build one for your own needs, look at the SimpleHTTPServer.py module in the lib directory of your Python install.
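If you want to handle and parse requests yourself rather than just serve files, here is a minimal sketch using the standard library (on Python 3 the one-liner above becomes python -m http.server 8080, and the classes below live in http.server; on Python 2 they are in BaseHTTPServer):

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # self.path holds the request path plus query string
        parsed = urlparse(self.path)
        params = parse_qs(parsed.query)
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(("path=%s params=%r\n" % (parsed.path, params)).encode())

    def do_POST(self):
        # headers are in self.headers; the body has to be read explicitly
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"received %d bytes\n" % length)

if __name__ == "__main__":
    HTTPServer(("", 8080), EchoHandler).serve_forever()

Run it and request http://localhost:8080/foo?x=1 to see the parsed path and query parameters echoed back.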
You could also look at the built-in web servers that frameworks like Django, Werkzeug, and CherryPy provide. Stack Overflow has quite a few interesting questions on the topic.
There are dozens of solutions for this, of all kinds and flavors.
The most basic would be SimpleHTTPServer, mentioned by @davey.
Other options would be:
http://webpy.org/ - simple, lightweight framework
http://www.tornadoweb.org/ - flexible and scalable web server
http://twistedmatrix.com/trac/wiki/TwistedWeb - single-threaded, event-driven server and framework
http://bottle.paws.de/ - single-file server and framework
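For example, a complete bottle application is only a few lines (this is just the stock hello-world; the route and port are arbitrary):

from bottle import route, run

@route('/hello')
def hello():
    return 'Hello World!'

run(host='localhost', port=8080)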
Related
I've seen problems stem from confusing grunt-connect and grunt-contrib-connect (see "grunt connect port option ignored" and "grunt watch & connect").
Given their similarity and conflicting natures, could someone describe their syntactical differences, as well as any functional differences?
grunt-connect
For when you want to run a server indefinitely, for example on a web server. (Note: this is not what grunt is intended for; instead, use grunt to process files, then something like Node.js or the Apache HTTP Server to host the processed files.)
grunt-contrib-connect
When you want to start a web server for other grunt tasks to use.
grunt-serve
This is similar to grunt-connect and is what I use in my projects.
I am trying to use cURL to automate some deployment tasks for an ASP.NET application, but I am having some problems with it. The login part works perfectly, but I guess the application has some sort of protection against this kind of tool. The application was developed by an external company (a real crappy app). Basically, every month I have to upload about 10 XML files by hand, which is stupid! I want to automate this with my own script (e.g. Ruby) that calls cURL in the background to make the HTTP requests.
Does anyone know of any problems using cURL with an ASP.NET app? Or maybe I should write my own C# tool for this?
Any ideas?
There's no problem with curl and ASP.NET. curl speaks HTTP and offers many different features and ways to send data, including multipart formpost uploads, etc. You most likely just haven't figured out exactly how to mimic a browser in your command lines.
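As a rough illustration of what "mimicking a browser" involves, here is a sketch that drives curl from a Python script (the question mentions Ruby; the same idea applies): it keeps a cookie jar across the login and the upload, sets a browser-like User-Agent, and sends the file as a multipart form post. The URLs and form field names below are placeholders, and an ASP.NET WebForms page will usually also require hidden fields such as __VIEWSTATE and __EVENTVALIDATION that you would have to scrape from the form before posting.

import subprocess

COOKIES = "cookies.txt"
UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"  # pretend to be a browser

def curl(*args):
    # -b/-c reuse and update the cookie jar, -L follows redirects
    subprocess.run(["curl", "-sS", "-L", "-A", UA,
                    "-b", COOKIES, "-c", COOKIES, *args], check=True)

# 1. Log in, saving the session cookie (field names are placeholders).
curl("-d", "username=me", "-d", "password=secret",
     "https://example.com/app/Login.aspx")

# 2. Upload one XML file as a multipart form post (-F sends multipart/form-data).
curl("-F", "uploadFile=@report-2024-01.xml;type=text/xml",
     "https://example.com/app/Upload.aspx")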
I use Anthill Pro for build and deployment management. I'm attempting to configure Anthill to make an HTTP POST or GET to a web server after a deployment has completed. I'm passing basic information regarding the deployment to a web application.
It looks like writing a Beanshell script is the way to do this in Anthill (I'm open to suggestions), but the documentation does not seem to offer an obvious way to do this.
Can a Beanshell script from Anthill make an HTTP connection to accomplish this, or is there a better way?
Example of the connection I'm trying to make:
http://myServer/myScript/?param1=deployValue1&param2=deployValue2
Any help is appreciated.
Are you just trying to hit a URL and validate that some text on it is there? If so, we have a plugin baking that does this sort of thing. Shoot me a mail at eric@urbancode.com (or file a support ticket) and we can get that over to you.
Background
I develop a web application that lives on an embedded device. In order to make dev times sane, frontend development is done using apache serving static documents, with PHP proxying out to the embedded device for specifically configured dynamic resources. This requires that we keep various server-simulation scripts hanging around in source control, and it requires updating those scripts whenever we add a new dynamic resource.
Problem
I'd like to invert the logic: if the requested document is available in the static documents directory, serve it; otherwise, proxy the request to the embedded device.
Optimally, I want a software package that will do this for me (for Windows or buildable on cygwin). I can deal with forcing apache to do it with PHP, but I'm unsure how to configure it to make it happen. I've looked at squid and privoxy, but neither of them seem to do what I want.
Any ideas? I'd rather not have to roll my own.
I think what you want is Varnish.
It is now available in Cygwin; installation instructions: http://varnish-cache.org/trac/wiki/VarnishOnCygwinWindows
Now that I've looked at varnish, I understand that what I actually want is a special case of a reverse proxy, and that squid can be configured to do what I need. (With the added bonus of having it available as a cygwin package.)
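For what it's worth, the "serve it locally if present, otherwise proxy" logic from the question is small enough to sketch in Python for development use. This is purely illustrative; the device address and port are placeholders, Python 3.7+ is assumed for the directory= argument, and a real reverse proxy like Varnish or Squid handles caching, headers, and edge cases far better:

import functools
import os
import urllib.request
from http.server import SimpleHTTPRequestHandler, HTTPServer

DOCROOT = "static"                # local static documents
DEVICE = "http://192.168.0.50"    # placeholder address of the embedded device

class FallbackProxyHandler(SimpleHTTPRequestHandler):
    def do_GET(self):
        local = os.path.join(DOCROOT, self.path.lstrip("/").split("?")[0])
        if os.path.isfile(local):
            # Present locally: let the stock static-file handler serve it.
            return super().do_GET()
        # Not available locally: forward the request to the device and relay the reply.
        with urllib.request.urlopen(DEVICE + self.path) as upstream:
            body = upstream.read()
            self.send_response(upstream.status)
            self.send_header("Content-Type",
                             upstream.headers.get("Content-Type", "application/octet-stream"))
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

if __name__ == "__main__":
    handler = functools.partial(FallbackProxyHandler, directory=DOCROOT)
    HTTPServer(("", 8000), handler).serve_forever()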
I have a Python script on a Linux server that I can SSH into. I want to run the script on the Linux server (passing it parameters entered by the user) and get the output on an ASP.NET web page running on IIS. How would I be able to do that?
Would it be easier if I were running a WAMP server?
Edit: The servers are in the same internal intranet.
Probably the best approach is the least coupled one. If you can settle on a protocol that you're comfortable having the two sides (ASP.NET and Python) talk over, it will go a long way toward reducing headaches.
Let's say you pick XML.
Set up the Python script to run as a WSGI application with either CherryPy or Apache (or whatever). The script formats its response as XML and hands it to WSGI, which returns the XML over HTTP.
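A minimal sketch of the Python side, using the standard library's wsgiref server for testing (run_my_script is a placeholder for whatever your existing script actually does; CherryPy, mod_wsgi, or similar would host the same application in practice):

from wsgiref.simple_server import make_server
from urllib.parse import parse_qs
from xml.etree import ElementTree as ET

def run_my_script(params):
    # Placeholder for the real work; return a dict of result fields.
    return {"status": "ok", "echo": ",".join(params.get("arg", []))}

def application(environ, start_response):
    # Parameters arrive in the query string, e.g. /?arg=foo&arg=bar
    params = parse_qs(environ.get("QUERY_STRING", ""))
    root = ET.Element("result")
    for key, value in run_my_script(params).items():
        ET.SubElement(root, key).text = value
    body = ET.tostring(root, encoding="utf-8")
    start_response("200 OK", [("Content-Type", "application/xml"),
                              ("Content-Length", str(len(body)))])
    return [body]

if __name__ == "__main__":
    make_server("", 8000, application).serve_forever()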
On the ASP.NET side of things, whenever you want to "run the script" you simply query the URL with the WebRequest class, then parse the results with LINQ-to-XML (which on a side note is a really cool technology).
Here's where this becomes relevant: later on, if either the ASP.NET implementation or the Python implementation changes, you don't have to re-code or refactor the other. And if you later realize that a desktop app needs the same capability as the ASP.NET app, you've already standardized on a protocol, so implementing it should be easy and well supported.