Basic CGI with Lua

I need a very light web solution to run on a Linux appliance to handle HTML forms, so I intend to use uWSGI and Lua.
The article I'm following uses this code in the CGI script:
print ("Content-type: Text/html\n")
print ("Hello, world!")
However, this works too:
print("Status: 200 OK\r\n\r\nHello, world!\r\n")
I'd like to know what a CGI script is actually required to return to the web server.
Thank you.

You don't need a header at all; the only thing you really need is the blank line that ends the headers and starts the body:
print ("\nHello, World")
should work as well.
However, you should at least include the Content-Type header with a character set. Browsers default to ISO-8859-1 (and the user may override even that), so use UTF-8 to avoid being restricted in which characters you can display:
print("Content-type: text/html; charset=utf-8")
Also, since you're programming an appliance, you probably want to avoid caching, so I'd also send:
print("Cache-control: no-cache")
print("Pragma: no-cache")
which prevents browsers and proxies from caching your page.
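Putting those pieces together, a complete script then looks something like this minimal sketch (the HTML body is just a placeholder):
-- Headers: print() terminates each header line with a newline
print("Content-type: text/html; charset=utf-8")
print("Cache-control: no-cache")
print("Pragma: no-cache")
-- A blank line ends the headers and starts the body
print()
print("<!DOCTYPE html>")
print("<html><body><p>Hello, world!</p></body></html>")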

You still need to process HTML form data (say, POST data). You can use the power of Ajax forms (a JavaScript library) to have your POST data sent as JSON, which your Lua script can then read back from standard input with io.read, as sketched below.
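For illustration, here is a minimal sketch of the reading side, assuming a standard CGI environment (the POST body arrives on stdin, its size in the CONTENT_LENGTH environment variable). The reading part is the same whether the payload is JSON or a plain form; the decoding below assumes an application/x-www-form-urlencoded form, and the name field is a hypothetical example:
-- Read the raw POST body from stdin; CGI passes its size in CONTENT_LENGTH
local length = tonumber(os.getenv("CONTENT_LENGTH")) or 0
local body = io.read(length) or ""

-- Decode an application/x-www-form-urlencoded string
local function urldecode(s)
  s = s:gsub("+", " ")
  return (s:gsub("%%(%x%x)", function(hex)
    return string.char(tonumber(hex, 16))
  end))
end

-- Split the body into key/value pairs
local fields = {}
for key, value in body:gmatch("([^&=]+)=([^&]*)") do
  fields[urldecode(key)] = urldecode(value)
end

print("Content-type: text/plain; charset=utf-8")
print()
print("You sent: " .. (fields["name"] or "nothing"))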

Related

Documentation for Rebol2's read/custom?

I've been trying to update Ross-Gill's Twitter API for REBOL2 to support uploading media. From looking at its source, the REBOL cookbook, the codeconscious site, and other questions here, my understanding is that read/custom is the preferred way to POST data to websites.
However, I haven't been able to find any real documentation on read/custom. For example: Does it support sending multipart/form-data? (I've managed to work around this by manually composing each part, but it doesn't seem to work for all image files on Twitter's end and is a bit of a hack). Does read/custom only return text on an HTTP/1.0 200 OK response? (It appears so, which is problematic when I receive HTTP/1.0 202 Accepted and need to read the resulting data). Is there a reason that read/custom/binary doesn't appear to send binary data correctly without converting the data using to-string?
TL;DR: Is there good documentation on REBOL2's read/custom somewhere? Alternatively, is read/custom only meant for basic POSTs and I should be using ports and handling the HTTP responses manually?
You guessed right: read/custom is meant for simple HTTP POSTs, handling web form data only (that is why it fails on binary data). There is no official documentation for it, but that is not an issue, as you can access the source code of the HTTP implementation:
probe system/schemes/HTTP
There you can see that the /custom refinement supports two keywords, post and header (for setting custom HTTP headers). It also appears that even if you use both keywords, Content-Type will be forced to application/x-www-form-urlencoded no matter what (which is probably why your binary data gets rejected by the server, as the provided MIME type is wrong).
In order to work around that, you can save the HTTP object, modify its implementation to fit your needs and reload it.
Saving:
save %http-scheme.r system/schemes/HTTP
Reloading:
system/schemes/HTTP: do load %http-scheme.r
If you just disable the hard-coded Content-Type setting in the HTTP code, and then provide your own one using header keyword, it should work fine, even with binary data:
read/custom <url> [header [Content-Type: <...>] post <data>]
Hope this helps.

multipart/mixed support in Netty

By browsing the source code and playing with some toy examples, I came to the conclusion that Netty currently (as of 5.0.0 alpha2) supports only multipart/form-data, but not multipart/mixed, at least not as specified in RFC 1341 (sec. 7.2). It looks like mixed is supported inside a part of multipart/form-data, though.
Is that really the case or am I missing something?
Since I got the very same question, I post here what could be the beginning of an answer...
However, the current implementation seems to have 2 limitations:
1) It supports only multipart/form-data. I would like to also be able to use multipart/mixed, which is very similar on the wire (see http://www.w3.org/Protocols/rfc1341/7_2_Multipart.html). I think that the encoder/decoder could be extended to understand multipart/mixed and still create the same kinds of HttpDatas.
Yes, the current codec is focused on multipart/form-data. It should be possible to extend it, or to propose a new one (probably based on it), to enable support for multipart/mixed.
The current codec was made based on user needs (mine in the beginning, others following). Since no one had yet requested support for multipart/mixed, it was not coded, except for the internal multipart/mixed handling.
The reference is RFC1867.
As Netty loves contributions, you are more than welcome to propose yours ;-)
2) It seems that it is only possible to use efficient HttpDatas like FileUpload if you are using multipart/form-data. I would like to be able to add a FileUpload to the request and in this way make the contents of the file the body of the request, without making it a multipart request. I think this could be done by extending the standard post encoder to understand FileUploads.
This could be a bit more complicated, since it would have to be done without multipart, which is currently where the FileUpload class lives.
Maybe a good direction would be to switch to ChunkedFile or ChunkedNioFile and combine it with your HttpCodec or your HttpHandler when writing the request body, in order to pass the content through the ChunkedFile; a rough sketch follows.
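For example, here is a rough sketch of that direction, assuming a Netty 4.1-style client pipeline that already contains an HttpRequestEncoder and a ChunkedWriteHandler (the channel, URI, and file path are placeholders):
import io.netty.channel.Channel;
import io.netty.handler.codec.http.*;
import io.netty.handler.stream.ChunkedFile;
import java.io.File;
import java.io.IOException;

// Send a file's contents as the raw (non-multipart) body of a POST.
// HttpChunkedInput wraps the file chunks as HttpContent objects, and the
// ChunkedWriteHandler in the pipeline streams them without loading the
// whole file into memory.
void postFile(Channel channel) throws IOException {
    File file = new File("/tmp/upload.bin");

    HttpRequest request = new DefaultHttpRequest(
            HttpVersion.HTTP_1_1, HttpMethod.POST, "/upload");
    request.headers().set(HttpHeaderNames.CONTENT_TYPE, "application/octet-stream");
    request.headers().set(HttpHeaderNames.CONTENT_LENGTH, file.length());

    channel.write(request);
    channel.writeAndFlush(new HttpChunkedInput(new ChunkedFile(file)));
}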
Hoping this helps you in the right direction...

How could ASP server-side code corrupt a smart quote ’?

My company just converted many columns from varchar to nvarchar.
Now it seems that when we render a smart quote (i.e. ALT+0146, ’) to the screen and then send it back to the SQL Server 2000 database for persistence, the smart quote gets corrupted into ’.
My Question:
How could ASP server-side code corrupt a smart quote (’)?
EDIT: It appears that my question is similar to this one. Incidentally, Powerpoint content introduced the smart quote into the mix. However as I said before, I'm dealing with an ASP page, whereas the referenced question pertains to a PHP page.
EDIT: The server-side directive CODEPAGE=65001 makes the page render correctly, but it still posts content as 'Western European' on a Windows 2000 box. Does anyone know why?
It looks like something is doing an implicit conversion between ANSI and Unicode (and choosing the wrong code page in the process). You may need to do the conversion manually and supply the correct code page. It's hard to say without seeing the code.
Take a look at this:
http://support.microsoft.com/kb/232580
You may want to set your Code Page in the ASP so you don't get hokey characters.
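For instance, a minimal sketch of what that can look like at the top of a classic ASP page (65001 is the UTF-8 code page; CODEPAGE controls how the page's own literals are interpreted, while the Response properties control the output encoding and the charset advertised to the client):
<%@ LANGUAGE="VBScript" CODEPAGE=65001 %>
<%
' Emit the response as UTF-8 and advertise it in the Content-Type header
Response.CodePage = 65001
Response.Charset = "utf-8"
%>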
While you do need to tell the server which encoding to use, have you told the client what the page encoding is? If not, the client will happily post in whatever encoding the user last explicitly chose, or the system default encoding, which is likely to be Western European on most US or Western European machines.
In your html, do you have something like this in your <head> ?
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
You can also ask the server to send this explicitly in your Response.Headers. Although I think it's a good idea to send it in the HTTP headers, it's helpful to include it in the HTML as well for people who decide to save the document locally for whatever reason.
VBScript might mangle Unicode characters, especially on older versions of IIS (i.e. IIS 5.0 on Windows 2000).
In my case, a For Each construct was to blame.
Here's some example code that executes after a POST:
Response.Write Request.Form("selOptions")(0) ' A-OK! - Displays Unicode characters fine!
For Each sOption in Request.Form("selOptions")
    Response.Write sOption ' Bad! Unicode characters are mangled!
Next
As always, your mileage may vary.

ASP.NET response filtering and post-cache substitution are not compatible

According to this article, http://support.microsoft.com/kb/2014472, you can't use response filters and Substitution controls together. Has anyone found a workaround for this? I am trying to process the complete HTML response just before it's written to the client, and I use Substitution controls widely.
Here's an official "answer" from MS Dev Support on this issue.
Question:
What is the alternative to response filtering in ASP.NET for modifying HTML rendered by another process when:
1. The other process cannot be modified
2. Post-cache substitution must be supported
Answer:
"Yes, you question is clear as blue sky and this is officially claimed to be not support. As Post-cache substitution would combine certain substitution chunks to the response bytes while response filtering expects to filter the raw bytes of the response(not modified). So the previously combined substitution chunks cannot be preserved anymore.
There is not an alternative from Microsoft so far."
The page you reference has the solution:
Disable output caching on pages that are using substitution blocks.
Edit
Possible solution:
Create master pages for all the non-dynamic content and cache that; don't cache the changing content.

J2ME HttpConnection: which one is better, GET or POST?

In J2ME, which request method is better, GET or POST? Which one is faster? Which uses less bandwidth? Which is supported by most handsets? What are the advantages and disadvantages of each?
Also, see "Is there a limit to the length of a GET request?", which may be relevant if you plan to abuse GET.
Be aware that network operators (certainly in the UK) have caching schemes in place that may affect your traffic.
If you look at what Opera Mini does, they only use HTTP POST in their HTTP mode.
I think this is a great idea because of the following reasons:
POSTs are never cached (according to the HTTP spec, at least) - this saves you from operator caching etc.
It seems some operators handle POSTs better than GETs - this is a feeling I get from what some Nigerian users mention.
Opera Mini most probably has the most installations of any J2ME app in the world, and if they do it, it's probably safer.
No problems with HTTP GET limits on query length.
You can use a more flexible data format if you like, one that uses less data (no URL encoding needed on the data, as with GET).
I think it's much cleaner, but it does require some extra work; e.g., if you are using your HTTP web logs to count requests per "?type=blah", you'll have to move that into your site's logic.
If you follow the standards, GET should be used only for data retrieval and POST for submitting new data. Which one is faster or slower depends on the server-side handler implementation.
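For reference, here is a minimal sketch of a POST over HttpConnection (the URL and form data are placeholders, and error handling is trimmed):
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;

// Minimal sketch of an HTTP POST from a MIDlet
void post() throws IOException {
    HttpConnection conn = (HttpConnection) Connector.open("http://example.com/submit");
    conn.setRequestMethod(HttpConnection.POST);
    conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");

    OutputStream out = conn.openOutputStream();
    out.write("q=hello".getBytes());
    out.close();

    // Asking for the response code is what actually sends the request
    int status = conn.getResponseCode();   // e.g. HttpConnection.HTTP_OK
    InputStream in = conn.openInputStream();
    // ... read the response here ...
    in.close();
    conn.close();
}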
