gwan/csp/strangesubfolder/inc.c can be visited via http://domainName.com/strangesubfolder/?inc
This servlet mapping feels strange to me, but it suits my needs. I can't find the mapping described in the G-WAN user's manual.
Please correct me if I am wrong, and confirm whether this is the expected behavior.
Yes, it is a standard feature.
The '?' tells G-WAN that it is a servlet. If there's no '?', it will look for the file in the www folder.
Update:
Now I understand your confusion.
Since release 3.3.27 this has been changed so users can easily build RESTful URLs.
See the G-WAN timeline and read the update for March 27, 2012.
You now place the '?' before the actual servlet name. This lets G-WAN efficiently rewrite '/' to '&', so you can use RESTful URLs like these without writing any code:
// Old way
http://domain/?user/profile&user1
http://domain/?blog/archive&2012&march
// New way (more RESTful, no '&')
http://domain/user/?profile/user1
http://domain/blog/?archive/2012/march
Yes, as Richard rightly (and promptly, thanks Richard!) explained, this is the expected behavior.
The directory /gwan/.../csp/script.c is used to store servlets that must be run, while /gwan/.../www/script.c is used to store files intended to be served as static HTTP resources.
The corresponding URLs are GET /?script.c and GET /script.c.
Any sub-directory used in the /csp or /www folders is reflected accordingly in the HTTP request: GET /folder/?script.c for dynamic contents and GET /folder/script.c for static contents.
The choice of moving the '?' query character (which can be replaced by other characters) from the old GET /csp?/folder/script.c form to the new GET /folder/?script.c form was motivated by the need to:
distinguish servlet names from folder names (requests can omit the servlet extension for the 'default' programming language, which is C unless another default is defined)
allow any number of sub-directories in HTTP requests
allow any number of query arguments in HTTP queries
distinguish between folders and query arguments in HTTP requests
make it possible to have RESTful requests in all the above cases.
It took us a while to find the right mix of features with minimal verbosity, but experience has shown that this works well.
Here is an example of a RESTful query having both a sub-folder and query arguments:
GET /folder/?script/arg1/value1/arg2/value2/arg3/value3
By default, this is a C script, unless another language (among the 15 available for scripting) has been defined as the 'default' language.
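For illustration only, here is a minimal sketch of what such a servlet could look like. The file path is hypothetical, and it assumes the gwan.h servlet API used by the bundled samples (query arguments passed in argv[], the reply built with xbuf_cat()/xbuf_xcat() on get_reply()):
// Hypothetical /gwan/.../csp/folder/script.c - a sketch, not an official sample.
#include "gwan.h" // G-WAN servlet API (xbuf_t, get_reply, xbuf_cat, xbuf_xcat)

int main(int argc, char *argv[])
{
   xbuf_t *reply = get_reply(argv); // buffer holding the HTTP response body
   int i;
   xbuf_cat(reply, "<pre>");
   // For GET /folder/?script/arg1/value1/arg2/value2/arg3/value3 the '/'
   // separators are rewritten to '&', so each token lands in its own argv[] slot.
   for(i = 0; i < argc; i++)
      xbuf_xcat(reply, "argv[%d] = '%s'\n", i, argv[i]);
   xbuf_cat(reply, "</pre>");
   return 200; // HTTP status code sent to the client
}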
Note that the 50+ script examples provided in the download archive illustrate this scheme which is also presented on the developers page.
Related
web.xml's <error-page> lets a developer specify what to return to the client in case of an error (either an HTTP status or a Java exception).
But I have two different 404 error pages, one per locale.
My web application is structured so that all resources for locale A are under the path /a/, and resources for locale B are under /b/.
I'd like to have a localized error page for 404 when trying to access pages under each locale (to be clear, trying to access /a/some-undefined-resource should return 404 + an error page localized for locale A).
Given other limitations, it is not really possible to deploy two separate applications (a.war and b.war), one for each locale.
How can I serve an error page that depends on original resource requested?
I ended up giving up on the idea of using <error-page> to serve my SPA, and used the urlrewrite filter to rewrite URLs to either /a/index.html or /b/index.html.
The downside is that now I have two places to edit when I add new routes inside my SPA:
the Angular routes (app-routing.module.ts), and
urlrewrite.xml.
Besides, if I add a new locale in the future, I'll need to add a new set of rules to cover all my routes.
Also, I don't know how this filter will impact my site's performance. Since it is a low-traffic project, I'll keep it this way until I find a better solution.
There is an upside too: those otherwise-legitimate requests are now served with status 200, while genuinely missing resources are signaled with a 404 and no default content included.
I am trying to migrate a legacy system to use Artifactory. However, I have two blockers:
the old scripts require PyPI XML-RPC, which Artifactory doesn't support
they also make use of upload_docs, which Artifactory's PyPI implementation doesn't support either
a smaller issue: the old scripts call register and expect a 200 instead of a 204 HTTP status code.
Would it be possible for me to write a plugin to implement this?
Looking at https://www.jfrog.com/confluence/display/RTF/User+Plugins I couldn't find a callback for when POST /api/pypi/<index-name> is requested.
If I can make that work for the methods we actually use (just pretending it deployed the docs and responding with the correct status code), I will be happy enough.
As you say, there is no plugin hook for the PyPI API endpoints. It would be possible to use the altResponse endpoint to customize artifact downloads, but then you would be restricted to GET requests with no request body, which is also not a good option for you.
I think the most viable approach would be to define a custom executions endpoint. With this, you can specify the acceptable method, read the body, and set your own response code and body. The main shortcoming with this is that you can't customize the path (it's always /api/plugins/execute/[execution_name]), but this can be worked around.
Execution endpoints can take params in the following form:
/api/plugins/execute/[execution_name]?params=[param_name]=[param_val]
Say your plugin takes a param path, which represents the API path your old scripts are going to call. Then you can set your base URL to /api/plugins/execute/[execution_name]?params=path=/, so that the API path is appended to the param. Alternatively, you can use nginx or another reverse proxy to rewrite the original API path to this form.
(Since you'll be using XML-RPC, I don't suppose you'll need to worry about any of this path stuff, but I'm including it anyway for completeness.)
Some issues with this approach:
Execution endpoints only allow String responses, so sending binary data in the response body might be finicky. However, no such limitation exists with the request body.
If you need more than one request method, you'll need more than one execution endpoint. This means you'll need to use a reverse proxy to rewrite each method to a separate endpoint. Again, since XML-RPC just uses POST, this probably won't be an issue for you.
Execution endpoints can't customize response headers. Therefore, if your scripts expect a particular Content-Type or other header, you'll need to use a reverse proxy to insert it into the response.
Assume that I have an HTTP service which serves some content, and I want to place a CDN in front of it to serve cached content. The issue is that the URL can take parameters, and multiple parameter values map to one result file. Will the CDN be efficient in this case? Will the CDN cache a different copy of the file for each of the query strings that map to the same file?
For example:
http://myservice.com/getlogfile?time=10000
to
http://myservice.com/getlogfile?time=19999
all of the above map to log.1
The Akamai cache configuration can be altered to adhere to specified rules. If you would like one file to be cached for all query strings, simply set your Akamai configuration to ignore query strings. The Akamai Knowledge Base, found within the Akamai control panel, contains a document called 'Edge Server Configuration Guide' that explains this configuration in detail. Start with the section called 'Ignore Query Strings' (page 104).
While looking at the code in "petclinic", part of the Spring 3.0 samples, I noticed the following lines:
<c:choose>
<c:when test="${owner.new}"><c:set var="method" value="post"/></c:when>
<c:otherwise><c:set var="method" value="put"/></c:otherwise>
</c:choose>
In this discussion at SO it seems that PUT should be used for "create/update" and POST for "updates".
Which is right?
What is the impact of using post for "create" and put for "update"?
Note: According to the HTTP/1.1 spec quoted in the referenced SO discussion, the code given above seems to have the correct behavior.
Both POST and PUT have well-defined behavior as per the HTTP spec.
The result of a POST request should be a new resource that is subordinate to the request URL; the response should contain a Location header with the URL of the newly created resource.
The result of a PUT should be an update of the resource at the request URL. If there is no existing resource at the request URL, a new one can be created.
The confusion arises from the fact that POST is also used with forms as a mechanism to pass the form data. The most common implementation of forms is to post back to the same URL at which the form page is located, giving the false impression that the POST operation is used for an update. However, in this particular usage, the form page is not the resource.
With all this in mind, here's the correct (in my opinion of course :-)) usage:
POST should be used to create new resources when:
- the new resource is subordinate to an existing resource
- the resource identity/URL is not known at creation time
PUT should be used to update existing resources with a well-known URL. It can be used to create a resource at a well-known URL as well; however, it helps to think about that scenario in a different way: if the resource URL is known before the PUT request is made, it can be treated as if the resource at that location already existed but was empty.
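As a rough illustration of that usage (the example.com URLs and the "articles" resource below are made up, not taken from petclinic), a client could create a record with POST against a collection URL and update it with PUT against its own, well-known URL. A libcurl sketch:
// put_vs_post.c - illustrative sketch only; URLs and fields are hypothetical.
// Build with: cc put_vs_post.c -lcurl
#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *h = curl_easy_init();
    if(!h) return 1;

    // POST to the collection: "create something from this data";
    // the server picks the new URL and should answer 201 + Location.
    curl_easy_setopt(h, CURLOPT_URL, "http://example.com/articles");
    curl_easy_setopt(h, CURLOPT_POSTFIELDS, "title=hello&body=first+draft");
    curl_easy_perform(h);

    // PUT to the resource's own URL: "store this representation exactly here";
    // idempotent, so it can safely be retried after a network failure.
    curl_easy_setopt(h, CURLOPT_URL, "http://example.com/articles/42");
    curl_easy_setopt(h, CURLOPT_CUSTOMREQUEST, "PUT");
    curl_easy_setopt(h, CURLOPT_POSTFIELDS, "title=hello&body=final+text");
    curl_easy_perform(h);

    curl_easy_cleanup(h);
    curl_global_cleanup();
    return 0;
}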
It's quite simple:
POST allows anything to happen, and it isn't restricted to creating "subordinate" resources, but allows the client to "provide a block of data ... to a data-handling process" (RFC 2616 sec 9.5). POST means "Here's that data you asked for just now".
PUT is used as an opposite of GET. The usual flow is that you GET a resource, modify it somehow, and then you PUT it back at the same URI that you got it from. PUT means "Please store this file at this URI".
The uniformity of PUT (which is to store a file) allows intermediaries (e.g. caches) to invalidate any cached responses they might have at that exact URI (since they know that it's about to change). The uniformity of PUT also allows clients (that understand this) to modify a resource by first retrieving it (GET) and then send a modified copy back (PUT). It also allows clients to retry on a network failure, due to PUT's idempotency.
Side note: Using PUT to create resources is dubious. While it's possible within the spec, I don't see it as a good idea, just as using POST to perform searches isn't a good idea, just as tunneling SOAP over HTTP isn't a good idea. AtomPub explicitly states that PUT isn't used to create atom entries.
POST's ubiquity comes from the fact that HTML defines <form> elements that result in POSTing an application/x-www-form-urlencoded entity, with which the recipient can do anything it pleases, including:
creating subordinate resources (usually signaled by a 201 response and a Location header)
creating a completely different resource (again, usually a 201 response and a Location header)
creating many subordinate and/or unrelated resources (perhaps with a simple response indicating the URIs of the created resources)
doing nothing except returning a response (e.g. 200 or 302), a case where perhaps GET should have been used
modifying the resource that received the POST itself (returning or redirecting back to the updated resource).
deleting one or more resources.
any combination of the above.
The only one who knows what will happen in a POST request is the user who initiated the request (by clicking the huge "yes I confirm deleting my Facebook profile" button) and the server that's handling the request. To the rest of the world, the request is opaque and doesn't mean anything other than "this URI is being passed some data".
So the answer to your question is that both POST and PUT can be used for both create and update.
POST is often used to create resources (like AtomPub 9.2)
PUT semantics fit well for modifying resources (like AtomPub 9.3)
POST may be used to modify resources (like a web form that edits your profile)
PUT can technically be used to create resources (although I advise against it)
I have a situation where we're aggregating what amounts to marketing data from N clients, where a client can host an HTML form using any backend of their choice, with the action of each form pointing to a path that we're hosting. Each client has a different URL, there's no auth (but there is some simple validation of the data), and it's all generally working just fine.
However, there's one small wrinkle that I can't seem to get my head around.
The .aspx page that processes the submitted data resides at a path, let's call it ~/submit/default.aspx. The idea is that we should be able to hand our partner a URL along the lines of "http://sample.com/submit/?foo=bar" as the action of their form. Doing this, however, results in an HTTP 405 error, "Resource not allowed".
Setting the action of the form to "http://sample.com/submit/default.aspx", however, works just fine.
Default.aspx is set as one of the default document names in IIS 6.
The .aspx file extension is properly mapped to the correct .NET DLL and has the verbs GET, HEAD, POST, and DEBUG activated for the mapping.
Those were the only two things I could think of to double-check first; does anyone else have any ideas? I'd have preferred to use URL rewriting/routing with IIS 7, but that's unfortunately not an option. I also have a number of additional requirements where "clean" URLs will be highly preferable, so solving this is going to be a pretty core problem to get through.
IIRC, IIS will only use the default documents if the requested resource is a directory. Since the requested resource in the first case is not, it'll never make it through the default-document handlers, and instead fails on a POST to an unregistered script extension (405).
It may depend on the document type of "http://sample.com/submit/?foo=bar". If your IIS doesn't know how to handle the document type being returned to it (which it then returns to you, the client), you may get an HTTP 405 error, which means the server doesn't know how to handle that document type. Maybe try registering an HTTP handler in the web.config file that drives the app. HTTP handlers are modular pieces of code, written and compiled in a .NET language, and act as kind of a 'servlet', if you're familiar with Java terms: a piece of code that writes something out to the client, in your case maybe a rendering of a .doc file, found programmatically in your handler class.
Something like
<httpHandlers>
  <!-- map the verb and path you need to your handler class -->
  <add verb="*" path="*.doc" type="your.class.to.handle.doc.files"/>
</httpHandlers>
is what should be in your web.config file.