Serving static files using Racket servlets

I'm trying to learn about servers using Racket, and I'm getting caught up on trying to use static assets. From this answer, I was able to include a static stylesheet like so:
#lang racket
(require web-server/servlet
         web-server/servlet-env
         web-server/configuration/responders)

(define (home req)
  (response/xexpr
   '(html
     (head (link ([rel "stylesheet"] [type "text/css"] [href "/style.css"])))
     (body
      (span ([class "emph"]) "Hello, world!")))))

(define-values (dispatch input-url)
  (dispatch-rules
   [("home") home]
   [("style.css") (λ (_) (file-response 200 #"OK" "style.css"))]))

(serve/servlet dispatch
               #:servlet-regexp #rx""
               #:servlet-path "/home"
               #:server-root-path (current-directory))
However, I'm still confused as to how to do this in general, i.e. serving all files in #:extra-files-paths without making a dispatch rule for each of them. I tried Jay's advice and changed the dispatcher order in the definition of serve/servlet by moving the htdocs and extra-files-paths parts up (I probably shouldn't copy that whole thing here), but I somehow broke MIME-type resolution along the way. Overall it was a mess.
So any of these questions would be related/relevant to my problem (from less to more general):
Is there a better way to include static files using tools at the level of serve/servlet?
Can anyone outline specifically how I might rearrange the pieces in serve/servlet without breaking things?
Is there a better place than the docs to learn about how to use the lower level server tools in Racket? (I'm pretty new in this particular area so "learn more about servers" may be a valid response to this question)

It looks to me like the problem is your #:servlet-regexp, which is set to the empty regexp, meaning that it will match anything. One easy solution is to restrict this regexp so that it only matches the non-static assets; then, all of the other requests should be served from the #:extra-files-paths.
Perhaps there's some reason why you need to intercept all requests and handle them in your code? Not sure.
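The effect of that empty regexp can be illustrated outside Racket. A minimal sketch in Python, where `handled_by_servlet` stands in for serve/servlet's dispatch decision (the function name and routes are just illustrative):

```python
import re

# An empty pattern matches every path, so the servlet intercepts everything,
# including requests for static assets like /style.css.
match_everything = re.compile(r"")

# Anchoring the pattern to the dynamic routes lets static paths fall through
# to the file-serving dispatchers (#:extra-files-paths).
dynamic_only = re.compile(r"^/home$")

def handled_by_servlet(pattern, path):
    # serve/servlet hands a request to the servlet when the regexp matches
    return pattern.search(path) is not None
```

With the restricted pattern, /home still reaches the servlet while /style.css falls through to the static-file dispatchers.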

Related

Plone Conditional - If Content Type is Versionable

Is there a simple way to check if a content-type, or a specific object, has Versioning enabled/disabled in Plone (4.3.2)?
For context, I am making some unique conditionals around portal_actions. So instead of checking path('object/@@iterate_control').checkout_allowed(), I need to first see if versioning is even enabled; otherwise, the action in question does not display for items that have versioning disabled, because obviously checkout isn't allowed for them.
I didn't have any luck with good ole Google, and couldn't find this question anywhere here, so I hope it's not a dupe. Thanks!
I was able to get this working by creating a new script, importing getToolByName, and checking current content type against portal_repository.getVersionableContentTypes(). Then just included that script in the conditional.
I was looking for something like this that already existed, so if anyone knows of one let me know. Otherwise, I've got my own now. Thanks again!
The first thing that checkout_allowed does is check if the object in question supports versioning at all:
if not interfaces.IIterateAware.providedBy(context):
    return False
(the interface being plone.app.iterate.interfaces.IIterateAware:
class IIterateAware(Interface):
    """An object that can be used for check-in/check-out operations."""
The semantics of Interface.providedBy(instance) are a bit unfortunate for use in conditions or TAL expressions, because you'd need to import the interface, but there's a reversal helper:
context.portal_interface.objectImplements(context,
    'plone.app.iterate.interfaces.IIterateAware')

Lift Cookbook Avoiding CSS and JavaScript Caching

I am trying to use the data-lift="with-resource-id" attribute on the <link> tag as described in the Lift Cookbook (http://cookbook.liftweb.net/#AvoidAssetCaching) to avoid asset caching in the browser. I've copied the code sample provided in the cookbook and modified it for my environment in order to introduce a random value into the asset path.
My assets are stored in two root directories -- one called "css" and one called "js" for css and javascript respectively.
My code looks like:
import net.liftweb.http._
import net.liftweb.util._

object AssetCacheBuster {
  def init(): Unit = {
    val resourceId = Helpers.nextFuncName
    LiftRules.attachResourceId = (path: String) => {
      val PathRegex = """\/cached(\/css\/|\/js\/)(\S+)""".r
      try {
        val PathRegex(root, rest) = path
        "/cached" + root + resourceId + "/" + rest
      } catch {
        case e: scala.MatchError => path
      }
    }

    // Remove the cached/{resourceId} prefix from the request if there is one
    LiftRules.statelessRewrite.prepend(NamedPF("BrowserCacheAssist") {
      case RewriteRequest(ParsePath("cached" :: "css" :: id :: file :: Nil, suffix, _, _), _, _) =>
        RewriteResponse("css" :: file :: Nil, suffix)
      case RewriteRequest(ParsePath("cached" :: "js" :: id :: file :: Nil, suffix, _, _), _, _) =>
        RewriteResponse("js" :: file :: Nil, suffix)
    })
  }
}
I embed the css files, for example, with a call like:
<link data-lift="with-resource-id" rel="stylesheet" type="text/css" href="/cached/css/standard.css" />
The way I expect it to work is that attachResourceId logic will recognize embedded css files by the path "/cached/css" and inject a unique value in the path. So, for example, /cached/css/standard.css becomes /cached/css/F7017951738702RYSX0/standard.css. By inspecting elements using Chrome, I can see that this is indeed occurring, so I believe this is working as expected.
In the rewriting logic at the bottom, I expect it to look for requests that start with "/cached/css" and remove the /cached and unique-id components. By tracing in the debugger, this too seems to be working: I can see that the resulting URL it is trying to rewrite is "/css/standard.css". And I can verify that if I enter this value in my browser URL bar, that content does indeed get served. Yet the browser shows an error (which I can see via Chrome's console) that the .css file is not found.
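For reference, the two transformations described above can be mirrored in Python (the id value and function names are made up; the regexes follow the Scala code):

```python
import re

RESOURCE_ID = "F7017951738702RYSX0"  # stands in for Helpers.nextFuncName

def attach_resource_id(path):
    # Mirror the attachResourceId regex: insert the id after /cached/css/ or /cached/js/
    m = re.fullmatch(r"/cached(/css/|/js/)(\S+)", path)
    if m is None:
        return path
    return "/cached" + m.group(1) + RESOURCE_ID + "/" + m.group(2)

def strip_resource_id(path):
    # Mirror the statelessRewrite rule: drop /cached and the id segment
    m = re.fullmatch(r"/cached/(css|js)/[^/]+/(.+)", path)
    if m is None:
        return path
    return "/" + m.group(1) + "/" + m.group(2)
```

Round-tripping /cached/css/standard.css through both functions yields /css/standard.css, which is exactly the URL the rewrite hands back into the pipeline.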
Here's what I think you're seeing...
Typically Lift will ignore CSS and JS files, and they will be served by the underlying engine, such as Jetty or Tomcat. That all happens outside of Lift.
In this case, the end result of the rewrite is (as an example) a request for /css/standard.css, which is correct. But that final resource is being resolved inside Lift (because that's where we are -- the re-writes aren't HTTP redirects, so we stay inside the Lift pipeline). And because Lift doesn't serve those files by default, you're seeing a 404.
This is also why /css/standard.css works for you directly in a browser, because Lift is ignoring the request and Tomcat (or similar) is serving the content.
So why does it work in the book example? In that case, the example is for /classpath/jquery.js, and that URL is something Lift knows how to serve (via ResourceServer).
Here's what you can do about it....
I'd say the simple solution is to teach Lift how to serve up these files. You can do that by matching on the path you care about, and streaming back the content:
LiftRules.statelessDispatch.append {
  case Req("css" :: file :: Nil, "css", _) =>
    () => for (in <- LiftRules.getResource("/css/" + file + ".css").map(_.openStream))
      yield {
        StreamingResponse(in, () => in.close, size = -1,
          headers = Nil, cookies = Nil, code = 200)
      }
}
The same applies for the JS files, so you can probably generalise that code a little, or adjust to your needs. There's a chapter on streaming content in the Cookbook.
If that works for you, let me know, and I'll update the book.
As an aside, you might get more eyeballs on the problem if you post to the Lift mailing list. Don't get me wrong: I totally love Stack Overflow, but due to the history of Lift, the mailing list is where you find users and committers looking at questions and problems.
Do you really need this complex logic?
By default, Lift's resources will look like /static/css/example.css?F745187285965AXEHTY=_. In this case, whether you use nginx, Jetty, Tomcat, or embedded Jetty, you'll just see everything working.
The reason is that jetty/tomcat/nginx serve the underlying resource example.css and simply ignore the ?F745187285965AXEHTY=_ query string, while the browser caches the response under the full URL, query string included. So the browser remembers the content of the full css address.
This is a common cache-busting technique, by the way; it's not Lift-specific. Typically, developers update a resource and write HTML like /static/css/example.css?14, where 14 is the virtual version of the resource. This way they don't have to rename the resource itself.
You need to inject the random value as a GET parameter, not a path segment. Changing the path would result in the file not being found (unless you are dynamically writing the css file to the random location every time).
This can be done inline with javascript.
<script>
  document.write("<link rel=\"stylesheet\" type=\"text/css\" href=\"/cached/css/standard.css?" + Math.random() + "\" />");
</script>
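Either way, the query-string approach boils down to appending a version value without touching the path. A minimal helper sketch (the function name is hypothetical):

```python
def versioned_url(path, version):
    # Append the version as a query parameter; the file path itself is
    # unchanged, so the static server still finds the resource.
    sep = "&" if "?" in path else "?"
    return path + sep + "v=" + str(version)
```

Bumping the version changes the full URL the browser caches under, forcing a re-fetch, while the server keeps serving the same file.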

Nested REST Routing

Simple situation: I have a server with thousands of pictures on it. I want to create a restful business layer which will allow me to add tags (categories) to each picture. That's simple. I also want to get lists of pictures that match a single tag. That's simple too. But now I also want to create a method that accepts a list of tags and which will return only pictures that match all these tags. That's a bit more complex, but I can still do that.
The problem is this, however. Say, my rest service is at pictures.example.com, I want to be able to make the following calls:
pictures.example.com/Image/{ID} - Should return a specific image
pictures.example.com/Images - Should return a list of image IDs.
pictures.example.com/Images/{TAG} - Should return a list of image IDs with this tag.
pictures.example.com/Images/{TAG}/{TAG} - Should return a list of image IDs with these tags.
pictures.example.com/Images/{TAG}/{TAG}/{TAG} - Should return a list of image IDs with these tags.
pictures.example.com/Images/{TAG}/{TAG}/{TAG}/{TAG}/{TAG} - Should return a list of image IDs with these tags.
etcetera...
So, how do I set up a RESTful web service projects that will allow me to nest tags like this and still be able to read them all? Without any limitations for the number of tags, although the URL length would be a limit. I might want to have up to 30 tags in a selection and I don't want to set up 30 different routing thingies to get it to work. I want one routing thingie that could technically allow unlimited tags.
Yes, I know there could be other ways to send such a list back and forth, better ones even, but I want to know if this is possible and if it's easy to create. So the URL cannot be different from the above examples.
Must be simple, I think. Just can't come up with a good solution...
The URL structure you choose should be based on whatever is easy to implement with your web framework. I would expect something like:
http://pictures.example.com/images?tags=tag1,tag2,tag3,tag4
is going to be much easier to handle on the server, and I can see no advantage to the path-segment approach that you are having trouble with.
I assume you can figure out how to actually write the SQL or filesystem query to filter by multiple tags. In CherryPy, for example, hooking that up to a URL is as simple as:
class Images:
    @cherrypy.tools.json_out()
    def index(self):
        return [cherrypy.url("/images/" + x.id)
                for x in mylib.images()]
    index.exposed = True

    @cherrypy.tools.json_out()
    def default(self, *tags):
        return [cherrypy.url("/images/" + x.id)
                for x in mylib.images(*tags)]
    default.exposed = True
...where the *tags argument is a tuple of all the /{TAG} path segments the client sends. Other web frameworks will have similar options.
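Whatever URL shape you settle on, the AND-matching across tags reduces to a set-containment check. A sketch with hypothetical data shapes (image id mapped to its tag list):

```python
def filter_by_tags(images, required_tags):
    # Keep only images whose tag set contains every requested tag
    required = set(required_tags)
    return [image_id for image_id, tags in images.items()
            if required <= set(tags)]
```

An empty tag list matches everything, which conveniently doubles as the plain /Images listing.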

How to work with hook_nodeapi after image thumbnail creation with ImageCache

A bit of a followup from a previous question.
As I mentioned in that question, my overall goal is to call a Ruby script after ImageCache does its magic with generating thumbnails and whatnot.
Sebi's suggestion from this question involved using hook_nodeapi.
Sadly, my Drupal knowledge of creating modules and/or hacking into existing modules is pretty limited.
So, for this question:
Should I create my own module or attempt to modify the ImageCache module?
How do I go about getting the generated thumbnail path (from ImageCache) to pass into my Ruby script?
edit
I found this question searching through SO...
Is it possible to do something similar in the _imagecache_cache function that would do what I want?
i.e.
function _imagecache_cache($presetname, $path) {
  ...
  // Check if the derivative exists (the file may have been created between
  // Apache's request handler and reaching this code); otherwise try to
  // create the derivative.
  if (file_exists($dst) || imagecache_build_derivative($preset['actions'], $src, $dst)) {
    imagecache_transfer($dst);
    // call ruby script here
    call('MY RUBY SCRIPT');
  }
}
Don't hack into ImageCache; remember, every time you hack core/contrib modules, god kills a kitten ;)
You should create a module that implements hook_nodeapi. Look at the API documentation to find the correct entry point for your script; nodeapi fires at various stages of the node lifecycle, so you have to pick the correct one for you (it should become clear when you check out the link): http://api.drupal.org/api/function/hook_nodeapi
You won't be able to call the function you've shown because it is private so you'll have to find another route.
You could try to build the path up manually: you should be able to pull out the filename of the uploaded file and then append it to the directory structure. Ugly, but it should work. E.g.
If the uploaded file is called test123.jpg then it should be in /files/imagecache/thumbnails/test123.jpg (or something similar).
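That manual path-building step can be sketched as a small helper (the files directory and preset name here are assumptions; check your actual imagecache preset directory):

```python
def thumbnail_path(files_dir, preset, filename):
    # imagecache writes derivatives under files/imagecache/<preset>/<filename>
    return "/".join([files_dir.rstrip("/"), "imagecache", preset, filename])
```

You'd then hand the resulting path to the Ruby script as an argument.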
Hope it helps.

Are there solutions for streamlining the update of legacy code in multiple places?

I'm working in some old code which was originally designed for handling two different kinds of files. I was recently tasked with adding a new kind of file to this code. Most of my problems were solved by filling out an extensive XML file with a new entry that handled everything from what lists were named to how the file is written in plural lower case. But this ended up being insufficient, as there were maybe 50 different places in 24 different code files where I had to update hardcoded switch-statements that only branched for the original two file types.
Unfortunately there is no consistency in this; there are methods which operate half from the XML file, and half off of hardcode. Some of the files which look like they would operate off of the XML file don't, and some that I would expect that I'd need to update the hardcode don't need it. So the only way to find the majority of these is to run through testing the whole system when only part of it is operational, finding that one step to fix (when I'm lucky that error logging actually tells me what is going on), and then running the whole thing again. This wastes time testing the parts of the code which are already confirmed to work, time better spent testing the new parts I have to add on top of it all.
It's a hassle and a half, and to my luck I can expect that I will have to add yet another new kind of file in the near future.
Are there any solutions out there which can aid in this kind of endeavour? Something which I can input some parameters of current features, document what points in a whole code project actually need to be updated, and run something nice the next time I need to add a new feature to the code. It needn't even be fully automated, something that'll help me navigate straight to the specific points in everything and maybe even record what kind of parameters need to be loaded.
Doubt it matters specifically, but the code is comprised of ASP.NET pages, some ASP.NET controls, hundreds of C# code files, and a handful of additional XML files. It's all currently in a couple big Visual Studio 2008 projects.
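Short of a dedicated tool, the inventory step can at least be scripted. A rough Python sketch that lists candidate branch points across a source tree (the file extensions and search pattern are assumptions you'd adapt):

```python
import os
import re

def find_switch_sites(root, marker, exts=(".cs", ".aspx", ".ascx")):
    # Walk the source tree and record (path, line number, line text) for
    # every line matching the marker regex, e.g. a hardcoded file-type switch.
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if not name.endswith(exts):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8", errors="ignore") as fh:
                for lineno, line in enumerate(fh, 1):
                    if re.search(marker, line):
                        hits.append((path, lineno, line.strip()))
    return hits
```

Running it once per hardcoded marker (e.g. switch (fileType), magic type ids) gives a checklist to work through up front, instead of discovering sites one failed test at a time.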
Not exactly what you are describing, but if you can introduce a seam into the code and lay down some interfaces you can break out and mock, a suite of unit/integration tests would go a long way to helping you modify old code you may not fully understand well.
I completely agree with the comment about using Michael Feathers' book to learn how to wedge new tests into legacy code. I'd also strongly recommend Refactoring, by Martin Fowler. What it sounds like you need to do for your code is to implement the "Replace conditionals with polymorphism" refactoring.
I imagine your code today looks somewhat like this:
if (filetype == 23)
{
    type23parser.parse(file);
}
else if (filetype == 69)
{
    filestore = type69reader.read(file);
    File newfile = convertFSto23(filestore);
    type23parser.parse(newfile);
}
What you want to do is to abstract away all the "if (type == foo)" kinds of logic into strategy patterns that are created in a factory.
class FileRules
{
private:
    FileReaderRules *pReader;
    FileParserRules *pParser;
public:
    FileRules() : pReader(NULL), pParser(NULL) {}
    void read(File* inFile) { pReader->read(inFile); }
    void parse(File* inFile) { pParser->parse(inFile); }
};

class FileRulesFactory
{
    FileRules* GetRules(int inputFiletype, int parserType)
    {
        switch (inputFiletype)
        {
        case 23:
            pReader = new ASCIIReader;
            break;
        case 69:
            pReader = new EBCDICReader;
            break;
        }
        switch (parserType)
        ... etc...
then your main line of code looks like this:
FileRules* rules = FileRulesFactory.GetRules(filetype, parsertype);
rules->read(file);
rules->parse(file);
Pull off this refactoring, and adding a new set of file types, parsers, readers, etc., becomes as simple as writing one exclusive to your new type.
Of course, go read the book. I vastly oversimplified it here, and probably got stuff wrong, but you should get the general idea of how to approach it from this. I can also recommend another book, "Head First Design Patterns", which has a great section on the Factory patterns (if you like those "Head First" kinds of books.)
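The same "replace conditionals with polymorphism" move in a dynamic language needs no switch at all; a registry sketch in Python (class and key names are hypothetical):

```python
class ParserRegistry:
    """Map file-type keys to factories so adding a type is one registration."""

    def __init__(self):
        self._factories = {}

    def register(self, file_type, factory):
        # factory is any zero-argument callable producing a parser
        self._factories[file_type] = factory

    def create(self, file_type):
        try:
            return self._factories[file_type]()
        except KeyError:
            raise ValueError("no parser registered for %r" % file_type)
```

New file types then touch one registration line plus their own classes, never the dispatch logic.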
