Is it possible to use the h5ai "pretty" index UI on a CDN? I'm using DreamHost's DreamObjects and have it installed correctly (I've used it before on standard hosting sites), but I'm only getting an XML listing of the data back.
See it here: https://randassets.objects.cdn.dream.io/
Any thoughts? Thanks!
I guess what you would like to see is a pretty-looking list of files and directories on a web page, like a file browser to explore the content of your DreamObjects bucket. If that's the case, h5ai won't work because, from what I understand, h5ai natively speaks neither the S3 API nor the OpenStack Swift one. h5ai relies on a web server and a PHP interpreter, neither of which is provided by DreamObjects.
Maybe if you expand on your use case I can suggest other tools you could use to browse your collection of files, something like ownCloud (and more specifically how to configure DreamObjects with ownCloud) or others.
We have a similar problem to that of the asker of this question -- after upgrading from log4j-2.17.1 to 2.17.2, the application, though otherwise working, is not logging anything.
Having read the release notes, I see the following part:
By default, the only remote protocol allowed for loading configuration files is HTTPS.
Users can specify a system property to allow others or prevent remote loading entirely.
Indeed, in our case the log4j2.xml is downloaded via regular (non-encrypted) HTTP, which likely explains our problem (as well as the other poster's). However, try as I might, I cannot find how to re-enable loading over plain HTTP. Which system property now controls this capability?
Thanks!
The system property you are looking for is called log4j2.Configuration.allowedProtocols (cf. documentation) and should contain a comma-separated list of URL schemes (e.g. "http,https").
You can set it using any available property source (e.g. a log4j2.component.properties file or a Java system property).
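For example, with a log4j2.component.properties file on the classpath (the equivalent -D flag works too; "your-app.jar" is a placeholder):

    # log4j2.component.properties
    log4j2.Configuration.allowedProtocols = http,https

or as a JVM argument:

    java -Dlog4j2.Configuration.allowedProtocols=http,https -jar your-app.jar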
We have quite a large MVC4 application and we would like to have Selenium go through every page and make sure it loads - some sort of smoke test.
I can use reflection to go through the assembly, find all controllers and all actions, skip POST-only actions, and come up with parameters for the actions that require them.
Then I'll feed this list to Selenium and check that everything I need on the pages is done appropriately.
But before I start playing with reflection, I'd like to check whether this has already been done, so I don't reinvent the wheel. I have googled for such a thing but could not find anything.
P.S. Writing the reflection code is not an issue; Selenium is covered as well. I'm just checking whether this has already been done.
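For concreteness, the kind of enumeration I mean is sketched below (a rough sketch only; it assumes default {controller}/{action} routing, and the type names are mine):

    // Sketch: enumerate GET-able MVC actions in an assembly via reflection.
    // Parameterized actions still need argument values supplied separately.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Reflection;
    using System.Web.Mvc;

    static class SmokeTestUrls
    {
        public static IEnumerable<string> List(Assembly assembly)
        {
            // Find all concrete controller types.
            var controllers = assembly.GetTypes()
                .Where(t => typeof(Controller).IsAssignableFrom(t) && !t.IsAbstract);

            foreach (var controller in controllers)
            {
                // Public instance methods returning an ActionResult, minus POST-only ones.
                var actions = controller
                    .GetMethods(BindingFlags.Public | BindingFlags.Instance | BindingFlags.DeclaredOnly)
                    .Where(m => typeof(ActionResult).IsAssignableFrom(m.ReturnType))
                    .Where(m => !m.GetCustomAttributes(typeof(HttpPostAttribute), true).Any());

                foreach (var action in actions)
                    yield return "/" + controller.Name.Replace("Controller", "") + "/" + action.Name;
            }
        }
    }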
The AttributeRouting project has a route debugger in place, which does work even if you don't use attribute routing inside your project.
You can see the class that handles displaying the routes over on GitHub, but I'm not sure it will display the routing information when the project isn't run locally. You may need to adapt that code so you can access it safely from your Selenium instance (and make it machine-readable using JSON or something similar).
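If you go the adaptation route, a minimal version is a throwaway endpoint that dumps the route table as JSON for the Selenium run to consume (the controller name is hypothetical; don't leave it enabled in production):

    // Sketch: expose registered route URL patterns as JSON.
    using System.Linq;
    using System.Web.Mvc;
    using System.Web.Routing;

    public class RouteDumpController : Controller
    {
        public ActionResult Index()
        {
            var urls = RouteTable.Routes
                .OfType<Route>()     // custom RouteBase implementations are skipped
                .Select(r => r.Url)  // the pattern, e.g. "{controller}/{action}/{id}"
                .ToList();

            return Json(urls, JsonRequestBehavior.AllowGet);
        }
    }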
We are hosting a huge app for our customers. There are different configurations and contents (images, user files), but the core code, directory structure, and database schema are the same for every client.
I'm looking for a way to create one core code repository that all clients will use. We do updates often, so this would make our lives easier.
The idea is to create the repo and, in each client's directory, create just symbolic links to that repo's directories: bin, App_Resources, Css, SystemImages, etc.
Is this a good idea? Will an ASP.NET MVC app handle this correctly, or do I have to add some code for it to handle the 'virtual directories'?
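For example, on a Windows host the links could be created like this (all paths are hypothetical):

    rem Directory symbolic links from one client's site to the shared core
    mklink /D C:\sites\clientA\bin C:\core\bin
    mklink /D C:\sites\clientA\App_Resources C:\core\App_Resources
    mklink /D C:\sites\clientA\Css C:\core\Css
    mklink /D C:\sites\clientA\SystemImages C:\core\SystemImages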
I would suggest that you take a look at single-tenant and multi-tenant applications, even if you say that your code base is the same for everyone.
Here is a nice Multi-Tenancy ASP.NET example
I would also suggest that you check http://appHarbour.com as you can easily push changes from your master repository to appHarbour using Git or Mercurial.
Regarding your exact question: I also keep static files in a custom scheme under Amazon S3, so each client can upload their own files alongside the ones I provide, and everything is served from a single location that doesn't consume extra resources just to deliver static files.
You can see my live web application using this technique by checking its page source.
We are creating REST web services using ASP.NET and OpenRasta.
Is there any tool that could help us:
create a WADL file
and/or create human-readable API documentation describing the resources, the HTTP methods supported for each resource, etc.?
Looks like REST Describe & Compile should do the trick.
On the WADL developer site, Marc Hadley maintains a command line tool named WADL2Java. The ambitious goal of REST Describe & Compile is to provide a sort of WADL2Anything. So what REST Describe & Compile does is that it:
Generates new WADL files in a completely interactive way.
Lets you upload and edit existing WADL files.
Allows you to compile WADL files to source code in various programming languages.
For OpenRasta, it'd be possible to use a UriDecorator to have help-like URIs defined for your resources (such as /myResource$help). You can then rewrite the URI before parsing to something you can document easily: parse the URI, find the resource type, and rewrite to /help/{resourceType}.
From there you register a resource for your help system:
ResourceSpace.Has.ResourcesOfType<ResourceDocumentation>()  // hypothetical resource type
    .AtUri("/help/{resourceType}")
    .HandledBy<HelpHandler>()                               // hypothetical handler
    .RenderedByXxx();                                       // substitute your renderer/codec of choice
Then you can create your handler to return the documentation about a resource. You could, for example, use the IOperationCreator service to know which HTTP methods are available and with what input arguments, and use the ICodecRepository to see what media types may be accepted as input, and potentially what a media-type serialization would look like by calling the codec and generating an HTML-friendly view of it.
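A bare-bones shape for such a handler might look like this (a sketch under assumed names: HelpHandler and ResourceDocumentation are hypothetical, and the actual lookups against the injected services are left as a comment because they depend on your OpenRasta version):

    // Sketch only: handler behind /help/{resourceType}.
    using OpenRasta.Codecs;
    using OpenRasta.OperationModel;

    public class HelpHandler
    {
        readonly IOperationCreator _operations;  // query available HTTP methods/inputs
        readonly ICodecRepository _codecs;       // query supported media types

        public HelpHandler(IOperationCreator operations, ICodecRepository codecs)
        {
            _operations = operations;
            _codecs = codecs;
        }

        public ResourceDocumentation Get(string resourceType)
        {
            // Use _operations and _codecs here to build the real documentation.
            return new ResourceDocumentation { ResourceType = resourceType };
        }
    }

    public class ResourceDocumentation
    {
        public string ResourceType { get; set; }
    }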
That's definitely an area we're going to work on for the next version.
Due to the lack of a clientaccesspolicy.xml, there appear to be problems with using Amazon S3 via Flex. Are there any workarounds?
Edit: Both of the answers below are great and work; I've upvoted both (I'm not going to accept one answer, as they both work).
You can CNAME a subdomain you control to Amazon S3 (pointing it at a bucket with the same name as the subdomain), like so:
http://s3.ceejayoz.com/ (goes to my 's3.ceejayoz.com' bucket)
Uploading your own clientaccesspolicy.xml file to the root of that bucket (and setting the permissions to be globally viewable) should do the trick, if I'm understanding your question correctly, as it will be accessible at http://s3.ceejayoz.com/clientaccesspolicy.xml.
More information in the S3 docs: http://docs.amazonwebservices.com/AmazonS3/2006-03-01/index.html?VirtualHosting.html
Edit: From looking at that, you could also use the "Example Virtual Hosted Style Method" without a CNAME: http://bucketname.s3.amazonaws.com/clientaccesspolicy.xml
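For reference, a wide-open clientaccesspolicy.xml looks something like this (the "*" rules are for illustration only; scope them down before using in production):

    <?xml version="1.0" encoding="utf-8"?>
    <access-policy>
      <cross-domain-access>
        <policy>
          <allow-from http-request-headers="*">
            <domain uri="*"/>
          </allow-from>
          <grant-to>
            <resource path="/" include-subpaths="true"/>
          </grant-to>
        </policy>
      </cross-domain-access>
    </access-policy>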
Yes. You can find an ActionScript 3 library for connecting to S3 at:
http://code.google.com/p/as3awss3lib/
with more information at:
http://weblogs.macromedia.com/cantrell/archives/2007/05/actionscript_li.html
You can look at an example of the API in use by checking out the S3E AIR app at:
http://download.macromedia.com/pub/developer/air/sample_apps/S3E.air
and you can grab the source code from:
http://download.macromedia.com/pub/developer/air/sample_apps/S3E.zip
hope that helps...
mike chambers
If you're looking for a working Rails - Flex - S3 example then have a look at this rails project: http://github.com/GreenAsJade/s3-swf-upload-plugin
It's documented and works out of the box. You can even reverse-engineer the Flex logic.