In my current project I need to use a strong PBE encryption algorithm, as requested by the client. For that to work, I would need to install the JCE Unlimited Strength policy files on every machine I deploy to. Given the number of machines, that is NOT a viable option if this is to remain a "copy-and-run" deployment.
I found the question "How to avoid installing 'Unlimited Strength' JCE policy files when deploying an application?".
The solution provided there, which uses reflection to override the JCE validations, works perfectly, but only on Java 7 or above.
However, the entire project targets Java 6 (we have already tried to upgrade it, without success), so the fields used in the reflection solution are not even present.
I'm currently using Jasypt + BouncyCastle for a StandardPBEStringEncryptor, with PBEWITHSHA256AND256BITAES-CBC-BC.
Is there a way to bypass the JCE restriction on Java 6 using reflection (or any other method that does not involve patching the JVM or obtaining international government approval)?
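For reference, the encryptor is currently set up roughly like this (a minimal sketch; the password is a placeholder, and without the unlimited-strength policy files the 256-bit key is rejected at runtime):

    import org.bouncycastle.jce.provider.BouncyCastleProvider;
    import org.jasypt.encryption.pbe.StandardPBEStringEncryptor;

    public class PbeExample {
        public static void main(String[] args) {
            StandardPBEStringEncryptor encryptor = new StandardPBEStringEncryptor();
            // BouncyCastle supplies the PBEWITHSHA256AND256BITAES-CBC-BC algorithm
            encryptor.setProvider(new BouncyCastleProvider());
            encryptor.setAlgorithm("PBEWITHSHA256AND256BITAES-CBC-BC");
            encryptor.setPassword("change-me"); // placeholder password
            String cipherText = encryptor.encrypt("some sensitive value");
            System.out.println(cipherText + " -> " + encryptor.decrypt(cipherText));
        }
    }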
Can anyone explain Java callouts? A little help will do. Actually, I have several doubts about where to add the expressions and message-flow JAR and where to add my custom JAR.
Can I access the resources/java folder directly, and can I use it to store my data?
First, check the Apigee docs, "Customize an API using Java":
http://apigee.com/docs/api-services/content/customize-api-using-java
Keep in mind that Java callouts are only supported in the paid Apigee Edge product, not the free Developer platform.
As you decide how to use Java, you should consider this basic hierarchy of policy management:
Policy configuration first: Apigee's standard policy configurations are in broad use, tested daily by clients, and the most performant option.
JavaScript callout: for things you can't do in a standard policy, there is JavaScript. Keep in mind this is "compiled JavaScript": when you deploy your project, the JS is interpreted by the Java Rhino engine and then runs like native code. Very fast, very scalable, and very easy to manage, since your code is all in plain text files.
Java: you have to have a pretty compelling reason to use Java. The most common cases are complex connections that need to be negotiated with custom encryption schemes, or manipulating binary content. While performant, it's the most difficult code to manage (you upload compiled JARs, so if someone takes over your work, the source code lives in a separate place from your deployment bundle), and it's the most difficult to debug in the event of a failure.
To your specific question: all Apigee variables are available in Java, and Java gives you pretty much god-like powers on the local server where the code is executed. Keep in mind that Apigee's physical architecture is distributed: your JAR may run on different servers for different API calls, so any persistent data (that you might want to store locally) should really be put into a Key Value Map and read as needed. Keep your API development as stateless as possible.
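To make that concrete, a Java callout is just a class implementing Apigee's Execution interface. A minimal sketch (the class name and flow variables below are placeholders, not taken from the docs):

    import com.apigee.flow.execution.Execution;
    import com.apigee.flow.execution.ExecutionContext;
    import com.apigee.flow.execution.ExecutionResult;
    import com.apigee.flow.message.MessageContext;

    public class ExampleCallout implements Execution {
        public ExecutionResult execute(MessageContext messageContext, ExecutionContext executionContext) {
            try {
                // Read a flow variable set earlier in the proxy flow
                String clientId = (String) messageContext.getVariable("request.queryparam.client_id");
                // Write a variable that later policies can read
                messageContext.setVariable("example.greeting", "hello " + clientId);
                return ExecutionResult.SUCCESS;
            } catch (Exception e) {
                return ExecutionResult.ABORT;
            }
        }
    }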
Hope that helps.
I want to encrypt the files that are uploaded by users of a web application.
The files need to be encrypted and decrypted individually.
Are there advantages to encrypting the files with an external tool rather than inside the application?
For example, calling gpg, crypt, or 7zip (or any other tool) immediately after a file is uploaded.
Upon a retrieval request, call the tool again to decrypt, then serve the file.
I thought this might have performance advantages, and that encryption could be outsourced to a potentially more robust and better-trusted application than the library available in the programming language.
Launching an external tool creates a new process every time, which can hurt scalability. There are libraries as respectable as the standalone tools, some of them built from the same codebase.
First of all, you shouldn't implement your own crypto. That said, the alternatives don't look that different to me. Surely you can use GPG either in-process (called via an API) or out-of-process (with parameters passed on the command line). Then the considerations come down to the usual engineering ones of performance, robustness, etc and really have nothing in particular to do with cryptography.
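If you do go the out-of-process route, the call amounts to launching the tool once per file. A rough sketch in Java (the gpg flags and file names are illustrative and vary by GnuPG version and configuration):

    import java.io.File;
    import java.io.IOException;

    public class GpgEncryptor {
        // Symmetrically encrypts 'input' to 'output', reading the passphrase from a file.
        public static void encrypt(File input, File output, File passphraseFile)
                throws IOException, InterruptedException {
            ProcessBuilder pb = new ProcessBuilder(
                    "gpg", "--batch", "--yes",
                    "--passphrase-file", passphraseFile.getAbsolutePath(),
                    "--output", output.getAbsolutePath(),
                    "--symmetric", input.getAbsolutePath());
            pb.redirectErrorStream(true);
            Process p = pb.start();
            if (p.waitFor() != 0) {
                throw new IOException("gpg exited with code " + p.exitValue());
            }
        }
    }

Each call pays the cost of spawning a process, which is the scalability concern mentioned above; an in-process library avoids that overhead.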
There is a lot of action in the CSS/JS bundling+minification space with MVC4 and things like Cassette, but I can't seem to find one that supports uploading to a CDN natively.
We use Rackspace Cloud Files and it requires that we upload (via their API no less) our assets directly - it doesn't do an origin-pull.
Right now, we have an MSBuild script that does this for us, but it is very difficult to maintain and work with.
If you could map a drive, I think RequestReduce MIGHT get you what you want out of the box. It performs bundling and minification at runtime and provides configuration options that let you specify the drop location of generated assets as any UNC path. The intent of this config is web-farm scenarios with a dedicated share for static assets, but I'm wondering if it might work for you. It also exposes an interface that lets you essentially take over the process of saving and retrieving assets from any durable store. It comes with a local-disk store, and there is a SQL Server store provided as a separate NuGet package; others have proposed writing stores for Azure blob storage or Amazon S3. It's a bit involved but not too horrible. At any rate, it's free, it provides background image spriting and optimization (which few others provide), and there is another NuGet package that adds Less/Sass/CoffeeScript compiling. It's used by Microsoft on a lot of the MSDN/TechNet properties.
I run the project and would be happy to answer any questions via the Github Issues page.
I am about to write a tender. The solution might be a PHP based CMS. Later I might want to integrate an ASP.NET framework and make it look like one site.
What features would make this relatively easy?
Would OpenId and similar make a difference?
In the PHP world, Joomla is supposed to be more integration-friendly than Drupal. What are the important differences here?
Are there specific frameworks in ASP.NET, Python, or Ruby that are more open to integration than others?
The most important thing is going to be putting as much of the look-and-feel as possible into a format that can be shared by all the platforms. That means you should develop a standard set of CSS files and (X)HTML files which can be imported (or directly presented) by any of those platform options. Think of it as writing a dynamic library that can be loaded by different programs.
Using OpenID for authentication, if all of your platform options support it, would be nice, but remember that each platform is going to require additional user metadata be stored for each user (preferences, last login, permissions/roles, etc) which you'll still have to wrangle between them. OpenID only solves the authentication problem, not the authorization or preferences problems.
Lastly, since there are so many options, I would stick to cross-platform solutions. That will leave you the most options going forward. There's no compelling advantage IMHO to using ASP.NET if there's a chance you may one day integrate with other systems or move to another system.
I think the most important thing is to choose the right server. The server needs to have the appropriate modules. Apache would be a good choice, as it supports everything you want, including mod_aspnet (which I haven't tested, but many people say it works).
If you think ASP.NET integration is certainly going to come, I would choose Windows as the OS, as it will certainly make things easier.
You could also install a reverse proxy that decides which server renders the content based on the request: if the user requests an .aspx page, the proxy connects to IIS on the Windows side; if the request is for PHP, it connects to the other server. The problem with this approach is shared memory and state, which could be solved with careful design, such as a shared database holding all state information and model data.
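As a rough illustration of that routing idea, an Apache mod_proxy configuration could look something like this (host names are placeholders, and the exact rules would depend on your URL scheme):

    # Route .aspx requests to an IIS backend, everything else to the PHP server.
    # Requires mod_proxy and mod_proxy_http.
    ProxyPreserveHost On
    ProxyPassMatch "^/(.*\.aspx)$" "http://iis-backend.internal/$1"
    ProxyPass        "/" "http://php-backend.internal/"
    ProxyPassReverse "/" "http://php-backend.internal/"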
OpenID doesn't make a difference - there are libs for any framework you choose.
I have been using two third-party components for PDF document generation (in .NET, but I think this is a platform-independent topic). I will leave the companies' names out of it for now, but I will say they are not extremely well-known vendors.
I have found that both products make undocumented use of the filesystem (i.e., putting temp files on disk). This has created a problem in my ASP.NET web application, as I now have to identify the file locations and set permissions on them as appropriate. Since my web application is set up for impersonation using Windows authentication, this essentially means I have to assign write permissions to a few file locations on my web server.
Not that big a deal once I figured out why the components were failing, but... I see this as a maintenance issue. What happens when we upgrade our servers to an OS that changes one of the temporary file locations? What happens if the vendor decides to change the temporary file location? Our application will "break" without a single line of our code changing. Relatedly, if we have to stand this application up on a "fresh" machine (regardless of environment), we have to know about this issue and set permissions appropriately.
Unfortunately, the components do not provide a way to make this temporary file path "configurable", which would certainly at least make it more explicit about what is going on under the covers.
This isn't really a question that I need answered, but more of a kick off for conversation about whether what these component vendors are doing is appropriate, how this should be documented/communicated to users, etc.
Thoughts? Opinions? Comments?
First, I'd ask whether these PDF generation tools are designed to be run within ASP.NET apps. Do they make claims that this is something they support? If so, then they should provide documentation on how they use the file system and what permissions they need.
If not, then you're probably using an inappropriate tool set. I've been there and done that. I worked on a project where a "well known address lookup tool" was used, but the version we used was designed for desktop apps. As such, it wasn't written to cope with hundreds of requests, many of them simultaneous, and it caused all sorts of hard-to-reproduce errors.
Commonplace? Yes. Appropriate? Usually not.
Temp files are one of the appropriate uses, IMHO, as long as the component uses the proper %TEMP% folder or, even better, the built-in Path.GetTempPath/Path.GetTempFileName functions.
In an ideal world, each third-party component would come with a Code Access Security description, listing in detail what is needed (and for what purpose), but CAS is possibly one of the most ignored features of .NET...
Writing temporary files would not be considered outside the normal functioning of any piece of software. Unless it is writing temp files to a really bizarre place, this seems more likely something they never thought to document than something done to cause you trouble. I would simply contact the vendor, explain what you are doing, and ask if they can provide documentation.
Also, Martin makes a good point about whether it is an app designed to run under ASP.NET or a desktop app.