This is more of a question than a code problem; I have been trying to read about this for a while and have not found an answer. Does Amazon S3 automatically encrypt data when it is uploaded, or do we have to encrypt the data before we upload it to S3? If we have to encrypt it ourselves, can anyone recommend which gem to use and how?
It is not my experience that all files uploaded to S3 are encrypted by default.
Certainly this is not the case with Paperclip 3.3.1, as the S3 web console shows 'Server Side Encryption: None' for a document uploaded by this version with default attachment options.
However, Paperclip does support adding the x-amz-server-side-encryption header to the upload request via the s3_server_side_encryption option.
has_attached_file :file, s3_permissions: :private,
s3_server_side_encryption: :aes256
This should result in the desired behavior, but it doesn't. Until pull request 1398 is merged (which will make the above work as expected), the following works by setting the header manually:
has_attached_file :file, s3_permissions: :private,
s3_headers: { "x-amz-server-side-encryption" => "AES256" }
I confirmed that this second configuration results in 'Server Side Encryption: AES-256' being shown in the web console. I also confirmed that the first snippet works when using the fork referenced in the pull request.
I added a wiki page for this, as documentation was lacking.
Paperclip Wiki Document on Encryption
AWS SSE Doc
Originating Paperclip Issue
All S3 content is encrypted by default. As an additional security measure you can also encrypt it yourself, client-side. You can use, for example, https://github.com/tcnijmeijer/aws-s3-cse for client-side encryption.
Here is the Amazon S3 documentation covering client-side encryption: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingClientSideEncryption.html
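If you do want to encrypt before uploading, the idea is simply to run the file through a cipher locally and only send the ciphertext to S3. Below is a minimal sketch of that approach in Python (the question asks about Ruby gems, where the aws-s3-cse library linked above plays the equivalent role), assuming the cryptography and boto3 packages and a hypothetical bucket name:

import boto3
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # keep this key somewhere safe; S3 never sees it
cipher = Fernet(key)

with open("report.pdf", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

s3 = boto3.client("s3")
s3.put_object(Bucket="my-bucket", Key="report.pdf.enc", Body=ciphertext)

# Reading it back: download the ciphertext and decrypt it with the same key.
obj = s3.get_object(Bucket="my-bucket", Key="report.pdf.enc")
plaintext = cipher.decrypt(obj["Body"].read())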
What is the use of the X-Mx-ReqToken HTTP header? Almost all tutorials on enabling CORS in nginx whitelist the X-Mx-ReqToken header, but I can't find any information on the purpose of the header.
X-Mx-ReqToken is a header used to supply a profiler key for Mendix Runtime:
Member Data Documentation
final String com.mendix.systemwideinterfaces.core.IProfiler.PROFILER_KEY = "X-Mx-ReqToken" [static]
Basically, this is just a case of copying & pasting the same snippet over and over again until it made it to every nginx CORS tutorial out there. But unless you actually use said profiler, it's safe to omit.
I have a file in a folder and I want to send it to my embedded Linux device via FTP (much like this and this and this). I know the steps to do it, but I'm failing when it comes to creating the correct QUrl: when I call put, I always get error 301:
QNetworkReply::ProtocolUnknownError 301 the Network Access API cannot honor the request because the protocol is not known
As for details, I want to save the file in a specific directory located on an SD card in the device, /media/mmcblk0p2/bin, and the connection doesn't have, at least for now, a password and user name defined¹. It's also worth noting that I'm not able to connect via FTP from a terminal; it always says "421 Service not available, remote server has closed connection", which is not the same problem AFAIK. (By the way, I am able to connect via SSH using FileZilla, so it's not a hardware/physical problem.)
So where is the problem? I have exactly the same code as in the links mentioned above. For now, the URL I'm using is
ftp://10.1.25.10/media/mmcblk0p2/bin/center.png
(when printing the QUrl object with QDebug) and I'm not able to make it work.
Any help would be appreciated.
¹: By the way, I remember reading somewhere that when no user name is supplied for an FTP connection, the server only allows the client access to the /ftp folder. Is that true? In that case, would just calling QUrl::setUserName("root"); suffice?
I finally discovered my problem: since I was copying code from examples that upload to HTTP servers, I was using the "post" function, which is specific to HTTP, instead of "put", which was the correct function to use.
Regarding the QUrl, I used QUrl urlTemp("//10.1.25.10/test.info"); and told it to use FTP as the scheme with urlTemp.setScheme("ftp");.
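For reference, a minimal sketch of that fix, shown with PyQt5 for brevity (the C++ calls are the same); it assumes Qt 5, where QNetworkAccessManager still handles the ftp:// scheme, and reuses the host and path from the question:

import sys
from PyQt5.QtCore import QCoreApplication, QUrl
from PyQt5.QtNetwork import QNetworkAccessManager, QNetworkRequest

app = QCoreApplication(sys.argv)

url = QUrl("//10.1.25.10/media/mmcblk0p2/bin/center.png")
url.setScheme("ftp")       # without this, the scheme is unknown -> ProtocolUnknownError (301)
url.setUserName("root")    # optional, depending on the server's anonymous-login rules

with open("center.png", "rb") as f:
    data = f.read()

manager = QNetworkAccessManager()
# put(), not post(): post() is an HTTP verb and does not work for ftp:// uploads
reply = manager.put(QNetworkRequest(url), data)
reply.finished.connect(app.quit)

app.exec_()
print("upload finished, error code:", reply.error())   # 0 (NoError) on success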
I am working with a historic API which grants access via a key/secret combo, which the original API designer specified should be passed as the user name & password in an HTTP Basic auth header, e.g.:
curl -u api_key:api_secret http://api.example.com/....
Now that our API client base is going to be growing, we're looking to use 3scale to handle authentication, rate limiting and other functions. As per 3scale's instructions and advice, we'll be using an Nginx proxy in front of our API server, which authenticates against 3scale's services to handle all the access control systems.
We'll be exporting our existing clients' keys and secrets into 3scale and keeping the two systems in sync. We need our existing app to continue to receive the key & secret in the existing manner, as some of the returned data is client-specific. However, I need to find a way of converting that HTTP basic auth request, which 3scale doesn't natively support as an authentication method, into rewritten custom headers which they do.
I've been able to set up the proxy using the Nginx and Lua configs that 3scale configures for you. This allows the -u key:secret to be passed through to our server, and correctly processed. At the moment, though, I need to additionally add the same authentication information either as query params or custom headers, so that 3scale can manage the access.
I want my Nginx proxy to handle that for me, so that users provide one set of auth details, in the pre-existing manner, and 3scale can also pick it up.
In a language I know, e.g., Ruby, I can decode the HTTP_AUTHORIZATION header, pick out the Base64-encoded portion, and decode it to find the key & secret components that have been supplied. But I'm an Nginx newbie, and don't know how to achieve the same within Nginx (I also don't know if 3scale's supplied Lua script can/will be part of a solution)...
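For context, the decoding I mean is nothing more than base64 over the Authorization header value; here is an illustration in Python, with a hypothetical header value such as curl -u api_key:api_secret would send:

import base64

authorization = "Basic YXBpX2tleTphcGlfc2VjcmV0"   # hypothetical value from curl -u api_key:api_secret

scheme, encoded = authorization.split(" ", 1)
assert scheme == "Basic"
api_key, api_secret = base64.b64decode(encoded).decode("utf-8").split(":", 1)
print(api_key, api_secret)   # -> api_key api_secret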
Reusing the HTTP Authorization header for the 3scale keys can be supported with a small tweak to your Nginx configuration files. As you rightly point out, the Lua script that you download is the place to do this.
However, I would suggest a slightly different approach regarding the keys that you import to 3scale. Instead of using the app_id/app_key authentication pattern, you could use the user_key mode (which is a single key). Then what you would import to 3scale for each application would be the base64 string of api_key+api_secret combined.
This way the changes you need to make to the configuration files will be fewer and simpler.
The steps you will need to follow are:
in your 3scale admin portal, set the authentication mode to API key (https://support.3scale.net/howtos/api-configuration/authentication-patterns)
go to the proxy configuration screen (where you set your API backend and mappings, and where you download the Nginx files).
under "Authentication Settings", set the location of the credentials to HTTP headers.
download the Nginx config files and open the Lua script
find the following line (should be towards the end of the file):
local parameters = get_auth_params("headers", string.split(ngx.var.request, " ")[1] )
replace it with:
local parameters = get_auth_params("basicauth", string.split(ngx.var.request, " ")[1] )
finally, within the same file, replace the entire "get_auth_params" function with the one in this gist: https://gist.github.com/vdel26/9050170
I hope this approach suits your needs. You can also contact us at support#3scale.net if you need more help.
Does HTTP PUT have advantages over HTTP POST, particularly for file uploads? The data transfer needs to be highly secure. Your ideas/guidance on this will be of great help.
PUT is designed for file uploads more so than POST, which requires doing a multipart upload; but beyond that it comes down to what your server can do and which method is more convenient for you to implement.
Whichever HTTP method you use, you'll be transmitting data in the clear unless you secure the connection using SSL.
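To make the practical difference concrete, here is a rough sketch using Python's requests library and a hypothetical endpoint; PUT sends the raw file body to the resource URL, while POST typically wraps it in a multipart form:

import requests

with open("photo.jpg", "rb") as f:
    # PUT: the request body *is* the file; the URL names the resource being created/updated
    requests.put("https://api.example.com/files/photo.jpg", data=f)

with open("photo.jpg", "rb") as f:
    # POST: the file travels in a multipart/form-data envelope; the server decides where it ends up
    requests.post("https://api.example.com/files", files={"file": f})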
I think the choice of PUT vs. POST should be based more on the rule:
PUT to a URL should be used to update or create the resource that can be located at that URL.
POST to a URL should be used to update or create a resource which is located at some other ("subordinate") URL, or is not locatable via http.
Any choices regarding security should work equally well with both PUT and POST. HTTPS is a good start; if you are building a REST API, then keys, authorisation, authentication and message signing are worth investigating.
Does HTTP PUT have advantages over HTTP POST, particularly for File Uploads?
You can use standard tools for sending the data (i.e. ones that don't have to be aware of your custom scheme for describing where the file should be uploaded to or how to represent that file). For example, OpenOffice.org includes WebDAV support.
Data transfer should be highly secure
The method you use has nothing to do with that. For security use SSL in combination with some form of authentication and authorization.
I'm trying to utilize the Amazon Product Advertising API. They provided me with a .wsdl file which I consumed and generated wrapper classes for via Visual Studio 2008's "Add Service Reference" option. This wrapper class works just fine as is and I've been successfully sending requests and receiving responses from Amazon.
However, they are now requiring that all partners start authenticating their requests. They have provided me with two .pem files (one which they call my X.509 certificate file, and one which they call my private key file). I'm not entirely sure what to do with these files. Amazon states the following:
Each SOAP request must be signed with the private key associated with the X.509 certificate. To create the signature, you sign the Timestamp element, and if you're using WS-Addressing, we recommend you also sign the Action header element. In addition, you can optionally sign the Body and the To header element
I realize that much more information may need to be provided here, so please let me know if I need to provide further detail in order to get an answer to this question.
Check out this article: http://www.byteblocks.com/post/2009/06/15/Secure-Amazon-Web-Service-Request.aspx
Looks like it should help you out.
Other links that might help:
1) http://developer.amazonwebservices.com/connect/thread.jspa?messageID=132705
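If it helps to see what the signing requirement looks like outside of .NET, here is a rough sketch using Python's zeep SOAP library (which relies on python-xmlsec for WS-Security signing); the file names and WSDL URL are placeholders, not Amazon's actual endpoints:

from zeep import Client
from zeep.wsse.signature import Signature

# cert.pem is the X.509 certificate from Amazon, key.pem the matching private key
wsse = Signature("key.pem", "cert.pem")

client = Client("https://example.com/service.wsdl", wsse=wsse)
# Calls made through client.service now carry a WS-Security signature header.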