How to download an encrypted attachment in Corda?

How do I download an encrypted attachment in Corda? I have uploaded a file to Corda, encrypted it, and got the hash. How do I download the same attachment from another node?

Suppose you have uploaded a JAR as the attachment and it contains a single file (e.g. a legal agreement PDF); you can extract it as below:
// Get the attachment JAR from the node for the given attachment hash.
val attachmentJar: InputStream = cordaRPCOps.openAttachment(attachmentHash)
// Read the contents of the JAR to get the file name and data.
var fileNameAndData: Pair<String, ByteArray>? = null
JarInputStream(attachmentJar).use { jar ->
    while (true) {
        val entry = jar.nextEntry ?: break
        if (entry.isDirectory) continue
        fileNameAndData = Pair(entry.name, jar.readBytes())
    }
}

If an attachment is referenced by hash in a transaction that node A sends to node B, and node B has never seen the attachment corresponding to that hash, it will automatically request the attachment from node A and cache it locally for future reference.

private fun downloadAttachment(proxy: CordaRPCOps, attachmentHash: SecureHash): JarInputStream {
    val attachmentDownloadInputStream = proxy.openAttachment(attachmentHash)
    return JarInputStream(attachmentDownloadInputStream)
}

Related

Uploading files to Blob & getting error: C:\Program Files (x86)\IIS Express\

I tried to upload a file to blob storage, but I'm getting an error like this:
"'C:\Program Files (x86)\IIS
Express\Nominative-Officers-Entry-Form-Stu.docx'."
I don't use HttpPostedFileBase in my code. I just pass an object with the files to be uploaded to my controller. Please tell me what I'm doing wrong.
I also want to know what this line means:
"blockBlob.Properties.ContentType = "
This is my code:
public static SaveResponses CreateFile(Blob_Storage_Header docDetails)
{
    string storageConnectionString = ConfigurationManager.ConnectionStrings["StorageConnectionString"].ConnectionString.ToString();
    CloudStorageAccount storageAccount = CloudStorageAccount.Parse(storageConnectionString);
    CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobClient.GetContainerReference("test2");
    ICollection<Blob_Storage_Details> BlobStorageDetails = docDetails.Blob_Storage_Details;
    if (BlobStorageDetails.Count > 0)
    {
        foreach (Blob_Storage_Details item in BlobStorageDetails)
        {
            string DocUUID = Guid.NewGuid().ToString();
            CloudBlockBlob blockBlob = container.GetBlockBlobReference(DocUUID + item.Blob_Name);
            var fileName = Path.GetFileName(item.Blob_Name);
            //blockBlob.Properties.ContentType = item.ContentType;
            // Create or overwrite the blob with contents from a local file.
            using (var fileStream = File.OpenRead(fileName))
            {
                blockBlob.UploadFromStream(fileStream);
            }
        }
    }
    SaveResponses saveResponse = new SaveResponses();
    saveResponse.saveStatus = "true";
    saveResponse.messageType = "success";
    saveResponse.message = "File Create message";
    return (saveResponse);
}
"'C:\Program Files (x86)\IIS Express\Nominative-Officers-Entry-Form-Stu.docx'."
Based on your code, I ran it on my side and got the same error.
Remark: in a development environment you can add <customErrors mode="Off"/> within the system.web node of your Web.config file to view detailed errors.
According to this error, I checked and found that the specified file path did not exist.
After some trials, I fixed the issue on my side. Please check your code against the points below to see whether it works:
a) Pay attention to Path.GetFileName: it returns only the file name and extension of the specified path string (e.g. settings.job).
b) Make sure that the filename used in File.OpenRead(fileName) is an absolute file path and that the file exists in your environment, for example:
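Here is a minimal sketch of point b), assuming your files are staged in a known folder (the C:\Uploads path is a placeholder):
// Hypothetical fix: resolve the blob name against a known staging folder so that
// File.OpenRead receives an absolute path. "C:\Uploads" is a placeholder.
string uploadsFolder = @"C:\Uploads";
string absolutePath = Path.Combine(uploadsFolder, Path.GetFileName(item.Blob_Name));
if (File.Exists(absolutePath))
{
    using (var fileStream = File.OpenRead(absolutePath))
    {
        blockBlob.UploadFromStream(fileStream);
    }
}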
"blockBlob.Properties.ContentType = "
The Azure Storage Client Library for .NET is based on the Storage Service REST API.
From the official documentation we can find that blockBlob.Properties.ContentType represents the MIME content type of the blob; the default type is application/octet-stream.
MIME is a way to identify files on the internet according to their nature and format. For example, using the "Content-Type" header value defined in an HTTP response, the browser can open the file with the proper extension/plugin. For more details about MIME, please refer to this link: http://www.freeformatter.com/mime-types-list.html
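For example, restoring the commented-out line from your code sets the type explicitly before the upload (a small sketch; that the file is a PDF is just an assumption here):
// The default is application/octet-stream; set the MIME type explicitly if you
// know it, so browsers handle the downloaded blob correctly.
blockBlob.Properties.ContentType = "application/pdf"; // assumed: the upload is a PDF
blockBlob.UploadFromStream(fileStream);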

How do I upload a file to an Acumatica Screen through HTTP virtual path?

How do I upload a file to an Acumatica Screen through HTTP virtual path?
For example, I would like to upload mysite.com/files/abc.pdf to the Sales orders screen.
Below is a code snippet to achieve your goal. It reads a file from an HTTP URL and attaches it to an existing Case.
// Graph for file management
PX.SM.UploadFileMaintenance filegraph = PXGraph.CreateInstance<PX.SM.UploadFileMaintenance>();
// Since you need the file from an HTTP URL - below is a sample
WebRequest request = WebRequest.Create("http://www.pdf995.com/samples/pdf.pdf");
using (System.IO.Stream dataStream = request.GetResponse().GetResponseStream())
{
    using (System.IO.MemoryStream mStream = new System.IO.MemoryStream())
    {
        dataStream.CopyTo(mStream);
        byte[] data = mStream.ToArray();
        // Create file info; you may check different overloads as per your need
        PX.SM.FileInfo fileinfo = new PX.SM.FileInfo("case.pdf", null, data);
        if (filegraph.SaveFile(fileinfo))
        {
            if (fileinfo.UID.HasValue)
            {
                // To attach the file to the Case screen - example
                CRCaseMaint graphCase = PXGraph.CreateInstance<CRCaseMaint>();
                // Locate the existing case
                graphCase.Case.Current = graphCase.Case.Search<CRCase.caseCD>("<Case to which you want to attach file>");
                // Attach the file
                PXNoteAttribute.SetFileNotes(graphCase.Case.Cache, graphCase.Case.Current, fileinfo.UID.Value);
                // Attach a note
                PXNoteAttribute.SetNote(graphCase.Case.Cache, graphCase.Case.Current, "<Note you wish to specify>");
                // Save the case
                graphCase.Save.Press();
            }
        }
    }
}

Cannot upload large (>50MB) files to SharePoint 2010 document library

I'm trying to upload a large file to a document library, but it fails after just a few seconds. The single-document upload fails silently; the multiple-document upload just shows a failed message. I've raised the file size limit on the web application to 500MB, and the IIS request length to the same (from this blog), and increased the IIS timeout for good measure. Are there any other size caps that I've missed?
Update: I've tried a few files of various sizes; anything 50MB or over fails, so I assume something somewhere is still set to the web application default.
Update 2: I just tried uploading using the following PowerShell:
$web = Get-SPWeb http://{site address}
$folder = $web.GetFolder("Site Documents")
$file = Get-Item "C:\mydoc.txt" # ~150MB
$folder.Files.Add("SiteDocuments/mydoc.txt", $file.OpenRead(), $false)
and get this exception:
Exception calling "Add" with "3" argument(s): "<nativehr>0x80070003</nativehr><nativestack></nativestack>There is no file with URL 'http://{site address}/SiteDocuments/mydoc.txt' in this Web."
which strikes me as odd, as of course the file wouldn't exist until it has been uploaded? N.B. while the document library has the name Site Documents, it has the URL SiteDocuments. Not sure why...
Are you sure you updated the right web app? Is the file type blocked by the server? Is there adequate space in your content database? I would check the ULS logs after that and see if there is another error, since it seems you have hit the three spots you would need to update.
For uploading a large file, you can use the PUT method instead of the other upload approaches. Using PUT saves the file directly into the content database. See the example below.
Note: the disadvantage of the code below is that you cannot catch the object responsible for the upload directly; in other words, you cannot directly update additional custom properties of the uploaded document.
public static bool UploadFileToDocumentLibrary(string sourceFilePath, string targetDocumentLibraryPath)
{
    // Flag to indicate whether the file was uploaded successfully or not
    bool isUploaded = true;
    try
    {
        // Create a PUT web request to upload the file.
        WebRequest request = WebRequest.Create(targetDocumentLibraryPath);
        // Set credentials of the current security context
        request.Credentials = CredentialCache.DefaultCredentials;
        request.Method = "PUT";
        // Create a buffer to transfer the file
        byte[] fileBuffer = new byte[1024];
        // Write the contents of the local file to the request stream.
        using (Stream stream = request.GetRequestStream())
        {
            // Load the content from the local file into the stream
            using (FileStream fsWorkbook = File.Open(sourceFilePath, FileMode.Open, FileAccess.Read))
            {
                // Read the first chunk, then keep copying until the file is exhausted
                int bytesRead = fsWorkbook.Read(fileBuffer, 0, fileBuffer.Length);
                for (int i = bytesRead; i > 0; i = fsWorkbook.Read(fileBuffer, 0, fileBuffer.Length))
                {
                    stream.Write(fileBuffer, 0, i);
                }
            }
        }
        // Perform the PUT request
        WebResponse response = request.GetResponse();
        // Close the response
        response.Close();
    }
    catch (Exception ex)
    {
        // Set the flag to indicate failure in uploading
        isUploaded = false;
    }
    // Return the final upload status
    return isUploaded;
}
And here is an example of calling this method:
UploadFileToDocumentLibrary(@"C:\test.txt", @"http://home-vs/Shared Documents/textfile.pdf");
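If you do need to set custom properties afterwards, one workaround is a follow-up step through the server object model. A minimal sketch, assuming the code runs in a farm-solution context with a reference to Microsoft.SharePoint (the URL and the Title value are placeholders):
using (SPSite site = new SPSite("http://home-vs"))
using (SPWeb web = site.OpenWeb())
{
    // Re-fetch the file that the PUT request just created and stamp its metadata
    SPFile file = web.GetFile("http://home-vs/Shared Documents/textfile.pdf");
    SPListItem item = file.Item;
    item["Title"] = "Uploaded via PUT"; // placeholder: any custom column works the same way
    item.Update();
}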

Create temporary link for download

I use ASP.NET.
I need to give the user a temporary link for downloading a file from the server.
It should be a temporary link (page) that is available only for a short time (12 hours, for example). How can I generate this link (or a temporary web page with the link)?
Here's a reasonably complete example.
First a function to create a short hex string using a secret salt plus an expiry time:
public static string MakeExpiryHash(DateTime expiry)
{
    const string salt = "some random bytes";
    byte[] bytes = Encoding.UTF8.GetBytes(salt + expiry.ToString("s"));
    using (var sha = System.Security.Cryptography.SHA1.Create())
        return string.Concat(sha.ComputeHash(bytes).Select(b => b.ToString("x2"))).Substring(8);
}
Then a snippet that generates a link with a one week expiry (note: use UTC so it agrees with the expiry check below):
DateTime expires = DateTime.UtcNow + TimeSpan.FromDays(7);
string hash = MakeExpiryHash(expires);
string link = string.Format("http://myhost/Download?exp={0}&k={1}", expires.ToString("s"), hash);
Finally the download page for sending a file if a valid link was given:
DateTime expires = DateTime.Parse(Request.Params["exp"]);
string hash = MakeExpiryHash(expires);
if (Request.Params["k"] == hash)
{
    if (expires < DateTime.UtcNow)
    {
        // Link has expired
    }
    else
    {
        string filename = "<Path to file>";
        FileInfo fi = new FileInfo(Server.MapPath(filename));
        Response.ContentType = "application/octet-stream";
        Response.AddHeader("Content-Disposition", "attachment;filename=" + filename);
        Response.AddHeader("Content-Length", fi.Length.ToString());
        Response.WriteFile(fi.FullName);
        Response.Flush();
    }
}
else
{
    // Invalid link
}
Which you should certainly wrap in some exception handling to catch mangled requests.
http://example.com/download/document.pdf?token=<token>
The <token> part is the key here. If you don't want to involve a database, encrypt the link creation time, convert it to a URL-safe Base64 representation, and give the user that URL. When it's requested, decrypt the token and compare the date stored in it with the current date and time.
Alternatively, you can have a separate DownloadTokens table which maps said tokens (which can be GUIDs) to expiration dates.
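Here is a minimal sketch of the encrypted-timestamp variant, assuming a fixed 32-byte key shared by every server that validates tokens (the key literal and class name are placeholders; in practice load the key from configuration):
using System;
using System.Globalization;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public static class DownloadToken
{
    // Placeholder key; in practice load 32 random bytes from configuration.
    private static readonly byte[] Key = Encoding.UTF8.GetBytes("secret-key-from-config-32-bytes!");

    public static string Create(DateTime createdUtc)
    {
        using (var aes = Aes.Create())
        {
            aes.Key = Key;
            byte[] plain = Encoding.UTF8.GetBytes(createdUtc.ToString("o"));
            byte[] cipher;
            using (var enc = aes.CreateEncryptor())
                cipher = enc.TransformFinalBlock(plain, 0, plain.Length);
            // Prepend the random IV so the token is self-contained, then make it URL-safe.
            byte[] token = aes.IV.Concat(cipher).ToArray();
            return Convert.ToBase64String(token).Replace('+', '-').Replace('/', '_').TrimEnd('=');
        }
    }

    public static bool IsValid(string token, TimeSpan maxAge)
    {
        // Undo the URL-safe encoding and restore Base64 padding.
        string b64 = token.Replace('-', '+').Replace('_', '/');
        b64 = b64.PadRight(b64.Length + (4 - b64.Length % 4) % 4, '=');
        byte[] bytes = Convert.FromBase64String(b64);
        using (var aes = Aes.Create())
        {
            aes.Key = Key;
            aes.IV = bytes.Take(16).ToArray();
            using (var dec = aes.CreateDecryptor())
            {
                byte[] plain = dec.TransformFinalBlock(bytes, 16, bytes.Length - 16);
                DateTime created = DateTime.Parse(Encoding.UTF8.GetString(plain),
                    CultureInfo.InvariantCulture, DateTimeStyles.RoundtripKind);
                return DateTime.UtcNow - created <= maxAge;
            }
        }
    }
}
You would build the link by appending DownloadToken.Create(DateTime.UtcNow) as the token parameter, and wrap IsValid in exception handling, since a tampered token makes Convert.FromBase64String or the decryptor throw.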
Append a timestamp to the URL, in the querystring:
page.aspx?time=2011-06-22T22:12
Check the timestamp against the current time.
To prevent the user from changing the timestamp himself, also compute a secret hash over the timestamp and append it to the querystring:
page.aspx?time=2011-06-22T22:12&timehash=4503285032
As a hash you can use something like the sum of all fields in the DateTime modulo some prime number, or the SHA1 hash of the string representation of the time. The user will then not be able to change the timestamp without knowing the correct hash. In your page.aspx, you check the given hash against the hash of the timestamp.
There are a million ways to do it.
The way I did it once for a project was to generate a unique key and use a dynamic downloader script to stream the file. When the file request was made, the key was generated and stored in the database along with a creation time and the requested file. You build a link to the download script and pass in the key. From there it was easy enough to keep track of expiration.
Ilya, I'll assume you're not requiring any authentication and security isn't an issue; that is, if anyone gets the URL they will also be able to download the file.
Personally I'd create an HttpHandler and then create some unique string that you can append to the URL.
Then, within the ProcessRequest method, test the encoded param to see if it's still viable (within your specified time frame); if so, use BinaryWrite to send the file, or if not, render some HTML using Response.Write("Expired").
Something like:
public class TimeHandler : IHttpHandler, IRequiresSessionState
{
    public void ProcessRequest(HttpContext context)
    {
        if (this.my_check_has_expired(context.Request.Params["my_token"]))
        {
            // Has expired
            context.Response.Write("URL Has Expired");
            return;
        }
        // Render the file (File_Name is a placeholder for the path to your file)
        Stream stream = new FileStream(File_Name, FileMode.Open);
        // Read the bytes from the file
        byte[] aBytes = new byte[(int)stream.Length];
        stream.Read(aBytes, 0, (int)stream.Length);
        stream.Close();
        // Set headers
        context.Response.AddHeader("Content-Length", aBytes.Length.ToString());
        // ContentType needs to be set too; you can force Save As if you require it
        // Send the buffer
        context.Response.BinaryWrite(aBytes);
    }

    public bool IsReusable
    {
        get { return false; }
    }
}
You then need to set up the handler in IIS, but that's a bit different depending on the version you're using.

Upload files directly to Amazon S3 from ASP.NET application

My ASP.NET MVC application will take a lot of bandwidth and storage space. How can I set up an ASP.NET upload page so that files the user uploads go straight to Amazon S3 without using my web server's storage and bandwidth?
Update Feb 2016:
The AWS SDK can handle a lot more of this now. Check out how to build the form, and how to build the signature. That should prevent you from needing the bandwidth on your end, assuming you need to do no processing of the content yourself before sending it to S3.
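If you would rather not assemble the POST form and signature yourself, a rough alternative sketch is to hand the client a short-lived pre-signed PUT URL through the AWS SDK for .NET; the bucket name, region, and expiry below are assumptions:
using System;
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;

public static class S3DirectUpload
{
    // Returns a URL the client can PUT the file to directly,
    // so the bytes never pass through your web server.
    public static string CreateUploadUrl(string key)
    {
        var client = new AmazonS3Client(RegionEndpoint.USEast1); // credentials come from the default chain
        var request = new GetPreSignedUrlRequest
        {
            BucketName = "my-upload-bucket", // placeholder bucket
            Key = key,
            Verb = HttpVerb.PUT,
            Expires = DateTime.UtcNow.AddMinutes(15)
        };
        return client.GetPreSignedURL(request);
    }
}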
If you need to upload large files and display a progress bar, you should consider the Flajaxian component.
It uses Flash to upload files directly to Amazon S3, saving your bandwidth.
This is the best and easiest way to upload files to Amazon S3 via ASP.NET. Have a look at the following blog post by me; I think it will help. There I explain everything from adding an S3 bucket and creating the API key to installing the Amazon SDK and writing the code to upload files. The following is sample code for uploading files to Amazon S3 with ASP.NET C#.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using Amazon;
using Amazon.S3;
using Amazon.S3.Transfer;

/// <summary>
/// Summary description for AmazonUploader
/// </summary>
public class AmazonUploader
{
    public bool sendMyFileToS3(System.IO.Stream localFilePath, string bucketName, string subDirectoryInBucket, string fileNameInS3)
    {
        // Input explained:
        // localFilePath = we will use a file stream instead of a path
        // bucketName = the name of the bucket in S3; the bucket should already be created
        // subDirectoryInBucket = if this string is not empty, the file will be uploaded to
        //                        a subdirectory with this name
        // fileNameInS3 = the file name in S3
        // Create an instance of the IAmazonS3 class; in my case I chose RegionEndpoint.USWest2.
        // You can change that to APNortheast1, APSoutheast1, APSoutheast2, CNNorth1,
        // SAEast1, USEast1, USGovCloudWest1, USWest1 and so on. This choice will not
        // store your file in a different cloud storage, but (I think) it differs in performance
        // depending on your location.
        IAmazonS3 client = new AmazonS3Client("Your Access Key", "Your Secret Key", Amazon.RegionEndpoint.USWest2);
        // Create a TransferUtility instance, passing it the IAmazonS3 created in the first step
        TransferUtility utility = new TransferUtility(client);
        // Make a TransferUtilityUploadRequest instance
        TransferUtilityUploadRequest request = new TransferUtilityUploadRequest();
        if (string.IsNullOrEmpty(subDirectoryInBucket))
        {
            request.BucketName = bucketName; // no subdirectory, just the bucket name
        }
        else
        {
            // subdirectory and bucket name
            request.BucketName = bucketName + "/" + subDirectoryInBucket;
        }
        request.Key = fileNameInS3; // file name up in S3
        //request.FilePath = localFilePath; // local file name
        request.InputStream = localFilePath;
        request.CannedACL = S3CannedACL.PublicReadWrite;
        utility.Upload(request); // commencing the transfer
        return true; // indicate that the file was sent
    }
}
Here you can use the function sendMyFileToS3 to upload a file stream to Amazon S3.
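For instance, a hypothetical call (the local path, bucket, and file names are placeholders):
var uploader = new AmazonUploader();
using (var stream = System.IO.File.OpenRead(@"C:\temp\report.pdf")) // placeholder path
{
    // Uploads the stream as "report.pdf" under the "reports" subdirectory of "my-bucket"
    uploader.sendMyFileToS3(stream, "my-bucket", "reports", "report.pdf");
}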
For more details check my blog post at the following link:
Upload File to Amazon S3 via asp.net
I hope the above-mentioned link helps.
Look for a JavaScript library to handle the client-side upload of these files. I stumbled upon a JavaScript and PHP example; Dojo also seems to offer a client-side S3 file upload.
ThreeSharp is a library that facilitates interactions with Amazon S3 in a .NET environment.
You'll still need to host the logic to upload and send files to S3 in your MVC app, but you won't need to persist them on your server.
Save and GET data in an AWS S3 bucket in ASP.NET MVC:
To save plain text data to an Amazon S3 bucket:
1. First you need a bucket created on AWS.
2. You need your AWS credentials: a) AWS key, b) AWS secret key, c) region.
Note: you may get an access denied error; to remove it, check your AWS account and grant read and write rights.
The namespaces below need to be added from the NuGet package:
using Amazon;
using Amazon.S3;
using Amazon.S3.Model;
var credentials = new Amazon.Runtime.BasicAWSCredentials(awsKey, awsSecretKey);
try
{
    AmazonS3Client client = new AmazonS3Client(credentials, RegionEndpoint.APSouth1);
    // Simple object put
    PutObjectRequest request = new PutObjectRequest()
    {
        ContentBody = "put your plain text here",
        ContentType = "text/plain",
        BucketName = "put your bucket name here",
        // Put a unique key to uniquely identify your data;
        // you can pass any data with a unique id here, like a primary key in a db
        Key = "1"
    };
    PutObjectResponse response = client.PutObject(request);
}
catch (Exception ex)
{
    // handle or log the error
}
Now go to your AWS account and check the bucket; you will find the data saved under the key "1" in the AWS S3 bucket.
Note: if you run into any other issue, please ask me a question here and I will try to resolve it.
To get data from the AWS S3 bucket:
try
{
    var credentials = new Amazon.Runtime.BasicAWSCredentials(awsKey, awsSecretKey);
    AmazonS3Client client = new AmazonS3Client(credentials, RegionEndpoint.APSouth1);
    GetObjectRequest request = new GetObjectRequest()
    {
        BucketName = bucketName,
        Key = "1" // because we passed "1" as the unique key while saving data to the s3 bucket
    };
    using (GetObjectResponse response = client.GetObject(request))
    {
        StreamReader reader = new StreamReader(response.ResponseStream);
        vccEncryptedData = reader.ReadToEnd();
    }
}
catch (AmazonS3Exception)
{
    throw;
}
