I am working on a Xamarin Android application, using the CrossMedia (Plugin.Media) plugin to take a photo. As per the requirement, I need to store the image in the database, not on the device.
public async void ExecuteCameraCommand()
{
    int iCount = 0;
    if (!CrossMedia.Current.IsCameraAvailable || !CrossMedia.Current.IsTakePhotoSupported)
    {
        // await _dialogService.DisplayAlertAsync("No Camera", ":( No camera available.", "OK");
        return;
    }
    iCount = iCount + 1;
    var file = await CrossMedia.Current.TakePhotoAsync(new Plugin.Media.Abstractions.StoreCameraMediaOptions
    {
        SaveToAlbum = false,
    });
    if (file == null)
        return;
}
The file object has a GetStream method. You can use this to get the data and then save it to the database. Something like this:
byte[] buffer;
using (Stream fileStream = file.GetStream())
using (var ms = new MemoryStream())
{
    // Stream.Read is not guaranteed to return all requested bytes in one call,
    // so copy the whole stream rather than relying on a single Read.
    fileStream.CopyTo(ms);
    buffer = ms.ToArray();
}
//Save buffer to the database
I assume you are storing the photos locally on the device in something like a SQLite database. Bear in mind the photos can be several megabytes in size. When I did this I stored the photos on the file system on the device and saved a record in the database of their path and name to avoid bloating the database.
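If you take the file-system approach, here is a minimal sketch, assuming the sqlite-net-pcl package and a hypothetical Photo record type (the names are illustrative, and dbPath is assumed to be defined elsewhere):
// Hypothetical record type; requires the sqlite-net-pcl package (using SQLite;).
public class Photo
{
    [PrimaryKey, AutoIncrement]
    public int Id { get; set; }
    public string FileName { get; set; }
    public string FilePath { get; set; }
}
// Copy the captured photo into app-private storage and record its location.
string photosDir = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.Personal), "photos");
Directory.CreateDirectory(photosDir);
string destPath = Path.Combine(photosDir, Path.GetFileName(file.Path));
using (Stream src = file.GetStream())
using (Stream dst = File.Create(destPath))
{
    src.CopyTo(dst);
}
var db = new SQLiteConnection(dbPath); // dbPath assumed to be defined elsewhere
db.CreateTable<Photo>();
db.Insert(new Photo { FileName = Path.GetFileName(destPath), FilePath = destPath });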
I am developing a process to compress and encrypt a byte array in my desktop application, send it via a WebMethod to my web application, and then decompress/decrypt it back to a byte array. I am currently attempting to do this with SharpZipLib. The compression seems to be working as expected: I am able to save the file to disk and extract it using 7-Zip without issue.
The problem I am having is when I receive the byte array on my web server and attempt to extract it.
I use the CompressData method to compress the data on the desktop side.
private byte[] CompressData(byte[] data, string password)
{
    MemoryStream input = new MemoryStream(data);
    MemoryStream ms = new MemoryStream();
    ZipOutputStream os = new ZipOutputStream(ms);
    os.SetLevel(9);
    if (!string.IsNullOrEmpty(password)) os.Password = password;
    ZipEntry entry = new ZipEntry("data")
    {
        DateTime = DateTime.Now
    };
    if (!string.IsNullOrEmpty(password)) entry.AESKeySize = 256;
    os.PutNextEntry(entry);
    StreamUtils.Copy(input, os, new byte[4096]);
    os.CloseEntry();
    os.IsStreamOwner = false; // keep ms open when os is closed
    os.Close();
    ms.Position = 0;
    return ms.ToArray();
}
I am using the following code to extract the data on the server end (taken almost verbatim from the SharpZipLib examples):
private byte[] DoRebuildData(byte[] data, string password)
{
    MemoryStream inStream = new MemoryStream(data);
    MemoryStream outputMemStream = new MemoryStream();
    ZipOutputStream zipOut = new ZipOutputStream(outputMemStream)
    {
        IsStreamOwner = false // False stops the Close also Closing the underlying stream.
    };
    zipOut.SetLevel(3);
    zipOut.Password = password; // optional
    RecursiveExtractRebuild(inStream, zipOut);
    inStream.Close();
    // Must finish the ZipOutputStream to finalise output before using outputMemStream.
    zipOut.Close();
    outputMemStream.Position = 0;
    return outputMemStream.ToArray();
}
// Calls itself recursively if embedded zip
//
private void RecursiveExtractRebuild(Stream str, ZipOutputStream os)
{
    ZipFile zipFile = new ZipFile(str)
    {
        IsStreamOwner = false
    };
    foreach (ZipEntry zipEntry in zipFile)
    {
        if (!zipEntry.IsFile)
            continue;
        String entryFileName = zipEntry.Name; // or Path.GetFileName(zipEntry.Name) to omit folder
        // Specify any other filtering here.
        Stream zipStream = zipFile.GetInputStream(zipEntry);
        // Zips-within-zips are extracted. If you don't want this and wish to keep embedded zips as-is, just delete these 3 lines.
        if (entryFileName.EndsWith(".zip", StringComparison.OrdinalIgnoreCase))
        {
            RecursiveExtractRebuild(zipStream, os);
        }
        else
        {
            ZipEntry newEntry = new ZipEntry(entryFileName);
            newEntry.DateTime = zipEntry.DateTime;
            newEntry.Size = zipEntry.Size;
            // Setting the Size will allow the zip to be unpacked by XP's built-in extractor and other older code.
            os.PutNextEntry(newEntry);
            StreamUtils.Copy(zipStream, os, new byte[4096]);
            os.CloseEntry();
        }
    }
}
The expected result is to get back my original byte array on the server.
On the server, when it comes to the line:
Stream zipStream = zipFile.GetInputStream(zipEntry);
I receive the error 'No password available for AES encrypted stream.'
The only place I see to set a password is in the ZipOutputStream object, and I have checked at runtime, and this is set appropriately.
When unpacking, the password must be assigned to the Password property of the ZipFile instance, i.e. it must be set in the RecursiveExtractRebuild method (for this, the password has to be passed in as an additional parameter):
zipFile.Password = password;
as shown in this example.
It should be noted that the current DoRebuildData method doesn't actually unpack the data, but re-packs it into a new zip. The (optional) line in the DoRebuildData method:
zipOut.Password = password;
does not specify the password for unpacking (i.e. for the old zip), but defines the password for the new zip.
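A sketch of the change, with the password threaded through from DoRebuildData:
private void RecursiveExtractRebuild(Stream str, ZipOutputStream os, string password)
{
    ZipFile zipFile = new ZipFile(str)
    {
        IsStreamOwner = false,
        Password = password // lets GetInputStream decrypt AES-encrypted entries
    };
    // ... rest of the method unchanged; remember to pass the password along
    // on the recursive call for embedded zips:
    // RecursiveExtractRebuild(zipStream, os, password);
}
And call it from DoRebuildData as RecursiveExtractRebuild(inStream, zipOut, password);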
How do you get an existing database from a device or emulator? The device is not rooted.
I'm using Microsoft.WindowsAzure.MobileServices
public async Task<bool> InitialiseDb()
{
    try
    {
        Store = new MobileServiceSQLiteStore(offlineDbPath);
        Store.DefineTable<Products>();
        // InitializeAsync is asynchronous; await it rather than letting it run unobserved.
        await _client.SyncContext.InitializeAsync(Store);
        this.productTable = _client.GetSyncTable<Products>();
        return true;
    }
    catch (Exception ex)
    {
        Debug.WriteLine(ex.Message);
        return false;
    }
}
You can copy the existing database into a folder you can access.
Create the path to the database:
string filepath = "data/data/[package-name]/files/[name-of-db]";
You can get your package name from your Android project options, then use the following code to extract it:
string filepath = "data/data/com.foo.foo/files/localstorage.db";
var bytes = System.IO.File.ReadAllBytes(filepath);
var fileCopyName = string.Format("/sdcard/Database_{0:dd-MM-yyyy_HH-mm-ss-tt}.db", System.DateTime.Now);
System.IO.File.WriteAllBytes(fileCopyName, bytes);
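Note that writing to /sdcard requires the WRITE_EXTERNAL_STORAGE permission in your manifest. You can also avoid hardcoding the data/data path; a small sketch, assuming the database sits in the app's files directory:
// SpecialFolder.Personal resolves to /data/data/<package-name>/files on Android.
string dbPath = Path.Combine(
    Environment.GetFolderPath(Environment.SpecialFolder.Personal), "localstorage.db");
var dbBytes = System.IO.File.ReadAllBytes(dbPath);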
I am trying to resize an image using the WebImage helper BEFORE uploading to an Azure blob, using the following code, but I'm getting this error:
cannot convert from 'system.web.helpers.webimage' to 'system.io.stream'
The code is as follows:
public async Task<string> UploadPropertyImageAsync(HttpPostedFileBase imageToUpload)
{
    string imageFullPath = null;
    if (imageToUpload == null || imageToUpload.ContentLength == 0 || imageToUpload.ContentLength >= 8388608)
    {
        return null;
    }
    //Image img = System.Drawing.Image.FromStream(imageToUpload);
    WebImage img = new WebImage(imageToUpload.InputStream);
    if (img.Width > 1000)
        img.Resize(1000, 1000);
    try
    {
        CloudStorageAccount cloudStorageAccount = ConnectionString.GetConnectionString();
        CloudBlobClient cloudBlobClient = cloudStorageAccount.CreateCloudBlobClient();
        CloudBlobContainer cloudBlobContainer = cloudBlobClient.GetContainerReference("property");
        if (await cloudBlobContainer.CreateIfNotExistsAsync())
        {
            await cloudBlobContainer.SetPermissionsAsync(
                new BlobContainerPermissions
                {
                    PublicAccess = BlobContainerPublicAccessType.Blob
                }
            );
        }
        string imageName = Guid.NewGuid().ToString() + "-" + Path.GetExtension(img.FileName);
        CloudBlockBlob cloudBlockBlob = cloudBlobContainer.GetBlockBlobReference(imageName);
        cloudBlockBlob.Properties.ContentType = img.ContentType;
        await cloudBlockBlob.UploadFromStreamAsync(img);
Any idea where I'm going wrong?
CloudBlockBlob.UploadFromStreamAsync expects a stream, which your WebImage object is not, hence the error.
What you need to do is convert the image into a stream. I looked up the documentation and there's no direct method to do so.
However, you can get the bytes from the WebImage using WebImage.GetBytes and then use the CloudBlockBlob.UploadFromByteArrayAsync method to upload that byte array as a blob in Azure Storage.
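For example, something like this in place of the UploadFromStreamAsync call:
// Serialize the (possibly resized) WebImage to bytes and upload those.
byte[] imageBytes = img.GetBytes();
await cloudBlockBlob.UploadFromByteArrayAsync(imageBytes, 0, imageBytes.Length);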
The exception is pretty self-explanatory: this line is expecting a Stream, not a WebImage variable.
await cloudBlockBlob.UploadFromStreamAsync(img);
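If you would rather keep UploadFromStreamAsync, one option is to wrap the image bytes in a MemoryStream; a small sketch:
// Wrap the image bytes in a stream that UploadFromStreamAsync accepts.
using (var ms = new MemoryStream(img.GetBytes()))
{
    await cloudBlockBlob.UploadFromStreamAsync(ms);
}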
There are plenty of examples showing how to upload files to Azure, for example:
Uploading to Azure
... and here...
An Introduction to Windows Azure BLOB Storage
I am trying to upload video files to Amazon S3 using the multipart upload method in ASP.NET, and I traced the upload progress using logs. It uploads 106,496 bytes each time, and only a single thread runs at a time; I never saw multiple threads running. Please clarify why only a single thread runs and why the upload takes so long: even a 20 MB file takes almost 2 minutes.
Here is my code, which uses UploadPartRequest.
private void UploadFileOnAmazon(string subUrl, string filename, Stream audioStream, string extension)
{
    client = new AmazonS3Client(accessKey, secretKey, Amazon.RegionEndpoint.USEast1);
    // List to store upload part responses.
    List<UploadPartResponse> uploadResponses = new List<UploadPartResponse>();
    // 1. Initialize.
    InitiateMultipartUploadRequest initiateRequest = new InitiateMultipartUploadRequest
    {
        BucketName = bucketName,
        Key = subUrl + filename
    };
    InitiateMultipartUploadResponse initResponse =
        client.InitiateMultipartUpload(initiateRequest);
    // 2. Upload Parts.
    //long contentLength = new FileInfo(filePath).Length;
    long contentLength = audioStream.Length;
    long partSize = 5 * (long)Math.Pow(2, 20); // 5 MB
    try
    {
        long filePosition = 0;
        for (int i = 1; filePosition < contentLength; i++)
        {
            UploadPartRequest uploadRequest = new UploadPartRequest
            {
                BucketName = bucketName,
                Key = subUrl + filename,
                UploadId = initResponse.UploadId,
                PartNumber = i,
                PartSize = partSize,
                FilePosition = filePosition,
                InputStream = audioStream
                //FilePath = filePath
            };
            // Upload part and add response to our list.
            uploadRequest.StreamTransferProgress += new EventHandler<StreamTransferProgressArgs>(UploadPartProgressEventCallback);
            uploadResponses.Add(client.UploadPart(uploadRequest));
            filePosition += partSize;
        }
        logger.Info("Done");
        // Step 3: complete.
        CompleteMultipartUploadRequest completeRequest = new CompleteMultipartUploadRequest
        {
            BucketName = bucketName,
            Key = subUrl + filename,
            UploadId = initResponse.UploadId,
            //PartETags = new List<PartETag>(uploadResponses)
        };
        completeRequest.AddPartETags(uploadResponses);
        CompleteMultipartUploadResponse completeUploadResponse =
            client.CompleteMultipartUpload(completeRequest);
    }
    catch (Exception exception)
    {
        Console.WriteLine("Exception occurred: {0}", exception.Message);
        AbortMultipartUploadRequest abortMPURequest = new AbortMultipartUploadRequest
        {
            BucketName = bucketName,
            Key = subUrl + filename,
            UploadId = initResponse.UploadId
        };
        client.AbortMultipartUpload(abortMPURequest);
    }
}

public static void UploadPartProgressEventCallback(object sender, StreamTransferProgressArgs e)
{
    // Process event.
    logger.DebugFormat("{0}/{1}", e.TransferredBytes, e.TotalBytes);
}
Is there anything wrong with my code, and how can I make the parts upload in parallel to speed things up?
Rather than managing the Multipart Upload yourself, try using the TransferUtility that does all the hard work for you!
See: Using the High-Level .NET API for Multipart Upload
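A minimal sketch, reusing the client, bucket, and stream variables from the question:
// TransferUtility (Amazon.S3.Transfer) splits the stream into parts and
// uploads them concurrently, instead of one UploadPart call at a time.
var transferUtility = new TransferUtility(client);
var uploadRequest = new TransferUtilityUploadRequest
{
    BucketName = bucketName,
    Key = subUrl + filename,
    InputStream = audioStream,
    PartSize = 5 * 1024 * 1024 // 5 MB parts
};
transferUtility.Upload(uploadRequest);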
The AmazonS3Client internally uses an AmazonS3Config instance to know the buffer size used for transfers (ref 1). This AmazonS3Config (ref 2) has a property named BufferSize whose default value is retrieved from a constant in AWSSDKUtils (ref 3), which in the current SDK version defaults to 8192 bytes - quite a small value IMHO.
You may use a custom instance of AmazonS3Config with an arbitrary BufferSize value. To build an AmazonS3Client instance that respects your custom configs, you have to pass the custom config to the client constructor. Example:
// Create credentials.
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
// Create custom config.
AmazonS3Config config = new AmazonS3Config
{
    RegionEndpoint = Amazon.RegionEndpoint.USEast1,
    BufferSize = 512 * 1024, // 512 KiB
};
// Pass credentials + custom config to the client.
AmazonS3Client client = new AmazonS3Client(credentials, config);
// They uploaded happily ever after.
I have created a custom pipeline component which transforms a complex Excel spreadsheet to XML. The transformation works fine, and I can write out the data to check it. However, when I assign this data to the BodyPart.Data of the inMsg or of a new message, I always get a routing failure. When I look at the message in the admin console, it appears that the body contains binary data (I presume the original Excel) rather than the XML I have assigned - see the screenshot below. I have followed numerous tutorials and tried many different ways of doing this, but I always get the same result.
My current code is:
public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(Microsoft.BizTalk.Component.Interop.IPipelineContext pc, Microsoft.BizTalk.Message.Interop.IBaseMessage inmsg)
{
    //make sure we have something
    if (inmsg == null || inmsg.BodyPart == null || inmsg.BodyPart.Data == null)
    {
        throw new ArgumentNullException("inmsg");
    }
    IBaseMessagePart bodyPart = inmsg.BodyPart;
    //create a temporary directory
    const string tempDir = @"C:\test\excel";
    if (!Directory.Exists(tempDir))
    {
        Directory.CreateDirectory(tempDir);
    }
    //get the input filename
    string inputFileName = Convert.ToString(inmsg.Context.Read("ReceivedFileName", "http://schemas.microsoft.com/BizTalk/2003/file-properties"));
    swTemp.WriteLine("inputFileName: " + inputFileName); // swTemp: trace StreamWriter defined elsewhere
    //set path to write excel file
    string excelPath = tempDir + @"\" + Path.GetFileName(inputFileName);
    swTemp.WriteLine("excelPath: " + excelPath);
    //write the excel file to a temporary folder
    bodyPart = inmsg.BodyPart;
    Stream inboundStream = bodyPart.GetOriginalDataStream();
    Stream outFile = File.Create(excelPath);
    inboundStream.CopyTo(outFile);
    outFile.Close();
    //process excel file to return XML
    var spreadsheet = new SpreadSheet();
    string strXmlOut = spreadsheet.ProcessWorkbook(excelPath);
    //now build an XML doc to hold this data
    XmlDocument xDoc = new XmlDocument();
    xDoc.LoadXml(strXmlOut);
    XmlDocument finalMsg = new XmlDocument();
    XmlElement xEle;
    xEle = finalMsg.CreateElement("ns0", "BizTalk_Test_Amey_Pipeline.textXML",
        "http://tempuri.org/INT018_Workbook.xsd");
    finalMsg.AppendChild(xEle);
    finalMsg.FirstChild.InnerXml = xDoc.FirstChild.InnerXml;
    //write xml to memory stream
    swTemp.WriteLine("Write xml to memory stream");
    MemoryStream streamXmlOut = new MemoryStream();
    finalMsg.Save(streamXmlOut);
    streamXmlOut.Position = 0;
    inmsg.BodyPart.Data = streamXmlOut;
    pc.ResourceTracker.AddResource(streamXmlOut);
    return inmsg;
}
Here is a sample of writing the message back:
IBaseMessage Microsoft.BizTalk.Component.Interop.IComponent.Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    IBaseMessagePart bodyPart = pInMsg.BodyPart;
    if (bodyPart != null)
    {
        using (Stream originalStrm = bodyPart.GetOriginalDataStream())
        {
            byte[] changedMessage = ConvertToBytes(ret); // ret / ConvertToBytes come from the surrounding component
            using (Stream strm = new AsciiStream(originalStrm, changedMessage, resManager))
            {
                // Setup the custom stream to put it back in the message.
                bodyPart.Data = strm;
                pContext.ResourceTracker.AddResource(strm);
            }
        }
    }
    return pInMsg;
}
The AsciiStream used a method like this to read the stream:
override public int Read(byte[] buffer, int offset, int count)
{
    int ret = 0;
    int bytesRead = 0;
    byte[] FixedData = this.changedBytes;
    if (FixedData != null)
    {
        bytesRead = count > (FixedData.Length - overallOffset) ? FixedData.Length - overallOffset : count;
        Array.Copy(FixedData, overallOffset, buffer, offset, bytesRead);
        if (FixedData.Length == (bytesRead + overallOffset))
            this.changedBytes = null;
        // Increment the overall offset.
        overallOffset += bytesRead;
        offset += bytesRead;
        count -= bytesRead;
        ret += bytesRead;
    }
    return ret;
}
I would first of all add more logging to your component around the MemoryStream logic - maybe write the file out to the file system so you can make sure the XML version is correct. You can also attach to the BizTalk process and step through the component's code, which makes debugging a lot easier.
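For that diagnostic step, a minimal sketch (the output path is purely illustrative):
// Dump the rebuilt XML to disk so it can be inspected outside BizTalk.
finalMsg.Save(@"C:\test\excel\debug-finalMsg.xml");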
I would try switching from the MemoryStream to a more basic custom stream that writes the bytes for you. The BizTalk SDK samples for pipeline components include some examples of a custom stream; you would have to customize the stream sample so it just writes the stream. I can work on posting an example, so do the additional diagnostics above first.
Thanks,