High response time from new StreamReader(webRsp.GetResponseStream()) - asp.net

I am trying to get a response from a web URL.
The problem is that under load, say 100 concurrent users, this line of code runs very slowly. After reading the response with the code below, I have to pass myXML to the calling function for further use.
using (StreamReader rspStr = new StreamReader(webRsp.GetResponseStream()))
{
    myXML = rspStr.ReadToEnd().Trim();
}
Is there any way to keep response times acceptable under a load of 100 or even 1000 users?

I would try an async approach, to avoid blocking execution on stream opening or slow network waits; everything is explained here:
Making Asynchronous Requests
Snippet:
WebRequest wreq = WebRequest.Create(httpSite);

// Create the state object.
RequestState rs = new RequestState();

// Put the request into the state object so it can be passed around.
rs.Request = wreq;

// Issue the async request.
IAsyncResult r = (IAsyncResult) wreq.BeginGetResponse(
    new AsyncCallback(RespCallback), rs);

private static void RespCallback(IAsyncResult ar)
{
    // Get the RequestState object from the async result.
    RequestState rs = (RequestState) ar.AsyncState;

    // Get the WebRequest from RequestState.
    WebRequest req = rs.Request;

    // Call EndGetResponse, which produces the WebResponse object
    // that came from the request issued above.
    WebResponse resp = req.EndGetResponse(ar);

    // Start reading data from the response stream.
    Stream responseStream = resp.GetResponseStream();

    // Store the response stream in RequestState to read
    // the stream asynchronously.
    rs.ResponseStream = responseStream;

    // Pass rs.BufferRead to BeginRead. Read data into rs.BufferRead.
    IAsyncResult iarRead = responseStream.BeginRead(rs.BufferRead, 0,
        BUFFER_SIZE, new AsyncCallback(ReadCallBack), rs);
}
private static void ReadCallBack(IAsyncResult asyncResult)
{
    // Get the RequestState object from the AsyncResult.
    RequestState rs = (RequestState) asyncResult.AsyncState;

    // Retrieve the ResponseStream that was set in RespCallback.
    Stream responseStream = rs.ResponseStream;

    // Read rs.BufferRead to verify that it contains data.
    int read = responseStream.EndRead(asyncResult);
    if (read > 0)
    {
        // Prepare a Char array buffer for converting to Unicode.
        Char[] charBuffer = new Char[BUFFER_SIZE];

        // Convert byte stream to Char array and then to String.
        // len contains the number of characters converted to Unicode.
        int len =
            rs.StreamDecode.GetChars(rs.BufferRead, 0, read, charBuffer, 0);
        String str = new String(charBuffer, 0, len);

        // Append the decoded characters to the RequestData StringBuilder
        // object contained in RequestState.
        rs.RequestData.Append(str);

        // Continue reading until responseStream.EndRead returns 0,
        // which signals the end of the stream.
        IAsyncResult ar = responseStream.BeginRead(
            rs.BufferRead, 0, BUFFER_SIZE,
            new AsyncCallback(ReadCallBack), rs);
    }
    else
    {
        if (rs.RequestData.Length > 0)
        {
            // All data has been read; use the accumulated content.
            string strContent = rs.RequestData.ToString();
        }

        // Close down the response stream.
        responseStream.Close();

        // Set the ManualResetEvent so the main thread can exit.
        allDone.Set();
    }
    return;
}
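On newer frameworks the same non-blocking behaviour is much simpler with async/await. Below is a minimal sketch, assuming .NET 4.5+ and that url holds your endpoint (GetXmlAsync is an illustrative name, not part of your code). Note the single shared HttpClient: creating a new one per request can exhaust sockets under a 100-1000 user load.

using System.Net.Http;
using System.Threading.Tasks;

// One shared client for the whole application; a new HttpClient per
// request can exhaust sockets under heavy load.
private static readonly HttpClient httpClient = new HttpClient();

private static async Task<string> GetXmlAsync(string url)
{
    // Awaiting frees the thread while the network I/O is in flight.
    string body = await httpClient.GetStringAsync(url);
    return body.Trim();
}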

Related

TCP Communication: How to send data from HoloLens by pressing a key and read data in a loop?

I need to establish two-way TCP-style communication between a UWP app (HoloLens) as the client and Python on my PC as the server, such that the client sends data when a key is pressed and receives data in a while loop (whenever data arrives from the server).
I went through this link. The blog code there is exactly what I need. The only problem is that it is written with System.Net networking, but I need to use Windows.Networking instead (a rough sketch of that translation is included after the code below).
Here is the part of the code from Link1 with the sending and reading functions:
Read stream
private void ListenForData()
{
    try
    {
        socketConnection = new TcpClient("192.168.0.16", 5000);
        Debug.Log("Connection successful");
        Byte[] bytes = new Byte[1024];
        while (true)
        {
            // Get a stream object for reading.
            using (NetworkStream stream = socketConnection.GetStream())
            {
                int length;
                // Read the incoming stream into a byte array.
                while ((length = stream.Read(bytes, 0, bytes.Length)) != 0)
                {
                    var incomingData = new byte[length];
                    Array.Copy(bytes, 0, incomingData, 0, length);
                    // Convert the byte array to a string message.
                    string serverMessage = Encoding.ASCII.GetString(incomingData);
                    Debug.Log("server message received as: " + serverMessage);
                    updateText = serverMessage;
                }
            }
        }
    }
    catch (SocketException socketException)
    {
        Debug.Log("Socket exception: " + socketException);
    }
}
Write stream:
public void SendMessage(string clientMessage)
{
    if (socketConnection == null)
    {
        return;
    }
    try
    {
        // Get a stream object for writing.
        NetworkStream stream = socketConnection.GetStream();
        if (stream.CanWrite)
        {
            //string clientMessage = "This is a message from one of your clients.";
            // Convert the string message to a byte array.
            byte[] clientMessageAsByteArray = Encoding.ASCII.GetBytes(clientMessage);
            // Write the byte array to the socketConnection stream.
            stream.Write(clientMessageAsByteArray, 0, clientMessageAsByteArray.Length);
            Debug.Log("Client sent message: " + clientMessage);
        }
    }
    catch (SocketException socketException)
    {
        Debug.Log("Socket exception: " + socketException);
    }
}
Other published code I have found does not put the read and write parts in separate functions.
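For reference, here is a rough sketch of how the same read/write pattern could look with Windows.Networking.Sockets. This is an illustrative translation, not tested on a HoloLens; the host, port, and the commented updateText assignment are carried over from the snippet above, and the async method names are assumptions.

using System.Threading.Tasks;
using Windows.Networking;
using Windows.Networking.Sockets;
using Windows.Storage.Streams;

private StreamSocket socketConnection;

private async Task ConnectAsync()
{
    socketConnection = new StreamSocket();
    await socketConnection.ConnectAsync(new HostName("192.168.0.16"), "5000");
}

private async Task ListenForDataAsync()
{
    using (var reader = new DataReader(socketConnection.InputStream))
    {
        // Partial makes LoadAsync return as soon as any bytes arrive,
        // instead of waiting for the full requested count.
        reader.InputStreamOptions = InputStreamOptions.Partial;
        while (true)
        {
            uint length = await reader.LoadAsync(1024);
            if (length == 0)
                break; // connection closed by the server
            string serverMessage = reader.ReadString(reader.UnconsumedBufferLength);
            // updateText = serverMessage; // as in the snippet above
        }
    }
}

public async Task SendMessageAsync(string clientMessage)
{
    if (socketConnection == null)
        return;
    using (var writer = new DataWriter(socketConnection.OutputStream))
    {
        writer.WriteString(clientMessage);
        await writer.StoreAsync();
        writer.DetachStream(); // leave the socket's stream open for reuse
    }
}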

Amazon S3 Multipart UploadPartRequest allows only single thread to upload at the same time using asp.net

I am trying to upload video files to Amazon S3 using the multipart upload method in asp.net, and I traced the upload progress with logs. It uploads 106496 bytes at a time and only a single thread runs at once; I never saw multiple threads running. Please clarify why it runs on a single thread and why it takes so long: almost 2 minutes even for a 20 MB file.
Here is my code, which uses UploadPartRequest.
private void UploadFileOnAmazon(string subUrl, string filename, Stream audioStream, string extension)
{
    client = new AmazonS3Client(accessKey, secretKey, Amazon.RegionEndpoint.USEast1);

    // List to store upload part responses.
    List<UploadPartResponse> uploadResponses = new List<UploadPartResponse>();

    // 1. Initialize.
    InitiateMultipartUploadRequest initiateRequest = new InitiateMultipartUploadRequest
    {
        BucketName = bucketName,
        Key = subUrl + filename
    };
    InitiateMultipartUploadResponse initResponse =
        client.InitiateMultipartUpload(initiateRequest);

    // 2. Upload parts.
    //long contentLength = new FileInfo(filePath).Length;
    long contentLength = audioStream.Length;
    long partSize = 5 * (long)Math.Pow(2, 20); // 5 MB
    try
    {
        long filePosition = 0;
        for (int i = 1; filePosition < contentLength; i++)
        {
            UploadPartRequest uploadRequest = new UploadPartRequest
            {
                BucketName = bucketName,
                Key = subUrl + filename,
                UploadId = initResponse.UploadId,
                PartNumber = i,
                PartSize = partSize,
                FilePosition = filePosition,
                InputStream = audioStream
                //FilePath = filePath
            };

            // Upload the part and add the response to our list.
            uploadRequest.StreamTransferProgress += new EventHandler<StreamTransferProgressArgs>(UploadPartProgressEventCallback);
            uploadResponses.Add(client.UploadPart(uploadRequest));
            filePosition += partSize;
        }
        logger.Info("Done");

        // 3. Complete.
        CompleteMultipartUploadRequest completeRequest = new CompleteMultipartUploadRequest
        {
            BucketName = bucketName,
            Key = subUrl + filename,
            UploadId = initResponse.UploadId,
            //PartETags = new List<PartETag>(uploadResponses)
        };
        completeRequest.AddPartETags(uploadResponses);
        CompleteMultipartUploadResponse completeUploadResponse =
            client.CompleteMultipartUpload(completeRequest);
    }
    catch (Exception exception)
    {
        Console.WriteLine("Exception occurred: {0}", exception.Message);
        AbortMultipartUploadRequest abortMPURequest = new AbortMultipartUploadRequest
        {
            BucketName = bucketName,
            Key = subUrl + filename,
            UploadId = initResponse.UploadId
        };
        client.AbortMultipartUpload(abortMPURequest);
    }
}

public static void UploadPartProgressEventCallback(object sender, StreamTransferProgressArgs e)
{
    // Process the progress event.
    logger.DebugFormat("{0}/{1}", e.TransferredBytes, e.TotalBytes);
}
Is there anything wrong with my code, and how can I make the parts upload in parallel to speed things up?
Rather than managing the multipart upload yourself, try using the TransferUtility, which does all the hard work for you!
See: Using the High-Level .NET API for Multipart Upload
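As to why you only ever see one thread: the loop calls client.UploadPart synchronously, so each part starts only after the previous one completes. TransferUtility runs parts concurrently for you. Here is a minimal sketch, assuming the same client, bucketName, subUrl, filename, and audioStream as in the question; ConcurrentServiceRequests is the knob that controls the parallelism.

using Amazon.S3.Transfer;

// Allow several parts in flight at once.
var transferConfig = new TransferUtilityConfig { ConcurrentServiceRequests = 10 };
var transferUtility = new TransferUtility(client, transferConfig);

var uploadRequest = new TransferUtilityUploadRequest
{
    BucketName = bucketName,
    Key = subUrl + filename,
    InputStream = audioStream,
    PartSize = 5 * 1024 * 1024 // 5 MB parts, as in the question
};

// Parts are uploaded on multiple threads internally.
transferUtility.Upload(uploadRequest);

If you omit PartSize, the utility picks a part size based on the content length.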
The AmazonS3Client internally uses an AmazonS3Config instance to know the buffer size used for transfers (ref 1). This AmazonS3Config (ref 2) has a property named BufferSize whose default value is retrieved from a constant in AWSSDKUtils (ref 3), which in the current SDK version is 8192 bytes - quite a small value, IMHO.
You may use a custom instance of AmazonS3Config with an arbitrary BufferSize value. To build an AmazonS3Client instance that respects your custom config, pass the config to the client constructor. Example:
// Create credentials.
AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);

// Create custom config.
AmazonS3Config config = new AmazonS3Config
{
    RegionEndpoint = Amazon.RegionEndpoint.USEast1,
    BufferSize = 512 * 1024, // 512 KiB
};

// Pass credentials + custom config to the client.
AmazonS3Client client = new AmazonS3Client(credentials, config);

// They uploaded happily ever after.

Get image from URL and upload to Amazon S3

I'd like to load an image directly from a URL, but without saving it on the server; I want to upload it directly from memory to Amazon S3.
This is my code:
Dim wc As New WebClient
Dim fileStream As IO.Stream = wc.OpenRead("http://www.domain.com/image.jpg")
Dim request As New PutObjectRequest()
request.BucketName = "mybucket"
request.Key = "file.jpg"
request.InputStream = fileStream
client.PutObject(request)
The Amazon API gives me the error "Could not determine content length". The fileStream ends up as a "System.Net.ConnectStream", which I'm not sure is correct.
The exact same code works with files from an HttpPostedFile, but I need to do it this way now.
Any ideas how I can convert the stream into what the Amazon API expects (with the length intact)?
I had the same problem when using the GetObjectResponse() method and its property ResponseStream to copy a file from one folder to another in the same bucket. I noticed that the AWS SDK (2.3.45) has some faults, like another method on GetObjectResponse() called WriteResponseStreamToFile that simply doesn't work. These gaps need workarounds.
I solved the problem by reading the file into an array of bytes and putting it in a MemoryStream object.
Try this (C# code)
WebClient wc = new WebClient();
Stream fileStream = wc.OpenRead("http://www.domain.com/image.jpg");
byte[] fileBytes = fileStream.ToArrayBytes();
PutObjectRequest request = new PutObjectRequest();
request.BucketName = "mybucket";
request.Key = "file.jpg";
request.InputStream = new MemoryStream(fileBytes);
client.PutObject(request);
The extension method:
public static byte[] ToArrayBytes(this Stream input)
{
    byte[] buffer = new byte[16 * 1024];
    using (MemoryStream ms = new MemoryStream())
    {
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, read);
        }
        return ms.ToArray();
    }
}
You can also create a MemoryStream without an intermediate array of bytes. But after the first PutObject to S3, the MemoryStream will be discarded. If you need to put other objects, I recommend the first option.
WebClient wc = new WebClient();
Stream fileStream = wc.OpenRead("http://www.domain.com/image.jpg");
MemoryStream fileMemoryStream = fileStream.ToMemoryStream();
PutObjectRequest request = new PutObjectRequest();
request.BucketName = "mybucket";
request.Key = "file.jpg";
request.InputStream = fileMemoryStream ;
client.PutObject(request);
The extension method:
public static MemoryStream ToMemoryStream(this Stream input)
{
    byte[] buffer = new byte[16 * 1024];
    int read;
    MemoryStream ms = new MemoryStream();
    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        ms.Write(buffer, 0, read);
    }
    ms.Position = 0; // rewind so the upload reads from the beginning
    return ms;
}
I had the same problem in a similar scenario.
The reason for the error is that to upload an object, the SDK needs to know the whole content length that is going to be uploaded. For its length to be obtainable, a stream must be seekable, but the stream returned from WebClient is not. To indicate the expected length, set Headers.ContentLength on the PutObjectRequest; the SDK will use this value when it cannot determine the length from the stream object.
To make your code work, obtain the content length from the response headers of the call made by WebClient, then set PutObjectRequest.Headers.ContentLength. Of course, this relies on the server returning a Content-Length value.
Dim wc As New WebClient
Dim fileStream As IO.Stream = wc.OpenRead("http://www.example.com/image.jpg")
Dim contentLength As Long = Long.Parse(wc.ResponseHeaders("Content-Length"))
Dim request As New PutObjectRequest()
request.BucketName = "mybucket"
request.Key = "file.jpg"
request.InputStream = fileStream
request.Headers.ContentLength = contentLength
client.PutObject(request)
I came up with a solution that uses UploadPart when the length is not available by any other means; it also avoids loading the entire file into memory.
if (args.DocumentContents.CanSeek)
{
    PutObjectRequest r = new PutObjectRequest();
    r.InputStream = args.DocumentContents;
    r.BucketName = s3Id.BucketName;
    r.Key = s3Id.ObjectKey;
    foreach (var item in args.CustomData)
    {
        r.Metadata[item.Key] = item.Value;
    }
    await S3Client.PutObjectAsync(r);
}
else
{
    // If the stream does not allow seeking, the S3 client will throw:
    //   Amazon.S3.AmazonS3Exception : Could not determine content length
    // As a workaround, when the length property cannot be used, chunk the
    // file into sections and use UploadPart, so the entire file does not
    // have to be loaded into memory as a single MemoryStream.
    var r = new InitiateMultipartUploadRequest();
    r.BucketName = s3Id.BucketName;
    r.Key = s3Id.ObjectKey;
    foreach (var item in args.CustomData)
    {
        r.Metadata[item.Key] = item.Value;
    }
    var multipartResponse = await S3Client.InitiateMultipartUploadAsync(r);
    try
    {
        var completeRequest = new CompleteMultipartUploadRequest
        {
            UploadId = multipartResponse.UploadId,
            BucketName = s3Id.BucketName,
            Key = s3Id.ObjectKey,
        };

        // Just using this size because it is the max for Azure File Share,
        // but it could be any size; for S3, even a configured value.
        const int blockSize = 4194304;

        // BinaryReader gives us access to ReadBytes.
        using (var reader = new BinaryReader(args.DocumentContents))
        {
            var partCounter = 1;
            while (true)
            {
                byte[] buffer = reader.ReadBytes(blockSize);
                if (buffer.Length == 0)
                    break;
                using (MemoryStream uploadChunk = new MemoryStream(buffer))
                {
                    uploadChunk.Position = 0;
                    var uploadRequest = new UploadPartRequest
                    {
                        BucketName = s3Id.BucketName,
                        Key = s3Id.ObjectKey,
                        UploadId = multipartResponse.UploadId,
                        PartNumber = partCounter,
                        InputStream = uploadChunk,
                    };
                    // Could call UploadPart on multiple threads instead of
                    // awaiting each part, but that would load more data into
                    // memory at once, which might be too much.
                    var partResponse = await S3Client.UploadPartAsync(uploadRequest);
                    completeRequest.AddPartETags(partResponse);
                }
                partCounter++;
            }
            var completeResponse = await S3Client.CompleteMultipartUploadAsync(completeRequest);
        }
    }
    catch
    {
        await S3Client.AbortMultipartUploadAsync(s3Id.BucketName, s3Id.ObjectKey,
            multipartResponse.UploadId);
        throw;
    }
}

Reading fields from bytearray

I am trying to send a message via TCP. Unfortunately it does not work, so I have created the following code for test purposes:
public void sendQuestion(String text) {
    // Set timestamp.
    SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss");
    TimeZone tz = TimeZone.getTimeZone("GMT+01:00");
    df.setTimeZone(tz);
    String date = df.format(new Date());
    byte[] dateStr = date.getBytes();

    // Set payload.
    String payloadTemp = date + text;
    byte[] payload = payloadTemp.getBytes();

    // Send the payload.
    clientOutputThread.send(1, payload);
    ....
}

public void send(byte type, byte[] payload) {
    // Calculate and set the size of the message.
    ByteBuffer bufferTemp = ByteBuffer.allocate(4);
    bufferTemp.order(ByteOrder.BIG_ENDIAN);
    byte[] payloadSize = bufferTemp.putInt(payload.length).array();
    byte[] buffer = new byte[5 + payload.length];
    System.arraycopy(payloadSize, 0, buffer, 0, 4);
    System.arraycopy(payload, 0, buffer, 5, payload.length);

    // Set the message type.
    buffer[4] = type;

    // TEST: Try reading the values back.
    ByteBuffer bb = ByteBuffer.wrap(buffer);

    // Get all the fields:
    int payload2 = bb.getInt(0); // bytes 0-3 hold the payload size
    // byte 4 is the type

    byte[] tmp = new byte[19]; // the date is 19 bytes
    bb.position(5);
    bb.get(tmp);
    String timestamp = tmp.toString();

    byte[] tmp2 = new byte[payload2 - 19];
    bb.get(tmp2); // text
    String text = tmp2.toString();
    ....
}
Unfortunately, what I read back as the timestamp and text is rubbish, something of the sort of "[B#44f39650". Why? Am I reading it wrong?
Thank you!
"[B#44f39650" is the result of calling toString() on an object which is a byte array. Which you are doing here:
String timestamp = tmp.toString();
So don't do that. Convert the byte array to a String, if you must do that at all, using the String constructors provided for the purposes.
However you should really be using the API of DataOutputStream and DataInputStream for this purpose.

Biztalk 2010 Custom Pipeline Component returns binary

I have created a custom pipeline component which transforms a complex Excel spreadsheet to XML. The transformation works fine and I can write out the data to check it. However, when I assign this data to the BodyPart.Data of the inMsg, or to a new message, I always get a routing failure. When I look at the message in the admin console, the body appears to contain binary data (I presume the original Excel) rather than the XML I assigned - see screenshot below. I have followed numerous tutorials and tried many different approaches, but I always get the same result.
My current code is:
public Microsoft.BizTalk.Message.Interop.IBaseMessage Execute(Microsoft.BizTalk.Component.Interop.IPipelineContext pc, Microsoft.BizTalk.Message.Interop.IBaseMessage inmsg)
{
    // Make sure we have something.
    if (inmsg == null || inmsg.BodyPart == null || inmsg.BodyPart.Data == null)
    {
        throw new ArgumentNullException("inmsg");
    }
    IBaseMessagePart bodyPart = inmsg.BodyPart;

    // Create a temporary directory.
    const string tempDir = @"C:\test\excel";
    if (!Directory.Exists(tempDir))
    {
        Directory.CreateDirectory(tempDir);
    }

    // Get the input filename.
    string inputFileName = Convert.ToString(inmsg.Context.Read("ReceivedFileName", "http://schemas.microsoft.com/BizTalk/2003/file-properties"));
    swTemp.WriteLine("inputFileName: " + inputFileName);

    // Set the path to write the Excel file to.
    string excelPath = tempDir + @"\" + Path.GetFileName(inputFileName);
    swTemp.WriteLine("excelPath: " + excelPath);

    // Write the Excel file to the temporary folder.
    bodyPart = inmsg.BodyPart;
    Stream inboundStream = bodyPart.GetOriginalDataStream();
    Stream outFile = File.Create(excelPath);
    inboundStream.CopyTo(outFile);
    outFile.Close();

    // Process the Excel file to return XML.
    var spreadsheet = new SpreadSheet();
    string strXmlOut = spreadsheet.ProcessWorkbook(excelPath);

    // Now build an XML doc to hold this data.
    XmlDocument xDoc = new XmlDocument();
    xDoc.LoadXml(strXmlOut);
    XmlDocument finalMsg = new XmlDocument();
    XmlElement xEle;
    xEle = finalMsg.CreateElement("ns0", "BizTalk_Test_Amey_Pipeline.textXML",
        "http://tempuri.org/INT018_Workbook.xsd");
    finalMsg.AppendChild(xEle);
    finalMsg.FirstChild.InnerXml = xDoc.FirstChild.InnerXml;

    // Write the XML to a memory stream.
    swTemp.WriteLine("Write xml to memory stream");
    MemoryStream streamXmlOut = new MemoryStream();
    finalMsg.Save(streamXmlOut);
    streamXmlOut.Position = 0;

    inmsg.BodyPart.Data = streamXmlOut;
    pc.ResourceTracker.AddResource(streamXmlOut);
    return inmsg;
}
Here is a sample of writing the message back:
IBaseMessage Microsoft.BizTalk.Component.Interop.IComponent.Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
    IBaseMessagePart bodyPart = pInMsg.BodyPart;
    if (bodyPart != null)
    {
        using (Stream originalStrm = bodyPart.GetOriginalDataStream())
        {
            byte[] changedMessage = ConvertToBytes(ret);
            using (Stream strm = new AsciiStream(originalStrm, changedMessage, resManager))
            {
                // Set up the custom stream to put it back in the message.
                bodyPart.Data = strm;
                pContext.ResourceTracker.AddResource(strm);
            }
        }
    }
    return pInMsg;
}
The AsciiStream used a method like this to read the stream:
public override int Read(byte[] buffer, int offset, int count)
{
    int ret = 0;
    int bytesRead = 0;
    byte[] fixedData = this.changedBytes;
    if (fixedData != null)
    {
        bytesRead = count > (fixedData.Length - overallOffset) ? fixedData.Length - overallOffset : count;
        Array.Copy(fixedData, overallOffset, buffer, offset, bytesRead);
        if (fixedData.Length == (bytesRead + overallOffset))
            this.changedBytes = null;

        // Increment the overall offset.
        overallOffset += bytesRead;
        offset += bytesRead;
        count -= bytesRead;
        ret += bytesRead;
    }
    return ret;
}
I would first of all add more logging to your component around the MemoryStream logic - maybe write the file out to the file system so you can check that the XML version is correct. You can also attach a debugger to the BizTalk process and step through the component's code, which makes debugging a lot easier.
I would also try switching from the MemoryStream to a more basic custom stream that writes the bytes for you. The BizTalk SDK samples for pipeline components include some examples of a custom stream; you would have to customize the sample so that it just writes your stream. I can work on posting an example, so do the additional diagnostics above first.
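In the meantime, here is a minimal sketch of the kind of basic custom stream meant above: a forward-only, read-only Stream over a pre-built byte array. The FixedBytesStream name and the idea of serializing the XML to a byte array first are my assumptions, not the SDK sample itself.

using System;
using System.IO;

// Hypothetical sketch: serialize finalMsg to a byte array, then assign
// new FixedBytesStream(bytes) to bodyPart.Data.
public class FixedBytesStream : Stream
{
    private readonly byte[] data;
    private int position;

    public FixedBytesStream(byte[] data)
    {
        this.data = data;
    }

    public override bool CanRead { get { return true; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return false; } }
    public override long Length { get { return data.Length; } }

    public override long Position
    {
        get { return position; }
        set { throw new NotSupportedException(); }
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        // Serve as many bytes as remain; returning 0 signals end of stream.
        int toCopy = Math.Min(count, data.Length - position);
        Array.Copy(data, position, buffer, offset, toCopy);
        position += toCopy;
        return toCopy;
    }

    public override void Flush() { }

    public override long Seek(long offset, SeekOrigin origin)
    {
        throw new NotSupportedException();
    }

    public override void SetLength(long value)
    {
        throw new NotSupportedException();
    }

    public override void Write(byte[] buffer, int offset, int count)
    {
        throw new NotSupportedException();
    }
}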
Thanks,
