I'm using FileStream to read a big file (> 500 MB) and I get an OutOfMemoryException.
I'm using ASP.NET, .NET 3.5, Windows Server 2003 and IIS 6.0.
What I want my app to do:
Read data from Oracle
Uncompress the file using FileStream and BZip2
Read the uncompressed file and send it to the ASP.NET page for download.
When I read the file from disk, it fails with an OutOfMemoryException.
My code is:
using (var fs3 = new FileStream(filePath2, FileMode.Open, FileAccess.Read))
{
byte[] b2 = ReadFully(fs3, 1024);
}
// http://www.yoda.arachsys.com/csharp/readbinary.html
public static byte[] ReadFully(Stream stream, int initialLength)
{
// If we've been passed an unhelpful initial length, just
// use 32K.
if (initialLength < 1)
{
initialLength = 32768;
}
byte[] buffer = new byte[initialLength];
int read = 0;
int chunk;
while ((chunk = stream.Read(buffer, read, buffer.Length - read)) > 0)
{
read += chunk;
// If we've reached the end of our buffer, check to see if there's
// any more information
if (read == buffer.Length)
{
int nextByte = stream.ReadByte();
// End of stream? If so, we're done
if (nextByte == -1)
{
return buffer;
}
// Nope. Resize the buffer, put in the byte we've just
// read, and continue
byte[] newBuffer = new byte[buffer.Length * 2];
Array.Copy(buffer, newBuffer, buffer.Length);
newBuffer[read] = (byte)nextByte;
buffer = newBuffer;
read++;
}
}
// Buffer is now too big. Shrink it.
byte[] ret = new byte[read];
Array.Copy(buffer, ret, read);
return ret;
}
Now, let me describe my issue more precisely.
Uncompressing the file using FileStream and BZip2 works fine.
The problem is the following:
Read a big file from disk (> 500 MB) into a byte[] and send the bytes to the Response (ASP.NET) so it can be downloaded.
When I use ReadFully from
http://www.yoda.arachsys.com/csharp/readbinary.html
I get the OutOfMemoryException.
Is BufferedStream better than a plain Stream (FileStream, MemoryStream, ...)?
Using BufferedStream, can I read a big file of 700 MB? (Any sample code using BufferedStream for downloading a big file?)
I think the real question is not "how to read a 500 MB file into memory?" but "how to send a large file to the ASP.NET Response stream?"
I found this code by Cheeso:
using (var fs = new FileStream(filePath, FileMode.Open, FileAccess.Read))
{
Response.BufferOutput= false; // to prevent buffering
byte[] buffer = new byte[1024];
int bytesRead = 0;
while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
{
Response.OutputStream.Write(buffer, 0, bytesRead);
}
}
Is this good code? Any improvements for high performance?
A colleague suggested I use
Response.TransmitFile(filePath);
Now, another question: which is better, TransmitFile or Cheeso's code?
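As far as I understand, TransmitFile would be combined with explicit headers, something roughly like this (the content type and header values here are just my assumption):

Response.Clear();
Response.ContentType = "application/octet-stream";
Response.AddHeader("Content-Disposition",
    "attachment; filename=" + System.IO.Path.GetFileName(filePath));
Response.TransmitFile(filePath); // streams the file without loading it into worker memory
Response.End();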
Many years ago MSDN Magazine published a great article about this, but I cannot access http://msdn.microsoft.com/msdnmag/issues/06/09/WebDownloads/ any more.
Update: you can reach it through the Wayback Machine: https://web.archive.org/web/20070627063111/http://msdn.microsoft.com/msdnmag/issues/06/09/WebDownloads/
Any suggestions, comments or sample code?
A few months ago I created a download page that allows the user to download files of up to 4 GB (maybe more). Here is my working snippet:
private void TransmitFile(string fullPath, string outFileName)
{
System.IO.Stream iStream = null;
// Buffer to read 10K bytes in chunk:
byte[] buffer = new Byte[10000];
// Length of the file:
int length;
// Total bytes to read:
long dataToRead;
// Identify the file to download including its path.
string filepath = fullPath;
// Identify the file name.
string filename = System.IO.Path.GetFileName(filepath);
try
{
// Open the file.
iStream = new System.IO.FileStream(filepath, System.IO.FileMode.Open,
System.IO.FileAccess.Read, System.IO.FileShare.Read);
// Total bytes to read:
dataToRead = iStream.Length;
Response.Clear();
Response.ContentType = "application/octet-stream";
Response.AddHeader("Content-Disposition", "attachment; filename=" + outFileName);
Response.AddHeader("Content-Length", iStream.Length.ToString());
// Read the bytes.
while (dataToRead > 0)
{
// Verify that the client is connected.
if (Response.IsClientConnected)
{
// Read the data in buffer.
length = iStream.Read(buffer, 0, 10000);
// Write the data to the current output stream.
Response.OutputStream.Write(buffer, 0, length);
// Flush the data to the output.
Response.Flush();
buffer = new Byte[10000];
dataToRead = dataToRead - length;
}
else
{
//prevent infinite loop if user disconnects
dataToRead = -1;
}
}
}
catch (Exception ex)
{
throw new ApplicationException(ex.Message);
}
finally
{
if (iStream != null)
{
//Close the file.
iStream.Close();
}
Response.Close();
}
}
You do not need to hold the whole file in memory; just read it and write it to the response stream in a loop.
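If you can target .NET 4 or later, the same idea can be written with Stream.CopyTo. This is only a sketch of the loop above, reusing the fullPath and outFileName parameters, not a tested drop-in replacement:

// Stream the file straight to the response without buffering it all in memory.
Response.Clear();
Response.ContentType = "application/octet-stream";
Response.AddHeader("Content-Disposition", "attachment; filename=" + outFileName);
Response.BufferOutput = false;
using (var fs = new FileStream(fullPath, FileMode.Open, FileAccess.Read, FileShare.Read))
{
    Response.AddHeader("Content-Length", fs.Length.ToString());
    fs.CopyTo(Response.OutputStream, 64 * 1024); // copy in 64 KB chunks
}
Response.Flush();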
I came across this question while trying to return a FileStreamResult from a controller; I kept running into issues with large streams because .NET tries to build the entire response at once. Pavel Morshenyuk's answer was a huge help, but I figured I'd share the BufferedFileStreamResult that I ended up with.
/// <summary>Based upon https://stackoverflow.com/a/3363015/595473 </summary>
public class BufferedFileStreamResult : System.Web.Mvc.FileStreamResult
{
public BufferedFileStreamResult(System.IO.Stream stream, string contentType, string fileDownloadName)
: base(stream, contentType)
{
FileDownloadName = fileDownloadName;
}
public int BufferSize { get; set; } = 16 * 1024 * 1024;//--16MiB
protected override void WriteFile(System.Web.HttpResponseBase response)
{
try
{
response.Clear();
response.Headers.Set("Content-Disposition", $"attachment; filename={FileDownloadName}");
response.Headers.Set("Content-Length", FileStream.Length.ToString());
byte[] buffer;
int bytesRead;
while (response.IsClientConnected)//--Prevent infinite loop if user disconnects
{
buffer = new byte[BufferSize];
//--Read the data in buffer
if ((bytesRead = FileStream.Read(buffer, 0, BufferSize)) == 0)
{
break;//--Stop writing if there's nothing left to write
}
//--Write the data to the current output stream
response.OutputStream.Write(buffer, 0, bytesRead);
//--Flush the data to the output
response.Flush();
}
}
finally
{
FileStream?.Close();
response.Close();
}
}
}
Now, in my controller, I can just
return new BufferedFileStreamResult(stream, contentType, fileDownloadName);
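For completeness, a controller action using it might look something like this (the file location and content type are placeholders, not part of the original answer):

public System.Web.Mvc.ActionResult Download(string fileName)
{
    // Illustrative only: resolve and validate the path properly in real code.
    var path = Server.MapPath("~/App_Data/" + fileName);
    var stream = System.IO.File.OpenRead(path);
    return new BufferedFileStreamResult(stream, "application/octet-stream", fileName);
}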
There is more than one solution.
1- Use RecyclableMemoryStream instead of MemoryStream.
You can read more about RecyclableMemoryStream here:
http://www.philosophicalgeek.com/2015/02/06/announcing-microsoft-io-recycablememorystream/
https://github.com/Microsoft/Microsoft.IO.RecyclableMemoryStream
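A minimal sketch of how it is used (the manager is normally kept as a single shared instance; sourceStream is just an assumed input):

// using Microsoft.IO;
// One manager per process; it pools the underlying buffers to reduce LOH fragmentation.
private static readonly RecyclableMemoryStreamManager MemoryManager = new RecyclableMemoryStreamManager();

// Instead of: var ms = new MemoryStream();
using (var ms = MemoryManager.GetStream("download-buffer"))
{
    sourceStream.CopyTo(ms); // whatever produced the data
    ms.Position = 0;
    // ... write ms to the response, wrap it in a FileStreamResult, etc.
}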
2- Use MemoryTributary instead of MemoryStream.
You can read more about MemoryTributary here:
https://www.codeproject.com/Articles/348590/A-replacement-for-MemoryStream?msg=5257615#xx5257615xx
using System;
using System.Collections.Generic;
using System.IO;
using System.Runtime.InteropServices;
namespace LiquidEngine.Tools
{
/// <summary>
/// MemoryTributary is a re-implementation of MemoryStream that uses a dynamic list of byte arrays as a backing store, instead of a single byte array, the allocation
/// of which will fail for relatively small streams as it requires contiguous memory.
/// </summary>
public class MemoryTributary : Stream /* http://msdn.microsoft.com/en-us/library/system.io.stream.aspx */
{
#region Constructors
public MemoryTributary()
{
Position = 0;
}
public MemoryTributary(byte[] source)
{
this.Write(source, 0, source.Length);
Position = 0;
}
/* length is ignored because capacity has no meaning unless we implement an artificial limit */
public MemoryTributary(int length)
{
SetLength(length);
Position = length;
byte[] d = block; //access block to prompt the allocation of memory
Position = 0;
}
#endregion
#region Status Properties
public override bool CanRead
{
get { return true; }
}
public override bool CanSeek
{
get { return true; }
}
public override bool CanWrite
{
get { return true; }
}
#endregion
#region Public Properties
public override long Length
{
get { return length; }
}
public override long Position { get; set; }
#endregion
#region Members
protected long length = 0;
protected long blockSize = 65536;
protected List<byte[]> blocks = new List<byte[]>();
#endregion
#region Internal Properties
/* Use these properties to gain access to the appropriate block of memory for the current Position */
/// <summary>
/// The block of memory currently addressed by Position
/// </summary>
protected byte[] block
{
get
{
while (blocks.Count <= blockId)
blocks.Add(new byte[blockSize]);
return blocks[(int)blockId];
}
}
/// <summary>
/// The id of the block currently addressed by Position
/// </summary>
protected long blockId
{
get { return Position / blockSize; }
}
/// <summary>
/// The offset of the byte currently addressed by Position, into the block that contains it
/// </summary>
protected long blockOffset
{
get { return Position % blockSize; }
}
#endregion
#region Public Stream Methods
public override void Flush()
{
}
public override int Read(byte[] buffer, int offset, int count)
{
long lcount = (long)count;
if (lcount < 0)
{
throw new ArgumentOutOfRangeException("count", lcount, "Number of bytes to copy cannot be negative.");
}
long remaining = (length - Position);
if (lcount > remaining)
lcount = remaining;
if (buffer == null)
{
throw new ArgumentNullException("buffer", "Buffer cannot be null.");
}
if (offset < 0)
{
throw new ArgumentOutOfRangeException("offset",offset,"Destination offset cannot be negative.");
}
int read = 0;
long copysize = 0;
do
{
copysize = Math.Min(lcount, (blockSize - blockOffset));
Buffer.BlockCopy(block, (int)blockOffset, buffer, offset, (int)copysize);
lcount -= copysize;
offset += (int)copysize;
read += (int)copysize;
Position += copysize;
} while (lcount > 0);
return read;
}
public override long Seek(long offset, SeekOrigin origin)
{
switch (origin)
{
case SeekOrigin.Begin:
Position = offset;
break;
case SeekOrigin.Current:
Position += offset;
break;
case SeekOrigin.End:
Position = Length - offset;
break;
}
return Position;
}
public override void SetLength(long value)
{
length = value;
}
public override void Write(byte[] buffer, int offset, int count)
{
long initialPosition = Position;
int copysize;
try
{
do
{
copysize = Math.Min(count, (int)(blockSize - blockOffset));
EnsureCapacity(Position + copysize);
Buffer.BlockCopy(buffer, (int)offset, block, (int)blockOffset, copysize);
count -= copysize;
offset += copysize;
Position += copysize;
} while (count > 0);
}
catch (Exception)
{
Position = initialPosition;
throw; // rethrow without losing the original stack trace
}
}
public override int ReadByte()
{
if (Position >= length)
return -1;
byte b = block[blockOffset];
Position++;
return b;
}
public override void WriteByte(byte value)
{
EnsureCapacity(Position + 1);
block[blockOffset] = value;
Position++;
}
protected void EnsureCapacity(long intended_length)
{
if (intended_length > length)
length = (intended_length);
}
#endregion
#region IDispose
/* http://msdn.microsoft.com/en-us/library/fs2xkftw.aspx */
protected override void Dispose(bool disposing)
{
/* We do not currently use unmanaged resources */
base.Dispose(disposing);
}
#endregion
#region Public Additional Helper Methods
/// <summary>
/// Returns the entire content of the stream as a byte array. This is not safe because the call to new byte[] may
/// fail if the stream is large enough. Where possible use methods which operate on streams directly instead.
/// </summary>
/// <returns>A byte[] containing the current data in the stream</returns>
public byte[] ToArray()
{
long firstposition = Position;
Position = 0;
byte[] destination = new byte[Length];
Read(destination, 0, (int)Length);
Position = firstposition;
return destination;
}
/// <summary>
/// Reads length bytes from source into the this instance at the current position.
/// </summary>
/// <param name="source">The stream containing the data to copy</param>
/// <param name="length">The number of bytes to copy</param>
public void ReadFrom(Stream source, long length)
{
byte[] buffer = new byte[4096];
int read;
do
{
read = source.Read(buffer, 0, (int)Math.Min(4096, length));
length -= read;
this.Write(buffer, 0, read);
} while (length > 0);
}
/// <summary>
/// Writes the entire stream into destination, regardless of Position, which remains unchanged.
/// </summary>
/// <param name="destination">The stream to write the content of this stream to</param>
public void WriteTo(Stream destination)
{
long initialpos = Position;
Position = 0;
this.CopyTo(destination);
Position = initialpos;
}
#endregion
}
}
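A short usage sketch (sourceStream is assumed to be a seekable stream with a known Length):

// Buffer a large payload without needing one contiguous allocation.
using (var tributary = new MemoryTributary())
{
    tributary.ReadFrom(sourceStream, sourceStream.Length);
    tributary.Position = 0;
    tributary.WriteTo(Response.OutputStream); // or any other destination stream
}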
Related
I want to play back recordings from a low-end surveillance camera. The recordings are saved on the camera in .mp4 format, with the moov atom at the end. It's possible to retrieve a file via an HTTP request using digest authentication. The approximate size of each video file is 20 MB, but the download speed is only 3 Mbps, so downloading the whole file takes about 60 s. That is too long, so I want to start displaying the video before the whole file has been downloaded.
Web browsers handle this kind of problem by reading the end of the file at the beginning. I want to achieve the same goal using C# and LibVLCSharp, so I created this HttpMediaInput class.
public class HttpMediaInput : MediaInput
{
private static readonly NLog.Logger logger = NLog.LogManager.GetCurrentClassLogger();
private HttpClientHandler _handler;
private HttpClient _httpClient;
private string _url;
Stream _stream = null;
public HttpMediaInput(string url, string username, string password)
{
_url = url;
_handler = new HttpClientHandler() { Credentials = new NetworkCredential(username, password) };
_httpClient = new HttpClient(_handler);
}
public override bool Open(out ulong size)
{
size = ulong.MaxValue;
try
{
_stream = _httpClient.GetStreamAsync(_url).Result;
base.CanSeek = _stream.CanSeek;
return true;
}
catch (Exception ex)
{
logger.Error(ex, $"Exception occurred during sending stream request to url: {_url}");
return false;
}
}
public unsafe override int Read(IntPtr buf, uint len)
{
try
{
byte[] buffer = new byte[len];
int bytesReaded = _stream.Read(buffer, 0, buffer.Length);
logger.Trace($"Bytes readed: {bytesReaded}");
Span<byte> byteSpan = new Span<byte>(buf.ToPointer(), buffer.Length);
buffer.CopyTo(byteSpan);
return bytesReaded;
}
catch (Exception ex)
{
logger.Error(ex, "Stream read exception");
return -1;
}
}
...
}
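For completeness, I hand this input to LibVLCSharp roughly like this (this assumes LibVLCSharp 3.x, where Media has a constructor accepting a MediaInput; treat the exact signatures as my assumption):

Core.Initialize();                               // load the native libvlc libraries
var libvlc = new LibVLC();
var mediaInput = new HttpMediaInput(url, username, password);
var media = new Media(libvlc, mediaInput);       // Media(LibVLC, MediaInput, params string[])
var mediaPlayer = new MediaPlayer(media);
mediaPlayer.Play();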
It works great for mp4 files that have all the necessary metadata stored at the beginning, but no video is displayed in the case of my camera.
Assuming I will be able to download the moov atom from the mp4 using HTTP range requests, how do I provide this data to libvlc? Is it even possible?
I'm developing the application using C#, WPF and the .NET Framework.
VLC cannot play files from the camera directly because HTTP digest auth with MD5 is considered deprecated (there is a related issue in the VLC repo).
However, I was able to resolve the problem by following cube45's suggestion and implementing range requests.
public override bool Open(out ulong size)
{
size = ulong.MaxValue;
try
{
HttpRequestMessage requestMessage = new HttpRequestMessage { RequestUri = new Uri(_url) };
requestMessage.Headers.Range = new System.Net.Http.Headers.RangeHeaderValue();
requestMessage.Method = HttpMethod.Head;
var response = _httpClient.SendAsync(requestMessage).Result;
size = (ulong)response.Content.Headers.ContentLength;
_fileSize = size;
logger.Trace($"Received content lenght | {size}");
base.CanSeek = true;
return true;
}
catch (Exception ex)
{
logger.Error(ex, $"Exception occurred during sending head request to url: {_url}");
return false;
}
}
public unsafe override int Read(IntPtr buf, uint len)
{
try
{
HttpRequestMessage requestMessage = new HttpRequestMessage { RequestUri = new Uri(_url) };
long startReadPosition = (long)_currentPosition;
long stopReadPosition = (long)_currentPosition + ((long)_numberOfBytesToReadInOneRequest - 1);
if ((ulong)stopReadPosition > _fileSize)
{
stopReadPosition = (long)_fileSize;
}
requestMessage.Headers.Range = new System.Net.Http.Headers.RangeHeaderValue(startReadPosition, stopReadPosition);
requestMessage.Method = HttpMethod.Get;
HttpResponseMessage response = _httpClient.SendAsync(requestMessage).Result;
byte[] readedBytes = response.Content.ReadAsByteArrayAsync().Result;
int readedBytesCount = readedBytes.Length;
_currentPosition += (ulong)readedBytesCount;
logger.Trace($"Bytes readed | {readedBytesCount} | startReadPosition {startReadPosition} | stopReadPosition | {stopReadPosition}");
Span<byte> byteSpan = new Span<byte>(buf.ToPointer(), (int)len);
readedBytes.CopyTo(byteSpan);
return readedBytesCount;
}
catch (Exception ex)
{
logger.Error(ex, "Media reading general exception");
return -1;
}
}
public override bool Seek(ulong offset)
{
try
{
logger.Trace($"Seeking media with offset | {offset}");
_currentPosition = offset;
return true;
}
catch (Exception ex)
{
logger.Error(ex, "MediaInput seekeing general error");
return false;
}
}
This solution seems to work, but there are two unresolved problems:
There is about an 8 s lag between LibVLCSharp starting to read the stream and the video going live (the waiting time in a web browser is about 2 s).
Some part of the video file at the end is not displayed, because the buffer is too short to hold the whole file. Related thread
I am new to Netty and I am trying to design a solution for transferring a file from server to client over TCP, as follows:
1. Zero-copy file transfer for non-SSL transfers (using the default region of the file)
2. ChunkedFile transfer for SSL transfers
The client-server file transfer works this way:
1. The client sends the location of the file to be transferred
2. Based on the location (sent by the client), the server transfers the file to the client
The file content could be anything (string/image/PDF etc.) of any size.
Now, I get this TooLongFrameException at the server side when running the code below (server/client), even though the server only decodes the path received from the client:
io.netty.handler.codec.TooLongFrameException: Adjusted frame length exceeds 65536: 215542494061 - discarded
at io.netty.handler.codec.LengthFieldBasedFrameDecoder.fail(LengthFieldBasedFrameDecoder.java:522)
at io.netty.handler.codec.LengthFieldBasedFrameDecoder.failIfNecessary(LengthFieldBasedFrameDecoder.java:500)
Now, my question is:
Am I wrong about the order of the encoders and decoders and their configuration? If so, what is the correct way to configure them to receive a file from the server?
I went through a few related StackOverflow posts (SO Q1, SO Q2, SO Q3, SO Q4). I learned about LengthFieldBasedFrameDecoder, but I couldn't work out how to configure its corresponding LengthFieldPrepender on the server (encoding) side. Is it even required at all?
Please point me in the right direction.
FileClient:
public final class FileClient {
static final boolean SSL = System.getProperty("ssl") != null;
static final int PORT = Integer.parseInt(System.getProperty("port", SSL ? "8992" : "8023"));
static final String HOST = System.getProperty("host", "127.0.0.1");
public static void main(String[] args) throws Exception {
// Configure SSL.
final SslContext sslCtx;
if (SSL) {
SelfSignedCertificate ssc = new SelfSignedCertificate();
sslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey()).build();
} else {
sslCtx = null;
}
// Configure the client
EventLoopGroup group = new NioEventLoopGroup();
try {
Bootstrap b = new Bootstrap();
b.group(group)
.channel(NioSocketChannel.class)
.option(ChannelOption.SO_KEEPALIVE, true)
.handler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
if (sslCtx != null) {
pipeline.addLast(sslCtx.newHandler(ch.alloc(), HOST, PORT));
}
pipeline.addLast("frameDecoder", new LengthFieldBasedFrameDecoder(64*1024, 0, 8));
pipeline.addLast("frameEncoder", new LengthFieldPrepender(4));
pipeline.addLast(new ObjectDecoder(ClassResolvers.cacheDisabled(null)));
pipeline.addLast(new ObjectEncoder());
pipeline.addLast( new FileClientHandler()); }
});
// Start the client.
ChannelFuture f = b.connect(HOST,PORT).sync();
// Wait until the connection is closed.
f.channel().closeFuture().sync();
} finally {
// Shut down all event loops to terminate all threads.
group.shutdownGracefully();
}
}
}
FileClientHandler:
public class FileClientHandler extends ChannelInboundHandlerAdapter{
@Override
public void channelActive(ChannelHandlerContext ctx) {
String filePath = "/Users/Home/Documents/Data.pdf";
ctx.writeAndFlush(Unpooled.wrappedBuffer(filePath.getBytes()));
}
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
System.out.println("File Client Handler Read method...");
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
}
}
FileServer:
/**
* Server that accepts the path of a file and echoes back its content.
*/
public final class FileServer {
static final boolean SSL = System.getProperty("ssl") != null;
static final int PORT = Integer.parseInt(System.getProperty("port", SSL ? "8992" : "8023"));
public static void main(String[] args) throws Exception {
// Configure SSL.
final SslContext sslCtx;
if (SSL) {
SelfSignedCertificate ssc = new SelfSignedCertificate();
sslCtx = SslContextBuilder.forServer(ssc.certificate(), ssc.privateKey()).build();
} else {
sslCtx = null;
}
// Configure the server.
EventLoopGroup bossGroup = new NioEventLoopGroup(1);
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup).channel(NioServerSocketChannel.class)
.option(ChannelOption.SO_KEEPALIVE, true).handler(new LoggingHandler(LogLevel.INFO))
.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
public void initChannel(SocketChannel ch) throws Exception {
ChannelPipeline pipeline = ch.pipeline();
if (sslCtx != null) {
pipeline.addLast(sslCtx.newHandler(ch.alloc()));
}
pipeline.addLast("frameDecoder",new LengthFieldBasedFrameDecoder(64*1024, 0, 8));
pipeline.addLast("frameEncoder", new LengthFieldPrepender(4));
pipeline.addLast(new ObjectDecoder(ClassResolvers.cacheDisabled(null)));
pipeline.addLast(new ObjectEncoder());
pipeline.addLast(new ChunkedWriteHandler());
pipeline.addLast(new FileServerHandler());
}
});
// Start the server.
ChannelFuture f = b.bind(PORT).sync();
// Wait until the server socket is closed.
f.channel().closeFuture().sync();
} finally {
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
}
}
FileServerHandler:
public class FileServerHandler extends ChannelInboundHandlerAdapter {
@Override
public void channelRead(ChannelHandlerContext ctx, Object obj) throws Exception {
RandomAccessFile raf = null;
long length = -1;
try {
ByteBuf buff = (ByteBuf)obj;
byte[] bytes = new byte[buff.readableBytes()];
buff.readBytes(bytes);
String msg = new String(bytes);
raf = new RandomAccessFile(msg, "r");
length = raf.length();
} catch (Exception e) {
ctx.writeAndFlush("ERR: " + e.getClass().getSimpleName() + ": " + e.getMessage() + '\n');
return;
} finally {
if (length < 0 && raf != null) {
raf.close();
}
}
if (ctx.pipeline().get(SslHandler.class) == null) {
// SSL not enabled - can use zero-copy file transfer.
ctx.writeAndFlush(new DefaultFileRegion(raf.getChannel(), 0, length));
} else {
// SSL enabled - cannot use zero-copy file transfer.
ctx.writeAndFlush(new ChunkedFile(raf));
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
cause.printStackTrace();
System.out.println("Exception server.....");
}
}
I referred to Netty in Action and the code samples from here.
There is more than one thing wrong with your server/client. First, the SSL: on the client you don't initialize a SslContext for a server; instead you would do something like this:
sslCtx = SslContextBuilder.forClient().trustManager(InsecureTrustManagerFactory.INSTANCE).build();
On the server side of things you use a SelfSignedCertificate, which in itself isn't wrong, but remember that it should only be used for debugging purposes and not in production. In addition, you use ChannelOption.SO_KEEPALIVE, which isn't recommended since the keepalive interval is OS-dependent. Furthermore, you added the Object en-/decoders to your pipeline, which in your case don't do anything useful, so you can remove them.
You also configured your LengthFieldBasedFrameDecoder wrong, with an incomplete and incorrect parameter list. Per the Netty docs you need the constructor overload that defines both lengthFieldLength and initialBytesToStrip. Besides not stripping the length field, you also defined the wrong lengthFieldLength: it should match your LengthFieldPrepender's lengthFieldLength, which is 4 bytes. In conclusion, you could use the constructor like this:
new LengthFieldBasedFrameDecoder(64 * 1024, 0, 4, 0, 4)
In both your handlers you don't specify a Charset when encoding/decoding your String, which can lead to problems: if no Charset is given, the system default is used, and that can vary between machines. You could do something like this:
//to encode the String
string.getBytes(StandardCharsets.UTF_8);
//to decode the String
new String(bytes, StandardCharsets.UTF_8);
Additionally, you tried to use DefaultFileRegion when no SslHandler was added to the pipeline, which would have been fine if you hadn't added the length-field handlers, since they need a memory copy of the byte[] in order to prepend the length field. Moreover, I would recommend using ChunkedNioFile instead of ChunkedFile because it's non-blocking, which is always a good thing. You would do this like that:
new ChunkedNioFile(randomAccessFile.getChannel())
One final thing on how to decode a ChunkedFile: since it's split into chunks, you can simply assemble them together with a simple OutputStream. Here's an old file handler of mine:
public class FileTransferHandler extends SimpleChannelInboundHandler<ByteBuf> {
private final Path path;
private final int size;
private final int hash;
private OutputStream outputStream;
private int writtenBytes = 0;
private byte[] buffer = new byte[0];
protected FileTransferHandler(Path path, int size, int hash) {
this.path = path;
this.size = size;
this.hash = hash;
}
@Override
protected void channelRead0(ChannelHandlerContext ctx, ByteBuf byteBuf) throws Exception {
if(this.outputStream == null) {
Files.createDirectories(this.path.getParent());
if(Files.exists(this.path))
Files.delete(this.path);
this.outputStream = Files.newOutputStream(this.path, StandardOpenOption.CREATE, StandardOpenOption.APPEND);
}
int size = byteBuf.readableBytes();
if(size > this.buffer.length)
this.buffer = new byte[size];
byteBuf.readBytes(this.buffer, 0, size);
this.outputStream.write(this.buffer, 0, size);
this.writtenBytes += size;
if(this.writtenBytes == this.size && MurMur3.hash(this.path) != this.hash) {
System.err.println("Received file has wrong hash");
return;
}
}
@Override
public void channelInactive(ChannelHandlerContext ctx) throws Exception {
if(this.outputStream != null)
this.outputStream.close();
}
}
The custom pipeline component I developed reads the incoming stream to a folder and passes only some metadata through the MessageBox. I am using the one already available on Code Project.
using System;
using System.Collections.Generic;
using System.Text;
using Microsoft.BizTalk.Message.Interop;
using Microsoft.BizTalk.Component.Interop;
using System.IO;
namespace SendLargeFilesDecoder
{
[ComponentCategory(CategoryTypes.CATID_PipelineComponent)]
[ComponentCategory(CategoryTypes.CATID_Decoder)]
[System.Runtime.InteropServices.Guid("53fd04d5-8337-42c2-99eb-32ac96d1105a")]
public class SendLargeFileDecoder : IBaseComponent,
IComponentUI,
IComponent,
IPersistPropertyBag
{
#region IBaseComponent
private const string _description = "Pipeline component used to save large files to disk";
private const string _name = "SendLargeFileDecoded";
private const string _version = "1.0.0.0";
public string Description
{
get { return _description; }
}
public string Name
{
get { return _name; }
}
public string Version
{
get { return _version; }
}
#endregion
#region IComponentUI
private IntPtr _icon = new IntPtr();
public IntPtr Icon
{
get { return _icon; }
}
public System.Collections.IEnumerator Validate(object projectSystem)
{
return null;
}
#endregion
#region IComponent
public IBaseMessage Execute(IPipelineContext pContext, IBaseMessage pInMsg)
{
if (_largeFileLocation == null || _largeFileLocation.Length == 0)
_largeFileLocation = Path.GetTempPath();
if (_thresholdSize == 0)
_thresholdSize = 4096;
if (pInMsg.BodyPart.GetOriginalDataStream().Length > _thresholdSize)
{
Stream originalStream = pInMsg.BodyPart.GetOriginalDataStream();
string srcFileName = pInMsg.Context.Read("ReceivedFileName", "http://schemas.microsoft.com/BizTalk/2003/file-properties").ToString();
string largeFilePath = _largeFileLocation + System.IO.Path.GetFileName(srcFileName);
FileStream fs = new FileStream(largeFilePath, FileMode.Create);
// Write message to disk
byte[] buffer = new byte[1];
int bytesRead = originalStream.Read(buffer, 0, buffer.Length);
while (bytesRead != 0)
{
fs.Flush();
fs.Write(buffer, 0, buffer.Length);
bytesRead = originalStream.Read(buffer, 0, buffer.Length);
}
fs.Flush();
fs.Close();
// Create a small xml file
string xmlInfo = "<MsgInfo xmlns='http://SendLargeFiles'><LargeFilePath>" + largeFilePath + "</LargeFilePath></MsgInfo>";
byte[] byteArray = System.Text.Encoding.UTF8.GetBytes(xmlInfo);
MemoryStream ms = new MemoryStream(byteArray);
pInMsg.BodyPart.Data = ms;
}
return pInMsg;
}
#endregion
#region IPersistPropertyBag
private string _largeFileLocation;
private int _thresholdSize;
public string LargeFileLocation
{
get { return _largeFileLocation; }
set { _largeFileLocation = value; }
}
public int ThresholdSize
{
get { return _thresholdSize; }
set { _thresholdSize = value; }
}
public void GetClassID(out Guid classID)
{
classID = new Guid("CA47347C-010C-4B21-BFCB-22F153FA141F");
}
public void InitNew()
{
}
public void Load(IPropertyBag propertyBag, int errorLog)
{
object val1 = null;
object val2 = null;
try
{
propertyBag.Read("LargeFileLocation", out val1, 0);
propertyBag.Read("ThresholdSize", out val2, 0);
}
catch (ArgumentException)
{
}
catch (Exception ex)
{
throw new ApplicationException("Error reading PropertyBag: " + ex.Message);
}
if (val1 != null)
_largeFileLocation = (string)val1;
if (val2 != null)
_thresholdSize = (int)val2;
}
public void Save(IPropertyBag propertyBag, bool clearDirty, bool saveAllProperties)
{
object val1 = (object)_largeFileLocation;
propertyBag.Write("LargeFileLocation", ref val1);
object val2 = (object)_thresholdSize;
propertyBag.Write("ThresholdSize", ref val2);
}
#endregion
}
}
The issue here is that LargeFileLocation is configurable in the receive pipeline. If I set a location for the first time, for example E:\ABC\, the files are written to that location.
But if I then change the location to E:\DEF\, the files are still being written to the previous location E:\ABC\. I tried creating a new BizTalk application and deleting the old one, but the files are still dropped into the old location E:\ABC\, and I am not sure why.
Most likely the issue is with the property definition of LargeFileLocation and its implementation and usage in the IPersistPropertyBag interface. You can try the following things:
Check whether you added the E:\ABC path in the pipeline at design time. If yes, remove it from there, set it in the Admin console from the first time, and see how it behaves; my feeling is that it will fall back to the temp path location.
Change the properties and the IPersistPropertyBag implementation to use auto-properties such as public string LargeFileLocation { get; set; }, i.e. no backing fields like _largeFileLocation (a rough sketch follows).
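A rough sketch of what that change could look like for the Load/Save pair (only the relevant members are shown; this is an illustration, not the original component):

public string LargeFileLocation { get; set; }
public int ThresholdSize { get; set; }

public void Load(IPropertyBag propertyBag, int errorLog)
{
    object val = null;
    try { propertyBag.Read("LargeFileLocation", out val, 0); } catch (ArgumentException) { }
    if (val != null) LargeFileLocation = (string)val;

    val = null;
    try { propertyBag.Read("ThresholdSize", out val, 0); } catch (ArgumentException) { }
    if (val != null) ThresholdSize = (int)val;
}

public void Save(IPropertyBag propertyBag, bool clearDirty, bool saveAllProperties)
{
    object val1 = LargeFileLocation;
    propertyBag.Write("LargeFileLocation", ref val1);
    object val2 = ThresholdSize;
    propertyBag.Write("ThresholdSize", ref val2);
}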
Have you deleted the DLL in %BizTalkFolder%\Pipeline Components?
To refresh the pipeline component, you need to delete the old DLL file and remove the item from the VS toolbox, then restart VS and deploy it again.
And for LargeFileLocation, I suggest you expose it as a design-time property so you can configure it.
I want to keep several different values in one cookie, and I wrote this class to manage it. I want to know: is this a good approach? For example, whether the user has JS enabled: when a user opens their first page on my site, I write their GMT time to the session and store the JS state with this manager (the GMT time comes via an AJAX request from JS). I want to keep several values in this one cookie (up to 10). Any advice or tips?
/// <summary>
/// CookiesSettings
/// </summary>
internal enum CookieSetting
{
IsJsEnable = 1,
}
internal class CookieSettingValue
{
public CookieSetting Type { get; set; }
public string Value { get; set; }
}
/// <summary>
/// Cookies to long time of expire
/// </summary>
internal class CookieManager
{
//User Public Settings
private const string CookieValueName = "UPSettings";
private string[] DelimeterValue = new string[1] { "#" };
//cookie data
private List<CookieSettingValue> _data;
public CookieManager()
{
_data = LoadFromCookies();
}
#region Save and load
/// <summary>
/// Load from cookie string value
/// </summary>
private List<CookieSettingValue> LoadFromCookies()
{
if (!CookieHelper.RequestCookies.Contains(CookieValueName))
return new List<CookieSettingValue>();
_data = new List<CookieSettingValue>();
string data = CookieHelper.RequestCookies[CookieValueName].ToString();
string[] dels = data.Split(DelimeterValue, StringSplitOptions.RemoveEmptyEntries);
foreach (string delValue in dels)
{
int eqIndex = delValue.IndexOf("=");
if (eqIndex == -1)
continue;
int cookieType = ValidationHelper.GetInteger(delValue.Substring(0, eqIndex), 0);
if (!Enum.IsDefined(typeof(CookieSetting), cookieType))
continue;
CookieSettingValue value = new CookieSettingValue();
value.Type = (CookieSetting)cookieType;
value.Value = delValue.Substring(eqIndex + 1, delValue.Length - eqIndex-1);
_data.Add(value);
}
return _data;
}
public void Save()
{
CookieHelper.SetValue(CookieValueName, ToCookie(), DateTime.UtcNow.AddMonths(6));
}
#endregion
#region Get value
public bool Bool(CookieSetting type, bool defaultValue)
{
CookieSettingValue inList = _data.SingleOrDefault(x => x.Type == type);
if (inList == null)
return defaultValue;
return ValidationHelper.GetBoolean(inList.Value, defaultValue);
}
#endregion
#region Set value
public void SetValue(CookieSetting type, int value)
{
CookieSettingValue inList = _data.SingleOrDefault(x => x.Type == type);
if (inList == null)
{
inList = new CookieSettingValue();
inList.Type = type;
inList.Value = value.ToString();
_data.Add(inList);
}
else
{
inList.Value = value.ToString();
}
}
public void SetValue(CookieSetting type, bool value)
{
CookieSettingValue inList = _data.SingleOrDefault(x => x.Type == type);
if (inList == null)
{
inList = new CookieSettingValue();
inList.Type = type;
inList.Value = value.ToString();
_data.Add(inList);
}
else
{
inList.Value = value.ToString();
}
}
#endregion
#region Private methods
private string ToCookie()
{
StringBuilder sb = new StringBuilder();
for (int i = 0; i < _data.Count; i++)
{
sb.Append((int)_data[i].Type);
sb.Append("=");
sb.Append(_data[i].Value);
sb.Append(DelimeterValue[0]);
}
return sb.ToString();
}
/// <summary>
/// Cookie length in bytes. Browsers typically limit a single cookie to about 4 KB.
/// </summary>
/// <returns></returns>
private int GetLength()
{
return System.Text.Encoding.UTF8.GetByteCount(ToCookie());
}
#endregion
}
P.S. I want to keep a lot of data in one cookie to pack the values together and decrease the cookie count.
Don't put data into cookies. All cookie data is uploaded from the client on every request to your web site. Even users with good broadband connections often have very limited upload bandwidth, and so storing significant data in cookies can be very bad for perceived performance.
Instead, simply store a value in the cookie that you can use as a lookup to a database table when needed.
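A rough sketch of that idea (the helper methods, types and cookie name here are hypothetical, just to show the shape of it):

// Write: store only an opaque key in the cookie.
string settingsKey = Guid.NewGuid().ToString("N");
Response.Cookies.Add(new HttpCookie("UPSettingsKey", settingsKey)
{
    Expires = DateTime.UtcNow.AddMonths(6)
});
SaveUserSettingsToDatabase(settingsKey, settings); // hypothetical persistence call

// Read: look the settings up server-side using the key.
HttpCookie cookie = Request.Cookies["UPSettingsKey"];
UserSettings userSettings = cookie != null
    ? LoadUserSettingsFromDatabase(cookie.Value)   // hypothetical lookup
    : UserSettings.Default;                        // hypothetical default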
Don't put data into cookies. What Joel says stands, and I'd like to add one more thing: browsers sometimes behave strangely if you have a large amount of data in your cookie, and you get problems that you cannot even imagine where they come from. This is from my own experience.
By "behave strangely" I mean: they show blank white pages, or they cannot load the page and you watch the cursor wait and wait, or they lose the cookie, or you lose data from your cookie and you cannot log in, for example, and other things like that.
While encrypting and decrypting a string with the RSA provider I am getting this error:
RSA Data decryption error. The data to be decrypted exceeds the maximum for this modulus of 64 bytes.
Does anyone have an idea how to solve this error?
internal sealed class RSAProvider
{
#region key store class
[Serializable]
private struct rsaKey
{
public rsaKey(RSAParameters rsaKeyInfo)
{
D = rsaKeyInfo.D;
DP = rsaKeyInfo.DP;
DQ = rsaKeyInfo.DQ;
Exponent = rsaKeyInfo.Exponent;
InverseQ = rsaKeyInfo.InverseQ;
Modulus = rsaKeyInfo.Modulus;
P = rsaKeyInfo.P;
Q = rsaKeyInfo.Q;
}
public RSAParameters CreateRSAKey()
{
RSAParameters rsaKeyInfo = new RSAParameters();
rsaKeyInfo.D = D;
rsaKeyInfo.DP = DP;
rsaKeyInfo.DQ = DQ;
rsaKeyInfo.Exponent = Exponent;
rsaKeyInfo.InverseQ = InverseQ;
rsaKeyInfo.Modulus = Modulus;
rsaKeyInfo.P = P;
rsaKeyInfo.Q = Q;
return rsaKeyInfo;
}
public byte[] D;
public byte[] DP;
public byte[] DQ;
public byte[] Exponent;
public byte[] InverseQ;
public byte[] Modulus;
public byte[] P;
public byte[] Q;
}
#endregion
private static RSAParameters rsaKeyParameters;
static RSAProvider()
{
string rsaKeyString = System.Configuration.ConfigurationSettings.AppSettings["RSAKey"];
if(rsaKeyString != null)
{
rsaKeyParameters = GetKeyByString(rsaKeyString);
}
}
private RSAProvider()
{
}
private static RSAParameters RSAKeyInfo
{
get
{
return rsaKeyParameters;
}
}
private static bool DoOAEPPadding
{
get
{
return false;
}
}
public static string GenerateKey(int keySize)
{
//Create a new instance of RSACryptoServiceProvider to generate
//public and private key data.
RSACryptoServiceProvider RSA = new RSACryptoServiceProvider(keySize);
RSAParameters rsaKeyInfo = RSA.ExportParameters(true);
return GetKeyString(rsaKeyInfo);
}
#region Encrypt
public static byte[] Encrypt(byte[] dataToEncrypt, string rsaKeyString)
{
RSAParameters rsaKeyInfo = GetKeyByString(rsaKeyString);
return Encrypt(dataToEncrypt, rsaKeyInfo);
}
public static byte[] Encrypt(byte[] dataToEncrypt, RSAParameters rsaKeyInfo)
{
try
{
//Create a new instance of RSACryptoServiceProvider.
// Common.Identity.ImpersonateValidUser("prana", "eetplpvt", "Avdhoota1985");
RSACryptoServiceProvider RSA = new RSACryptoServiceProvider();
//Import the RSA Key information. This only needs
//toinclude the public key information.
RSA.ImportParameters(rsaKeyInfo);
//Encrypt the passed byte array and specify OAEP padding.
//OAEP padding is only available on Microsoft Windows XP or
//later.
//return RSA.Encrypt(dataToEncrypt, DoOAEPPadding);
byte[] data = RSA.Encrypt(dataToEncrypt, DoOAEPPadding);
RSA.Clear();
//Common.Identity.UndoImpersonation();
return data;
}
//Catch and display a CryptographicException
//to the console.
catch(CryptographicException e)
{
// Updated By Divya Bhalodia on 27th June 2008 for Localization task
//throw new Exception("Data encryption error.", e);
Common.EnumLocalization.EnumLocalization loc = new Common.EnumLocalization.EnumLocalization(ASP.BL.ApplicationUsers.ApplicationUserController.CurrentUserCulture.Code, ASP.BL.Applications.ApplicationController.CurrentApplicationInfo.ItemId);
throw new Exception(loc.LocalizeString("RSA Data encryption error.") + e.Message, e);
// end Updated - Divya
}
}
public static byte[] Encrypt(byte[] dataToEncrypt)
{
return Encrypt(dataToEncrypt, RSAKeyInfo);
}
#endregion
#region Decrypt
public static byte[] Decrypt(byte[] dataToDecrypt, string rsaKeyString, bool doOAEPPadding)
{
RSAParameters rsaKeyInfo = GetKeyByString(rsaKeyString);
return Decrypt(dataToDecrypt, rsaKeyInfo, doOAEPPadding);
}
public static byte[] Decrypt(byte[] dataToDecrypt, RSAParameters rsaKeyInfo, bool doOAEPPadding)
{
try
{
//Create a new instance of RSACryptoServiceProvider.
Common.Identity.ImpersonateValidUser();
RSACryptoServiceProvider RSA = new RSACryptoServiceProvider();
//Import the RSA Key information. This needs
//to include the private key information.
RSA.ImportParameters(rsaKeyInfo);
//Decrypt the passed byte array and specify OAEP padding.
//OAEP padding is only available on Microsoft Windows XP or
//later.
//return RSA.Decrypt(dataToDecrypt, doOAEPPadding);
byte[] data = RSA.Decrypt(dataToDecrypt, doOAEPPadding);
RSA.Clear();
Common.Identity.UndoImpersonation();
return data;
}
//Catch and display a CryptographicException
//to the console.
catch(CryptographicException e)
{
// Updated By Divya Bhalodia on 27th June 2008 for Localization task
//throw new Exception("Data decryption error.", e);
Common.EnumLocalization.EnumLocalization loc = new Common.EnumLocalization.EnumLocalization(ASP.BL.ApplicationUsers.ApplicationUserController.CurrentUserCulture.Code, ASP.BL.Applications.ApplicationController.CurrentApplicationInfo.ItemId);
throw new Exception(loc.LocalizeString("RSA Data decryption error.") + e.Message, e);
// end Updated - Divya
}
}
public static byte[] Decrypt(byte[] dataToDecrypt)
{
return Decrypt(dataToDecrypt, RSAKeyInfo, DoOAEPPadding);
}
#endregion
#region Additional functions
private static string GetKeyString(RSAParameters rsaKeyInfo)
{
byte[] tmp;
rsaKey k = new rsaKey(rsaKeyInfo);
BinaryFormatter formater = new BinaryFormatter();
using(MemoryStream stream = new MemoryStream())
{
formater.Serialize(stream, k);
tmp = stream.ToArray();
}
Code(tmp);
return Convert.ToBase64String(tmp);
}
private static RSAParameters GetKeyByString(string rsaKeyString)
{
rsaKey k;
byte[] tmp = Convert.FromBase64String(rsaKeyString);
Code(tmp);
BinaryFormatter formater = new BinaryFormatter();
using(MemoryStream stream = new MemoryStream(tmp))
{
k = (rsaKey)formater.Deserialize(stream);
}
return k.CreateRSAKey();
}
private static void Code(byte[] tmp)
{
byte mask1 = 0x55;
byte mask3 = 0xB9;
byte mask4 = 0xCF;
for(int i = 0; i
I've encountered similar problems, and there are two things you can do to work around them.
You need to ensure that the data you are encrypting is shorter than the key you are using. So if your key is 1024 bits, make sure you only pass in, say, 1000 bits. To do this you need to split your byte array into smaller chunks, encrypt each chunk, and then store the encrypted values in an array or a string. So instead of encrypting 1 string you encrypt, say, 5 strings.
When storing this information as a string, make sure that all numbers are the same length; so if the formatter returns 15, store it as 015, so that later you can simply divide by 3 to get each byte back into the array.
To decrypt your data you just read the length of the string and determine how many chunks to decrypt. Decrypt these one by one and then you can recreate the object from the decrypted byte array.
If you would like actual code, contact me personally and I'll help you with a script that does this for you.
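For what it's worth, here is a minimal sketch of the chunking idea. It assumes PKCS#1 v1.5 padding (fOAEP = false), which costs 11 bytes of overhead per block; in practice a hybrid scheme (RSA-encrypt a random AES key, AES-encrypt the payload) is usually the better fix for large data.

// Sketch: split the plaintext into blocks small enough for the key and encrypt each block.
// Needs: using System; using System.Collections.Generic; using System.Security.Cryptography;
public static byte[][] EncryptInChunks(byte[] data, RSAParameters publicKey)
{
    using (RSACryptoServiceProvider rsa = new RSACryptoServiceProvider())
    {
        rsa.ImportParameters(publicKey);
        int maxBlock = (rsa.KeySize / 8) - 11;   // PKCS#1 v1.5 overhead is 11 bytes
        List<byte[]> blocks = new List<byte[]>();
        for (int offset = 0; offset < data.Length; offset += maxBlock)
        {
            int size = Math.Min(maxBlock, data.Length - offset);
            byte[] chunk = new byte[size];
            Buffer.BlockCopy(data, offset, chunk, 0, size);
            blocks.Add(rsa.Encrypt(chunk, false));
        }
        return blocks.ToArray();
    }
}

Decryption is the mirror image: decrypt each block with the private key and concatenate the results.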