I am creating a client-side application which needs to create a log of user activity, but for various reasons this log must not be human-readable.
Currently for my development I am creating a plain text log which looks something like this:
12/03/2009 08:34:21 -> User 'Bob' logged in
12/03/2009 08:34:28 -> Navigated to config page
12/03/2009 08:34:32 -> Option x changed to y
When I deploy my application, the log must not be in plain text, so all text must be encrypted. This doesn't appear to be straightforward to achieve as I need the log file to dynamically update as each entry is added.
The approach I was thinking about was to create a binary file, encrypt each log entry in isolation and then append it to the binary file with some suitable demarcation between each entry.
Does anyone know of any common approaches to this problem? I'm sure there has to be a better solution!
Don't encrypt individual log entries separately and write them to a file as suggested by other posters, because an attacker would easily be able to identify patterns in the log file. See the block cipher modes Wikipedia entry to learn more about this problem.
Instead, make sure that the encryption of a log entry depends on the previous log entries. Although this has some drawbacks (you cannot decrypt individual log entries as you always need to decrypt the entire file), it makes the encryption a lot stronger. For our own logging library, SmartInspect, we use AES encryption and the CBC mode to avoid the pattern problem. Feel free to give SmartInspect a try if a commercial solution would be suitable.
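For illustration, here is a minimal sketch of that approach using the standard javax.crypto API; the key handling and file layout are assumptions made for the example, not SmartInspect's actual implementation:
import javax.crypto.Cipher;
import javax.crypto.CipherOutputStream;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.IvParameterSpec;
import java.io.FileOutputStream;
import java.security.SecureRandom;

public class CbcLogWriter {
    public static void main(String[] args) throws Exception {
        // Assumption for the sketch: a fresh key and IV. Real code must persist
        // the key securely and store the IV (e.g. at the start of the file).
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        byte[] iv = new byte[16];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));

        // Because of the CBC chaining, each entry's ciphertext depends on all
        // previous entries, so identical lines do not produce identical output.
        try (CipherOutputStream out =
                 new CipherOutputStream(new FileOutputStream("app.log.enc"), cipher)) {
            out.write("12/03/2009 08:34:21 -> User 'Bob' logged in\n".getBytes("UTF-8"));
            out.write("12/03/2009 08:34:28 -> Navigated to config page\n".getBytes("UTF-8"));
        }
    }
}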
This is not really my thing, I'll admit that readily, but can't you encrypt each entry individually and then append it to the logfile? If you refrain from encrypting the timestamp, you can easily find the entries you are looking for and decrypt only those when needed.
My point being mainly that appending individual encrypted entries to a file does not necessarily mean appending binary entries to a binary file. Encryption with (for example) gpg will yield ASCII garble that can be appended to an ASCII file. Would that solve your problem?
FWIW, the one time I needed an encrypted logger I used a symmetric key (for performance reasons) to encrypt the actual log entries.
The symmetric 'log file key' was then encrypted under a public key and stored at the beginning of the log file and a separate log reader used the private key to decrypt the 'log file key' and read the entries.
The whole thing was implemented using log4j and an XML log file format (to make it easier for the reader to parse) and each time the log files were rolled over a new 'log file key' was generated.
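The key-wrapping part of that scheme can be sketched with the standard JCE API; the freshly generated RSA pair and the file layout here are assumptions for illustration, not the original log4j implementation:
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.io.FileOutputStream;
import java.security.KeyPair;
import java.security.KeyPairGenerator;

public class LogKeyWrapping {
    public static void main(String[] args) throws Exception {
        // Symmetric 'log file key' used to encrypt the actual entries (fast).
        SecretKey logFileKey = KeyGenerator.getInstance("AES").generateKey();

        // Assumed for the sketch: a freshly generated RSA pair. In practice the
        // public key ships with the app and the private key stays with the reader.
        KeyPair rsa = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        // Wrap the log file key under the public key and store it at the
        // beginning of the log file; only the holder of the private key
        // can unwrap it and decrypt the entries.
        Cipher wrapper = Cipher.getInstance("RSA");
        wrapper.init(Cipher.WRAP_MODE, rsa.getPublic());
        byte[] wrappedKey = wrapper.wrap(logFileKey);
        try (FileOutputStream out = new FileOutputStream("app.log.enc")) {
            out.write(wrappedKey);
            // ... followed by entries encrypted under logFileKey ...
        }
    }
}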
Assuming you're using some sort of logging framework, e.g. log4j et al., you should be able to create a custom implementation of Appender (or similar) that encrypts each entry, as @wzzrd suggested.
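A minimal sketch of such an appender against log4j 1.x's AppenderSkeleton; the per-entry "AES" cipher and the injected writer are assumptions for the example (see the pattern-problem caveats in the accepted answer):
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.spi.LoggingEvent;
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import java.io.PrintWriter;
import java.util.Base64;

public class EncryptingAppender extends AppenderSkeleton {
    private final SecretKey key;
    private final PrintWriter out;

    public EncryptingAppender(SecretKey key, PrintWriter out) {
        this.key = key;
        this.out = out;
    }

    @Override
    protected void append(LoggingEvent event) {
        try {
            // One cipher call per entry, Base64 so the result stays printable.
            Cipher cipher = Cipher.getInstance("AES");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            byte[] ct = cipher.doFinal(this.layout.format(event).getBytes("UTF-8"));
            out.println(Base64.getEncoder().encodeToString(ct));
        } catch (Exception e) {
            errorHandler.error("Failed to encrypt log entry", e, 0);
        }
    }

    public boolean requiresLayout() { return true; }
    public void close() { this.closed = true; out.close(); }
}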
It is not clear to me whether your concern is the security or the implementation.
A simple implementation is to hook up a stream encryptor. A stream encryptor maintains its own state and can encrypt on the fly.
StreamEncryptor<AES_128> encryptor;
encryptor.connectSink(new std::ofstream("app.log"));
encryptor.write(line);
encryptor.write(line2);
...
Very old question and I'm sure the tech world has made much progress, but FWIW Bruce Schneier and John Kelsey wrote a paper on how to do this: https://www.schneier.com/paper-auditlogs.html
The context is not just security but also preventing the corruption or change of existing log file data if the system that hosts the log/audit files is compromised.
Encrypting each log entry individually would decrease the security of your ciphertext a lot, especially because you're working with very predictable plaintext.
Here's what you can do:
Use symmetric encryption (preferably AES)
Pick a random master key
Pick a security window (5 minutes, 10 minutes, etc.)
Then, pick a random temporary key at the beginning of each window (every 5 minutes, every 10 minutes, etc.)
Encrypt each log item separately using the temporary key and append to a temporary log file.
When the window's closed (the predetermined time is up), decrypt each element using the temporary key, decrypt the master log file using the master key, merge the files, and encrypt using the master key.
Then, pick a new temporary key and continue.
Also, change the master key each time you rotate your master log file (every day, every week, etc.)
This should provide enough security.
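A minimal sketch of the window-based key rotation, assuming AES keys from the standard Java KeyGenerator; the merge step is only summarized in a comment:
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class WindowedKeyManager {
    private static final long WINDOW_MILLIS = 5 * 60 * 1000; // 5-minute window
    private SecretKey temporaryKey;
    private long windowStart;

    /** Returns the key for the current window, rotating it when the window closes. */
    public synchronized SecretKey currentKey() throws Exception {
        long now = System.currentTimeMillis();
        if (temporaryKey == null || now - windowStart >= WINDOW_MILLIS) {
            // Window closed: here you would decrypt the temporary log file,
            // merge it into the master log, and re-encrypt under the master key.
            temporaryKey = KeyGenerator.getInstance("AES").generateKey();
            windowStart = now;
        }
        return temporaryKey;
    }
}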
I'm wondering what kind of application you're writing. A virus or a Trojan horse? Anyway ...
Encrypt each entry alone, convert it to some string (Base64, for example) and then log that string as the "message".
This allows you to keep parts of the file readable and only encrypt important parts.
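For example, with the JDK's AES support and java.util.Base64 (key handling deliberately simplified; a real implementation should use a mode with a random IV to avoid the pattern problem mentioned above):
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.util.Base64;

public class EntryEncryptor {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        // Each entry is encrypted on its own, then Base64-encoded so the
        // result can be logged as an ordinary text "message". The timestamp
        // stays readable so you can still locate entries.
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ciphertext = cipher.doFinal("Option x changed to y".getBytes("UTF-8"));
        String message = Base64.getEncoder().encodeToString(ciphertext);
        System.out.println("12/03/2009 08:34:32 -> " + message);
    }
}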
Notice that there is another side to this coin: if you create a fully encrypted file and ask the user for it, she can't know what you will learn from the file. Therefore, you should encrypt as little as possible (passwords, IP addresses, customer data) to make it possible for the legal department to verify what data is leaving.
A much better approach would be to use an obfuscator for the log file, which simply replaces certain patterns with "XXX". You can still see what happened, and when you need a specific piece of data, you can ask for it.
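A toy version of such an obfuscator; the two patterns (IPv4 addresses and password fields) are examples only:
import java.util.regex.Pattern;

public class LogObfuscator {
    // Example patterns: IPv4 addresses and anything tagged as a password.
    private static final Pattern IP = Pattern.compile("\\b\\d{1,3}(\\.\\d{1,3}){3}\\b");
    private static final Pattern PASSWORD = Pattern.compile("(?i)(password=)\\S+");

    public static String obfuscate(String line) {
        line = IP.matcher(line).replaceAll("XXX");
        line = PASSWORD.matcher(line).replaceAll("$1XXX");
        return line;
    }
}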
[EDIT] This story has more implications than you'd think at first glance. It effectively means that a user can't see what's in the file. "User" doesn't necessarily include "cracker": a cracker will concentrate on encrypted files (since they are probably more important). That's the reason for the old saying: as soon as someone gets access to the machine, there is no way to prevent him from doing anything on it. Or to put it another way: just because you don't know how doesn't mean someone else also doesn't. If you think you have nothing to hide, you haven't thought about yourself.
Also, there is the issue of liability. Say some data leaks onto the Internet after you got a copy of the logs. Since the user has no idea what is in the log files, how can you prove in court that you weren't the leak? Bosses could also ask for the log files to monitor their pawns, asking to have them encrypted so the peasants can't notice and whine about it (or sue, the scum!).
Or look at it from a completely different angle: if there were no log file, no one could abuse it. How about enabling debugging only in case of an emergency? I've configured log4j to keep the last 200 log messages in a buffer. If an ERROR is logged, I dump the 200 messages to the log. Rationale: I really don't care what happens during the day; I only care about bugs. Using JMX, it's simple to set the debug level to ERROR and lower it remotely at runtime when you need more details.
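Roughly, that buffer-and-dump setup looks like this as a custom log4j 1.x appender (a sketch of the idea, not my exact configuration):
import org.apache.log4j.AppenderSkeleton;
import org.apache.log4j.Level;
import org.apache.log4j.spi.LoggingEvent;
import java.util.ArrayDeque;
import java.util.Deque;

public class BufferedDumpAppender extends AppenderSkeleton {
    private static final int CAPACITY = 200;
    private final Deque<LoggingEvent> buffer = new ArrayDeque<>();

    @Override
    protected synchronized void append(LoggingEvent event) {
        if (buffer.size() == CAPACITY)
            buffer.removeFirst(); // drop the oldest message
        buffer.addLast(event);
        // On ERROR, dump the buffered history so the log shows the lead-up.
        if (event.getLevel().isGreaterOrEqual(Level.ERROR)) {
            for (LoggingEvent buffered : buffer)
                System.out.println(this.layout.format(buffered)); // stand-in for a real writer
            buffer.clear();
        }
    }

    public boolean requiresLayout() { return true; }
    public void close() { this.closed = true; }
}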
For .Net see Microsoft Application blocks for log and encrypt functionality:
http://msdn.microsoft.com/en-us/library/dd203099.aspx
I would append encrypted log entries to a flat text file using suitable demarcation between each entry for the decryption to work.
I have the exact same need as you. Some guy called 'maybeWeCouldStealAVa' wrote a good implementation in How to append to AES encrypted file; however, it suffered from not being flushable: you would have to close and reopen the file each time you flush a message to be sure not to lose anything.
So I've written my own class to do this:
import javax.crypto.*;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.*;
import java.security.*;
public class FlushableCipherOutputStream extends OutputStream
{
private static final int HEADER_LENGTH = 16;
private SecretKeySpec key;
private RandomAccessFile seekableFile;
private boolean flushGoesStraightToDisk;
private Cipher cipher;
private boolean needToRestoreCipherState;
/** the buffer holding one byte of incoming data */
private byte[] ibuffer = new byte[1];
/** the buffer holding data ready to be written out */
private byte[] obuffer;
/** Each time you call 'flush()', the data will be written to the operating system level, immediately available
* for other processes to read. However this is not the same as writing to disk, which might save you some
* data if there's a sudden loss of power to the computer. To protect against that, set 'flushGoesStraightToDisk=true'.
* Most people set that to 'false'. */
public FlushableCipherOutputStream(String fnm, SecretKeySpec _key, boolean append, boolean _flushGoesStraightToDisk)
throws IOException
{
this(new File(fnm), _key, append,_flushGoesStraightToDisk);
}
public FlushableCipherOutputStream(File file, SecretKeySpec _key, boolean append, boolean _flushGoesStraightToDisk)
throws IOException
{
super();
if (! append)
file.delete();
seekableFile = new RandomAccessFile(file,"rw");
flushGoesStraightToDisk = _flushGoesStraightToDisk;
key = _key;
try {
cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
byte[] iv = new byte[16];
byte[] headerBytes = new byte[HEADER_LENGTH];
long fileLen = seekableFile.length();
if (fileLen % 16L != 0L) {
throw new IllegalArgumentException("Invalid file length (not a multiple of block size)");
} else if (fileLen == 0L) {
// new file
// You can write a 16 byte file header here, including some file format number to represent the
// encryption format, in case you need to change the key or algorithm. E.g. "100" = v1.0.0
headerBytes[0] = 100;
seekableFile.write(headerBytes);
// Now appending the first IV
SecureRandom sr = new SecureRandom();
sr.nextBytes(iv);
seekableFile.write(iv);
cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
} else if (fileLen <= 16 + HEADER_LENGTH) {
throw new IllegalArgumentException("Invalid file length (need 2 blocks for iv and data)");
} else {
// file length is at least 2 blocks
needToRestoreCipherState = true;
}
} catch (InvalidKeyException e) {
throw new IOException(e.getMessage());
} catch (NoSuchAlgorithmException e) {
throw new IOException(e.getMessage());
} catch (NoSuchPaddingException e) {
throw new IOException(e.getMessage());
} catch (InvalidAlgorithmParameterException e) {
throw new IOException(e.getMessage());
}
}
/**
* Writes one _byte_ to this output stream.
*/
public void write(int b) throws IOException {
if (needToRestoreCipherState)
restoreStateOfCipher();
ibuffer[0] = (byte) b;
obuffer = cipher.update(ibuffer, 0, 1);
if (obuffer != null) {
seekableFile.write(obuffer);
obuffer = null;
}
}
/** Writes a byte array to this output stream. */
public void write(byte data[]) throws IOException {
write(data, 0, data.length);
}
/**
* Writes <code>len</code> bytes from the specified byte array
* starting at offset <code>off</code> to this output stream.
*
* @param data the data.
* @param off the start offset in the data.
* @param len the number of bytes to write.
*/
public void write(byte data[], int off, int len) throws IOException
{
if (needToRestoreCipherState)
restoreStateOfCipher();
obuffer = cipher.update(data, off, len);
if (obuffer != null) {
seekableFile.write(obuffer);
obuffer = null;
}
}
/** The tricky stuff happens here. We finalise the cipher, write it out, but then rewind the
* stream so that we can add more bytes without padding. */
public void flush() throws IOException
{
try {
if (needToRestoreCipherState)
return; // It must have already been flushed.
byte[] obuffer = cipher.doFinal();
if (obuffer != null) {
seekableFile.write(obuffer);
if (flushGoesStraightToDisk)
seekableFile.getFD().sync();
needToRestoreCipherState = true;
}
} catch (IllegalBlockSizeException e) {
throw new IOException("Illegal block");
} catch (BadPaddingException e) {
throw new IOException("Bad padding");
}
}
private void restoreStateOfCipher() throws IOException
{
try {
// I wish there was a more direct way to snapshot a Cipher object, but it seems there's not.
needToRestoreCipherState = false;
byte[] iv = cipher.getIV(); // To help avoid garbage, re-use the old one if present.
if (iv == null)
iv = new byte[16];
seekableFile.seek(seekableFile.length() - 32);
seekableFile.read(iv);
byte[] lastBlockEnc = new byte[16];
seekableFile.read(lastBlockEnc);
cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
byte[] lastBlock = cipher.doFinal(lastBlockEnc);
seekableFile.seek(seekableFile.length() - 16);
cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv));
byte[] out = cipher.update(lastBlock);
assert out == null || out.length == 0;
} catch (Exception e) {
throw new IOException("Unable to restore cipher state");
}
}
public void close() throws IOException
{
flush();
seekableFile.close();
}
}
Here's an example of using it:
import org.junit.Test;
import javax.crypto.Cipher;
import javax.crypto.CipherInputStream;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.io.*;
public class TestFlushableCipher {
private static byte[] keyBytes = new byte[] {
// Change these numbers, lest other StackOverflow readers be able to decrypt your files.
-53, 93, 59, 108, -34, 17, -72, -33, 126, 93, -62, -50, 106, -44, 17, 55
};
private static SecretKeySpec key = new SecretKeySpec(keyBytes,"AES");
private static final int HEADER_LENGTH = 16;
private static BufferedWriter flushableEncryptedBufferedWriter(File file, boolean append) throws Exception
{
FlushableCipherOutputStream fcos = new FlushableCipherOutputStream(file, key, append, false);
return new BufferedWriter(new OutputStreamWriter(fcos, "UTF-8"));
}
private static InputStream readerEncryptedByteStream(File file) throws Exception
{
FileInputStream fin = new FileInputStream(file);
byte[] iv = new byte[16];
byte[] headerBytes = new byte[HEADER_LENGTH];
if (fin.read(headerBytes) < HEADER_LENGTH)
throw new IllegalArgumentException("Invalid file length (failed to read file header)");
if (headerBytes[0] != 100)
throw new IllegalArgumentException("The file header does not conform to our encrypted format.");
if (fin.read(iv) < 16) {
throw new IllegalArgumentException("Invalid file length (needs a full block for iv)");
}
Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
cipher.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
return new CipherInputStream(fin,cipher);
}
private static BufferedReader readerEncrypted(File file) throws Exception
{
InputStream cis = readerEncryptedByteStream(file);
return new BufferedReader(new InputStreamReader(cis));
}
@Test
public void test() throws Exception {
File zfilename = new File("c:\\WebEdvalData\\log.x");
BufferedWriter cos = flushableEncryptedBufferedWriter(zfilename, false);
cos.append("Sunny ");
cos.append("and green. \n");
cos.close();
int spaces=0;
for (int i = 0; i<10; i++) {
cos = flushableEncryptedBufferedWriter(zfilename, true);
for (int j=0; j < 2; j++) {
cos.append("Karelia and Tapiola" + i);
for (int k=0; k < spaces; k++)
cos.append(" ");
spaces++;
cos.append("and other nice things. \n");
cos.flush();
tail(zfilename);
}
cos.close();
}
BufferedReader cis = readerEncrypted(zfilename);
String msg;
while ((msg=cis.readLine()) != null) {
System.out.println(msg);
}
cis.close();
}
private void tail(File filename) throws Exception
{
BufferedReader infile = readerEncrypted(filename);
String last = null, secondLast = null;
do {
String msg = infile.readLine();
if (msg == null)
break;
if (! msg.startsWith("}")) {
secondLast = last;
last = msg;
}
} while (true);
if (secondLast != null)
System.out.println(secondLast);
System.out.println(last);
System.out.println();
}
}
I have some account info that is being encrypted and written to a file like this:
//imports here
public class Main {
public static void main(String[] args) {
try {
String text = "this text will be encrypted";
String key = "Bar12345Bar12345";
//Create key and cipher
Key aesKey = new SecretKeySpec(key.getBytes(), "AES");
Cipher cipher = Cipher.getInstance("AES");
//encrypt text
cipher.init(Cipher.ENCRYPT_MODE, aesKey);
byte[] encrypted = cipher.doFinal(text.getBytes());
write(new String(encrypted));
} catch (NoSuchAlgorithmException | NoSuchPaddingException | InvalidKeyException | IllegalBlockSizeException | BadPaddingException e) {
e.printStackTrace();
}
}
public static void write(String message) {
BufferedWriter bw = null;
FileWriter fw = null;
try {
String data = message;
File file = new File(FILENAME);
if (!file.exists()) {
file.createNewFile();
}
fw = new FileWriter(file.getAbsoluteFile(), true);
bw = new BufferedWriter(fw);
bw.write(data);
} catch (IOException e) {
e.printStackTrace();
} finally {
try {
if (bw != null)
bw.close();
if (fw != null)
fw.close();
} catch (IOException ex) {
ex.printStackTrace();
}
}
}
}
So the content of the file is one string without any breaks. If I wanted to decrypt the string, I would do this:
String key = "Bar12345Bar12345";
Key aesKey = new SecretKeySpec(key.getBytes(), "AES");
Cipher cipher = Cipher.getInstance("AES");
byte[] encrypted = text.getBytes();
cipher.init(Cipher.DECRYPT_MODE, aesKey);
String decrypted = new String(cipher.doFinal(encrypted));
System.err.println(decrypted);
This works fine as long as byte[] encrypted is the same as in the encrypting process, but when I try to read the encrypted text from the file using a FileReader and a BufferedReader and turn it into a byte array using lines.getBytes(), it throws the exception
javax.crypto.IllegalBlockSizeException: Input length must be multiple of 16 when decrypting with padded cipher
Compare the cipher text from the encrypting process with the cipher text from lines.getBytes() before you try to do any decryption; they are most likely different. Try reading the entire file into a byte[] before decrypting it. Symmetric ciphers need to do their work on blocks of the same size, in this case 16 bytes.
I would also be remiss if I didn't comment on some of the poor protocol choices.
Hard coded key - Your encryption key should never be hard-coded in your application. If you ever need to change your encryption key, you are not able to do so. If your application is distributed to end users, it's easy for them to recover the key using an application like strings.
You are using ECB as your mode of operation. ECB can be attacked in several ways, such as block shuffling, which can allow an attacker to decrypt data without having the encryption key.
Encryption is hard to get right on your own. Consider solving your problem a different way.
You are trying to treat your encrypted array contents as (platform-encoded) text, while it is not: it consists of effectively random bytes.
You either need to create a binary file by writing the contents of encrypted to it directly, or you can create a text file by first encoding encrypted to Base64.
Currently you are trying to read lines, but there aren't any; and if any line-ending bytes do occur in the ciphertext, they will be stripped before those bytes can be decrypted.
If you perform new String(encrypted), it is also possible that you lose part of your data, as byte sequences that are invalid in the platform encoding are removed from the string without warning.
Note that the word "ciphertext" is a bit misleading; modern ciphers such as AES handle binary data, not text.
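To make that concrete, here is a minimal Base64 round trip using the questioner's own "AES" transformation (the hard-coded key and ECB concerns from the previous answer still apply, and the file name is made up for the example):
import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.Key;
import java.util.Base64;

public class RoundTrip {
    public static void main(String[] args) throws Exception {
        Key aesKey = new SecretKeySpec("Bar12345Bar12345".getBytes("UTF-8"), "AES");
        Path file = Paths.get("account.txt"); // hypothetical file name

        // Write: Base64 turns the random-looking ciphertext bytes into safe text.
        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, aesKey);
        byte[] encrypted = cipher.doFinal("this text will be encrypted".getBytes("UTF-8"));
        Files.write(file, Base64.getEncoder().encode(encrypted));

        // Read: load the whole file, decode, then decrypt. No line-based
        // reading, so no ciphertext bytes are lost or mangled along the way.
        byte[] fromFile = Base64.getDecoder().decode(Files.readAllBytes(file));
        cipher.init(Cipher.DECRYPT_MODE, aesKey);
        System.out.println(new String(cipher.doFinal(fromFile), "UTF-8"));
    }
}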
I have video files encrypted with AES stored on a server. How can I stream them in ExoPlayer? I don't want to download the whole file and decrypt it, i.e. wait for the download to complete and then play the decrypted file.
I would suggest taking a look at the UriDataSource or the DataSource interface. You can derive from DataSource and provide an implementation very similar to UriDataSource and pass that into ExoPlayer. That class has access to the read() method which all the bytes pass through. That method allows you to decrypt the files on the fly one buffer at a time.
In ExoPlayer 2.0, you supply your own custom DataSource from your own custom DataSource.Factory which can be passed to an ExtractorMediaSource (or any other MediaSource).
If you're not on ExoPlayer 2.0, you pass the DataSource to the ExtractorSampleSource and then to the VideoRenderer and the AudioRenderer in the buildRenderers() of a custom RendererBuilder that you implement.
(Also you can Google "custom datasource exoplayer" and that should give more info if what I provided isn't enough - or I can clarify if you can't find anything).
Here's a code snippet of the read() method:
@Override
public int read(byte[] buffer, int offset, int readLength) throws IOException {
if (bytesRemaining == 0) {
return -1;
} else {
int bytesRead = 0;
try {
long filePointer = randomAccessFile.getFilePointer();
bytesRead =
randomAccessFile.read(buffer, offset, (int) Math.min(bytesRemaining, readLength));
// Supply your decrypting logic here
AesEncrypter.decrypt(buffer, offset, bytesRead, filePointer);
} catch (EOFException eof) {
Log.v("Woo", "End of randomAccessFile reached.");
}
if (bytesRead > 0) {
bytesRemaining -= bytesRead;
if (listener != null) {
listener.onBytesTransferred(bytesRead);
}
}
return bytesRead;
}
}
[EDIT] Also just found this SO post which has a similar suggestion.
I have a form that uploads multiple files. My model has a List<HttpPostedFileBase> called SchemaFileBases, which is correctly bound. I need to upload these files to S3 and would like to do it in parallel. I'm unable to use async and await because this code is run from both ASP.Net and a queue-based application that currently doesn't have async/await support (working on it).
If I change the foreach below to Parallel.ForEach(this.SchemaFileBases, schemaFileBase => {..., then I get some funkiness going on: the files end up mashed together, with each file containing some of the other file's content after upload. AwsDocument is used elsewhere in parallel, so I don't think it has to do with that. Each AwsDocument has its own AmazonS3Client.
public override void UploadToS3(IMetadataParser parser)
{
string hash;
string key;
foreach (var schemaFileBase in this.SchemaFileBases)
{
AwsDocument aws = new AwsDocument(AwsBucket.Received);
hash = schemaFileBase.InputStream.Md5Hash().ToByteArray().ToHex();
key = String.Format("{0}/{1}", this.S3Prefix, schemaFileBase.FileName);
Stream inputStream = schemaFileBase.InputStream;
aws.UploadToS3(key, inputStream, hash);
}
}
My coworker suspects it's something to do with how the InputStream on the HttpPostedFileBase is implemented. Perhaps it is not thread-safe, and the streams are both reading from the original request at the same time? I can't imagine MS would do that, though.
Multi-threaded version:
public override void UploadToS3(IMetadataParser parser)
{
Parallel.ForEach(this.SchemaFileBases, f =>
{
AwsDocument aws = new AwsDocument(AwsBucket.Received);
string hash = f.InputStream.Md5Hash().ToByteArray().ToHex();
string key = String.Format("{0}/{1}", this.S3Prefix, f.FileName);
Stream inputStream = f.InputStream;
aws.UploadToS3(key, inputStream, hash);
});
}
The version above is my attempt to multi-thread it. It does not work (the files get mixed up).
How can I increase the maximum request length (the httpRuntime maxRequestLength setting) from my C# code? I can't do this in Web.config; my application is created to deploy web applications in IIS.
Take a look at http://bytes.com/topic/asp-net/answers/346534-how-i-can-get-httpruntime-section-page
That shows how to get access to an instance of HttpRuntimeSection. Then modify its MaxRequestLength property.
An alternative to increasing the max request length is to create an IHttpModule implementation. In the BeginRequest handler, grab the HttpWorkerRequest to process it entirely in your own code, rather than letting the default implementation handle it.
Here is a basic implementation that will handle any request posted to any file called "dropbox.aspx" (in any directory, whether it exists or not):
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
namespace Example
{
public class FileUploadModule: IHttpModule
{
#region IHttpModule Members
public void Dispose() {}
public void Init(HttpApplication context)
{
context.BeginRequest += new EventHandler(context_BeginRequest);
}
#endregion
void context_BeginRequest(object sender, EventArgs e)
{
HttpApplication application = (HttpApplication)sender;
HttpContext context = application.Context;
string filePath = context.Request.FilePath;
string fileName = VirtualPathUtility.GetFileName( filePath );
string fileExtension = VirtualPathUtility.GetExtension(filePath);
if (fileName == "dropbox.aspx")
{
IServiceProvider provider = (IServiceProvider)context;
HttpWorkerRequest wr = (HttpWorkerRequest)provider.GetService(typeof(HttpWorkerRequest));
//HANDLE REQUEST HERE
//Grab data from HttpWorkerRequest instance, as reflected in HttpRequest.GetEntireRawContent method.
application.CompleteRequest(); //bypasses all other modules and ends request immediately
}
}
}
}
You could use something like that, for example, if you're implementing a file uploader, and you want to process the multi-part content stream as it's received, so you can perform authentication based on posted form fields and, more importantly, cancel the request on the server-side before you even receive any file data. That can save a lot of time if you can determine early on in the stream that the upload is not authorized or the file will be too big or exceed the user's disk quota for the dropbox.
This is impossible to do with the default implementation, because trying to access the Form property of the HttpRequest will cause it to try to receive the entire request stream, complete with MaxRequestLength checks. The HttpRequest object has a method called "GetEntireRawContent" which is called as soon as access to the content is needed. That method starts with the following code:
HttpRuntimeSection httpRuntime = RuntimeConfig.GetConfig(this._context).HttpRuntime;
int maxRequestLengthBytes = httpRuntime.MaxRequestLengthBytes;
if (this.ContentLength > maxRequestLengthBytes)
{
if (!(this._wr is IIS7WorkerRequest))
{
this.Response.CloseConnectionAfterError();
}
throw new HttpException(SR.GetString("Max_request_length_exceeded"), null, 0xbbc);
}
The point is that you'll be skipping that code and implementing your own custom content length check instead. If you use Reflector to look at the rest of "GetEntireRawContent" to use it as a model implementation, you'll see that it basically does the following: calls GetPreloadedEntityBody, checks if there's more to load by calling IsEntireEntityBodyIsPreloaded, and finally loops through calls to ReadEntityBody to get the rest of the data. The data read by GetPreloadedEntityBody and ReadEntityBody are dumped into a specialized stream, which automatically uses a temporary file as a backing store once it crosses a size threshold.
A basic implementation would look like this:
MemoryStream request_content = new MemoryStream();
int bytesRemaining = wr.GetTotalEntityBodyLength() - wr.GetPreloadedEntityBodyLength();
byte[] preloaded_data = wr.GetPreloadedEntityBody();
if (preloaded_data != null)
    request_content.Write( preloaded_data, 0, preloaded_data.Length );
if (!wr.IsEntireEntityBodyIsPreloaded()) //not a typo: the framework really does use "Is" twice in the method name
{
    const int BUFFER_SIZE = 0x2000; //8K buffer or whatever
    byte[] buffer = new byte[BUFFER_SIZE];
    while (bytesRemaining > 0)
    {
        int bytesRead = wr.ReadEntityBody(buffer, Math.Min( bytesRemaining, BUFFER_SIZE )); //Read another chunk of bytes
        if (bytesRead == 0) //failure to read or nothing left to read
            break;
        bytesRemaining -= bytesRead; // Update the bytes remaining
        request_content.Write( buffer, 0, bytesRead ); // Write the chunk to the backing store (memory stream or whatever you want)
    }
}
At that point, you'll have your entire request in a MemoryStream. However, rather than download the entire request like that, what I've done is offload that "bytesRemaining" loop into a class with a "ReadEnough( int max_index )" method that is called on demand from a specialized MemoryStream that "loads enough" into the stream to access the byte being accessed.
Ultimately, that architecture allows me to send the request directly to a parser that reads from the memory stream, and the memory stream automatically loads more data from the worker request as needed. I've also implemented events so that as each element of the multi-part content stream is parsed, it fires events when each new part is identified and when each part is completely received.
You can do that in the web.config
<httpRuntime maxRequestLength="11000" />
maxRequestLength is specified in kilobytes, so 11000 is roughly 11 MB.
I'm attempting to write a Java servlet to receive binary data requests and reply to them, using HttpServletRequest.getInputStream() and HttpServletResponse.getOutputStream(). This is for a project in which a Silverlight client sends requests that this servlet responds to over an HTTP POST connection. For the time being, to test the servlet, I'm implementing the client in Java, which I'm more familiar with than Silverlight.
The problem is that in my test project I send the data from the client servlet as a byte array and expect to receive a byte array of the same length; only it doesn't work that way, and instead I'm getting a single byte. Therefore I'm posting the relevant code snippets here in the hope that you might point out where I'm going wrong and provide relevant references to help me further.
So here goes.
The client servlet handles POST requests from a very simple HTML page with a form which I use as front-end. I'm not too worried about using JSP etc, instead I'm focused on making the inter-Servlet communication work.
// client HttpServlet invokes this method from doPost(request,response)
private void process(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
String firstName = (String) request.getParameter("firstname");
String lastName = (String) request.getParameter("lastname");
String xmlRequest = "<MyRequest><Person><Name Firstname=\""+firstName+"\" Lastname=\""+lastName+"\" /></Person></MyRequest>";
OutputStream writer = null;
InputStream reader = null;
try {
URL url = new URL("http://localhost:8080/project/Server");
URLConnection conn = url.openConnection();
conn.setDoInput(true);
conn.setDoOutput(true);
writer = conn.getOutputStream();
byte[] baXml = xmlRequest.getBytes("UTF-8");
writer.write(baXml, 0,baXml.length);
writer.flush();
// perhaps I should be waiting here? how?
reader = conn.getInputStream();
int available = reader.available();
byte[] data = new byte[available];
reader.read(data,0,available);
String xmlResponse = new String(data,"UTF-8");
PrintWriter print = response.getWriter();
print.write("<html><body>Response:<br/><pre>");
print.write(xmlResponse);
print.write("</pre></body></html>");
print.close();
} finally {
if(writer!=null)
writer.close();
if(reader!=null)
reader.close();
}
}
The server servlet handles HTTP POST requests. For testing purposes it receives the requests from the client servlet above, but in the future I intend to use it with clients in other languages (specifically, Silverlight).
// server HttpServlet invokes this method from doPost(request,response)
private void process(HttpServletRequest request, HttpServletResponse response)
throws ServletException, IOException {
ServletInputStream sis = null;
try {
sis = request.getInputStream();
// maybe I should be using a BufferedInputStream
// instead of the InputStream directly?
int available = sis.available();
byte[] input = new byte[available];
int readBytes = sis.read(input,0,available);
if(readBytes!=available) {
throw new ServletException("Oops! readBytes!=availableBytes");
}
// I ONLY GET 1 BYTE OF DATA !!!
// It's the first byte of the client message, a '<'.
String msg = "Read "+readBytes+" bytes of "
+available+" available from request InputStream.";
System.err.println("Server.process(HttpServletRequest,HttpServletResponse): "+msg);
String xmlReply = "<Reply><Message>"+msg+"</Message></Reply>";
byte[] data = xmlReply.getBytes("UTF-8");
ServletOutputStream sos = response.getOutputStream();
sos.write(data, 0,data.length);
sos.flush();
sos.close();
} finally {
if(sis!=null)
sis.close();
}
}
I have been sticking to byte arrays instead of using BufferedInputStreams so far because I've not decided yet whether I'll be using e.g. Base64-encoded strings to transmit data or whether I'll be sending binary data as-is.
Thank you in advance.
To copy input stream to output stream use the standard way:
InputStream is=request.getInputStream();
OutputStream os=response.getOutputStream();
byte[] buf = new byte[1000];
for (int nChunk = is.read(buf); nChunk!=-1; nChunk = is.read(buf))
{
os.write(buf, 0, nChunk);
}
The one thing I can think of is that you are reading only request.getInputStream().available() bytes, then deciding that you have had everything. According to the documentation, available() will return the number of bytes that can be read without blocking, but I don't see any mention of whether this is actually guaranteed to be the entire content of the input stream, so am inclined to assume that no such guarantees are made.
I'm not sure how to best find out when there is no more data (maybe Content-Length in the request can help?) without risking blocking indefinitely at EOF, but I would try looping until having read all the data from the input stream. To test that theory, you could always scan the input for a known pattern that occurs further into the stream, maybe a > matching the initial < that you are getting.
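For example, the read loop might look like this (a sketch; it relies on read() returning -1 at the end of the request body and uses Content-Length only as an extra stop hint):
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import javax.servlet.http.HttpServletRequest;

public final class RequestBodies {
    /** Reads the entire request body, using Content-Length as a stop hint when present. */
    static byte[] readAll(HttpServletRequest request) throws IOException {
        int expected = request.getContentLength(); // -1 if unknown
        ByteArrayOutputStream body = new ByteArrayOutputStream();
        byte[] buffer = new byte[8192];
        InputStream in = request.getInputStream();
        int n;
        while ((n = in.read(buffer)) != -1) {
            body.write(buffer, 0, n);
            if (expected >= 0 && body.size() >= expected)
                break; // got everything the client declared; avoid blocking at EOF
        }
        return body.toByteArray();
    }
}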