Encrypted mp3 streaming to Flash SWF - asp.net

I have an asp.net web site that serves sample MP3 files to client Flash Players (SWF).
These files are downloadable by tons of download tools.
Although only registered members can access the high quality MP3 samples, my client wants to prevent these low quality MP3 files from being downloaded by download tools.
So I thought about this solution:
1. Convert these MP3 files to byte arrays on the server side (ASP.NET).
2. Apply some bitwise XOR operations (simple encryption).
3. Write this array to the ASPX response stream.
4. Modify the Flash (.fla) to request this new file/page/ASPX.
5. Apply the same bitwise XOR operations in Flash and convert it back to the original MP3 byte array (simple decryption).
6. Play the MP3.
I was able to succeed up to step 6: I cannot convert this byte array to a Sound object that Flash can play. I did a bit-by-bit comparison of the resulting array in Flash and the source array in ASP.NET; they are equal.
I'm open to completely different approaches, but I cannot use Flash Media Server. I need to use Flash (AS3) and ASP.NET.
Also very important: the MP3 must be downloaded/decrypted and played asynchronously (which I could not succeed in doing).

I agree with Peter Elliot that authentication is probably the easiest way to restrict access to the files. However, if you still need to explore the route of encrypting the files, I thought I'd expand a bit on Alex Vlad's answer.
What you need to do in order to stream the audio file, decrypt it on the fly, and play it asynchronously is to use the URLStream class (docs) in conjunction with the Sound class (docs), keeping a buffer of the partially downloaded content.
Some pseudocode to illustrate:
class AsyncEncryptedSoundPlayer extends Sound {
var buffer:ByteArray;
var stream:URLStream;
var currSoundPosition:uint = 0;
public function AsyncEncryptedSoundPlayer(url:String) {
buffer = new ByteArray();
stream = new URLStream();
stream.addEventListener(ProgressEvent.PROGRESS, onProgress);
stream.load(new URLRequest(url));
addEventListener(SampleDataEvent.SAMPLE_DATA, onSampleDataRequested);
}
function onProgress(e:ProgressEvent):void {
var tmpData:ByteArray = new ByteArray();
stream.readBytes(tmpData, 0, stream.bytesAvailable); // Read only the newly arrived bytes
var decryptedData:ByteArray = decryptData(tmpData); // Decrypt loaded data
buffer.position = buffer.length;
buffer.writeBytes(decryptedData, 0, decryptedData.length); // Append decrypted data to the buffer
}
function onSampleDataRequested(e:SampleDataEvent):void {
// Feed samples from the buffer to the Sound instance
// You may have to pause the audio to let the buffer grow if the download speed isn't high enough
e.data.writeBytes(buffer, currSoundPosition, 2048);
currSoundPosition += 2048;
}
function decryptData(data:ByteArray):ByteArray {
// Returns the decrypted data
}
}
This is obviously a very rough outline of a class, but I hope it will point you in the right direction.

@walkietokyo, thanks a lot for pointing me in the right direction. I succeeded in doing what I wanted. The key was the loadCompressedDataFromByteArray function.
After dozens of trials and errors I found out that loadCompressedDataFromByteArray works incrementally: it appends whatever it converts to the end of the Sound object's data.
Another issue: a Sound object doesn't continue playing the parts appended by loadCompressedDataFromByteArray after its play function has been called.
So I implemented a sort of double buffering, where I use two Sound objects interchangeably.
My final (test) version is listed below. With the encryption (obfuscation) method I used (a simple XOR), no download manager, grabber, or sniffer that I tested was able to play the MP3s.
Flash (Client) side:
import flash.events.DataEvent;
import flash.events.Event;
import flash.events.EventDispatcher;
import flash.events.OutputProgressEvent;
import flash.events.ProgressEvent;
import flash.net.URLRequest;
import flash.net.URLStream;
import flash.utils.ByteArray;
import flash.media.Sound;
import flash.media.SoundChannel;
var buffer:ByteArray;
var stream:URLStream;
var bufferReadPosition:uint = 0;
var bufferWritePosition:uint = 0;
var url:String = "http://www.blablabla.com/MusicServer.aspx?" + (new Date());
buffer = new ByteArray();
stream = new URLStream();
stream.addEventListener(ProgressEvent.PROGRESS, onProgress);
stream.load(new URLRequest(url));
var s1:Sound = new Sound();
var s2:Sound = new Sound();
var channel1:SoundChannel;
var channel2:SoundChannel;
var pausePosition:int = 0;
var aSoundIsPlaying:Boolean = false;
var lastLoadedS1:Boolean = false;
var lastS1Length:int = 0;
var lastS2Length:int = 0;
function onProgress(e:ProgressEvent):void {
var tmpData:ByteArray = new ByteArray();
stream.readBytes(tmpData, 0, stream.bytesAvailable);
var decryptedData:ByteArray = decryptData(tmpData); // Decrypt loaded data
buffer.position = bufferWritePosition;
buffer.writeBytes(decryptedData, 0, decryptedData.length); // Add decrypted data to buffer
bufferWritePosition += decryptedData.length;
if(lastLoadedS1)
{
buffer.position = lastS2Length;
s2.loadCompressedDataFromByteArray(buffer, buffer.length - lastS2Length);
lastS2Length = buffer.length;
}
else
{
buffer.position = lastS1Length;
s1.loadCompressedDataFromByteArray(buffer, buffer.length - lastS1Length);
lastS1Length = buffer.length;
}
if(!aSoundIsPlaying)
{
DecidePlay();
}
}
function channel1Completed(e:Event):void
{
DecidePlay();
}
function channel2Completed(e:Event):void
{
DecidePlay();
}
function DecidePlay():void
{
aSoundIsPlaying = false;
if(lastLoadedS1)
{
channel1.stop();
if(s2.length - s1.length > 10000)
{
//At least a 10 second buffer
channel2 = s2.play(s1.length);
channel2.addEventListener(Event.SOUND_COMPLETE, channel2Completed);
lastLoadedS1 = false;
aSoundIsPlaying = true;
}
}
else
{
if(channel2 != null)
{
channel2.stop();
}
if(s1.length - s2.length > 10000)
{
//At least a 10 second buffer
channel1 = s1.play(s2.length);
channel1.addEventListener(Event.SOUND_COMPLETE, channel1Completed);
lastLoadedS1 = true;
aSoundIsPlaying = true;
}
}
}
function decryptData(data:ByteArray):ByteArray {
for(var i:int = 0;i<data.length;i++)
{
//Here put in your bitwise decryption code
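//For example (hypothetical single-byte key, must mirror the server-side encryption loop): data[i] ^= 0xA7;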
}
return data;
}
ASP.NET server side (MusicServer.aspx):
protected void Page_Load(object sender, EventArgs e)
{
this.Response.ContentType = "audio/mpeg";
this.Response.AddHeader("Content-Disposition", "inline; filename=blabla.mp3");
CopyStream(Mp3ToStream(Server.MapPath("blabla.mp3")), Response.OutputStream);
this.Response.End();
}
public static void CopyStream(Stream input, Stream output)
{
byte[] buffer = new byte[32768];
int read;
while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
{
for (int i = 0; i < read; i++)
{
//Here put in your bitwise encryption code
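//For example (hypothetical single-byte key, must mirror the Flash decryptData loop): buffer[i] ^= 0xA7;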
}
output.Write(buffer, 0, read);
}
}
public Stream Mp3ToStream(string filePath)
{
using (FileStream fileStream = File.OpenRead(filePath))
{
MemoryStream memStream = new MemoryStream();
memStream.SetLength(fileStream.Length);
fileStream.Read(memStream.GetBuffer(), 0, (int)fileStream.Length);
return memStream;
}
}

What might be simpler than encrypting the data coming back from your service is instead authenticating requests so that only your SWF can request the files.
You can accomplish this in the same way that, say, the Amazon APIs work: build a request that includes a number of parameters, including a timestamp. Hash all of these arguments together in an HMAC (HMAC-SHA256 is available in the as3crypto library) along with a private key embedded in your SWF. Your server end authenticates this request, ensuring that the hash is valid and that the timestamp is recent enough. Any request with a bad hash, or with a timestamp too far in the past (a replay attack), is denied.
This is certainly not perfect security. Any sufficiently motivated user could disassemble your SWF and pull out your auth key, or grab the MP3 from their browser cache. But then again, any mechanism you use will have those issues. This approach removes the overhead of having to encrypt and decrypt all of your files, instead moving the work over to the request generation phase.
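For illustration only, here is a minimal sketch of what the server-side check could look like in the ASPX code-behind. Everything specific in it is an assumption rather than something from this thread: the query-string parameter names (file, ts, sig), the shared key, the hex encoding of the hash, and the five-minute replay window. The SWF would build the same string and compute the same HMAC-SHA256 over it (for example with the as3crypto library) before requesting the file.
// Hypothetical request: MusicServer.aspx?file=blabla.mp3&ts=1339412345&sig=<hex hmac>
private const string SharedKey = "key-also-embedded-in-the-swf"; // assumption

private bool IsRequestAuthentic(HttpRequest request)
{
    string file = request.QueryString["file"];
    string ts = request.QueryString["ts"];
    string sig = request.QueryString["sig"];
    if (file == null || ts == null || sig == null)
        return false;

    // Replay protection: reject timestamps more than 5 minutes away from "now" (arbitrary window)
    long unixSeconds;
    if (!long.TryParse(ts, out unixSeconds))
        return false;
    DateTime sent = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc).AddSeconds(unixSeconds);
    if ((DateTime.UtcNow - sent).Duration() > TimeSpan.FromMinutes(5))
        return false;

    // Recompute the HMAC-SHA256 over the same string the client signed
    string signedString = file + "|" + ts;
    byte[] key = System.Text.Encoding.UTF8.GetBytes(SharedKey);
    using (var hmac = new System.Security.Cryptography.HMACSHA256(key))
    {
        byte[] hash = hmac.ComputeHash(System.Text.Encoding.UTF8.GetBytes(signedString));
        string expected = BitConverter.ToString(hash).Replace("-", "");
        return string.Equals(expected, sig, StringComparison.OrdinalIgnoreCase);
    }
}
The handler that serves the MP3 would call IsRequestAuthentic(Request) first and, when it fails, return a 403 and end the response.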

Flash Sound only supports streaming MP3 playback, that is, you can only play an MP3 from a direct link. But you can serve a SWF file with the MP3 embedded in it, and this SWF can be encrypted in the same way you would encrypt the MP3.
AS3 code for embedding and using the MP3:
public class Sounds
{
[Embed(source="/../assets/sounds/sound1.mp3")]
private static const sound1:Class;
}
After loading this SWF with a Loader you can access the sound this way:
var domain:ApplicationDomain = ApplicationDomain.currentDomain; // <-- ApplicationDomain where you load sounds.swf
var soundClass:Class = domain.getDefinition("Sounds_sound1");
var sound:Sound = new soundClass();
sound.play();
Be sure that you do at least one of the following to prevent class name collisions:
give a different name to each sound class (sound1)
give a different name to the holder class (Sounds)
or load sound.swf into a different application domain
Unfortunately this approach doesn't allow you to stream the sound: you have to load the whole SWF and decrypt it, and only after that will you be able to play the sound.

please have a look here:
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/events/SampleDataEvent.html
and here:
http://help.adobe.com/en_US/as3/dev/WSE523B839-C626-4983-B9C0-07CF1A087ED7.html

Image resizing script is not returning a proper stream for further handling

Current project:
ASP.NET 4.5.2
MVC 5
I am trying to leverage the TinyPNG API, and if I just pipe the image over to it, it works great. However, since the majority of users will be on a mobile device, and these produce images at a far higher resolution than what is needed, I am hoping to reduce the resolution of these files prior to them being piped over to TinyPNG. It is my hope that these resized images will be considerably smaller than the originals, allowing me to conduct a faster round trip.
My code:
public static async Task<byte[]> TinyPng(Stream input, int aspect) {
using(Stream output = new MemoryStream())
using(var png = new TinyPngClient("kxR5d49mYik37CISWkJlC6YQjFMcUZI0")) {
ResizeImage(input, output, aspect, aspect); // Problem area
var result = await png.Compress(output);
using(var reader = new BinaryReader(await (await png.Download(result)).GetImageStreamData())) {
return reader.ReadBytes(result.Output.Size);
}
}
}
public static void ResizeImage(Stream input, Stream output, int newWidth, int maxHeight) {
using(var srcImage = Image.FromStream(input)) {
var newHeight = srcImage.Height * newWidth / srcImage.Width;
if(newHeight > maxHeight) {
newWidth = srcImage.Width * maxHeight / srcImage.Height;
newHeight = maxHeight;
}
using(var newImage = new Bitmap(newWidth, newHeight))
using(var gr = Graphics.FromImage(newImage)) {
gr.SmoothingMode = SmoothingMode.AntiAlias;
gr.InterpolationMode = InterpolationMode.HighQualityBicubic;
gr.PixelOffsetMode = PixelOffsetMode.HighQuality;
gr.DrawImage(srcImage, new Rectangle(0, 0, newWidth, newHeight));
newImage.Save(output, ImageFormat.Jpeg);
}
}
}
So ResizeImage is supposed to accept a stream and output a stream, meaning that the TinyPNG .Compress() should work just as well with the output as it would with the original input. Unfortunately, only .Compress(input) works; with .Compress(output) TinyPNG throws back an error:
400 - Bad Request. InputMissing, File is empty
I know TinyPNG has its own resizing routines, but I want to do this before the image is sent out over the wire to TinyPNG so that file size (and therefore transmission time) is reduced as much as possible prior to the actual TinyPNG compression.
…Aaaaand I just solved my problem by using another tool entirely.
I found ImageProcessor. Documentation is a royal b**ch to get at because it only comes in a Windows *.chm help file (it’s not online… cue one epic Whisky. Tango. Foxtrot.), but after looking at a few examples it did solve my issue quite nicely:
public static async Task<byte[]> TinyPng(Stream input, int aspect) {
using(var output = new MemoryStream())
using(var png = new TinyPngClient("kxR5d49mYik37CISWkJlC6YQjFMcUZI0")) {
using(var imageFactory = new ImageFactory()) {
imageFactory.Load(input).Resize(new Size(aspect, 0)).Save(output);
}
var result = await png.Compress(output);
using(var reader = new BinaryReader(await (await png.Download(result)).GetImageStreamData())) {
return reader.ReadBytes(result.Output.Size);
}
}
}
Everything is working fine now. Uploads are much faster, as I am not piping a full-sized image straight through to TinyPNG, and since I am storing both the final “full”-size images and the thumbnails straight into the database, I am no longer piping the whole bloody image twice.
Posted so that other wheel-reinventing chuckleheads like me will actually have something to go on.

In PlayN, how do I use the Storage interface to persist data?

I'm looking for a code example that demonstrates practical real-world usage of the Storage interface. I'm especially interested in the HTML5 implementation. I've just started working on my own proof of concept, so I'll post that if no better answers arrive before then.
The Storage interface is introduced in this Google presentation here:
http://playn-2011.appspot.com/
Here's some code that demonstrates the use of the storage interface together with PlayN's JSON interface:
private void loadStoredData() {
// storage parameters
String storageKey = "jsonData";
Json.Object jsonData = PlayN.json().createObject();
// to reset storage, uncomment this line
//PlayN.storage().removeItem(storageKey);
// attempt to load stored data
String jsonString = PlayN.storage().getItem(storageKey);
// if not loaded, create stored data
if ( jsonString == null ) {
DemoApi.log("stored data not found");
jsonData.put("firstWrite", new Date().toString());
// else display data
} else {
jsonData = PlayN.json().parse(jsonString);
DemoApi.log("stored data loaded");
DemoApi.log("data first written at " + jsonData.getString("firstWrite"));
DemoApi.log("data last read at " + jsonData.getString("lastRead"));
DemoApi.log("data last written at " + jsonData.getString("lastWrite"));
}
// update last read
jsonData.put("lastRead", new Date().toString());
// write data (this works in Java -- not in HTML)
// see https://stackoverflow.com/q/10425877/1093087
/*
Json.Writer jsonWriter = PlayN.json().newWriter();
jsonWriter.object(jsonData).done();
jsonString = jsonWriter.write();
*/
// alternative write routine
Json.Writer jsonWriter = PlayN.json().newWriter();
jsonWriter.object();
for ( String key : jsonData.keys() ) {
jsonWriter.value(key, jsonData.getString(key));
}
jsonWriter.end();
jsonString = jsonWriter.write();
// store data as json
PlayN.storage().setItem(storageKey, jsonString);
// confirm
if ( PlayN.storage().isPersisted() ) {
DemoApi.log("data successfully persisted");
} else {
DemoApi.log("failed to persist data");
}
}
There's one little hitch with the Json.Writer, which seems a bit buggy, that I document in this question: In the HTML version of PlayN, why does the following JSON-handling code throw an exception?

How to convert a PDF file into a byte array and retrieve the byte array into a PDF file in a Flex desktop application?

I am new to Flex and I have no idea how to convert a PDF file into a byte array. I also tried Google, but no results yet. Can you suggest how to convert a PDF file into a byte array and turn the byte array back into a PDF file in a Flex application?
It is urgent...
Thanks in advance. (Nothing is impossible.)
If you have a Flex (web) app, you will be using the FileReference class
private var ref:FileReference;
//This generally is a mouse click handler, to initiate the process of file reading (i.e. Selection)
public function mc():void {
ref=new FileReference();
ref.addEventListener(Event.SELECT, fileSelected);
ref.browse([new FileFilter("PDF Files (*.pdf)", "*.pdf")]);
}
private function fileSelected(e:Event):void {
ref.removeEventListener(Event.SELECT, fileSelected);
ref.addEventListener(Event.COMPLETE, fileOpen);
ref.load();
}
private function fileOpen(e:Event):void {
var byteArrayToProcess:ByteArray=ref.data;
}
If you have an AIR (desktop / mobile) app, you can directly use the File and FileStream classes.
public function mc():void {
var f:File=new File("path/to/file");
var s:FileStream=new FileStream();
s.open(f, FileMode.READ);
var byteArrayToProcess:ByteArray=new ByteArray();
s.readBytes(byteArrayToProcess, 0, s.bytesAvailable);
s.close();
}

Render image in asp.net MVC

My scenario is this:
I create a custom report based on a stored procedure that returns these columns: person_id [long], name [varchar(100)], age [int], photo [image]. Those are the columns and types in my database table.
Right now I'm using something like this for each image in the report:
<img src="<%= Url.Action("ShowImage", "Reports", new {personId = result["PERSON_ID"]}) %>" />
with ShowImage being
public virtual ActionResult ShowImage(long? personId)
{
try
{
if (personId.HasValue)
{
byte[] imageArray = StudentClient.GetPersonPhotoById(personId.Value);
if (imageArray == null)
return File(noPhotoArray, "image/jpg");
#region Validate that the uploaded picture is an image - temporary code
// Get Mime Type
byte[] buffer = new byte[256];
buffer = imageArray.Take(imageArray.Length >= 256 ? 256 : imageArray.Length).ToArray();
var mimeType = UrlmonMimeType.GetMimeType(buffer);
if (String.IsNullOrEmpty(mimeType) || mimeType.IndexOf("image") == -1)
return File(noPhotoArray, "image/jpg");
#endregion
return File(imageArray, "image/jpg");
}
}
catch
{
return File(noPhotoArray, "image/jpg");
}
return File(noPhotoArray, "image/jpg");
}
I would like to use some sort of alternative because this is very stressful, due to the fact that ShowImage() calls a service method, StudentClient.GetPersonPhotoById(personId.Value), for every single picture, meaning a lot of calls to the service and the DB as well.
I would like to actually use that photo column, which returns a byte array, instead of going through the Person_id column and the ShowImage controller method.
That would practically reduce the number of calls to the service to zero and use the actual data from the image column. This seems pretty straightforward, but I am struggling to find a solution.
Thank you!
Simplest solution: use OutputCache. Moreover, you can set the cache location to the client, and the browser will cache the images once they're downloaded. VaryByParam gives you the ability to cache images depending on personId.
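As an illustration, and assuming the ShowImage action from the question, the attribute could look something like this (the one-hour duration is an arbitrary choice; OutputCacheLocation lives in the System.Web.UI namespace):
[OutputCache(Duration = 3600, Location = OutputCacheLocation.Client, VaryByParam = "personId")]
public virtual ActionResult ShowImage(long? personId)
{
    // ... same body as in the question ...
}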
There's quite a neat technique where you can stream the binary data directly from the SQL Server to the client, via the webserver.
This is my code for doing it:
public void StreamFile(Stream stream)
{
DbDataReader dr = LoadDbStream();
if (!dr.Read())
return;
const int BUFFERSIZE = 512;
byte[] Buffer = new byte[BUFFERSIZE];
long StartIndex = 0;
long Read = dr.GetBytes(0, StartIndex, Buffer, 0, BUFFERSIZE);
while (Read == BUFFERSIZE)
{
stream.Write(Buffer, 0, BUFFERSIZE);
StartIndex += BUFFERSIZE;
Read = dr.GetBytes(0, StartIndex, Buffer, 0, BUFFERSIZE);
}
stream.Write(Buffer, 0, (int)Read);
}
private DbDataReader LoadDbStream()
{
DbCommand cmd = Cms.Data.GetCommand("SELECT Data FROM CMS_Files WHERE FileId = @FileId", "@FileId", Id.ToString());
cmd.CommandType = System.Data.CommandType.Text;
cmd.Connection.Open();
return cmd.ExecuteReader(CommandBehavior.SequentialAccess | CommandBehavior.CloseConnection);
}
The command object is an ordinary command object. The key part is the CommandBehavior.SequentialAccess flag, as this makes SQL Server only send data when you ask for it. You can therefore only read the columns in the order they are specified in the query. The other point to make is that stream should be the output stream of the response, and output buffering should be switched off.
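For example, wired up from an MVC action it could look roughly like this (a sketch only; CmsFile is a hypothetical wrapper exposing the StreamFile method above):
public ActionResult Download(int id)
{
    Response.BufferOutput = false;            // switch output buffering off
    Response.ContentType = "application/octet-stream";
    var file = new CmsFile(id);               // hypothetical wrapper around StreamFile/LoadDbStream
    file.StreamFile(Response.OutputStream);   // bytes go straight from the SqlDataReader to the client
    return new EmptyResult();
}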
Couple this with outputcaching and you reduce the memory load on the server.
Simon
You can use this as the source for the image:
src="data:image/jpg;base64,<%= System.Convert.ToBase64String(result["PHOTO"] as byte[]) %>"

Pause and resume download in flex?

Is it possible in an AIR application to start a download, pause it, and after that resume it?
I want to download very big files (1-3 GB) and I need to be sure that if the connection is interrupted, the next time the user tries to download the file it starts from the last position.
Any ideas and source code samples would be appreciated.
Yes, you would want to use the URLStream class (URLLoader doesn't support partial downloads) and the HTTP Range header. Note that there are some onerous security restrictions on the Range header, but it should be fine in an AIR application. Here's some untested code that should give you the general idea.
private var _us:URLStream;
private var _buf:ByteArray;
private var _offs:uint;
private var _paused:Boolean;
private var _intervalId:uint;
...
private function init():void {
_buf = new ByteArray();
_offs = 0;
var ur:URLRequest = new URLRequest( ... uri ... );
_us = new URLStream();
_us.load(ur);
_paused = false;
_intervalId = setInterval(partialLoad, 500);
}
...
private function partialLoad():void {
var len:uint = _us.bytesAvailable;
_us.readBytes(_buf, _offs, len);
_offs += len;
if (_paused) {
_us.close();
clearInterval(_intervalId);
}
}
...
private function pause():void {
_paused = true;
}
...
private function resume():void {
var ur:URLRequest = new URLRequest(... uri ...);
ur.requestHeaders = [new URLRequestHeader("Range", "bytes=" + _offs + "-")];
_us.load(ur);
_paused = false;
_intervalId = setInterval(partialLoad, 500);
}
If you are targeting mobile devices, maybe you should take a look at this native extension: http://myappsnippet.com/download-manager-air-native-extension/ It supports simultaneous resumable downloads with multi-section chunks to download files as fast as possible.
