Return values from exe in javascript - asp.net

I have to call an executable on the client machine from ASP.NET and get the return parameters; I've been looking for an example but I couldn't find one.
Is it possible to recover the output parameters from an exe in JavaScript?
I know that I can write:
var WshShell = new ActiveXObject("WScript.Shell");
var oExec = WshShell.Exec("My.exe");
but the client's executable returns 0 or 1, and those values are the ones I need to collect.
Thanks in advance

Browser-based JavaScript can't call executable files on client machines; to do so would be a catastrophic security problem. If you have to run an executable on the client machine, consider asking the user to install a .NET application, an ActiveX control, or something like Java if you want to be platform-independent.
Depending on what you're trying to do, you may not need to run an EXE on the client machine; you can do a LOT with standard cloud-type scenarios (JS or SilverLight on the client, Web services or WCF on the server). Without more information about your situation, however, it's impossible to tell.
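If the executable could run on the web server instead of the client, a server-side page or handler can launch it and hand the result back to the browser. A minimal sketch, assuming a hypothetical exe path and an ASP.NET generic handler (neither is from the question):
using System.Diagnostics;
using System.Web;

// Hypothetical handler: runs the exe on the server and returns its exit code to the caller.
public class RunToolHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        ProcessStartInfo startInfo = new ProcessStartInfo(@"C:\Tools\My.exe")
        {
            UseShellExecute = false,
            CreateNoWindow = true
        };

        using (Process process = Process.Start(startInfo))
        {
            process.WaitForExit();
            // The 0/1 the question mentions would be the process exit code.
            context.Response.ContentType = "text/plain";
            context.Response.Write(process.ExitCode.ToString());
        }
    }

    public bool IsReusable { get { return false; } }
}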
EDIT: Based on your comments that you're using the ActiveXObject.Exec method, you can use the StdOut property of the WshScriptExec object that method returns. From MSDN's article on the StdOut property:
var input = "";
while (!oExec.StdOut.AtEndOfStream)
{
    input += oExec.StdOut.Read(1);
    //...
}

Related

Difference between starting process from Console application and ASP.NET application

I have a Web API application that needs to run a Python script, which in turn runs a Perl script :), does some other stuff, and gets the output results from it.
The way I do this is with starting a Process:
var start = new ProcessStartInfo()
{
FileName = _pythonPath, //@"C:\Python27\python.exe",
Arguments = arguments, //@"D:\apps\scripts\Process.py
UseShellExecute = false,
RedirectStandardOutput = true,
RedirectStandardError = true
};
using (Process process = Process.Start(start))
{
using (StreamReader reader = process.StandardOutput)
{
var result = reader.ReadToEnd();
var err = process.StandardError.ReadToEnd();
process.WaitForExit();
return result;
}
}
The script inside tries to connect to the Perforce server using the P4 Python API, and then the Perl script calls a P4 command as well. When running this code from a Console application, everything goes fine. The program automatically picks up the Perforce settings (I've got a P4V client with all the settings specified). But when running from ASP.NET Web API, it doesn't get the settings and says that it cannot connect to the perforce:1666 server (I guess this is the default value when no setting is specified).
I do understand that not many people use Perforce, especially in such a way, and can help here, but I would like to know what the difference is between running this script from a Console app and from a Web API app that might cause this different behaviour.
One of the most obvious differences between running code from a console application and running code in IIS* is that, usually, the two pieces of code will be running under different user accounts.
Frequently, if you're experiencing issues where code works in one of those situations and not the other, it's a permissions or per-user-settings issue. You can verify whether this is the case by either running the console application under the same user account that is configured for the appropriate IIS application pool, or, vice versa, configuring the application pool to use your own user account, and then seeing whether the problem persists.
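A quick way to confirm which account the code actually runs under in each environment is to log the current Windows identity from both the console run and the Web API run; a small diagnostic sketch:
using System.Security.Principal;

// Compare this value between the console run and the IIS-hosted run.
string account = WindowsIdentity.GetCurrent().Name;
System.Diagnostics.Trace.WriteLine("Running as: " + account);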
If you've confirmed that it's a permissions issue and/or per-user-settings, then you need to decide how you want to fix it. I'd usually recommend not running IIS application pools under your own user account - if you cannot seem to get the correct settings configured for the existing application pool user, I'd usually recommend creating a separate user account (either locally on the machine or as part of your domain, depending on what's required) and giving it just the permissions/settings that it requires to do the job.
*IIS Express being the exception here because it doesn't do application pools and the code does end up running under your own user account.
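If it does turn out to be a per-user-settings problem, another option is to stop relying on whatever Perforce defaults the IIS user happens to have and pass the connection settings to the child process explicitly. A sketch built on the ProcessStartInfo from the question, using the standard Perforce environment variables with placeholder values:
var start = new ProcessStartInfo()
{
    FileName = _pythonPath,
    Arguments = arguments,
    UseShellExecute = false,          // must be false to set environment variables
    RedirectStandardOutput = true,
    RedirectStandardError = true
};

// Placeholder values; substitute your real server, user and workspace.
start.EnvironmentVariables["P4PORT"] = "perforce:1666";
start.EnvironmentVariables["P4USER"] = "build-user";
start.EnvironmentVariables["P4CLIENT"] = "build-workspace";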

COM Exception for client-side object called by Classic ASP page

I have a tricky problem which I'm struggling with quite a bit.
The current solution consists of a Classic ASP site hosted on a Win2K3 server, which calls some client-side DLLs on XP machines, coded in VB6.
These client-side DLLs can in turn call other COM objects; in this particular case they call IBM Lotus Notes (Lotus Domino Objects 1.2).
Now, for various reasons, these DLLs have to be converted to .NET (still x86); at this stage, this is the only change to be made. This works quite well, except for one piece of code which throws an error.
COMException when calling the Lotus Notes COM object
ASP script calling the DLL
Set objLotus = CreateObject("OpenLotusNotes_FU_v2.clsMain")
sRet = objLotus.OpenLotus_mail()
Client-side DLL
Dim session As NotesSession = New NotesSession() 'works well
Dim objNotesWrkSp As Object
objNotesWrkSp = Activator.CreateInstance(Type.GetTypeFromProgID("Notes.NotesUIWorkspace")) 'crashes
Exception
Retrieving the COM class factory for component with CLSID {29131502-2EED-1069-BF5D-00DD011186B7} failed due to the following error: 80080005 Server execution failed (Exception from HRESULT: 0x80080005 (CO_E_SERVER_EXEC_FAILURE)).
When I try to run this code in a console application on the same computer, it works. So it has to be some permissions issue(?); I have tried changing basically everything I can think of.
Any help would be much appreciated!
Updated 01.09.2014
What I see when I trigger the code from ASP is that it creates a new Notes process every time, but only in the background, with no UI whatsoever. When I trigger the code from a console application, I get the Notes UI, which asks me for a password if I don't already have Notes running.
I believe that I'm getting the Exception because it eventually times out.
Have a read of this article; you're dealing with an out-of-process COM component which may not initialize properly for some reason.
Another possibility is that the COM component does not support the free-threaded model that .NET assemblies are compiled with by default. You can compile your DLL with an STA attribute, but as far as I'm aware that only affects console applications. You might find some additional information in this article from MS, if you have not already read it, of course. Hope something there helps you solve your problem.
Consult with your admins first, but for this scenario you can set no password for the ID that Lotus Notes uses.
I did resolve this, so in case anyone else runs into this…
First of all, configure the "Notes Link" Component Service to run as "The interactive user", and make sure the process owner has permissions under "Launch and Activation Permissions" and "Access Permissions".
Then this should be possible
Dim objNotesWrkSp As Object
Dim objWorkspace As Type = Type.GetTypeFromProgID("Notes.NotesUIWorkspace")
objNotesWrkSp = Activator.CreateInstance(objWorkspace)
As it turned out, in this particular case I could only get it to work with late binding; when I tried to do this, it just opened a conhost.exe process and then never responded:
Dim session as New NotesSession
session.Initialize()
Among other similar issues... So I ended up using late binding for all communication with Notes.
Dim mailServerPath, mailFile As String
objWorkspace.InvokeMember("OpenDatabase", Reflection.BindingFlags.InvokeMethod, Nothing, objNotesWrkSp, New Object() {mailServerPath, mailFile})
And so on...

Using ffmpeg in asp.net

I needed an audio conversion library. After pulling my hair out, I have given up: there is no such audio library out there; every library has some problem or other.
The only option left is ffmpeg, which is the best, but unfortunately you cannot use it in ASP.NET (not directly, I mean). Every user on the website that converts a file will launch an exe? I think I will hit the server's memory limit soon.
Bottom line: I will try using ffmpeg.exe and see how many users it can support simultaneously.
I went to the ffmpeg website and in the Windows download section I found 3 different versions: static, shared and dev.
Does anyone know which would be best for use from ASP.NET: everything packed into one exe (static), or separate DLLs with a small exe (shared)?
PS: if anyone has a good library out there, it would be great if you could share it.
Static builds provide one self-contained .exe file for each program (ffmpeg, ffprobe, ffplay).
Shared builds provide each library as a separate .dll file (avcodec, avdevice, avfilter, etc.), and .exe files that depend on those libraries for each program
Dev packages provide the headers and .lib/.dll.a files required to use the .dll files in other programs.
ffMpeg is the best library out there from what I have used but I wouldn't recommend trying to call it directly from asp.net.
What I have done is accept the upload, store it on the server (or S3 in my case), then have a worker role (if using something like Azure) with a process that continuously monitors for new files to convert.
If you need a realtime-like solution, you could update flags in your database and have an AJAX solution poll the database to keep providing progress updates, then a link to download once the conversion is complete.
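As a rough sketch of that polling endpoint (the controller, route and statuses are all hypothetical, and an in-memory dictionary stands in for the database flags the worker updates):
using System.Collections.Concurrent;
using System.Web.Http;

// Hypothetical Web API controller the page polls via AJAX for conversion progress.
public class ConversionStatusController : ApiController
{
    // Stand-in for the database flags updated by the conversion worker.
    private static readonly ConcurrentDictionary<int, string> Statuses =
        new ConcurrentDictionary<int, string>();

    // GET api/conversionstatus/5 -> "Queued", "Converting", "Complete" or "Unknown"
    public string Get(int id)
    {
        string status;
        return Statuses.TryGetValue(id, out status) ? status : "Unknown";
    }
}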
Personally my approach would be
Azure Web Roles
Azure Worker Role
ServiceBus
The WorkerRole starts up and is monitoring the ServiceBus Queue for messages.
The ASP.NET site uploads and stores the file in S3 or Azure
The ASP.NET site then records information in your DB if needed and sends a message to the ServiceBus queue.
The WorkerRole picks this up and converts.
AJAX will be needed on the ASP.NET site if you want a realtime monitoring solution. Otherwise you could send an email when complete if needed.
Using a queuing process also helps you with load: when you are under heavy load, people just wait a little longer and it doesn't grind everything to a halt. Also, you can scale out your worker roles as needed to balance the load, should it ever become too much for one server.
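For the queue step itself, sending the message from the ASP.NET site can be a couple of lines. A sketch using the classic Microsoft.ServiceBus.Messaging client, with a placeholder connection string, queue name and message body:
using Microsoft.ServiceBus.Messaging;

// Tell the worker role that a new file is waiting to be converted.
// Connection string, queue name and the blob/S3 key are placeholders.
QueueClient client = QueueClient.CreateFromConnectionString(
    "Endpoint=sb://your-namespace.servicebus.windows.net/;SharedAccessKeyName=...;SharedAccessKey=...",
    "conversion-queue");

client.Send(new BrokeredMessage("storage-key-of-uploaded-file"));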
Here is how I run ffMpeg from C# (you will need to change the parameters for your requirements)
// "params" is a reserved word in C#, so use a different variable name
String ffmpegParams = string.Format("-i {0} -s 640x360 {1}", input.Path, "C:\\FilePath\\file.mp4");
RunProcess(ffmpegParams);
private string RunProcess(string Parameters)
{
    //create a process info
    ProcessStartInfo oInfo = new ProcessStartInfo(this._ffExe, Parameters);
    oInfo.UseShellExecute = false;
    oInfo.CreateNoWindow = true;
    oInfo.RedirectStandardOutput = true;
    oInfo.RedirectStandardError = true;
    //collect the output here (ffmpeg writes most of its progress to stderr)
    System.Text.StringBuilder output = new System.Text.StringBuilder();
    //try the process
    try
    {
        //run the process and capture both streams asynchronously
        Process proc = System.Diagnostics.Process.Start(oInfo);
        proc.OutputDataReceived += (sender, e) => { if (e.Data != null) output.AppendLine(e.Data); };
        proc.ErrorDataReceived += (sender, e) => { if (e.Data != null) output.AppendLine(e.Data); };
        proc.BeginOutputReadLine();
        proc.BeginErrorReadLine();
        proc.WaitForExit();
        proc.Close();
        proc.Dispose();
    }
    catch (Exception)
    {
        // Capture Error
    }
    return output.ToString();
}

Can asp.net web application communicate with standalone visual c++ application?

Can I integrate an ASP.NET website with a standalone Visual C++ application?
The request should go from the ASP.NET website to the Visual C++ application, and the result should be used by the ASP.NET website.
You can execute a process in the filesystem, independently of the language it was written in.
Like this:
ProcessStartInfo processInfo = new ProcessStartInfo("C++App.exe", "command line arguments like /page getdata.aspx ... ");
processInfo.ErrorDialog = false;
processInfo.UseShellExecute = false;
processInfo.RedirectStandardOutput = true;
processInfo.RedirectStandardError = true;
Process proc = Process.Start(processInfo);
proc.ErrorDataReceived += (sender, errorLine) => { if (errorLine.Data != null) Trace.WriteLine(errorLine.Data); };
proc.OutputDataReceived += (sender, outputLine) => { if (outputLine.Data != null) Trace.WriteLine(outputLine.Data); };
proc.BeginErrorReadLine();
proc.BeginOutputReadLine();
proc.WaitForExit();
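If the page needs the C++ program's output itself rather than just tracing it, the output handler can append to a string instead of writing to Trace. A sketch of that variant (it replaces the Trace-based handler above):
// Collect standard output into a string the page can use.
var output = new System.Text.StringBuilder();
proc.OutputDataReceived += (sender, outputLine) =>
{
    if (outputLine.Data != null) output.AppendLine(outputLine.Data);
};
proc.BeginOutputReadLine();
proc.WaitForExit();
string result = output.ToString();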
Regards.
Your question actually includes part of the answer. The two processes, regardless of the language they are written in, would have to communicate by exchanging data in a client-server fashion. Therefore, the C++ process would have to act as a server and ASP.NET as a client requesting data from the server.
Therefore, you could build a web service, either SOAP or REST using C++ and reference this web service through your ASP.NET process asking for data from the C++ server. Here you could find a tutorial on how to build a web service using C++. Here you could find a .NET tutorial on web services.
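For the REST variant, the ASP.NET side can be as simple as an HTTP call to whatever endpoint the C++ service exposes. A sketch with a placeholder URL and route (not from the question):
using System.Net.Http;
using System.Threading.Tasks;

// Call a hypothetical REST endpoint exposed by the C++ application.
public static async Task<string> GetDataFromCppServiceAsync()
{
    using (HttpClient client = new HttpClient())
    {
        // Placeholder URL; point it at the C++ service's real endpoint.
        return await client.GetStringAsync("http://localhost:8080/api/getdata");
    }
}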
Another, simpler but less structured approach would be to use something like a vault, perhaps a file or a database. The results would be written to the vault anyway (even if they were not asked for), and the ASP.NET process would retrieve them whenever it wants.
Hope I helped!

Adobe AIR HTTP Connection Limit

I'm working on an Adobe AIR application which can upload files to a web server, which is running Apache and PHP. Several files can be uploaded at the same time and the application also calls the web server for various API requests.
The problem I'm having is that if I start two file uploads, while they are in progress any other HTTP requests will time out, which is causing a problem for the application and from a user point of view.
Are Adobe AIR applications limited to 2 HTTP connections, or is something else probably the issue?
From searching about this issue I've not found much, but one article did indicate that it wasn't limited to just two connections.
The file uploads are performed by calling the File class's upload() method, and the API calls are done using the HTTPService class. The development web server I am using is a WAMP server; however, when the application is released it will be talking to a LAMP server.
Thanks,
Grant
Here is the code I'm using to upload the file:
protected function btnAddFile_clickHandler(event:MouseEvent):void
{
// Create a new File object and display the browse file dialog
var uploadFile:File = new File();
uploadFile.browseForOpen("Select File to Upload");
uploadFile.addEventListener(Event.SELECT, uploadFile_SelectedHandler);
}
private function uploadFile_SelectedHandler(event:Event):void
{
// Get the File object which was used to select the file
var uploadFile:File = event.target as File;
uploadFile.addEventListener(ProgressEvent.PROGRESS, file_progressHandler);
uploadFile.addEventListener(IOErrorEvent.IO_ERROR, file_ioErrorHandler);
uploadFile.addEventListener(Event.COMPLETE, file_completeHandler);
// Create the request URL based on the download URL
var requestURL:URLRequest = new URLRequest(AppEnvironment.instance.serverHostname + "upload.php");
requestURL.method = URLRequestMethod.POST;
// Set the post parameters
var params:URLVariables = new URLVariables();
params.name = "filename.ext";
requestURL.data = params;
// Start uploading the file to the server
uploadFile.upload(requestURL, "file");
}
Here is the code for the API calls:
private function sendHTTPPost(apiFile:String, postParams:Object, resultCallback:Function, initialCallerResultCallback:Function):void
{
var httpService:mx.rpc.http.HTTPService = new mx.rpc.http.HTTPService();
httpService.url = AppEnvironment.instance.serverHostname + apiFile;
httpService.method = "POST";
httpService.requestTimeout = 10;
httpService.resultFormat = HTTPService.RESULT_FORMAT_TEXT;
httpService.addEventListener("result", resultCallback);
httpService.addEventListener("fault", httpFault);
var token:AsyncToken = httpService.send(postParams);
// Add the initial caller's result callback function to the token
token.initialCallerResultCallback = initialCallerResultCallback;
}
If you are on a Windows system, Adobe AIR uses Microsoft's WinINet library to access the web. This library by default limits the number of concurrent connections to a single server to 2:
WinInet limits the number of simultaneous connections that it makes to a single HTTP server. If you exceed this limit, the requests block until one of the current connections has completed. This is by design and is in agreement with the HTTP specification and industry standards.
... Connections to a single HTTP 1.1 server are limited to two simultaneous connections
There is an API to change the value of this limit but I don't know if it is accessible from AIR.
Since this limit also affects page loading speed for web sites, some sites are using multiple DNS names for artifacts such as images, javascripts and stylesheets to allow a browser to open more parallel connections.
So if you are controlling the server part, a workaround could be to create DNS aliases like www.example.com for uploads and api.example.com for API requests.
So as I was looking into this, I came across this info about using File.upload() in the documentation:
Starts the upload of the file to a remote server. Although Flash Player has no restriction on the size of files you can upload or download, the player officially supports uploads or downloads of up to 100 MB. You must call the FileReference.browse() or FileReferenceList.browse() method before you call this method.
Listeners receive events to indicate the progress, success, or failure of the upload. Although you can use the FileReferenceList object to let users select multiple files for upload, you must upload the files one by one; to do so, iterate through the FileReferenceList.fileList array of FileReference objects.
The FileReference.upload() and FileReference.download() functions are nonblocking. These functions return after they are called, before the file transmission is complete. In addition, if the FileReference object goes out of scope, any upload or download that is not yet completed on that object is canceled upon leaving the scope. Be sure that your FileReference object remains in scope for as long as the upload or download is expected to continue.
I wonder if something there could be giving you issues with uploading multiple files. I see that you are using browseForOpen() instead of browse(). It seems like they probably do the same thing... but maybe not.
I also saw this in the File class documentation
Note that because of new functionality added to the Flash Player, when publishing to Flash Player 10, you can have only one of the following operations active at one time: FileReference.browse(), FileReference.upload(), FileReference.download(), FileReference.load(), FileReference.save(). Otherwise, Flash Player throws a runtime error (code 2174). Use FileReference.cancel() to stop an operation in progress. This restriction applies only to Flash Player 10. Previous versions of Flash Player are unaffected by this restriction on simultaneous multiple operations.
When you say that you let users upload multiple files, do you mean subsequent calls to browse() and upload(), or do you mean one call that includes multiple files? It seems that if you are trying to make multiple separate calls, that may be the issue.
Anyway, I don't know if this is much help. It definitely seems that what you are trying to do should be possible. I can only guess that what is going wrong is perhaps a problem with implementation. Good luck :)
Reference: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/FileReference.html#upload()
http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/net/FileReference.html#browse()
Because I was thinking about a very similar question due to an error in one of my actual apps, I decided to write down the answer I found.
I instantiated 11 HTTPConnections and was wondering why my Flex 4 application stopped working and threw an HTTP error, although it had previously been working well with just 5 simultaneous HTTPConnections to the same server.
I tested this myself because I did not find anything regarding this in the Flex docs or on the internet.
I found that using more than 5 HTTPConnections was the reason for the Flex application to throw the runtime error.
I decided to instantiate the connections one after another as a temporary workaround: load the next one after the previous one has received its data, and so on.
That's of course just temporary, since one of the next steps will be to alter the responding server code so that it answers a single request with the results of queries against more than one table in one response. Of course, the client application logic needs to be altered too.
