I created an ASHX file and use it to handle async file uploads.
Since the site might not be hosted on our servers, I want to check for write permissions and delete permissions and supply the end user (site content editor in this case) with an error they can deal with.
I'm using Uploadify for the upload. I'm not sure, but I'm guessing this complicates returning a message that can be shown on the page, but maybe not.
I ended up using the C# code in the ASHX file to check permissions on the directory, and returned different status codes as JSON objects.
context.Response.Write("{success: 'false', message: '" + ex + "'}");
And in the client-side JS I just access response.message when response.success is 'false'.
Everything works well.
Thank you!
Before the user is able to attempt an upload, try writing and then deleting a small file at the destination on the server (on the server side); if this fails, you can supply them with an appropriate message.
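A minimal sketch of that probe inside the upload handler, assuming a hypothetical "~/uploads" destination folder and reusing the JSON shape described above (this is illustrative, not the asker's actual handler):

using System;
using System.IO;
using System.Web;

// Illustrative ASHX handler: probe write and delete permissions on the
// destination folder before handling the upload. "~/uploads" is an assumption.
public class UploadHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string uploadPath = context.Server.MapPath("~/uploads");
        string probeFile = Path.Combine(uploadPath, Path.GetRandomFileName());

        try
        {
            File.WriteAllText(probeFile, "probe"); // requires write permission
            File.Delete(probeFile);                // requires delete permission
        }
        catch (Exception ex)
        {
            context.Response.ContentType = "application/json";
            context.Response.Write("{success: 'false', message: '" + ex.Message + "'}");
            return;
        }

        // ...save the uploaded file here, then report success...
        context.Response.Write("{success: 'true'}");
    }

    public bool IsReusable { get { return false; } }
}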
OK, after reading some of the posts here and trying many different ways to download a file sent with the TransmitFile command, I've decided to write a question anyway, hoping someone will help me.
I have two pages, master.aspx and client.aspx. With the master page, I download and upload files to multiple websites, keeping them in sync that way. The client pages are on those servers and handle downloading and uploading content on the particular site itself.
Now, to send a file from a client page I am using this code (excerpt):
Response.TransmitFile(sPath + sFileName)
Response.Flush() ' Send all currently buffered output to the client.
Response.SuppressContent = True ' Don't send any further HTTP content to the client.
ApplicationInstance.CompleteRequest() ' Skip the remaining events in the HTTP pipeline and go straight to EndRequest.
Response.End()
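For completeness, the part of the sender that the excerpt leaves out usually sets a content type and a content-disposition header before TransmitFile, so the caller receives a file rather than an HTML page. A rough sketch of that (shown in C# for illustration; sPath and sFileName mirror the variables in the excerpt above):

// Hypothetical sketch of the response setup around TransmitFile.
Response.Clear();
Response.ContentType = "application/octet-stream"; // generic binary; adjust to the real type
Response.AddHeader("Content-Disposition", "attachment; filename=" + sFileName);
Response.TransmitFile(sPath + sFileName);
Response.Flush();
Context.ApplicationInstance.CompleteRequest(); // same effect as ApplicationInstance.CompleteRequest() above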
And to download that content on the master page, I am using this:
Using client As WebClient = New WebClient()
SourcePath = rootDir + "UploadedFiles\"
If Directory.Exists(SourcePath) = False Then
Directory.CreateDirectory(SourcePath)
End If
client.DownloadFile(lblURL.Text + "?action=download&Path=" + Uri.EscapeDataString(SourcePath) + "&FileName=" + Uri.EscapeDataString(InputFileName) + "&AdminCode=" + Uri.EscapeDataString(cAdminCode), SourcePath + InputFileName)
End Using
But all I am getting is the HTML content of the client.aspx page; it seems that client.DownloadFile can't figure out that I want to download the file, not the page itself. How do I make client.DownloadFile download the file sent with TransmitFile?
Thanks,
Dejan
UPDATE:
I have found an issue with my path variables. Because I had switched to the DownloadFile overload that takes two parameters (the second being the target file path), I was wrongly sending the target file path as a parameter to the remote procedure, and that's why I didn't get the file.
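In other words, the remote URL should only carry the parameters the remote page actually needs, and the local save location goes only in DownloadFile's second argument. A rough C# equivalent of the corrected call (variable names are placeholders based on the snippet above):

using (var client = new WebClient())
{
    // Query string: only what the remote client.aspx needs to locate the file.
    string remoteUrl = lblURL.Text
        + "?action=download"
        + "&FileName=" + Uri.EscapeDataString(InputFileName)
        + "&AdminCode=" + Uri.EscapeDataString(cAdminCode);

    // Second argument: where the downloaded file is saved locally.
    client.DownloadFile(remoteUrl, Path.Combine(SourcePath, InputFileName));
}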
I'm struggling to figure out what exactly is happening. I am using GdPicture to save a scanned document through JavaScript, using their COM+ code and source project as my starting ground. Long story short, their function issues an HTTP PUT request specifying the file name to be saved.
When I execute the command I see that the request is getting to my server, and it even has the appropriate content size to include the PDF document. I even get a 200 response back to my browser, no errors or anything... yet the PDF doesn't get saved. Is that because PUT isn't the right way to do this? I don't have the option to POST the file because the transfer is wrapped in GdPicture's API. So with that said,
I have done the following:
Ensured that IIS_IUSRS group has write permissions to the "Upload" virtual directory
Added a handler that specifically allows the PUT verb for "*.pdf" (see the sketch after this list for what such a handler has to do)
Removed the StaticFileHandler for the "Upload" virtual directory
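For reference, allowing the PUT verb only lets the request reach a handler; the handler still has to read the request body and write the file itself, since IIS will not persist a PUT on its own. A minimal sketch of such a handler (the class name and the assumption that it writes straight into the Upload folder are illustrative, not GdPicture's API):

using System.IO;
using System.Web;

// Illustrative PUT handler: reads the request body and writes it to the
// Upload folder. PdfPutHandler and "~/Annotation/Upload" are assumed names.
public class PdfPutHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        if (context.Request.HttpMethod != "PUT")
        {
            context.Response.StatusCode = 405; // Method Not Allowed
            return;
        }

        // The file name comes from the requested URL, e.g. /Annotation/Upload/scan.pdf
        string fileName = Path.GetFileName(context.Request.Url.AbsolutePath);
        string savePath = Path.Combine(context.Server.MapPath("~/Annotation/Upload"), fileName);

        // Persist the PUT body to disk.
        using (var output = File.Create(savePath))
        {
            context.Request.InputStream.CopyTo(output);
        }

        context.Response.StatusCode = 200;
    }

    public bool IsReusable { get { return true; } }
}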
I apologize for the links, but I don't have 10 rep points yet.
PUT Request from FIDDLER
Response
** Edit **
More information about GdPicture: I have already contacted them and their function is not the problem. The implementation is as simple as:
var status = oGdViewer.SaveDocumentToPDF_2("http://domain.com/Annotation/Upload/" + FileName, "user", "pass");
Thanks!
I am struggling with Meteor when using separate client and server directories and was hoping someone could help me.
My server code in the server subdirectory looks like:
Testing = new Meteor.Collection("testing");
Testing.insert({hello1:'world1'});
Testing.insert({hello2:'world2'});
Testing.insert({hello3:'world3'});
Meteor.publish("testing", function() {
console.log('server: ' + Testing.find().count());
return Testing.find();
});
My client code in the client subdirectory looks like:
Meteor.subscribe("testing");
var Testing = new Meteor.Collection("testing");
console.log('count: ' + Testing.find().count());
I have tried this with autopublish on and off.
In my terminal window, I can see the log statement output a number of items as I would expect. But for my client, in the browser console window I always see a count of 0.
Not sure if this is related, but when I modify my subscribe statement and save my changes, I see this error in my console window:
POST http://localhost:3000/sockjs/574/ukpxre9v/xhr 503 (Service Unavailable) sockjs-0.3.4.js:821
AbstractXHRObject._start sockjs-0.3.4.js:821
(anonymous function)
I'm sure I'm making some stupid mistake, but I haven't had any luck tracking it down. Any help would be greatly appreciated.
You're running console.log('count: ' + Testing.find().count()); too soon. Meteor will sync your server collection down to the client, but it takes a very short amount of time.
For instance, if you run console.log('count: ' + Testing.find().count()); in your web console, it should give you a proper result, because by then you will have waited half a second or so for the data to load down from the server.
You could put this code in a reactive context, such as Meteor.autorun or a Template helper, so it shows the live count correctly.
The reason you see that 503 XHR error is that when you modify your code and save it, Meteor restarts and serves up the new content as soon as possible, so the socket between the client and server is temporarily interrupted until it refreshes the page. This is not really anything wrong with your code.
I have a site set up in IIS. It allows users to download files from a remote cloud to their own local desktop. However, the context seems to be mixed up: when I access the website externally via the IP and execute the download, it saves the file to the server hosting the site, not locally. What's going on?
The relevant lines of my code:
using (var sw2 = new FileStream(filePath, FileMode.Create))
{
    var request = new RestRequest("drives/{chunk}");
    RestResponse resp2 = client.Execute(request);
    sw2.Write(resp2.RawBytes, 0, resp2.RawBytes.Length);
}
Your code is writing a file to the local filesystem of the server. If you want to send the file to the client, you need to do something like
Response.BinaryWrite(resp2.RawBytes);
The Response object is what you use to send data back to the client who made the request to your page.
I imagine that code snippet you posted is running in some sort of code-behind somewhere. That runs on the server; it's not going to run on the client. You will need to write those bytes to the Response object, specify the content type, etc., and let the user save the file themselves.
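A minimal sketch of that inside the code-behind, assuming the bytes are already in resp2.RawBytes (the file name and content type here are placeholders):

// Stream the downloaded bytes back to the browser instead of writing them
// to the server's disk. "report.pdf" and the content type are placeholders.
Response.Clear();
Response.ContentType = "application/octet-stream";
Response.AddHeader("Content-Disposition", "attachment; filename=report.pdf");
Response.BinaryWrite(resp2.RawBytes);
Response.End();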
Issue
Msxml2.ServerXMLHTTP keeps returning 401 - Unauthorised errors each time we attempt to read the contents of a file (ASP) from a web server.
The source server is running IIS 6, using NTLM integrated login.
This process has been used successfully before, but only insofar as extracting XML files from external websites, not internal ones.
The proxy settings in the registry of the server on which the script is run have also been updated to bypass the website in question, but to no avail.
All paths identified in the VBScript have been checked and tested, and are correct.
User running the script has correct read/write permissions for all locations referenced in the script.
Solution needed
To identify the cause of the HTTP 401 Unauthorised messages, so that the script will work as intended.
Description
Our organisation operates an intranet, where the content is replicated to servers at each of our remote sites. This ensures these sites have continued fast access to important information, documentation and data, even in the event of losing connectivity.
We are in the middle of improving the listing and management of Forms (those pesky pieces of paper that have to be filled in for specific tasks). This involves establishing a database of all our forms.
However, as the organisation hasn't been smart enough to invest in MSSQL Server instances at each site, replication of the database and accessing it from the local SQL server isn't an option.
To work around this, I have constructed a series of views (ASP pages) which display the required data. I then intend to use Msxml2.ServerXMLHTTP via VBScript, so I can read the resulting pages and save the output to a static file back on the server.
From there, the existing replication process can stream these files out to the site - with users having no idea that they're looking at a static page that just happened to be generated from database output.
Code
' Forms - Static Page Generator
' Implemented 2011-02-15 by Michael Harris
' Purpose: To download the contents of a page, and save that page to a static file.
' Target category: 1 (Contracts)
' Target Page:
' http://sharename.fpc.wa.gov.au/corporate/forms/generator/index.asp
' Target path: \\servername\sharename\corporate\forms\index.asp
' Resulting URL: http://sharename.fpc.wa.gov.au/corporate/forms/index.asp
' Remove the read-only flag on the file (if present) so it can be overwritten
Const READ_ONLY = 1
Set objFSO = CreateObject("Scripting.FileSystemObject")
Set objFile = objFSO.GetFile("\\server\sharename\corporate\forms\index.asp")
If objFile.Attributes AND READ_ONLY Then
objFile.Attributes = objFile.Attributes XOR READ_ONLY
End If
Dim webObj, strURL
Set webObj = CreateObject("Msxml2.ServerXMLHTTP")
strURL = "http://sharename.fpc.wa.gov.au/corporate/forms/generator/index.asp"
webObj.Open "GET", strURL
webObj.send
If webObj.Status=200 Then
Set objFso = CreateObject("Scripting.FileSystemObject")
Set txtFile = objFso.OpenTextFile("\\servername.fpc.wa.gov.au\sharename\corporate\forms\index.asp", 2, True)
txtFile.WriteLine webObj.responseText
txtFile.close
ElseIf webObj.Status >= 400 And webObj.Status <= 599 Then
MsgBox "Error Occurred : " & webObj.Status & " - " & webObj.statusText
Else
MsgBox webObj.ResponseText
End If
Replace your line:
webObj.Open "GET", strURL
With:
webObj.Open "GET", strURL, False, "username", "password"
In most cases 401 Unauthorized means you haven't supplied credentials. Also, you should specify False to indicate you don't want async mode.
It sounds like the O.P. got this working with the correct proxy settings in the registry (http://support.microsoft.com/kb/291008 explains why proxy configuration will fix this). Newer versions of ServerXMLHTTP have a setProxy method that can be used to set the necessary proxy configuration in your code instead.
In the O.P.'s code above, after webObj is created, the following line of code would set up the proxy correctly:
webObj.setProxy 2, "0.0.0.0:80", "*.fpc.wa.gov.au"
ServerXMLHTTP will pass on the credentials of the user running the code if it is configured with a proxy, and if the target URL bypasses that proxy. Since you are bypassing the proxy anyway, you can make it a dummy value, "0.0.0.0:80", and make sure your target URL is covered by what you specify in the bypass list, "*.fpc.wa.gov.au".
I would first test whether you can reach your URL through a normal browser on the same server X that you run your code on (A). I would then try to reach the URL from another PC, one never used to reach that URL before, but on the same network as server X (B).
If B works but A doesn't, I would suspect that your source server (i.e. the one that serves the URL) blocks server X for some reason. Check the security settings of IIS 6 and of NTLM.
If neither A nor B works, there is something wrong more generally with your source server (i.e. it blocks everything, or NTLM doesn't let you in).
If A works (B doesn't matter then), the problem has to be somewhere in your code. In that case, I would recommend Fiddler. This tool can show you the HTTP requests of both your browser and your code in real time. You can then compare the two. That should give you at least a very strong hint about (if not immediately give you) the solution.