How do I decode data returned in the Remote Resources URL (webfeed.aspx) of an RDP server? - asp.net

How do I decode the rdweb/feed/webfeed.aspx content from a Microsoft Remote Desktop (RDP) server?
I am having difficulty identifying the encoding of webfeed.aspx, or more specifically of the https:// RDP url /rdweb/feed/webfeed.aspx URL that the RDP client uses. In Microsoft's RDP client, the data resolves to references to directories and applications that can be used as shortcuts for the RDP connection.
The file that I get appears to be base64 encoded. From what I have read, this should be an XML file that describes the resources, but it seems to be compressed or encoded somehow. I have no trouble getting the data: I can read it via a browser (though not understand it), and Microsoft's RDP client pulls the data correctly, so the data is good. I need to decode/process the data because I am extending an open source RDP tool to do the same as Microsoft's RDP client.
Here is an example, from the text file returned by a test server's rdweb/feed/webfeed.aspx:
46672D19C141995BFAA3317324E7595B8AF001B09CF315A3668E2335F383079AA7397E6E8ADF56379306F18DCCFFB4A542CC4C8B81609D5E9D738F8347BC0372EB7513DD797EF0BFA921F7D6E2A108C6A12F44712D57D6191FB068AF1733256291BC0BD7429AD585DA9E6ECC3D1380562A091E980D6908E2E0EF4184689329686AD132E2D63945810D93F88ECAEC6A0B9460F23B9ABF229F974D3B32D0D7415CD8EAF1B6B93678718C9E658F0CEDA604D5294FF3458FB2ABD798A668E8E6714939C8115EC00A13354F8EF22563CF65F5C6D053306D4C3276032D045752412BA760C683C5

Have you tried something like this?
HttpWebRequest httpWebRequest = (HttpWebRequest)WebRequest.Create("https://RDPurl/rdweb/feed/webfeed.aspx");
HttpWebResponse httpWebResponse = (HttpWebResponse)httpWebRequest.GetResponse();
string connectionXml;
using (StreamReader streamReader = new StreamReader(httpWebResponse.GetResponseStream()))
{
    connectionXml = streamReader.ReadToEnd();
}
More detailed code is here.
The resulting connectionXml string should be Resource List Syntax.
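If the response does come back as plain XML, a minimal sketch for listing the resources in it might look like the following. Treat the names as assumptions: the Resource element and the Title/Alias attributes are guesses at the feed's schema, so inspect your actual feed and adjust them accordingly.
// Requires: using System; using System.Xml.Linq;
// connectionXml is the string read from webfeed.aspx above.
XDocument feed = XDocument.Parse(connectionXml);

// Match elements by local name so the (unknown) XML namespace doesn't get in the way.
foreach (XElement resource in feed.Descendants())
{
    if (resource.Name.LocalName != "Resource")   // "Resource" is an assumed element name
        continue;

    // "Title" and "Alias" are assumed attribute names; check your actual feed.
    string title = (string)resource.Attribute("Title") ?? "(no title)";
    string alias = (string)resource.Attribute("Alias") ?? "(no alias)";
    Console.WriteLine("{0} -> {1}", title, alias);
}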

Related

copy file from local to remote server

I'm dynamically creating HTML files on my local system (using HtmlTextWriter, then saving them with StreamWriter to the local file system). I want to copy these files to my remote server without user interaction, so that my users can read them. I use C#.
For instance, I want to copy from d:\myfile.html to mysite.com\myfile.html. How can I do it?
I have used this and it worked; it may be useful.
To hold the path to the local machine:
rPath = "\\" & Request.UserHostAddress & "\c$\temp\"
For the output file name:
rOutput = Session.SessionID & "_" & Format(Date.Now(), "ddMMyyhhmmss") & ".pdf"
The report will then be created at \\localhost\c$\temp.
You can't use the System.IO classes for this (unless you have access to the remote server as a network drive), but you can programmatically POST the file from the client to the remote server over HTTP using System.Net.
Here's a snippet using the WebRequest class:
string postData = "field1=value1&field2=value2"; // your form data here
byte[] postBytes = System.Text.Encoding.UTF8.GetBytes(postData);

WebRequest request = WebRequest.Create( url );
request.Timeout = 1000; // some appropriate value, in milliseconds
request.Method = "POST";
request.ContentType = "application/x-www-form-urlencoded";
request.ContentLength = postBytes.Length; // must match the body written below
using( Stream requestStream = request.GetRequestStream() ) {
    // write the request body to the stream
    requestStream.Write( postBytes, 0, postBytes.Length );
}
More info for HTTP: http://msdn.microsoft.com/en-us/library/debx8sh9.aspx
Alternatively, you could use a protocol designed for transferring files like FTP (or something more secure) which isn't that hard to do in code.
FTP options: http://msdn.microsoft.com/en-us/library/ms229718
Is your remote server based on Windows and in the same workgroup or domain as your working machine? If so, you can turn on Windows File Sharing on the server. Then you can copy your file from the command line like this:
copy c:\test.txt \\mysite.com
A path like "\\mysite.com" can also be used with File.Copy in C#.
Otherwise, you need to set up an FTP environment on your server and use the FTP-related API in C#.
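As a rough sketch of the File.Copy approach (the share name wwwroot and the destination path are assumptions; use whatever share you have actually exposed and granted write access to):
// Copy the locally generated file to a shared folder on the remote server.
// Requires that the application identity has write access to the share.
string localPath = @"d:\myfile.html";
string remotePath = @"\\mysite.com\wwwroot\myfile.html"; // assumed share name
System.IO.File.Copy(localPath, remotePath, true); // true = overwrite if the file already exists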
You could set up an FTP server and copy the files programmatically via FTP.
Examples can be found here or here.
There are three ways by which you can copy the file to the remote server.
1. Normal file copy. Here you need access to the web server's shared path. If the web server is on the same network as your application, you can share the webroot and give write access to the user who is running the application; that user can then use File.Copy("source.txt", "\\Servername\SharedFolderName\target.txt").
2. FTP the file to the remote server. This MSDN example shows how to do it, and it works with most shared hosting providers; a minimal sketch follows below.
3. HTTP POST, as noted by Tim. But this would let any user perform the post, so you may have to take care of user provisioning, authentication, and authorization. IMO, keep this as the last option, since provisioning users and granting rights to certain paths may become cumbersome.
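Here is a minimal sketch of the FTP option using FtpWebRequest; the host name, credentials, and paths are placeholders, not values from the question.
// Requires: using System.IO; using System.Net;
// Upload a local file to an FTP server (hypothetical host and credentials).
FtpWebRequest ftpRequest = (FtpWebRequest)WebRequest.Create("ftp://ftp.mysite.com/myfile.html");
ftpRequest.Method = WebRequestMethods.Ftp.UploadFile;
ftpRequest.Credentials = new NetworkCredential("username", "password");

byte[] fileContents = File.ReadAllBytes(@"d:\myfile.html");
ftpRequest.ContentLength = fileContents.Length;

using (Stream requestStream = ftpRequest.GetRequestStream())
{
    requestStream.Write(fileContents, 0, fileContents.Length);
}

using (FtpWebResponse response = (FtpWebResponse)ftpRequest.GetResponse())
{
    // response.StatusDescription holds the server's reply, e.g. "226 Transfer complete."
}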

iis7 website accessed externally downloads files to server instead of local machine

I've a site set up in IIS. It allows users to download files from a remote cloud to their own local desktops. HOWEVER, the context seems to be mixed up: when I access the website externally via the IP and execute the download, it saves the file to the server hosting the site, not locally. What's going on??
My relevant lines code:
using (var sw2 = new FileStream(filePath, FileMode.Create))
{
    try
    {
        var request = new RestRequest("drives/{chunk}");
        RestResponse resp2 = client.Execute(request);
        sw2.Write(resp2.RawBytes, 0, resp2.RawBytes.Length);
    }
}
Your code is writing a file to the local filesystem of the server. If you want to send the file to the client, you need to do something like
Response.BinaryWrite(resp2.RawBytes);
The Response object is what you use to send data back to the client who made the request to your page.
I imagine the code snippet you posted is running in some sort of code-behind somewhere. That runs on the server, not on the client. You will need to write those bytes to the Response object, specify the content type, etc., and allow the user to save the file himself.
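Something along these lines, as a minimal sketch (the file name and content type here are placeholders, since the question doesn't say what kind of file is being downloaded):
// Send the downloaded bytes to the requesting browser instead of the server's disk.
Response.Clear();
Response.ContentType = "application/octet-stream"; // placeholder content type
Response.AddHeader("Content-Disposition", "attachment; filename=download.bin"); // placeholder file name
Response.BinaryWrite(resp2.RawBytes);
Response.End();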

Reading a remote URL in Domino LotusScript

I have a remote RSS feed which has to be transformed into Notes documents using LotusScript.
I've looked through the documentation, but I can't find how to open a remote URL in order to retrieve its contents. In other words, some sort of wget- or curl-like functionality. Can anyone shed some light on how to do this? Using Java is not an option.
Thanks.
Check out the NotesDOMParser class - available in LotusScript - which lets you (indirectly) pull XML from a remote URL and process it in an XML DOM object.
You can pull the XML into a string using the MSXMLHTTP COM object, then use NotesStream to send the XML to the NotesDOMParser.
I have not tested, but the code would look something like this:
...
' Fetch the XML over HTTP via the MSXML COM object (Windows only)
Set objXML = CreateObject("Microsoft.XMLHTTP")
objXML.open "GET", sURL, False, "", ""
objXML.send("")
sXMLAsText = Trim$(objXML.responseText)

' Write the XML text into a NotesStream and hand it to the DOM parser
Set inputStream = session.CreateStream
Call inputStream.WriteText(sXMLAsText)
inputStream.Position = 0
Set outputStream = session.CreateStream
Set domParser = session.CreateDOMParser(inputStream, outputStream)
domParser.Process
...
Documentation: http://publib.boulder.ibm.com/infocenter/domhelp/v8r0/index.jsp?topic=/com.ibm.designer.domino.main.doc/H_NOTESDOMPARSER_CLASS.html
You can't open a remote URL (whether it's HTTP or some other protocol) using native LotusScript: the object library simply doesn't support it. If you're running on a Windows server, you should be able to use the MS XMLHttp DLLs to get a handle on your remote file via a URL, as specified by the previous answer. (Alternatively, this link specifies how to parse and open a UNC path with LotusScript; again, Windows only.)
All that said, if I understand you correctly, you're not using HTTP to access the remote file at all. If the RSS file is just on a simple path, why can't you open the file for parsing in the normal way with LotusScript?

Blackberry http connection not working on 3g

Hi friends, I'm a newbie in BlackBerry programming and have managed to make a small application. The application downloads an XML file through HTTP, parses it, and displays it on the screen. Now the problem is that although it works fine on my simulator, the client complains that he gets a connection error when he connects through 3G. Do I need to add anything other than the following?
// Build a document based on the XML file.
url = <my clients url file>;
DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
DocumentBuilder builder = factory.newDocumentBuilder();
hc = (HttpConnection)Connector.open(url+";deviceside=true");
hc.setRequestMethod(HttpConnection.GET);
InputStream inputStream = hc.openInputStream();
hc.getFile();
Document document = builder.parse(inputStream);
hc.close();
inputStream.close();
Do I need to add anything to make it download HTTP content through 3G as well?
Specifying "deviceside=true" requires that the device have the APN correctly configured, or that you include the APN specification in the URL. Have a look at this video.
You need to be able to detect what sort of connection the device is using; as was said above, deviceside=true works only for APN. If you just want to test it out, try using:
;deviceside=false //for mds
;deviceside=false;ConnectionType=mds-public //for bis-b
;interface=wifi //for wifi

SharePoint stream file for preview

I am looking to stream a file housed in a SharePoint 2003 document library down to the browser. Basically the idea is to open the file as a stream and then "write" the file stream to the response, specifying the content type and content disposition headers. Content disposition is used to preserve the file name; content type, of course, clues the browser in about which app to open to view the file.
This all works fine in the development environment and the UAT environment. However, in the production environment things do not always work as expected, though only with IE6/IE7; FF works great in all environments.
Note that in the production environment SSL is enabled and generally used. (When SSL is not used in the production environment, the file streams down, is named as expected, and displays properly.)
Here is a code snippet:
System.IO.FileStream fs = new System.IO.FileStream(Server.MapPath(".") + "\\" + "test.doc", System.IO.FileMode.Open);
long byteNum = fs.Length;
byte[] pdfBytes = new byte[byteNum];
fs.Read(pdfBytes, 0, (int)byteNum);
Response.AppendHeader("Content-disposition", "filename=Testme.doc");
Response.CacheControl = "no-cache";
Response.ContentType = "application/msword; charset=utf-8";
Response.Expires = -1;
Response.OutputStream.Write(pdfBytes, 0, pdfBytes.Length);
Response.Flush();
Response.Close();
fs.Close();
Like I said, this code snippet works fine on the dev machine and in the UAT environment: a dialog box opens and asks whether to save, view, or cancel Testme.doc. But in production, and only when using SSL, IE6 and IE7 don't use the name of the attachment. Instead they use the name of the page that is sending the stream, testheader.aspx, and then an error is thrown.
IE does provide an advanced setting "Do not save encrypted pages to disk".
I suspect this is part of the problem: the server tells the browser not to cache the file, while IE has "Do not save encrypted pages to disk" enabled.
Yes, I am aware that for larger files the code snippet above will be a major drag on memory, and this implementation will be problematic. So the real final solution will not read the entire file into a single byte array, but rather will open the file as a stream and send it down to the client in bite-size chunks (perhaps roughly 10K in size).
Anyone else have similar experience "streaming" binary files over ssl? Any suggestions or recommendations?
It might be something really simple. Believe it or not, I coded exactly the same thing today; I think the issue might be that the content disposition doesn't tell the browser it's an attachment and therefore that it can be saved.
Response.AddHeader("Content-Disposition", "attachment;filename=myfile.doc");
Failing that, I've included my code below, as I know it works over https://.
private void ReadFile(string URL)
{
    try
    {
        string uristring = URL;
        WebRequest myReq = WebRequest.Create(uristring);
        NetworkCredential netCredential = new NetworkCredential(ConfigurationManager.AppSettings["Username"].ToString(),
                                                                ConfigurationManager.AppSettings["Password"].ToString(),
                                                                ConfigurationManager.AppSettings["Domain"].ToString());
        myReq.Credentials = netCredential;
        StringBuilder strSource = new StringBuilder("");
        //get the stream of data
        string contentType = "";
        MemoryStream ms;
        // Send a request to download the pdf document and then get the response
        using (HttpWebResponse response = (HttpWebResponse)myReq.GetResponse())
        {
            contentType = response.ContentType;
            // Get the stream from the server
            using (Stream stream = response.GetResponseStream())
            {
                // Use the ReadFully method from the link above:
                byte[] data = ReadFully(stream, response.ContentLength);
                // Return the memory stream.
                ms = new MemoryStream(data);
            }
        }
        Response.Clear();
        Response.ContentType = contentType;
        Response.AddHeader("Content-Disposition", "attachment;");
        // Write the memory stream containing the pdf file directly to the Response object that gets sent to the client
        ms.WriteTo(Response.OutputStream);
    }
    catch (Exception ex)
    {
        throw new Exception("Error in ReadFile", ex);
    }
}
OK, I resolved the problem; several factors were at play here.
Firstly, this Microsoft support article was beneficial:
Internet Explorer is unable to open Office documents from an SSL Web site.
In order for Internet Explorer to open documents in Office (or any out-of-process, ActiveX document server), Internet Explorer must save the file to the local cache directory and ask the associated application to load the file by using IPersistFile::Load. If the file is not stored to disk, this operation fails.
When Internet Explorer communicates with a secure Web site through SSL, Internet Explorer enforces any no-cache request. If the header or headers are present, Internet Explorer does not cache the file. Consequently, Office cannot open the file.
Secondly, something earlier in the page processing was causing the "no-cache" header to get written, so Response.ClearHeaders needed to be added; this cleared out the no-cache header, since the output of the page needs to allow caching.
Thirdly, for good measure, I also added Response.End, so that no other processing further on in the request lifetime attempts to clear the headers I've set and re-add the no-cache header.
Fourthly, discovered that content expiration had been enabled in IIS. I've left it enabled at the web site level, but since this one aspx page will serve as a gateway for downloading the files, I disabled it at the download page level.
So here is the code snippet that works (there are a couple other minor changes which I believe are inconsequential):
System.IO.FileStream fs = new System.IO.FileStream(Server.MapPath(".") + "\\" + "TestMe.doc", System.IO.FileMode.Open);
long byteNum = fs.Length;
byte[] fileBytes = new byte[byteNum];
fs.Read(fileBytes, 0, (int)byteNum);
Response.ClearContent();
Response.ClearHeaders();
Response.AppendHeader("Content-disposition", "attachment; filename=Testme.doc");
Response.Cache.SetCacheability(HttpCacheability.Public);
Response.ContentType = "application/octet-stream";
Response.OutputStream.Write(fileBytes, 0, fileBytes.Length);
Response.Flush();
Response.Close();
fs.Close();
Response.End();
Keep in mind too, this is just for illustration. The real production code will include exception handling and will likely read the file a chunk at a time (perhaps 10K).
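For what that chunked version might look like, here is a rough sketch (the 10K buffer size is just the value suggested above, and the file name matches the illustration):
// Stream the file to the client in small chunks instead of loading it all into memory.
const int bufferSize = 10 * 1024; // roughly 10K per chunk
byte[] buffer = new byte[bufferSize];

using (System.IO.FileStream fs = new System.IO.FileStream(Server.MapPath(".") + "\\" + "TestMe.doc", System.IO.FileMode.Open))
{
    Response.ClearContent();
    Response.ClearHeaders();
    Response.AppendHeader("Content-disposition", "attachment; filename=Testme.doc");
    Response.Cache.SetCacheability(HttpCacheability.Public);
    Response.ContentType = "application/octet-stream";

    int bytesRead;
    while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
    {
        Response.OutputStream.Write(buffer, 0, bytesRead);
        Response.Flush(); // push each chunk to the client as it is read
    }
}
Response.End();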
Mauro, thanks for catching a detail that was missing from the code as well.
