There's a great article on this subject here: https://searchwindevelopment.techtarget.com/tip/Share-session-state-between-ASP-and-ASPNET-apps
The problem is, I can't get it to work without the source code. The code snippets in the article show many errors in Visual Studio. The author, Dennis Hurst, is unreachable, as the article was written in 2004. Does anybody out there have the actual source code they can post, or can you point me in the right direction? My goal is to pull Classic ASP object data (Application) into ASP.NET code that shares the same folder. I have read that it might be possible using a COM wrapper, but that is way out of my skill level. This sounds like the best solution for my problem. Thank you in advance for your help.
// Requires: using System; using System.IO; using System.Net; using System.Web;

// Fields holding the current HTTP context and the URL of the helper ASP page.
private HttpContext oContext;
private string ASPSessionVarASP;

// The constructor for this class takes a reference to the HttpContext and derives the URL it will need to send its requests to.
public ASPSessionVar(HttpContext oInContext)
{
    oContext = oInContext;
    ASPSessionVarASP = "/SessionVar.asp"; // note the leading slash so the URL built below is well formed
    /* We now build a System.Uri object to derive the correct
       URL to send the HTTP request to. oContext.Request.Url
       will contain a System.Uri object that represents
       this ASPX's URL.
    */
    System.Uri oURL = oContext.Request.Url;
    ASPSessionVarASP = oURL.Scheme + "://"
        + oURL.Host + ":" + oURL.Port.ToString()
        + ASPSessionVarASP;
}
//-------------------------------------------------------------------------//
// The primary function for this example is called GetSessionVar. It does the majority of the work:
// it creates a WebRequest, sends it to the ASP page, and returns the response.
public string GetSessionVar(string ASPSessionVar)
{
    // First get the Session Cookie
    string ASPCookieName = "";
    string ASPCookieValue = "";
    if (!GetSessionCookie(out ASPCookieName, out ASPCookieValue))
    {
        return "";
    }
    // Initialize the WebRequest.
    HttpWebRequest myRequest = (HttpWebRequest)WebRequest.Create(
        ASPSessionVarASP + "?SessionVar=" + ASPSessionVar);
    myRequest.Headers.Add("Cookie: " + ASPCookieName + "=" + ASPCookieValue);

    // Send the request and get a response
    HttpWebResponse myResponse = (HttpWebResponse)myRequest.GetResponse();
    Stream receiveStream = myResponse.GetResponseStream();
    System.Text.Encoding encode = System.Text.Encoding.GetEncoding("utf-8");
    StreamReader readStream = new StreamReader(receiveStream, encode);
    string sResponse = readStream.ReadToEnd();

    // Do a bit of cleanup
    myResponse.Close();
    readStream.Close();
    return sResponse;
}
//------------------------------------------------------------------------------------------------------------------------//
// This function simply takes the Request that was passed by the client and extracts the ASP session cookie from it.
// It is called by the GetSessionVar function to retrieve the ASP session cookie.
private bool GetSessionCookie(out string ASPCookieName, out string ASPCookieValue)
{
    int loop1;
    HttpCookie myCookie; // Cookie variable
    ASPCookieName = "";
    ASPCookieValue = "";

    // Capture all cookie names into a string array.
    String[] CookieArray = oContext.Request.Cookies.AllKeys;

    // Grab individual cookie objects by cookie name.
    for (loop1 = 0; loop1 < CookieArray.Length; loop1++)
    {
        myCookie = oContext.Request.Cookies[CookieArray[loop1]];
        if (myCookie.Name.StartsWith("ASPSESSION"))
        {
            ASPCookieName = myCookie.Name;
            ASPCookieValue = myCookie.Value;
            return true;
        }
    }
    return false;
}
//--------------------------------------------------------------------------------------------------------------------------------//
// The ASPX page will instantiate an ASPSessionVar object, passing in the current Context to the constructor.
// The GetSessionVar function is then called, passing in the name of the ASP Session variable that is to be retrieved.

// Create an ASPSessionVar object, passing in the current context.
SPI.WebUtilities.ASP.ASPSessionVar oASPSessionVar
    = new SPI.WebUtilities.ASP.ASPSessionVar(Context);
string sTemp = oASPSessionVar.GetSessionVar("FirstName");
// CLASSIC ASP CODE BELOW !!
// The ASP code for this example was placed in an ASP file called SessionVar.asp.
// It performs two simple tasks. First, it ensures that the request is coming from the server that the ASP page is running on.
// This ensures that the request is valid and coming ONLY from the Web server's IP address.
// The ASP page then returns the session variable it was asked to provide.
<%
dim sT
if Request.ServerVariables("REMOTE_ADDR") = _
   Request.ServerVariables("LOCAL_ADDR") then
    sT = Request("SessionVar")
    if trim(sT) <> "" then
        Response.Write Session(sT)
    end if
end if
%>
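Since the goal stated above is Application data, and the article's helper only exposes Session state, here is a minimal, untested sketch of an Application-variable variant. The ApplicationVar.asp page name is hypothetical; it would mirror SessionVar.asp but write out Application(sT) instead of Session(sT). No session cookie is needed here because Application state is server-wide rather than per-user:

// Hypothetical companion to GetSessionVar: fetches an Application variable
// via an ApplicationVar.asp helper page (not part of the original article).
public string GetApplicationVar(string ASPApplicationVar)
{
    // Application state is not tied to a session cookie, so no cookie step is required.
    string url = ASPSessionVarASP.Replace("SessionVar.asp", "ApplicationVar.asp")
        + "?ApplicationVar=" + ASPApplicationVar;
    HttpWebRequest myRequest = (HttpWebRequest)WebRequest.Create(url);
    using (HttpWebResponse myResponse = (HttpWebResponse)myRequest.GetResponse())
    using (StreamReader readStream = new StreamReader(myResponse.GetResponseStream()))
    {
        return readStream.ReadToEnd();
    }
}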
I have successfully integrated CAS for our different clients. But this time the 'samlValidate' response is not consistently supplying the required attribute. Login fails randomly because the attribute is missing from the ticket validation response. Sometimes, after I clear the browser history, the attribute comes back in the response.
Expected response:
<cas:serviceResponse xmlns:cas='http://www.xxxxx.xxx/tp/cas'>
    <cas:authenticationSuccess>
        <cas:user>xxxxx</cas:user>
        <cas:attributes>
            <cas:userNumber>1234567</cas:userNumber>
        </cas:attributes>
    </cas:authenticationSuccess>
</cas:serviceResponse>
Response received randomly (missing the attributes block):
<cas:serviceResponse xmlns:cas='http://www.xxx.xxx/tp/cas'>
    <cas:authenticationSuccess>
        <cas:user>xxxxxx</cas:user>
    </cas:authenticationSuccess>
</cas:serviceResponse>
Please note: we have written custom code to integrate CAS with our ASP.NET WebForms application.
// Requires: using System.IO; using System.Net; using System.Web;
string userId = string.Empty;
// Look for the "ticket=" after the "?" in the URL.
string tkt = HttpContext.Current.Request.QueryString["ticket"];
// Service URL is the URL of the Researcher Portal.
string service = "www.xyz.com";
string CASHOST = "https://cas.xyz.ca:8443/cas";
// First time through there is no ticket=, so redirect to CAS login.
if (tkt == null || tkt.Length == 0)
{
    string redir = CASHOST + "/login?" +
        "service=" + HttpUtility.UrlEncode(service);
    HttpContext.Current.Response.Redirect(redir);
}
// Second time (back from CAS) there is a ticket= to validate.
string validateurl = CASHOST + "/serviceValidate?" +
    "ticket=" + tkt +
    "&service=" + HttpUtility.UrlEncode(service);
StreamReader Reader = new StreamReader(new WebClient().OpenRead(validateurl));
string resp = Reader.ReadToEnd();
if (isDebuggingMode)
    sbDebugString.Append("****Response **** \n " + resp);

// Some boilerplate to set up the parse.
NameTable nt = new NameTable();
XmlNamespaceManager nsmgr = new XmlNamespaceManager(nt);
XmlParserContext context = new XmlParserContext(null, nsmgr, null, XmlSpace.None);
XmlTextReader reader = new XmlTextReader(resp, XmlNodeType.Element, context);
string userNumber = null;
// A very dumb use of XML: just scan for "userNumber". If it isn't there, userNumber stays null.
while (reader.Read())
{
    if (reader.IsStartElement())
    {
        string tag = reader.LocalName;
        if (isDebuggingMode)
            sbDebugString.Append("tag : " + tag + "\n");
        if (tag == "userNumber")
        {
            userNumber = reader.ReadString();
            if (isDebuggingMode)
                sbDebugString.Append("userNumber : " + userNumber + "\n");
        }
    }
}
Where "userNumber" attribute is not receiving always so that login fails randomly.
Please share your thoughts to resolve this issue.
Thank you in advance.
If your client application is not receiving attributes, you will need to make sure:
1. The client is using a version of the CAS protocol that is able to release attributes.
2. The client, predicated on #1, is hitting the appropriate endpoint for service ticket validation (i.e. /p3/serviceValidate).
3. The CAS server itself is resolving and retrieving attributes correctly.
4. The CAS server is authorized to release attributes to that particular client application inside its service registry.
Starting with CAS Protocol 3:
Among all features, the most noticeable update between versions 2.0 and 3.0 is the ability to return the authentication/user attributes through the new /p3/serviceValidate endpoint.
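Concretely, the validation URL in the code you posted targets the protocol 2 endpoint, which per the quote above does not return attributes. Assuming your CAS server is new enough to support protocol 3 and its service registry permits attribute release, a likely fix is a one-line change to hit the protocol 3 endpoint:

// Validate against the CAS protocol 3 endpoint so that <cas:attributes>
// (including userNumber) are included in the response.
string validateurl = CASHOST + "/p3/serviceValidate?" +
    "ticket=" + tkt +
    "&service=" + HttpUtility.UrlEncode(service);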
You may also find this post useful:
https://apereo.github.io/2017/06/23/cas-protocol-compatibility/
I use ASP.NET.
I need to give the user a temporary link for downloading a file from the server.
It should be a temporary link (page) that is available only for a short time (12 hours, for example). How can I generate this link (or a temporary web page with the link)?
Here's a reasonably complete example.
First a function to create a short hex string using a secret salt plus an expiry time:
// Requires: using System; using System.Linq; using System.Text;
public static string MakeExpiryHash(DateTime expiry)
{
    const string salt = "some random bytes";
    byte[] bytes = Encoding.UTF8.GetBytes(salt + expiry.ToString("s"));
    using (var sha = System.Security.Cryptography.SHA1.Create())
        return string.Concat(sha.ComputeHash(bytes).Select(b => b.ToString("x2"))).Substring(8);
}
Then a snippet that generates a link with a one-week expiry:
DateTime expires = DateTime.UtcNow + TimeSpan.FromDays(7); // UtcNow, to match the UtcNow comparison on the download page
string hash = MakeExpiryHash(expires);
string link = string.Format("http://myhost/Download?exp={0}&k={1}", expires.ToString("s"), hash);
Finally the download page for sending a file if a valid link was given:
DateTime expires = DateTime.Parse(Request.Params["exp"]);
string hash = MakeExpiryHash(expires);
if (Request.Params["k"] == hash)
{
    if (expires < DateTime.UtcNow)
    {
        // Link has expired
    }
    else
    {
        string filename = "<Path to file>";
        FileInfo fi = new FileInfo(Server.MapPath(filename));
        Response.ContentType = "application/octet-stream";
        // Use fi.Name so the header carries just the file name, not the server path.
        Response.AddHeader("Content-Disposition", "attachment;filename=" + fi.Name);
        Response.AddHeader("Content-Length", fi.Length.ToString());
        Response.WriteFile(fi.FullName);
        Response.Flush();
    }
}
else
{
    // Invalid link
}
Which you should certainly wrap in some exception handling to catch mangled requests.
http://example.com/download/document.pdf?token=<token>
The <token> part is key here. If you don't want to involve a database, encrypt link creation time, convert it to URL-safe Base64 representation and give user that URL. When it's requested, decrypt token and compare date stored in there with current date and time.
Alternatively, you can have a separate DownloadTokens table which will map said tokens (which can be GUIDs) to expiration dates.
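For the database-free variant, here's a minimal sketch; the secret and the AES setup are illustrative assumptions, not a vetted design. It encrypts the creation time and emits a URL-safe Base64 token, then reverses the process to validate:

using System;
using System.Security.Cryptography;
using System.Text;

public static class LinkToken
{
    // Illustrative secret; in practice keep it out of source code.
    private const string Secret = "change me to something random";

    private static Aes CreateAes()
    {
        var aes = Aes.Create();
        using (var sha = SHA256.Create())
            aes.Key = sha.ComputeHash(Encoding.UTF8.GetBytes(Secret)); // 32-byte key
        aes.IV = new byte[16]; // fixed IV is tolerable here only because each token encrypts a unique timestamp
        return aes;
    }

    // Encrypt the creation time and return a URL-safe Base64 token.
    public static string Create(DateTime createdUtc)
    {
        byte[] plain = BitConverter.GetBytes(createdUtc.Ticks);
        using (var aes = CreateAes())
        using (var enc = aes.CreateEncryptor())
        {
            byte[] cipher = enc.TransformFinalBlock(plain, 0, plain.Length);
            return Convert.ToBase64String(cipher).Replace('+', '-').Replace('/', '_').TrimEnd('=');
        }
    }

    // Decrypt a token and check it against the allowed lifetime.
    public static bool IsValid(string token, TimeSpan lifetime)
    {
        string b64 = token.Replace('-', '+').Replace('_', '/');
        b64 = b64.PadRight(b64.Length + (4 - b64.Length % 4) % 4, '=');
        byte[] cipher = Convert.FromBase64String(b64);
        using (var aes = CreateAes())
        using (var dec = aes.CreateDecryptor())
        {
            byte[] plain = dec.TransformFinalBlock(cipher, 0, cipher.Length);
            var created = new DateTime(BitConverter.ToInt64(plain, 0), DateTimeKind.Utc);
            return DateTime.UtcNow - created <= lifetime;
        }
    }
}

Usage would be LinkToken.Create(DateTime.UtcNow) when building the URL, and LinkToken.IsValid(token, TimeSpan.FromHours(12)) in the download page.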
Append a timestamp to the URL, in the querystring:
page.aspx?time=2011-06-22T22:12
Check the timestamp against the current time.
To prevent the user from changing the timestamp himself, also compute some secret hash over the timestamp and append it to the querystring:
page.aspx?time=2011-06-22T22:12&timehash=4503285032
As the hash you can use something like the sum of all the fields in the DateTime modulo some prime number, or the SHA1 sum of the string representation of the time. Now the user will not be able to change the timestamp without knowing the correct hash. In your page.aspx, you check the given hash against the hash of the timestamp. A sketch follows.
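A minimal sketch of that check, substituting HMACSHA256 keyed with a server-side secret for the "secret hash" described above, so the tag can't be forged without the key (the key value is a placeholder):

using System;
using System.Security.Cryptography;
using System.Text;

static string TimeHash(string timestamp)
{
    // Server-side secret key; illustrative value only.
    byte[] key = Encoding.UTF8.GetBytes("keep this out of source control");
    using (var hmac = new HMACSHA256(key))
    {
        byte[] mac = hmac.ComputeHash(Encoding.UTF8.GetBytes(timestamp));
        return BitConverter.ToString(mac).Replace("-", "").Substring(0, 16); // short hex tag
    }
}

// Building the link:
//   string time = DateTime.UtcNow.ToString("s");
//   string url = "page.aspx?time=" + time + "&timehash=" + TimeHash(time);
// Validating in page.aspx: accept only if
//   TimeHash(Request.QueryString["time"]) == Request.QueryString["timehash"]
//   and the parsed time is within the allowed window.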
There are a million ways to do it.
The way I did it once for a project was to generate a unique key and use a dynamic downloader script to stream the file. When the file request was made, the key was generated and stored in the DB with a creation time and the file requested. You build a link to the download script and pass in the key. From there it was easy enough to keep track of expiration; a sketch of the idea follows.
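A rough sketch of that scheme, assuming a hypothetical DownloadTokens table (Token, FileName, CreatedAt) and a connection string named "Main"; both are illustrative:

using System;
using System.Configuration;
using System.Data.SqlClient;

// Issue a key for a file and store it with its creation time.
static string IssueDownloadKey(string fileName)
{
    string token = Guid.NewGuid().ToString("N");
    using (var conn = new SqlConnection(ConfigurationManager.ConnectionStrings["Main"].ConnectionString))
    using (var cmd = new SqlCommand(
        "INSERT INTO DownloadTokens (Token, FileName, CreatedAt) VALUES (@t, @f, @c)", conn))
    {
        cmd.Parameters.AddWithValue("@t", token);
        cmd.Parameters.AddWithValue("@f", fileName);
        cmd.Parameters.AddWithValue("@c", DateTime.UtcNow);
        conn.Open();
        cmd.ExecuteNonQuery();
    }
    return token; // hand out e.g. download.ashx?key=<token>
}

// The downloader script looks the key up and refuses anything older than 12 hours.
static string GetFileForKey(string token)
{
    using (var conn = new SqlConnection(ConfigurationManager.ConnectionStrings["Main"].ConnectionString))
    using (var cmd = new SqlCommand(
        "SELECT FileName FROM DownloadTokens WHERE Token = @t AND CreatedAt > @cutoff", conn))
    {
        cmd.Parameters.AddWithValue("@t", token);
        cmd.Parameters.AddWithValue("@cutoff", DateTime.UtcNow.AddHours(-12));
        conn.Open();
        return cmd.ExecuteScalar() as string; // null means missing or expired
    }
}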
Ilya,
I'll assume you're not requiring any authentication and that security isn't an issue; that is, if anyone gets the URL, they will also be able to download the file.
Personally, I'd create an HttpHandler and then create some unique string that you can append to the URL.
Then within the ProcessRequest method, test the encoded param to see if it's still viable (within your specified time frame); if so, use BinaryWrite to render the file, or if not, render some HTML using Response.Write("Expired").
Something like :
public class TimeHandler : IHttpHandler, IRequiresSessionState
{
    public void ProcessRequest(HttpContext context)
    {
        // my_check_has_expired is your own token-validation routine (not shown here).
        if (this.my_check_has_expired(context.Request.Params["my_token"]))
        {
            // Has Expired
            context.Response.Write("URL Has Expired");
            return;
        }

        // Render the file (File_Name is a placeholder for whatever path your token maps to)
        Stream stream = new FileStream(File_Name, FileMode.Open);

        /* read the bytes from the file */
        byte[] aBytes = new byte[(int)stream.Length];
        stream.Read(aBytes, 0, (int)stream.Length);
        stream.Close();

        // Set Headers
        context.Response.AddHeader("Content-Length", aBytes.Length.ToString());
        // ContentType needs to be set also; you can force Save As if you require

        // Send the buffer
        context.Response.BinaryWrite(aBytes);
    }

    // Required by IHttpHandler.
    public bool IsReusable
    {
        get { return false; }
    }
}
You then need to set up the handler in IIS, but that's a bit different depending on the version you're using.
I'm reading an ASPX file as a string and using the returned HTML as the source for an email message. This is the code:
public string GetEmailHTML(int itemId)
{
    string pageUrl = "HTMLEmail.aspx";
    StringWriter stringWriter = new StringWriter();
    HttpRuntime.ProcessRequest(new SimpleWorkerRequest(pageUrl, "ItemId=" + itemId.ToString(), stringWriter));
    stringWriter.Flush();
    stringWriter.Close();
    return stringWriter.ToString();
}
HTMLEmail.aspx uses the ItemId query string variable to load data from a DB and populate the page with results. I need to secure the HTMLEmail.aspx page so a manipulated query string isn't going to allow just anybody to see the results.
I store the current user like this:
public User AuthenticatedUser
{
get { return Session["User"] as User; }
set { Session["User"] = value; }
}
Because the page request isn't made directly by the browser, but rather the SimpleWorkerRequest, there is no posted SessionId and therefore HTMLEmail.aspx cannot access any session variables. At least, I think that's the problem.
I've read the overview on session variables here: http://msdn.microsoft.com/en-us/library/ms178581.aspx
I'm wondering if I need to implement a custom session identifier. I can get the current SessionId inside the GetEmailHTML method and pass it as a query string param into HTMLEmail.aspx. If I have the SessionId inside HTMLEmail.aspx I could maybe use the custom session identifier to get access to the session variables.
That fix sounds messy. It also removes the encryption layer ASP.NET automatically applies to the SessionId.
Anyone have a better idea?
As far as I can see, your best bet is to pass all the values HTMLEmail.aspx needs via query parameters, just like you do with ItemId.
Apart from that, you can probably get away with just passing in the UserId of the user and making the page hit the DB (or wherever you are storing your users) to load the User object, instead of trying to read it off the session. A sketch of that is below.
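A minimal sketch of that approach; the User.Load call is a hypothetical stand-in for however you fetch users. Since SimpleWorkerRequest runs in-process, the browser never sees this query string, though you may still want to block direct browser requests to HTMLEmail.aspx:

// Caller: include the user in the query string alongside ItemId.
public string GetEmailHTML(int itemId, int userId)
{
    string query = "ItemId=" + itemId + "&UserId=" + userId;
    StringWriter stringWriter = new StringWriter();
    HttpRuntime.ProcessRequest(new SimpleWorkerRequest("HTMLEmail.aspx", query, stringWriter));
    stringWriter.Flush();
    return stringWriter.ToString();
}

// Inside HTMLEmail.aspx: rebuild the user from the id instead of Session.
protected void Page_Load(object sender, EventArgs e)
{
    int userId = int.Parse(Request.QueryString["UserId"]);
    User user = User.Load(userId); // hypothetical data-access call
    // ... populate the page ...
}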
Edit:
Why don't you use:
public string GetEmailHTML(int itemId)
{
    // Server.Execute accepts a query string in the path, so ItemId still gets through.
    string pageUrl = "HTMLEmail.aspx?ItemId=" + itemId.ToString();
    StringWriter stringWriter = new StringWriter();
    Server.Execute(pageUrl, stringWriter);
    stringWriter.Flush();
    stringWriter.Close();
    return stringWriter.ToString();
}
instead? As far as I can see, Server.Execute inherits the same HTTP request, so the session should be available.
Our situation:
Our team needs to retrieve log information from a 3rd-party website. Specifically, this log information is call logs: our client rents an 866 number, and when calls come in, they assist people and need to make notes accordingly in our application that correspond with the current call. Our client has a web account with the 3rd party that allows them to view the current call logs (date/time, phone number, amount of time on each call, etc.).
I contacted the developer of their website and inquired about an API or any other means of syncing our database with their constantly updating database. They currently DO NOT support an API. I informed them of my situation and they are perfectly fine with any way we can retrieve the information (bot/crawler). The 3rd party said that they are working on an API but could not give us a general timeline as to when it will be up... and as with every client, they need to start production ASAP.
I completely understand that if the 3rd party were to change their HTML layout, it may cause a slight headache for us (sorting the data from the webpage). That being said, this is a temporary solution to a long-term issue. Once they implement their API, we will switch over to it.
So my question is this:
What is the best way to log into the 3rd-party website (see image: http://i903.photobucket.com/albums/ac239/jreedinc/customtf.jpg) and retrieve certain HTML pages? We have reviewed the source code of web crawlers, but none of them have the capability of storing cookies and posting information back to the website (with login information). We would prefer to do this in ASP.NET.
Is there another way to accomplish logging on to the website and then retrieving said information?
The classes you'll need to use are in the System.Net namespace. Below is some quick-and-dirty proof-of-concept code to log in to a site that uses form login + cookies for security and then scrape the HTML output of a page.
In order to parse the HTML results, you'll need an additional tool. Possible HTML parsing tools (a short Html Agility Pack example follows the sample usage below):
- SgmlReader, which can convert HTML to XML; you then use .NET's XML features to extract data from the XML. http://code.msdn.microsoft.com/SgmlReader
- HTML Agility Pack, which allows XPath queries against HTML documents. http://htmlagilitypack.codeplex.com/
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
class WebWorker {
    /// <summary>
    /// Cookies for use by web worker
    /// </summary>
    private System.Collections.Generic.List<System.Net.Cookie> cookies = new List<System.Net.Cookie>();
    public string GetWebPageContent(string url) {
        System.Net.HttpWebRequest request = (System.Net.HttpWebRequest) System.Net.WebRequest.Create(url);
        System.Net.CookieContainer cookieContainer = new System.Net.CookieContainer();
        request.CookieContainer = cookieContainer;
        request.Method = "GET";

        // Add cookies to maintain session state.
        foreach (System.Net.Cookie c in this.cookies) {
            cookieContainer.Add(c);
        }

        System.Net.HttpWebResponse response = request.GetResponse() as System.Net.HttpWebResponse;
        System.IO.Stream responseStream = response.GetResponseStream();
        System.IO.StreamReader sReader = new System.IO.StreamReader(responseStream);

        // Read the stream once; a second ReadToEnd() would return an empty string.
        string content = sReader.ReadToEnd();
        System.Diagnostics.Debug.WriteLine("Content:\n" + content);
        return content;
    }
    public string Login(string url, string userIdFormFieldName, string userIdValue, string passwordFormFieldName, string passwordValue) {
        System.Net.HttpWebRequest request = (System.Net.HttpWebRequest) System.Net.WebRequest.Create(url);
        System.Net.CookieContainer cookieContainer = new System.Net.CookieContainer();
        request.CookieContainer = cookieContainer;
        request.Method = "POST";
        request.ContentType = "application/x-www-form-urlencoded";

        string postData = System.Web.HttpUtility.UrlEncode(userIdFormFieldName) + "=" + System.Web.HttpUtility.UrlEncode(userIdValue) +
            "&" + System.Web.HttpUtility.UrlEncode(passwordFormFieldName) + "=" + System.Web.HttpUtility.UrlEncode(passwordValue);
        request.AllowAutoRedirect = false; // allowing redirect seems to lose cookies

        byte[] postDataBytes = System.Text.Encoding.UTF8.GetBytes(postData);
        request.ContentLength = postDataBytes.Length; // byte count, not character count
        System.IO.Stream requestStream = request.GetRequestStream();
        requestStream.Write(postDataBytes, 0, postDataBytes.Length);

        System.Net.HttpWebResponse response = request.GetResponse() as System.Net.HttpWebResponse;
        System.IO.Stream responseStream = response.GetResponseStream();
        System.IO.StreamReader sReader = new System.IO.StreamReader(responseStream);
        System.Diagnostics.Debug.WriteLine("Content:\n" + sReader.ReadToEnd());

        // Keep the cookies the server set so later requests stay logged in.
        this.cookies.Clear();
        if (response.Cookies.Count > 0) {
            for (int i = 0; i < response.Cookies.Count; i++) {
                this.cookies.Add(response.Cookies[i]);
            }
        }
        return "OK";
    }
} // end class
// Sample usage of the class:
WebWorker worker = new WebWorker();
worker.Login("http://localhost/test/default.aspx", "uid", "bob", "pwd", "secret");
worker.GetWebPageContent("http://localhost/test/default.aspx");
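For the parsing step, here's a minimal Html Agility Pack sketch; the page URL and the table id are made-up placeholders for whatever the call-log page actually uses:

using HtmlAgilityPack;

// Parse the scraped page and pull rows out of a hypothetical call-log table.
string html = worker.GetWebPageContent("http://localhost/test/calllog.aspx");
HtmlDocument doc = new HtmlDocument();
doc.LoadHtml(html);
HtmlNodeCollection rows = doc.DocumentNode.SelectNodes("//table[@id='callLog']//tr");
if (rows != null) {
    foreach (HtmlNode row in rows) {
        System.Diagnostics.Debug.WriteLine(row.InnerText.Trim());
    }
}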
I recently used a tool called WebQL (it's a web scraping tool that lets the developer use SQL-like syntax to scrape information from web pages).
WebQL on Wikipedia
This is actually a relatively simple operation. What you need to do is get the page that the screenshot posts back to (something like login.php, etc.) and then construct a WebRequest to that page with the login data you have. You will most likely get back a CookieContainer that will have your login cookie to use on all subsequent requests.
You can look at this MSDN article for the basics of how to do it, but their write-up is kind of confusing. Look at the community comments at the end for an example of how to post back page variables (like the username and password). You will need to make sure you pass the CookieContainer around on subsequent requests.
Unfortunately, .NET does not natively have something like WWW::Mechanize, but WebClient does have an UploadValues method, which might make it easier (see the sketch below). You will still have to manually parse the page to figure out what fields you need to pass.
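A minimal sketch of the WebClient.UploadValues route; since the stock WebClient doesn't persist cookies, this subclasses it to attach a CookieContainer. The URLs and form field names ("uid", "pwd") are assumptions you'd read out of the real login page's HTML:

using System;
using System.Collections.Specialized;
using System.Net;

// WebClient that carries a CookieContainer so the login cookie survives between requests.
class CookieAwareWebClient : WebClient
{
    private readonly CookieContainer cookies = new CookieContainer();

    protected override WebRequest GetWebRequest(Uri address)
    {
        WebRequest request = base.GetWebRequest(address);
        HttpWebRequest httpRequest = request as HttpWebRequest;
        if (httpRequest != null)
            httpRequest.CookieContainer = cookies;
        return request;
    }
}

class Program
{
    static void Main()
    {
        var client = new CookieAwareWebClient();

        // Post the login form; field names are placeholders.
        var form = new NameValueCollection();
        form.Add("uid", "bob");
        form.Add("pwd", "secret");
        client.UploadValues("http://localhost/test/login.aspx", form);

        // Subsequent requests reuse the cookies captured during login.
        string page = client.DownloadString("http://localhost/test/calllog.aspx");
        Console.WriteLine(page.Length);
    }
}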
I need to load an external (not local) web page into my site (from some link), but only a part of it.
What are the options for doing so?
That depends on whether the external page is on your own domain or a different one. If it's on your domain, you can use $.load() in the jQuery library. This takes an optional selector to specify which element in the remote DOM to load:
$("#links").load("/Main_Page #jq-p-Getting-Started li");
If the page is on another domain, you'll need a proxy script. You can do this with PHP and the phpQuery (PHP port of jQuery) library. You'll just use file_get_contents() to get the actual remote DOM and then pull out the elements you want based on jQuery-like selectors.
$f = fopen('http://www.quran.az/2/255', 'r');
and so on...
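Since the question is ASP.NET-flavored, here's the same proxy idea as a minimal, hypothetical ASHX handler sketch: fetch the remote page server-side so the browser can then treat it as same-origin and let $.load()'s selector pick out the part you need. In real use you'd whitelist allowed hosts, since an open proxy is a security hole:

using System.Net;
using System.Web;

// /Proxy.ashx?url=... : fetches a remote page so the browser can load it same-origin.
public class ProxyHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string url = context.Request.QueryString["url"];
        // TODO: validate 'url' against a whitelist of allowed hosts before fetching.
        using (var client = new WebClient())
        {
            string html = client.DownloadString(url);
            context.Response.ContentType = "text/html";
            context.Response.Write(html);
        }
    }

    public bool IsReusable { get { return true; } }
}

Then on the page: $("#links").load("/Proxy.ashx?url=http://example.com/page #some-element");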
Once you get the whole page, as Michael Todd outlined, you will likely need to either use substring methods as a static means to slice up the content, or use regexes for a more dynamic way to grab it. An intro article on regexes in ASP.NET can be found here. Good luck!
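A tiny sketch of the regex route; the div id is a made-up example, and note that regexes on HTML are brittle if the markup changes:

using System.Text.RegularExpressions;

// Pull the inner HTML of a specific div out of the downloaded page.
// Assumes the div has no nested divs; for anything messier, use a real HTML parser.
Match m = Regex.Match(html, @"<div id=""content"">(.*?)</div>",
                      RegexOptions.Singleline | RegexOptions.IgnoreCase);
string portion = m.Success ? m.Groups[1].Value : "";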
To load a web page in .Net, use the HttpWebRequest class.
Example taken from MSDN, here:
private string StringGetWebPage(String uri)
{
const int bufSizeMax = 65536; // max read buffer size conserves memory
const int bufSizeMin = 8192; // min size prevents numerous small reads
StringBuilder sb;
// A WebException is thrown if HTTP request fails
try
{
// Create an HttpWebRequest using WebRequest.Create (see .NET docs)!
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
// Execute the request and obtain the response stream
HttpWebResponse response = (HttpWebResponse)request.GetResponse();
Stream responseStream = response.GetResponseStream();
// Content-Length header is not trustable, but makes a good hint.
// Responses longer than int size will throw an exception here!
int length = (int)response.ContentLength;
// Use Content-Length if between bufSizeMax and bufSizeMin
int bufSize = bufSizeMin;
if (length > bufSize)
bufSize = length > bufSizeMax ? bufSizeMax : length;
// Allocate buffer and StringBuilder for reading response
byte[] buf = new byte[bufSize];
sb = new StringBuilder(bufSize);
// Read response stream until end
while ((length = responseStream.Read(buf, 0, buf.Length)) != 0)
sb.Append(Encoding.UTF8.GetString(buf, 0, length));
}
catch (Exception ex)
{
sb = new StringBuilder(ex.Message);
}
return sb.ToString();
}
Note that this will return the entire page and not just a portion of it. You'll then need to sift through the page to find the information you're looking for.