I have a DNN 9.2 website that I try to keep responsive for new visitors by using a KeepAlive.aspx web page, which helps when the site has not been accessed in a while. My problem is that I'm using a third-party module built on Razor templates that takes 10-15 seconds to compile the first time a visitor goes to the page. It is blazing fast after that. I've written a small VB.NET application that I keep running to try to hit the pages that use the Razor templates.
Imports System.IO
Imports System.Net
Imports System.Text

Dim url As String = "Module Page address"
Dim wReq As HttpWebRequest = DirectCast(WebRequest.Create(url), HttpWebRequest)
' Read and discard the body so the request completes fully, and let
' Using dispose the response, stream, and reader in the right order.
Using oWebResponse As WebResponse = wReq.GetResponse()
    Using respStream As Stream = oWebResponse.GetResponseStream()
        Using reader As New StreamReader(respStream, Encoding.UTF8)
            reader.ReadToEnd()
        End Using
    End Using
End Using
I loop this every 60 seconds. It "seems" to help. Is this a good way to keep it active? I have NO control over the code for the module. End users get very irritated in this day and age of instant response when they have to wait 10+ seconds for a simple page to load. Is there a better way?
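For reference, here is a minimal sketch of that polling loop, assuming the Razor-template pages are known up front; the URLs and the interval are illustrative placeholders, and System.Net, System.IO, System.Text, and System.Threading are assumed imported:
' Hypothetical warm-up loop: request each Razor-template page once a
' minute so the compiled templates stay warm. URLs are placeholders.
Dim urls As String() = {"http://mysite/page1.aspx", "http://mysite/page2.aspx"}
Do
    For Each pageUrl As String In urls
        Try
            Dim req As WebRequest = WebRequest.Create(pageUrl)
            Using resp As WebResponse = req.GetResponse()
                Using rdr As New StreamReader(resp.GetResponseStream(), Encoding.UTF8)
                    rdr.ReadToEnd() ' drain the body so the request completes
                End Using
            End Using
        Catch ex As WebException
            ' Ignore transient failures; the next pass retries.
        End Try
    Next
    Thread.Sleep(TimeSpan.FromSeconds(60))
Loop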
I am currently tasked with moving one of our older bits of software to a new server. The old server is running 2008 and the new server is on 2019. The code is a mixture of ASP and ASP.NET, both using VB. Unfortunately, I'm a C# developer and my VB knowledge is slight.
The main part of the code is the older of the two; it is all classic ASP and works fine on the new server. For a particular set of customers there is a more recent add-on that uses ASP.NET. To get the details of the logged-in user, the new section of code uses the code given in this answer. Unfortunately, it seems to be this bit of code that is failing.
We have this bit of code in our site.master.vb:
' Build the site's root URL, then call the classic ASP bridge page
' that exposes the session variable.
Dim ctx1 As String = ctx.Request.Url.Scheme
Dim ctx2 As String = ctx.Request.Url.Host
Dim dom As String = ctx1 + "://" + ctx2
Dim ASPRequest As HttpWebRequest = DirectCast( _
    WebRequest.Create(New Uri(dom + "/arc/asp2netbridge.asp?sessVar=" + sessionValue)), _
    HttpWebRequest)
ASPRequest.ContentType = "text/html"
ASPRequest.Credentials = CredentialCache.DefaultCredentials
If ASPRequest.CookieContainer Is Nothing Then
    ASPRequest.CookieContainer = New CookieContainer()
End If
The asp2netbridge.asp file is stored in the directory one up from the directory that contains the code, and the directory structure looks the same on both servers. The contents of the asp2netbridge file are the same as in the example code linked above, with the addition of some extra comments.
It then calls a stored procedure on our database with the customer ID from session, which should return the customer details as XML, but instead we get a 'Root element is missing' error. If I change the stored procedure to hard-code the customer ID rather than take it as a parameter, it works as expected.
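A 'Root element is missing' error is typically thrown when an XML loader is handed an empty stream, which would suggest the session-derived customer ID never reaches the stored procedure. As a hedged diagnostic sketch (the names conn, connectionString, GetCustomerXml, and customerId are hypothetical, and System.Data, System.Data.SqlClient, and System.Xml are assumed imported), you could check the raw result before parsing it:
' Hypothetical check: read the stored procedure's XML output directly,
' so an empty result is visible before any parser throws.
Using conn As New SqlConnection(connectionString)
    Using cmd As New SqlCommand("GetCustomerXml", conn)
        cmd.CommandType = CommandType.StoredProcedure
        cmd.Parameters.AddWithValue("@CustomerId", customerId)
        conn.Open()
        Using reader As XmlReader = cmd.ExecuteXmlReader()
            If Not reader.Read() Then
                ' No XML came back: the parameter (or the session value
                ' behind it) is the likely culprit, not the XML handling.
                Trace.WriteLine("No XML returned for customer id: " & customerId)
            End If
        End Using
    End Using
End Using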
Is there anything that I need to install on our server to get the system working correctly? Or is there anything else I need to do to get it to work?
EDIT - RESOLVED: the difference was that in the "main" case the download was initiated via a callback cycle, and in the "test" case it was initiated through a server-side button click handler. My guess is that the download request and the callback cycle interfered with each other, both stopping the download and causing the page to become inactive (as described below). When I rewired the download on the main page to start with a submit instead of a callback, it did initiate the download.
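For anyone hitting the same wall, a minimal sketch of that rewiring, with hypothetical control and handler names (the originals are not shown here): the download must be triggered by a full postback, not an async callback.
' Hypothetical full-postback wiring. Markup (a plain submit button):
'   <asp:Button ID="btnDownload" runat="server" Text="Download" />
Protected Sub btnDownload_Click(ByVal sender As Object, ByVal e As EventArgs) _
    Handles btnDownload.Click
    ' A standard Button posts the whole page back, so the handler can
    ' take over the response for the download without a callback cycle.
    ZipAndDownloadMemoryStreams(HttpContext.Current)
End Sub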
This is in VS2013 Ultimate, Win7 Pro, VB.NET, websites (not projects), IIS Express.
I built a test site to develop functionality for creating OpenXML PPTX and XLSX memorystreams and zipping and downloading them using DotNetZip. Got it to work fine. I then merged all that code into my "main" site. Both sites are on the same machine; I can run the test site and the main site at the same time. The main site processing is somewhat more complicated, but only in terms of accessing and downloading more files.
However, the Zip and Download function (below) works fine in the test site, but the exact same code doesn't work in the main site (with or without the test site up and running).
There's an error trap (see below) around the Zip.Save function where the download occurs but no error shows up.
Same overall behavior in Chrome, Firefox and IE11.
One peculiarity that might be a clue is that when the main site download fails, the server side functionality "goes dead". Local JS functions work, but the app doesn't respond to callbacks. When I do an F5 on the browser it works again.
I did a refresh on the DotNetZip package in the main site. The Zip object appears to be working properly, because it generates an error on duplicate file names.
I thought it might be the download function as written, however, it works in the test site. Also, another piece of the main site does a non-zipped download of a memory stream (included as the second code block below) and that works fine.
I thought it might be the data. So I kludged the main site to access, convert to a memory stream, and download the same file that is accessed and downloaded in the test site. Still the main site download doesn't work.
When I compare the watch values on the Zip object in the two sites, they look identical. The length of the wrkFS.ContentStream is identical in both cases. The file names are different, however, they are:
Test_2EFVG1THK5.xlsx (main)
6-18_12-46-28_0.xlsx (test)
which are both legal file names.
EDIT: I saved the zip file to disk from the main program, instead of trying to download it, using this:
wrkFilePath = "D:\filepath\test.zip"
wrkZip.Save(wrkFilePath)
And it worked fine. So that possibly isolates the problem to this statement:
wrkZip.Save(context.Response.OutputStream)
EDIT: Based on help I received here:
Convert DotNetZip ZipFile to byte array
I used this construct:
Dim ms As New MemoryStream
wrkZip.Save(ms)
Dim wrkBytes As Byte() = ms.ToArray()
context.Response.BinaryWrite(wrkBytes)
to get around the ZipFile.Save(to context), and that didn't work either; no download, no error message, and the page goes dead. However, at least I can now assume it's not a problem with the ZipFile.Save.
At this point I'm out of ways to diagnose the problem.
Any suggestions would be appreciated.
Here is the code that works in the test site but not in the main site.
Public Sub ZipAndDownloadMemoryStreams(ByVal context As HttpContext) _
    Implements IHttpHandler.ProcessRequest

    Dim rtn As String = ""
    Try
        ' Collect the prepared content streams; nothing to do if empty.
        Dim wrkAr As ArrayList = SC.ContentArrayForDownLoad
        If wrkAr.Count = 0 Then
            Exit Sub
        End If
        Dim wrkFS As ZipDownloadContentPair
        Using wrkZip As New ZipFile
            '----- create zip, add memory streams ----------
            For n As Integer = 0 To wrkAr.Count - 1
                wrkFS = wrkAr(n)
                wrkZip.AddEntry(wrkFS.FileName, wrkFS.ContentStream)
            Next
            context.Response.Clear()
            context.Response.ContentType = "application/force-download"
            context.Response.AddHeader( _
                "content-disposition", _
                "attachment; filename=" & "_XYZ_Export.zip")
            '---- save to the response stream (initiate download) -----
            wrkZip.Save(context.Response.OutputStream)
            ' The Using block disposes wrkZip; no explicit Dispose() needed.
        End Using
    Catch ex As Exception
        ' Error trap: a breakpoint here is never hit during the failure.
        Dim exmsg As String = ex.Message
    End Try
End Sub
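One visible difference from the working non-zip handler below is that this version never flushes or ends the response after writing the zip. Whether that matters here is speculation, but a hedged tweak would be to mirror the non-zip handler right after the wrkZip.Save call:
' Speculative addition, placed after wrkZip.Save(context.Response.OutputStream):
context.Response.Flush()
context.Response.End()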
Below is the non-zip download function that works fine in the main site.
It might be possible to convert the Zip content to a byte array and try the download that way; however, I'm not sure how that would work.
(SEE EDIT NOTE ABOVE --- I implemented a version of the below, i.e. downloading a byte array instead of calling ZipFile.Save() on the context directly; however, it didn't help: still no download, and still no error message.)
Public Sub DownloadEncryptedMemoryStream(ByVal context As HttpContext) _
    Implements IHttpHandler.ProcessRequest

    ' Grab the prepared stream and file name from the session container.
    Dim wrkMemoryStream As System.IO.MemoryStream = SC.ContentForDownload
    Dim wrkFileName As String = SC.ExportEncryptedFileName
    ' Copy the stream contents into a byte array.
    wrkMemoryStream.Position = 0
    Dim wrkBytesInStream As Byte() = New Byte(wrkMemoryStream.Length - 1) {}
    wrkMemoryStream.Read(wrkBytesInStream, 0, CInt(wrkMemoryStream.Length))
    Dim wrkStr As String = Encoding.UTF8.GetString(wrkMemoryStream.ToArray())
    wrkMemoryStream.Close()
    ' Send the bytes as an attachment and end the response.
    context.Response.Clear()
    context.Response.ContentType = "application/force-download"
    context.Response.AddHeader("content-disposition", "attachment; filename=" & wrkFileName)
    context.Response.BinaryWrite(wrkBytesInStream)
    wrkBytesInStream = Nothing
    context.Response.End()
End Sub
I am trying to display a portion of another website on my web page, and I don't want to use an iframe for this. Is there any other way to get data from a website and then display it on my web page?
This is the website:
http://www.sugaronline.com/
and I want to display this portion on my web page:
http://i.stack.imgur.com/CVsrh.jpg
Can anyone tell me whether there is any way to do this other than using an iframe?
Update
Shared Function GetHtmlPage(ByVal strURL As String) As String
    ' Request the page and read the whole response body as a string.
    Dim strResult As String
    Dim objRequest As WebRequest = WebRequest.Create(strURL)
    Using objResponse As WebResponse = objRequest.GetResponse()
        Using sr As New StreamReader(objResponse.GetResponseStream())
            strResult = sr.ReadToEnd()
        End Using
    End Using
    Return strResult
End Function
Dim responses As String = GetHtmlPage(theurl)
Refer to these existing posts:
Using HTTPWebRequest in ASP.NET VB
or simply Google this
If you are using PHP, try using cURL; it makes a server-side request that fetches the data from the target site, and then you can embed it in your page.
On the other hand, if you are using .NET, you can use HttpWebRequest to do the same.
In short, you will have to make a server-side request from your server to the target website to fetch the data, and then you can embed that data in your page as if it had come from your website.
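A minimal VB.NET sketch of that server-side fetch-and-embed, assuming a page with a Literal control named litRemote and two marker comments in the target markup (the control name and markers are invented for illustration):
' Hypothetical fetch-and-embed: request the target page server-side,
' cut out the fragment of interest, and render it in a Literal control.
Dim req As WebRequest = WebRequest.Create("http://www.sugaronline.com/")
Dim html As String
Using resp As WebResponse = req.GetResponse()
    Using sr As New StreamReader(resp.GetResponseStream())
        html = sr.ReadToEnd()
    End Using
End Using
' Crude extraction: keep only the markup between two known markers.
' A real page needs a proper HTML parser; these markers are placeholders.
Dim startIdx As Integer = html.IndexOf("<!-- start section -->")
Dim endIdx As Integer = html.IndexOf("<!-- end section -->")
If startIdx >= 0 AndAlso endIdx > startIdx Then
    litRemote.Text = html.Substring(startIdx, endIdx - startIdx)
End If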
Please check out this post:
How to use httpwebrequest to pull image from website to local file
Here all you need to do is: when you have the image bytes ready, write those bytes to the response, but first send the proper headers to let the browser know that you are about to send an image.
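A hedged sketch of that relay, assuming it runs inside an ASP.NET page (so Response is in scope), .NET 4 for Stream.CopyTo, and a placeholder image URL and content type:
' Hypothetical image relay: fetch the remote image bytes, then send
' the content-type header before writing the bytes to the response.
Dim req As WebRequest = WebRequest.Create("http://www.example.com/chart.png")
Using resp As WebResponse = req.GetResponse()
    Using ms As New MemoryStream()
        resp.GetResponseStream().CopyTo(ms)
        Response.Clear()
        Response.ContentType = "image/png" ' match the actual image type
        Response.BinaryWrite(ms.ToArray())
        Response.End()
    End Using
End Using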
I am thinking about using a WebBrowser object to create a screenshot of a page a user has visited (by capturing the URL). Part of the class will look something like the below. This is an internal application, and the purpose is to allow the user to see how a dynamic page looked several months ago, when they last visited.
Public Function ConvertPage(ByVal PageUrl As String) As Bitmap
    Me.PageUrl = PageUrl
    ' The WebBrowser control must run on an STA thread, so the capture
    ' happens on a dedicated thread that we wait on.
    Dim thrCurrent As New Thread(New ThreadStart(AddressOf CreateImage))
    thrCurrent.SetApartmentState(ApartmentState.STA)
    thrCurrent.Start()
    thrCurrent.Join()
    Return ConvertedImage
End Function

Private Sub CreateImage()
    Dim BrowsePage As New WebBrowser()
    BrowsePage.ScrollBarsEnabled = False
    BrowsePage.Navigate(PageUrl)
    ' The capture itself happens in the DocumentCompleted handler (not shown).
    AddHandler BrowsePage.DocumentCompleted, AddressOf _
        WebBrowser_DocumentCompleted
    ' Pump messages until the page finishes loading.
    While BrowsePage.ReadyState <> WebBrowserReadyState.Complete
        Application.DoEvents()
    End While
    BrowsePage.Dispose()
End Sub
Earlier today I was reading an entry (I think it was on here) where the answerer advised the questioner to avoid this approach. I do not have a link to that post. Is this a poor approach in your view, i.e. using a WebBrowser object in an ASP.NET page?
I think this is the link you are after.
problem using winforms WebBrowser in asp.net
This CodeProject page seems to have a solution and talks about its advantages:
http://www.codeproject.com/Articles/50544/Using-the-WebBrowser-Control-in-ASP-NET
His solution seems to require three threads to use the control;
perhaps a better option would be to take the screenshot via JavaScript and then post the information back via an AJAX call. Have a look at this JavaScript library that does it:
http://html2canvas.hertzen.com/
If you are going with the WebBrowser approach, at the bottom of this question is C# code for capturing the image:
Take a screenshot of a webpage with JavaScript?
I need to emulate an Excel Web Query in .NET. Below is the sample code. I get an Error 500 when I attempt this in .NET; however, in Excel it works fine. Any ideas on what I am doing wrong? When I change the URI to a normal website it works fine and returns the HTML from the page, which is what I am after. I wonder if the problem stems from the fact that I am trying to return a data table.
Dim oHttpWebRequest As System.Net.HttpWebRequest
Dim oStream As System.IO.Stream
Dim sChunk As String
' Request the page and read the entire response body.
oHttpWebRequest = DirectCast( _
    System.Net.WebRequest.Create("http://somesite/foo.jsp"), _
    System.Net.HttpWebRequest)
Dim oHttpWebResponse As System.Net.WebResponse = oHttpWebRequest.GetResponse()
oStream = oHttpWebResponse.GetResponseStream()
sChunk = New System.IO.StreamReader(oStream).ReadToEnd()
oStream.Close()
oHttpWebResponse.Close()
Here is the query from Excel:
WEB
1
http://somesite/foo.jsp
Selection=DataTable
Formatting=None
PreFormattedTextToColumns=True
ConsecutiveDelimitersAsOne=True
SingleBlockTextImport=False
DisableDateRecognition=False
DisableRedirections=False
Edit
I am getting the error when I call GetResponse() on the request.
I found the problem I was having.
I used Fiddler to figure out the headers that were being sent by Excel and compared those with the headers .NET was sending:
http://www.fiddler2.com/Fiddler2/version.asp
I had to add the following lines of code, supplying these two headers, in order for it to work:
oHttpWebRequest.Headers.Add(HttpRequestHeader.Pragma, "no-cache")
oHttpWebRequest.Headers.Add(HttpRequestHeader.AcceptLanguage, "en-us")
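Putting the fix together with the request code from the question, the working version would look roughly like this (the URL is the same placeholder as above):
' Same request as above, with the two headers Excel sends that .NET
' omits by default; without them the server answered with Error 500.
Dim req As System.Net.HttpWebRequest = _
    DirectCast(System.Net.WebRequest.Create("http://somesite/foo.jsp"), _
    System.Net.HttpWebRequest)
req.Headers.Add(System.Net.HttpRequestHeader.Pragma, "no-cache")
req.Headers.Add(System.Net.HttpRequestHeader.AcceptLanguage, "en-us")
Dim html As String
Using resp As System.Net.WebResponse = req.GetResponse()
    Using sr As New System.IO.StreamReader(resp.GetResponseStream())
        html = sr.ReadToEnd()
    End Using
End Using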