I am thinking about using a WebBrowser object to create a screenshot of a page a user has visited (by capturing the URL). Part of the class will look something like the code below. This is an internal application, and the reason is to allow the user to see how the dynamic page looked several months ago when they last visited.
Public Function ConvertPage(ByVal PageUrl As String) As Bitmap
    Me.PageUrl = PageUrl
    ' The WebBrowser control must run on an STA thread.
    Dim thrCurrent As New Thread(New ThreadStart(AddressOf CreateImage))
    thrCurrent.SetApartmentState(ApartmentState.STA)
    thrCurrent.Start()
    thrCurrent.Join()
    Return ConvertedImage
End Function

Private Sub CreateImage()
    Dim BrowsePage As New WebBrowser()
    BrowsePage.ScrollBarsEnabled = False
    AddHandler BrowsePage.DocumentCompleted, AddressOf WebBrowser_DocumentCompleted
    BrowsePage.Navigate(PageUrl)
    While BrowsePage.ReadyState <> WebBrowserReadyState.Complete
        Application.DoEvents()
    End While
    BrowsePage.Dispose()
End Sub
Earlier today I was reading an entry (I think it was on here) where the answerer advised the questioner to avoid this approach. I do not have a link to that post. Is this a poor approach in your view, i.e. using a WebBrowser object in an ASP.NET page?
I think this is the link you are after.
problem using winforms WebBrowser in asp.net
This CodeProject page seems to have a solution and talks about its advantages:
http://www.codeproject.com/Articles/50544/Using-the-WebBrowser-Control-in-ASP-NET
His solution seems to require three threads to use the control.
Perhaps a better option would be to take the screenshot via JavaScript and then post the information back via an AJAX call; have a look at this JavaScript library that does it:
http://html2canvas.hertzen.com/
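If you go the html2canvas route, the client captures the page and posts the rendered canvas back, typically as a base64 data URL. Below is a minimal sketch of a server-side handler that could receive it; the handler name, form-field name and save path are assumptions, not part of the library:

Public Class SaveScreenshotHandler
    Implements IHttpHandler

    Public Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest
        ' Expecting a data URL like "data:image/png;base64,..." in a form field named "image".
        Dim dataUrl As String = context.Request.Form("image")
        Dim base64 As String = dataUrl.Substring(dataUrl.IndexOf(",") + 1)
        Dim bytes As Byte() = Convert.FromBase64String(base64)
        Using ms As New IO.MemoryStream(bytes)
            Using bmp As New Drawing.Bitmap(ms)
                bmp.Save(context.Server.MapPath("~/App_Data/screenshot.png"), Drawing.Imaging.ImageFormat.Png)
            End Using
        End Using
    End Sub

    Public ReadOnly Property IsReusable() As Boolean Implements IHttpHandler.IsReusable
        Get
            Return False
        End Get
    End Property
End Class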
If you are going with the WebBrowser approach, at the bottom of this question there is C# code for capturing the image:
Take a screenshot of a webpage with JavaScript?
EDIT - RESOLVED: the difference was that in the "main" case the download was initiated via a callback cycle, and in the "test" case it was initiated through a server side button click function. My guess is that the download request and the callback cycle interfered with each other, both stopping the download and causing the page to become inactive (as described below). When I rewired the download on the main page to start with a submit instead of a callback, it did initiate the download.
This is in VS2013 Ultimate, Win7 Pro, VB.NET, web sites (not projects), IIS Express.
I built a test site to develop functionality for creating OpenXML PPTX and XLSX memorystreams and zipping and downloading them using DotNetZip. Got it to work fine. I then merged all that code into my "main" site. Both sites are on the same machine; I can run the test site and the main site at the same time. The main site processing is somewhat more complicated, but only in terms of accessing and downloading more files.
However, the Zip and Download function (below) works fine in the test site, but the exact same code doesn't work in the main site (with or without the test site up and running).
There's an error trap (see below) around the Zip.Save function where the download occurs but no error shows up.
Same overall behavior in Chrome, Firefox and IE11.
One peculiarity that might be a clue is that when the main site download fails, the server side functionality "goes dead". Local JS functions work, but the app doesn't respond to callbacks. When I do an F5 on the browser it works again.
I did a refresh on the DotNetZip package in the main site. The Zip object appears to be working properly, because it generates an error on duplicate file names.
I thought it might be the download function as written, however, it works in the test site. Also, another piece of the main site does a non-zipped download of a memory stream (included as the second code block below) and that works fine.
I thought it might be the data. So I kludged the main site to access, convert to a memory stream and download the same file that is accessed and downloaded in the test site. Still the main site download doesn't work.
When I compare the watch values on the Zip object in the two sites, they look identical. The length of the wrkFS.ContentStream is identical in both cases. The file names are different, however, they are:
Test_2EFVG1THK5.xlsx (main)
6-18_12-46-28_0.xlsx (test)
which are both legal file names.
EDIT: I saved the zip file to disk from the main program, instead of trying to download it, using this:
wrkFilePath = "D:\filepath\test.zip"
wrkZip.Save(wrkFilePath)
And it worked fine. So that possibly isolates the problem to this statement
wrkZip.Save(context.Response.OutputStream)
EDIT: Based on help I received here:
Convert DotNetZip ZipFile to byte array
I used this construct:
Dim ms As New MemoryStream
wrkZip.Save(ms)
Dim wrkBytes As Byte() = ms.ToArray()
context.Response.BinaryWrite(wrkBytes)
to get around calling ZipFile.Save directly on the response, and that didn't work either; no download, no error message, and the page goes dead. However, at least I can now assume it's not a problem with ZipFile.Save.
At this point I'm out of ways to diagnose the problem.
Any suggestions would be appreciated.
Here is the code that works in the test site but not in the main site.
Public Sub ZipAndDownloadMemoryStreams(ByVal context As HttpContext) _
        Implements IHttpHandler.ProcessRequest
    Dim rtn As String = ""
    Try
        Dim wrkAr As ArrayList
        wrkAr = SC.ContentArrayForDownLoad
        If wrkAr.Count = 0 Then
            Dim wrkStop As Integer = 0
            Exit Sub
        End If
        Dim wrkFS As ZipDownloadContentPair
        Using wrkZip As New ZipFile
            '----- create zip, add memory stream----------
            For n As Integer = 0 To wrkAr.Count - 1
                wrkFS = wrkAr(n)
                wrkZip.AddEntry(wrkFS.FileName, wrkFS.ContentStream)
            Next
            context.Response.Clear()
            context.Response.ContentType = "application/force-download"
            context.Response.AddHeader( _
                "content-disposition", _
                "attachment; filename=" & "_XYZ_Export.zip")
            '---- save context (initiate download)-----
            wrkZip.Save(context.Response.OutputStream)
            wrkZip.Dispose()
        End Using
    Catch ex As Exception
        Dim exmsg As String = ex.Message
        Dim wrkStop As String = ""
    End Try
End Sub
Below is the non-zip download function that works fine in the main site.
It might be possible to convert the Zip content to a byte array and try the download that way, however, I'm not sure how that would work.
(SEE EDIT NOTE ABOVE: I implemented a version of the approach below, i.e. downloading a byte array instead of calling ZipFile.Save directly on the response. It didn't help; still no download, and still no error message.)
Public Sub DownloadEncryptedMemoryStream(ByVal context As HttpContext) _
        Implements IHttpHandler.ProcessRequest
    Dim wrkMemoryStream As New System.IO.MemoryStream()
    wrkMemoryStream = SC.ContentForDownload
    Dim wrkFileName As String = SC.ExportEncryptedFileName
    wrkMemoryStream.Position = 0
    Dim wrkBytesInStream As Byte() = New Byte(wrkMemoryStream.Length - 1) {}
    wrkMemoryStream.Read(wrkBytesInStream, 0, CInt(wrkMemoryStream.Length))
    Dim wrkStr As String = ""
    wrkStr = Encoding.UTF8.GetString(wrkMemoryStream.ToArray())
    wrkMemoryStream.Close()
    context.Response.Clear()
    context.Response.ContentType = "application/force-download"
    context.Response.AddHeader("content-disposition", "attachment; filename=" & wrkFileName)
    context.Response.BinaryWrite(wrkBytesInStream)
    wrkBytesInStream = Nothing
    context.Response.End()
End Sub
(Per the note now at the top of the question): The difference was that in the "main" case the download was initiated via a callback cycle, and in the "test" case it was initiated through a server side button click function. My guess is that the download request and the callback cycle interfered with each other, both stopping the download and causing the page to become inactive (as described below). When I rewired the download on the main page to start with a submit instead of a callback, it did initiate the download.
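To illustrate the fix with a sketch (the control names and the BuildZip helper are assumptions, not from the original code): when the page uses an UpdatePanel/callback model, the control that starts the download has to trigger a full postback, because a partial-postback response cannot carry a file attachment.

Protected Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs) Handles Me.Load
    ' Make sure the download button posts back fully instead of via an async callback.
    ScriptManager1.RegisterPostBackControl(btnDownload)
End Sub

Protected Sub btnDownload_Click(ByVal sender As Object, ByVal e As EventArgs) Handles btnDownload.Click
    ' BuildZip() stands in for the code above that adds the memory streams to a ZipFile.
    Using wrkZip As ZipFile = BuildZip()
        Response.Clear()
        Response.ContentType = "application/force-download"
        Response.AddHeader("content-disposition", "attachment; filename=_XYZ_Export.zip")
        wrkZip.Save(Response.OutputStream)
    End Using
    Response.End()
End Sub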
I am going to do my best to ask this question as clearly as possible. Is it possible to write some ASP.NET code that can go through and read the source of pop-up windows that are normally opened by clicking a javascript:void() link? Basically, I want to read the source of said pop-ups, extract specific links and then render those links in a web page. What I am trying to achieve is a way to make it easy to download the videos of the Senate floor: their videos open in a new pop-up that uses Silverlight, and the MP4 file location is in the source, so I want to read that URL. I did something similar using the code below. The difference was that the MP4 links were in the main page; the code read the source and then output the video links so I could just right-click and do a save-as. That was for the videos at the German security conference. Apologies if this is not seen as a constructive question and ends up being closed.
Protected Sub btnClick_Click(sender As Object, e As EventArgs) Handles btnClick.Click
    Dim r As New Regex("\bhttps?://\S+\.(?:jpg|png|gif|mp3|mp4|3gp)\b", RegexOptions.IgnoreCase)
    Dim c As New WebClient
    Dim s As String = c.DownloadString(txtUrl.Text)
    Dim sb As New StringBuilder
    For Each m As Match In r.Matches(s)
        sb.Append("<a href=""" & m.Value & """>" & m.Value & "</a><br />")
    Next
    divLinks.InnerHtml = sb.ToString()
End Sub
My application is simple, I have 2 pages:
RSSProducer.aspx: A page that generates RSS (XML) feeds
RssConsumer.aspx: A page that retrieves the RSS feed and displays it to the user in a Repeater control. To do this I am using System.Xml.XmlTextReader to fill a DataSet with tables based on the RSS XML retrieved from the RSSProducer page. A table within the DataSet is bound to the Repeater control.
For example, this is what I have in my RssConsumer.aspx page:
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load
    Session("permittedToViewSomeDetail") = True
    Dim url = "http://localhost/DevSite/RSSProducer.aspx"
    Dim reader As New System.Xml.XmlTextReader(url)
    Dim ds As New DataSet()
    ds.ReadXml(reader)
    myRssRepeater.DataSource = ds.Tables(2)
    myRssRepeater.DataBind()
End Sub
My problem is that user-authorization details are stored in Session in the RssConsumer page that need to be accessed in the RSSProducer page (in this example it would be Session("permittedToViewSomeDetail") that I need to access in the RSSProducer page); however, the Session identifier is not common between the two. This means that I cannot access the authorization details in the RSSProducer page.
The reason why is fairly clear to me:
1. The user's browser makes a request to the RssConsumer page.
2. The server generates a Session ID (which is stored in a cookie) if there is no existing session identifier.
3. The RssConsumer page requests the RSSProducer page, which generates a new Session ID every time because no session identifier is ever going to be found (see the sketch below).
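For reference, the mechanism can be made concrete with a minimal sketch: the server-side request only joins the existing session if it carries the caller's session cookie. The snippet below forwards it manually; it assumes default cookie-based session state, and note that ASP.NET locks session state per session, so the producer request may block if it also needs writable session access.

' Sketch only: forward the caller's ASP.NET_SessionId cookie so the request to
' RSSProducer.aspx runs under the same session as the current page.
Dim url As String = "http://localhost/DevSite/RSSProducer.aspx"
Dim producerRequest As Net.HttpWebRequest = CType(Net.WebRequest.Create(url), Net.HttpWebRequest)
producerRequest.CookieContainer = New Net.CookieContainer()
Dim sessionCookie As HttpCookie = Request.Cookies("ASP.NET_SessionId")
If sessionCookie IsNot Nothing Then
    producerRequest.CookieContainer.Add( _
        New Net.Cookie(sessionCookie.Name, sessionCookie.Value, "/", Request.Url.Host))
End If
Using producerResponse As Net.WebResponse = producerRequest.GetResponse()
    Dim ds As New DataSet()
    ds.ReadXml(producerResponse.GetResponseStream())
    myRssRepeater.DataSource = ds.Tables(2)
    myRssRepeater.DataBind()
End Using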
As an experiment I tried using cookieless sessions so that I could pass the session ID via the URL to the RSSProducer page, but for some reason the XmlTextReader doesn't work well with this method (although the desired shared session does work).
I've hit a brick wall here.
Does anyone know how to share session between pages when one page makes a request to the other?
Thanks,
-Frinny
I ended up taking a different approach to solving this problem. I moved the code out of the RssProducer.aspx page into the RssConsumer.aspx page. I am now able to apply the correct authorization to the feature and it's actually more efficient this way because I don't need to produce/consume RSS-XML any more.
Thanks to everyone who took the time to help me with this.
-Frinny
I mean, like PHP's include...
something like:
<% my_file_to_be_included = "include_me.asp" %>
<!--#include file="<%=my_file_to_be_included%>"-->
For what I've seen so far, there are a couple of alternatives, but every one of them has some sort of shortcoming...
What I'm trying to figure out is how to make a flexible template system... without having to statically include the whole thing in a single file with a long case statement...
Here are a couple of links:
a solution using FileSystemObject; it just lets you include static pages
the same idea
yet another one
same thing from Adobe
this approach uses Server.Execute
but it has some shortcomings I'd like to avoid... it seems (I haven't tried it yet) that Server.Execute code runs in another context, so you can't use it to load functions you are planning to use in the caller code... nasty...
same thing
I think this one is the same
this looks promising!!!
I'm not sure about it (I couldn't test it yet), but it seems like this one dynamically hands the page to an SSDI component...
Any ideas?
No, you can't do a dynamic include, period.
Your best shot at this is Server.Execute, passing whatever state it needs via a Session variable:
Session("callParams") = BuildMyParams() 'Creates some sort of string
Server.Execute(my_file_to_be_included)
Session.Contents.Remove("callParams")
Improved version (v2.0):
<%
' **** Dynamic ASP include v.2.0
function fixInclude(content)
    out=""
    if instr(content,"#include ")>0 then
        response.write "Error: include directive not permitted!"
        response.end
    end if
    ' rewrite shorthand output tags into response.write calls
    content=replace(content,"<"&"%=","<"&"%response.write ")
    pos1=instr(content,"<%")
    pos2=instr(content,"%"& ">")
    if pos1>0 then
        before=mid(content,1,pos1-1)
        before=replace(before,"""","""""")
        before=replace(before,vbcrlf,""""&vbcrlf&"response.write vbcrlf&""")
        before=vbcrlf & "response.write """ & before & """" &vbcrlf
        middle=mid(content,pos1+2,(pos2-pos1-2))
        after=mid(content,pos2+2,len(content))
        out=before & middle & fixInclude(after)
    else
        content=replace(content,"""","""""")
        content=replace(content,vbcrlf,""""&vbcrlf&"response.write vbcrlf&""")
        out=vbcrlf & "response.write """ & content &""""
    end if
    fixInclude=out
end function

Function getMappedFileAsString(byVal strFilename)
    Dim fso, ts
    Set fso = Server.CreateObject("Scripting.FileSystemObject")
    Set ts = fso.OpenTextFile(Server.MapPath(strFilename), 1)
    getMappedFileAsString = ts.ReadAll
    ts.close
    Set ts = nothing
    Set fso = Nothing
End Function

execute (fixInclude(getMappedFileAsString("included.asp")))
%>
Sure you can do REAL classic asp dynamic includes. I wrote this a while back and it has opened up Classic ASP for me in a whole new way. It will do exactly what you are after, even though people seem to think it isn't possible!
Any problems just let me know.
I'm a bit rusty on classic ASP, but I'm pretty sure you can use the Server.Execute method to read in another asp page, and then carry on executing the calling page. 15Seconds had some basic stuff about it - it takes me back ...
I am building a web site where it would have been convenient to be able to use dynamic includes. The site is all ajax (no page reloads at all) and while the pure-data JSON-returning calls didn't need it, all the different html content for each different application sub-part (window/pane/area/form etc) seems best to me to be in different files.
My initial idea was to have the ajax call be back to the "central hub" main file (that kicks the application off in the first place), which would then know which sub-file to include. Simply including all the files was not workable after I realized that each call for some possibly tiny piece would have to parse all the ASP code for the entire site! And using the Execute method was not good, both in terms of speed and maintenance.
I solved the problem by making the supposed "child" pages the main pages, and including the "central hub" file in each one. Basically, it's a javascript round-trip include.
This is less costly than it seems since the whole idea of a web page is that the server responds to client requests for "the next page" all the time. The content that is being requested is defined in scope by the page being called.
The only drawback to this is that the "web pieces" of the application have to live partly split apart: most of their content in a real named .asp file, but enough of their structure and relationship defined in the main .asp file (so that, for example, a menu item in one web piece knows the name to use to call or load another web piece and how that loading should be done). In a way, though, this is still an advantage over a traditional site where each page has to know how to load every other page. Now, I can do stuff like "load this part (whether it's a whole page or otherwise) the way it wants to be loaded".
I also set it up so each part can have its own javascript and css (if only that part needs those things). Then, those files are included dynamically through javascript only the first time that part is loaded. Then if the part is loaded repeatedly it won't incur an extra overhead.
Just as an additional note: I was getting weird characters at the top of the pages that were using dynamic includes (most likely the UTF-8 byte-order mark), and I found that using an ADODB.Stream object to read the include file eliminated the issue.
So my updated code for the getMappedFileAsString function is as follows
Function getMappedFileAsString(byVal strFilename)
    Dim fso
    Set fso = CreateObject("ADODB.Stream")
    fso.CharSet = "utf-8"
    fso.Open
    fso.LoadFromFile(Server.MapPath(strFilename))
    getMappedFileAsString = fso.ReadText()
    'Response.write(getMappedFileAsString)
    'Response.End
    Set fso = Nothing
End Function
I need to build a multilingual website, with URLs like
www.domain.com/en/home.aspx for english
www.domain.com/es/home.aspx for spanish
In the past, I would set up two virtual directories in IIS, and then detect the URL in Global.asax and change the language according to the URL:
Sub Application_BeginRequest(ByVal sender As Object, ByVal e As EventArgs)
    Dim lang As String
    If HttpContext.Current.Request.Path.Contains("/en/") Then
        lang = "en"
    Else
        lang = "es"
    End If
    Thread.CurrentThread.CurrentUICulture = CultureInfo.GetCultureInfo(lang)
    Thread.CurrentThread.CurrentCulture = CultureInfo.CreateSpecificCulture(lang)
End Sub
The solution is more like a hack. I'm thinking about using Routing for a new website.
Do you know a better or more elegant way to do it?
edit: The question is about the URL handling, not about resources, etc.
I decided to go with the new ASP.NET routing.
Why not URL rewriting? Because I don't want to change the clean URLs that routing gives you.
Here is the code:
Sub Application_Start(ByVal sender As Object, ByVal e As EventArgs)
    ' Code that runs on application startup
    RegisterRoutes(RouteTable.Routes)
End Sub

Public Sub RegisterRoutes(ByVal routes As RouteCollection)
    Dim reportRoute As Route
    Dim DefaultLang As String = "es"
    reportRoute = New Route("{lang}/{page}", New LangRouteHandler)
    '* if you want, you can constrain the values
    'reportRoute.Constraints = New RouteValueDictionary(New With {.lang = "[a-z]{2}"})
    reportRoute.Defaults = New RouteValueDictionary(New With {.lang = DefaultLang, .page = "home"})
    routes.Add(reportRoute)
End Sub
Then LangRouteHandler.vb class:
Public Class LangRouteHandler
    Implements IRouteHandler

    Public Function GetHttpHandler(ByVal requestContext As System.Web.Routing.RequestContext) As System.Web.IHttpHandler _
        Implements System.Web.Routing.IRouteHandler.GetHttpHandler

        'Fill the context with the route data, just in case some page needs it
        For Each value In requestContext.RouteData.Values
            HttpContext.Current.Items(value.Key) = value.Value
        Next

        Dim VirtualPath As String
        VirtualPath = "~/" + requestContext.RouteData.Values("page") + ".aspx"

        Dim redirectPage As IHttpHandler
        redirectPage = BuildManager.CreateInstanceFromVirtualPath(VirtualPath, GetType(Page))
        Return redirectPage
    End Function
End Class
Finally, I use a default.aspx in the root to redirect to the default language from the browser's language list.
Maybe this can be done with route.Defaults, but it doesn't work inside Visual Studio (maybe it works on the server):
Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs)
    Dim DefaultLang As String = "es"
    Dim SupportedLangs As String() = {"en", "es"}
    Dim BrowserLang As String = Mid(Request.UserLanguages(0).ToString(), 1, 2).ToLower
    If SupportedLangs.Contains(BrowserLang) Then DefaultLang = BrowserLang
    Response.Redirect(DefaultLang + "/")
End Sub
Some sources:
* Mike Ormond's blog
* Chris Cavanagh’s Blog
* MSDN
Use urlrewriting.net for ASP.NET WebForms, or routing with MVC. Rewrite www.site.com/en/something.aspx to the URL page.aspx?lang=en.
urlrewriting.net can be easily configured via regex in web.config. You can also use routing with WebForms now; it's probably similar...
With WebForms, have every ASPX page inherit from a BasePage class, which in turn inherits from the Page class.
In BasePage, override InitializeCulture() and set the culture info on the thread, as you described in the question.
It's good to do that in this order: 1. check the URL for a lang parameter, 2. check a cookie, 3. set the default language (see the sketch below).
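A minimal sketch of such a BasePage, assuming a "lang" query-string parameter, a "lang" cookie and "es" as the default (all of these names are assumptions):

Public Class BasePage
    Inherits System.Web.UI.Page

    Protected Overrides Sub InitializeCulture()
        Dim lang As String = "es"                        ' 3. default language

        If Not String.IsNullOrEmpty(Request.QueryString("lang")) Then
            lang = Request.QueryString("lang")           ' 1. URL parameter
        ElseIf Request.Cookies("lang") IsNot Nothing Then
            lang = Request.Cookies("lang").Value         ' 2. cookie
        End If

        Threading.Thread.CurrentThread.CurrentUICulture = Globalization.CultureInfo.GetCultureInfo(lang)
        Threading.Thread.CurrentThread.CurrentCulture = Globalization.CultureInfo.CreateSpecificCulture(lang)

        MyBase.InitializeCulture()
    End Sub
End Class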
For static content (text, picture URLs) on pages, use local resources, or global resources if the content repeats across the site. You can watch a videocast on using global/local resources on www.asp.net.
Prepare the database for multiple languages. But that's another story.
I personally use resource files.
Very efficient, very simple.
UrlRewriting is the way to go.
There is a good article on MSDN on the best ways to do it.
http://msdn.microsoft.com/en-us/library/ms972974.aspx
Kind of a tangent, but I'd actually avoid doing this with different paths unless the different languages have completely separate content from each other.
For Google rank, or for users sharing URLs (one of the big advantages of ‘clean’ URLs), you want the address to stay as constant as possible.
You can find users’ language preferences from their browser settings:
CultureInfo.CurrentUICulture
Then your URL for English or Spanish:
www.domain.com/products/newproduct
Same address for any language, but the user gets the page in their chosen language.
We use this in Canada to provide systems in English and French at the same time.
To do this with URL Routing, refer to this post:
Friendly URLS with URL Routing
Also, check out the new IIS 7.0 URL Rewriting. There is an excellent article here: http://learn.iis.net/page.aspx/496/iis-url-rewriting-and-aspnet-routing/
I liked this part:
Which Option Should You Use?
If you are developing a new ASP.NET Web application that uses either ASP.NET MVC or ASP.NET Dynamic Data technologies, use ASP.NET routing. Your application will benefit from native support for clean URLs, including generation of clean URLs for the links in your Web pages. Note that ASP.NET routing does not support standard Web Forms applications yet, although there are plans to support it in the future.
If you already have a legacy ASP.NET Web application and do not want to change it, use the URL-rewrite module. The URL-rewrite module allows you to translate search-engine-friendly URLs into a format that your application currently uses. Also, it allows you to create redirect rules that can be used to redirect search-engine crawlers to clean URLs.
http://learn.iis.net/page.aspx/496/iis-url-rewriting-and-aspnet-routing/
Thanks,
Maulik.