I have a strange problem that I see when switching to the ASP.NET designer in VS2010. It doesn't happen every time, but once it has happened it keeps happening until I reboot.
Basically, when I click the "Design" button/tab to switch from the HTML to the designer, the text "Requesting Data..." appears in the status bar and the mouse pointer changes to an arrow with a spinning wait indicator. When this happens, the mouse continues to work but keyboard input starts to fail. I can usually type, but I can't backspace or delete, and I can't cut, copy or paste. Every other application continues to function normally, so it's isolated to VS2010. It also only affects the designer: if I hit the "Source" button/tab to go back to the HTML, everything returns to normal.
This happens on really basic pages. My pages have a master page, but the master page and the normal pages only contain basic HTML tags and the odd ASP.NET textbox or button.
I do not know why that message shows, but as a workaround, perhaps just create a static .htm page in your site, go to design mode, do your editing/formatting, then copy/paste the source into the real file?
One guess for the message: is there some sort of data source on the page or in the master page?
What happens to the visual elements on a ContentPage in a Shell application when you navigate to another page?
The specific pages seem to remain alive, judging by the constructors only being called on first display and not when subsequently navigating back to the page, but it seems that some or all of the visual elements on a ContentPage are refreshed when navigating back to it.
This is specifically something I see when placing a Forms Map or Syncfusion SfMap on a page and navigating away from the page and then back. The maps obviously reset to their initial values on reload, so they are apparently killed off and recreated. Writing a custom renderer for a map shows OnElementChanged being called on reloading, also indicating that the control was not just kept on hand for when the page would reload.
It seems that the Forms classes encapsulating the deeper controls do persist, but they disconnect from the lower-level native controls and reload them when becoming visible again? I am somewhat new to mobile development, and may or may not be missing the entire point here...
The real question is if I can avoid that behavior. I have a shell app with various pages containing maps. Those maps are populated with various graphics objects and are being panned and zoomed, so it is a real pain to have them reset when briefly moving away from a page. Is there a way to avoid that, so they just quickly pop back to life when the page is displayed again?
We have analyzed the query and would like to inform you that this is not specific to the Maps control; a simple ContentPage behaves the same way. The old page's view is destroyed and re-created when you navigate back to it.
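As a workaround, one pattern that may help is to cache the map's last visible region yourself and re-apply it when the page reappears. A minimal sketch, assuming a Xamarin.Forms.Maps Map named map defined in the page's XAML (the page class name here is invented):

using Xamarin.Forms;
using Xamarin.Forms.Maps;

public partial class MapPage : ContentPage
{
    // The page object itself survives Shell navigation (its constructor only
    // runs once), so a simple field is enough to carry state across.
    MapSpan lastRegion;

    public MapPage()
    {
        InitializeComponent();
    }

    protected override void OnDisappearing()
    {
        base.OnDisappearing();
        lastRegion = map.VisibleRegion;   // remember the current pan/zoom state
    }

    protected override void OnAppearing()
    {
        base.OnAppearing();
        if (lastRegion != null)
            map.MoveToRegion(lastRegion); // re-apply it after the native control is recreated
    }
}

Any pins or overlays you add in code would need the same treatment: keep the source data in the page or its view model and re-add them in OnAppearing.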
Hello, I'm trying to scrape data from https://eservicios2.aguascalientes.gob.mx/sop/geobras/UI/frmObrasTodas.aspx
I can get the data from the main page, but I don't know how to get the data from the form:
a) When I choose a row and ask for "Detalle" (details), it goes to a form.
b) I don't know how to follow the link.
I need to get the data from each row. Can anybody help me?
The main issue and problem is that this is an ASP.NET web site. When you select a row, that most likely fires a server-side event. You MIGHT be able to write some JavaScript to select a row, but the next issue is even more of a challenge: once you select a row, you have to click a button, and that button runs server-side code. That server-side code is what looks at and grabs the selected row value - again, all on the server. This is unlike a simple web site built around hyperlinks.
ASP.NET sites are largely driven from VB.NET or C# code behind; they don't rely on plain hyperlinks, or even on parameters in the URL.
So, after you select a row (perhaps possible in JS), you would then have to click the details button. This again can be done with JavaScript.
Say, in jQuery like this:
$('#NameOfButton').click();
So ASP.NET sites don't use the kind of simple code you'd get from a three-day web developer course promising to make you an experienced web developer. They don't use plain HTML markup and simple hyperlinks to drive the site. There are no "links" for each row - only code on the server that runs to pull the data from the database, render that information, and THEN send it down as HTML markup.
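To illustrate (with purely hypothetical names, since we can't see the real code-behind), the kind of handler that runs on the server when a row is selected looks roughly like this:

// Runs on the web server, not in the browser, when a GridView row is selected.
protected void GridObras_SelectedIndexChanged(object sender, EventArgs e)
{
    // The selected row only exists server side (view state + event args);
    // there is no hyperlink in the rendered HTML that you could follow.
    var obraId = GridObras.SelectedDataKey.Value;
    ViewState["SelectedObraId"] = obraId;
}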
The bottom line?
The site is not simply HTML and hyperlinks that you click on. When you click that button, the code behind (written in a nice language like C# or VB.NET) runs. There is thus no markup or even JavaScript required for this; you're talking about clean server-side code (written in a fantastic IDE - Visual Studio).
This means that ASPX web sites are code-behind driven, and as a result they are rather difficult to web scrape in an automated fashion. You can grab the page you are on, but since there are no hyperlinks to the additional data (such as the details), you don't have a simple URL to follow or trace here.
Worse yet, the setup code (what runs when you select a single row) also has to execute in most cases; the "Detalle" button only works if all of those values are set up correctly BEFORE it is clicked. And notice that the details page has no parameters in its URL. So not only does the correct code behind have to run before the second page launches, but the second page very likely also checks that the request came from the same site - as a result you can NOT just type in a URL for the second page; it will not work.
And in fact, if you look even closer: when you hit the details button, the page reloads and renders what is clearly a whole new page and layout.
But note how the URL does NOT change - and they are not even using an iframe for this.
This is because they are using what is called a server-side redirect. The tell-tale sign is that the URL stays the same while the page layout becomes 100% different: the server navigated to a whole new page and sent its output down to the client. Because the navigation happened in server-side code rather than in the browser, the server can send out anything it wants - including an entirely different page - and you never see the URL change.
Again, this is typical of ASP.NET systems, in which server-side code drives the web site and there is not much client-side code.
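To make that concrete, the code-behind for the details button probably does something along the lines of a Server.Transfer (again, all names here are invented for illustration):

protected void btnDetalle_Click(object sender, EventArgs e)
{
    // Stash whatever the details page needs, server side.
    Session["SelectedObraId"] = GridObras.SelectedDataKey.Value;

    // Hand the request to a different .aspx on the server, with no browser
    // redirect - which is why the URL in the address bar never changes.
    Server.Transfer("frmObraDetalle.aspx");
}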
You "might" be able to automate scraping. But you would need some custom code to select a given row, and then some code to click the details button. And that's going to be a REAL challenge, since any changes to the web page code (by you) also tend to be check for, and not allow server side.
The only practical web-scrape approach would be to use some desktop tooling to host a WHOLE instance of the web browser, let you, the user, navigate to the page that displays the data, and then hit some "capture" button in your application that reads and parses out the data, like you are doing now for the main page.
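One way to realize that idea, swapping in Selenium WebDriver (the Selenium.WebDriver NuGet package) to drive a real Chrome instance instead of a hand-rolled desktop tool, might look roughly like this. The element selectors are placeholders you would need to replace after inspecting the real page:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class ObrasScraper
{
    static void Main()
    {
        using var driver = new ChromeDriver();
        driver.Navigate().GoToUrl(
            "https://eservicios2.aguascalientes.gob.mx/sop/geobras/UI/frmObrasTodas.aspx");

        // Select a row - this fires the ASP.NET postback for row selection.
        // In practice you would add explicit waits around these steps.
        driver.FindElement(By.CssSelector("#grdObras tr:nth-child(2)")).Click();

        // Click the "Detalle" button - this runs the server-side handler and
        // renders the details page under the same URL.
        driver.FindElement(By.Id("btnDetalle")).Click();

        // Now parse the rendered details page the same way as the main page.
        string detailsHtml = driver.PageSource;
        Console.WriteLine(detailsHtml.Length);
    }
}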
I have a simple ASP.NET web form that consists of two textboxes (one single-line and one multi-line), a FileUpload control, and a button.
The page works perfectly and the button triggers the postback as usual. But when I copy text from web pages in particular and paste it into the multi-line textbox, the postback is never triggered; sometimes I get a timeout error, sometimes a bad request!
This behavior occurs only under IIS; it is not observed in IIS Express.
The text is not that long, about 30 lines or less, but I noticed that when I delete some lines of the text the issue goes away and no errors are generated. I can't find anything special in the lines that are deleted - no control characters or any other special characters.
Any idea? Has anyone encountered this same situation?
That is a strange problem. I have never experienced it, but I think it could be related to browser cache or a browser compatibility issue. Make sure you have declared the right browser/doctype version in your HTML. Also, try to debug the problem by placing a breakpoint and checking whether it is hit every time.
I have a ReportViewer (for SSRS) in an ASP.NET application.
The user enters in parameter information through the web form and then submits it.
The ReportViewer then returns a small report that shows counts of the information requested.
Next to these counts are links (assigned in BIDS) to the corresponding report, using the parameters already entered.
It all renders fine until I click one of the links in the ReportViewer. It will then give the brief "Loading" dialog and then the ReportViewer disappears.
Not sure how to handle this and I can't find much information on it. I would ideally like the selected report to open in a new window (no URL bar, etc).
Please help!
This ended up being an issue with how ReportViewer was installed on our development and production servers. So when it seems like nothing else can explain what is going on with the ReportViewer... make sure the DLLs are properly installed on your server.
In our CMS, we have a place in which we enable users to play around with their site hierarchy - move pages around, add and remove pages, etc.
We use drag & drop to implement moving pages around.
Each move has to be saved in the DB and exported to many HTML files. If we do that on every move, it will slow the users down. Therefore we thought it preferable to let the users play around as much as they want, saving each change to the DB, but only exporting their changes to the HTML files when they leave the page.
We thought of making the user click a "publish" button when they're ready to commit their changes, but we're afraid users won't remember to do that, because from their standpoint, once they've moved a page to a new place the action is done. Another problem with the button is that it's inconsistent with the behavior of the other parts of the site (for example, when a user moves text inside a page, the changes are saved automatically, as there is only one HTML file to update).
So how can we automatically save user changes on leaving the page?
You should use JavaScript to warn the user when they leave the page.
From http://www.siafoo.net/article/67:
Modern browsers have an event called window.beforeunload that is fired right when any event occurs that would cause the page to unload. This includes clicking on a link, submitting a form, or closing the tab or window.
Visit this page for a sample that works in most browsers:
http://www.webreference.com/dhtml/diner/beforeunload/bunload4.html
I think it's bad practice to save the page without asking the user first; that's not how normal web pages work.
Sample:
<script type="text/javascript">
function unloadMess() {
    var mess = "Wait! You haven't finished.";
    return mess;
}
function setBunload(on) {
    window.onbeforeunload = on ? unloadMess : null;
}
setBunload(true);
</script>
The easiest way I can think of is to store the page info each time the user moves items around, using Ajax (e.g. with an UpdatePanel: on its updated event, fire some script that saves the user's page config).
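A rough server-side sketch of that idea, using a static page method instead of a full UpdatePanel (the method, its parameters, and HierarchyRepository are invented names; the drag & drop script would call it via PageMethods or a small Ajax POST after every move, with EnablePageMethods turned on in the ScriptManager):

using System.Web.Services;

public partial class SiteHierarchy : System.Web.UI.Page
{
    // Called from client script after each drag & drop move, so the DB write
    // happens in the background without a full postback.
    [WebMethod]
    public static void SaveMove(int pageId, int newParentId, int newPosition)
    {
        // Persist just this move; defer the expensive HTML export until the
        // user leaves the editor (or explicitly publishes).
        HierarchyRepository.MovePage(pageId, newParentId, newPosition);
    }
}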
Alternatively, .NET's WebParts implementation does this automatically without intervention by the programmer (unless you want to change the storage engine; it uses a local .mdb by default).
Use a "Publish" checkbox/button and when the user interacts with the page in a way that causes them to navigate away ask them if they want to publish if that box is NOT checked/button not clicked. Be aware that there are actions (closing the browser, accessing their favorites menu, etc.) that you will probably not want or not be able to prompt the user.
I would force them to click a button such as publish. That is a 'training' issue.
Automatically saving changes when they leave could have other ramifications. For example, if a user opens up a record and plays around with it with no intention of changing it, they simply close it, like a Word document or an Excel spreadsheet. I would have your site mimic that model.
You also have to remember that the web is a disconnected environment, and not all web applications are required to behave like a Windows application.
If the user doesn't click the publish/save button, then their changes are not saved, and it is up to them to remember to do that.