need to copy files on client system, is there any possible way? - asp.net

I am developing an Online Examination System in C#.NET and want to copy files to the client machine as soon as the exam starts, so that even if the internet gets disconnected the examinee can continue with the test.

You may wish to consider a client-server solution, such as WPF or WinForms, as these are more suited to this type of development. You can use ClickOnce deployment to have the app still launched from the web and updated on every run.
If you do decide to use ASP.NET, this will result in a very JavaScript-heavy site with a very slow load on the first page.
To do this you would load all your test questions into a JavaScript data structure on the first page. Whenever the user moves to the next page you would need to collect all the answers with JavaScript and store them client-side, then re-render the entire page from your JavaScript definition of the test with no trip back to the server. Once the test is complete you would need to send your results back to the server, so the internet must be active again once you've completed the test.

You'll have to create a download package and provide a link for the user to click to request the files. You can't force a download.
If your exam is all in one web page, you don't need to do anything. Once the page appears in the user's browser, it has already been "copied locally".

Related

How do I download an aspnetForm page with links

I'm trying to download a municipal planning plan together with all the relevant documents.
All documents can be reached from the following link
I've tried the following command (that worked well for other sites) and some variations without success.
wget -E -k -r -l 3 "http://www.mavat.moin.gov.il/MavatPS/Forms/SV4.aspx?tid=4&et=1&mp_id=ppnCWTcsST9gG0%2fa0ayWnjFyZ%2bo14s221Ujlpi7UvR4jIRAHLKhJ8lOLSkomZ%2fvlHk8b2T0oENpI6Wh2hKzxQJCw9BPJP8gav%2ftgiKlk5S0%3d"
I can't get the files for the same plan on their new site either:
https://mavat.iplan.gov.il/SV4/1/5000931297/310
I'd appreciate any help.
Well, these days, and especially with .NET web sites?
We don't use hyperlinks with a simple (full) path name to the actual files on the web server. In fact, in most cases one will not even give the web server rights to those folders (they are not exposed to Internet Information Services).
So, no actual links with a full URL to the documents exist.
What happens when you click on a button or button link? The code-behind on the web server runs (and that is code you don't have). Furthermore, that code-behind can browse, read, and retrieve any file from any folder on the server, or on other servers. But links from the web site don't exist, and it is not even possible to type in a URL that resolves to an actual file name on the server.
So the server-side code (not Internet Information Services) goes and grabs the document. In fact, the documents could be in a database. In that case the code-behind on the server runs and pulls the binary data from the database (which represents a valid PDF file). Or the code-behind reads the file from disk and then STREAMS the file for a download.
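For example, a minimal sketch of that kind of code-behind streaming might look like this (the button handler, folder and file names are hypothetical, not taken from the site in question):
protected void btnGetDoc_Click(object sender, EventArgs e)
{
    // In a real site this path would come from a database row, never from a URL
    string fullPath = @"D:\SecureDocs\plan_12345.pdf";   // hypothetical folder outside the web root

    Response.Clear();
    Response.ContentType = "application/pdf";
    Response.AppendHeader("Content-Disposition", "attachment; filename=plan_12345.pdf");
    Response.TransmitFile(fullPath);   // the server streams the bytes; no URL to the folder ever exists
    Response.End();
}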
Now, this is often done for reasons of security. It means that no valid URL exists to get at a document.
Not only is this done for security, but from a developer point of view it is often better to retrieve a row from a database. That row can hold the information you SEE rendered on that form, but the web page is not static; the display of information is thus the developer coding a pull of rows from a database, and then you simply "assign" that data to some type of control - say a datagrid, or listview, or whatever. (This assignment of data is only one or two lines of code, and then the control + web server render that datagrid control.)
So, this is done since the developer only assigns the result of a database query to the control, which then renders on the form. Thus, to add or remove documents, you only have to edit the database and the information on the web page renders accordingly.
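As a rough sketch of what that one-or-two-line "assignment" looks like in code-behind (the table, column, control and connection-string names here are invented for illustration):
void BindDocuments(int planId)
{
    // Sketch only - uses System.Data and System.Data.SqlClient; connectionString would come from web.config
    using (var con = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand("SELECT DocName, DocId FROM PlanDocuments WHERE PlanId = @p", con))
    {
        cmd.Parameters.AddWithValue("@p", planId);
        var dt = new DataTable();
        new SqlDataAdapter(cmd).Fill(dt);   // Fill opens and closes the connection itself

        GridView1.DataSource = dt;          // the one-or-two-line "assignment"
        GridView1.DataBind();               // the control + web server render the grid
    }
}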
As a result? There are no direct links to the actual documents on the server. To retrieve a document, you would have to send the web site the exact command required.
You can hit F12 (most browsers support this) to put your browser into developer mode. If we do this, use the "select element" feature, and then click on a PDF link, we get this:
<img src="../images/ft/file_PDF.gif" style="cursor:pointer"
onclick="openDoc('99000526871729',
'AABA7BE646E182B67DB1C15220E531DF36BBB591D8EEA7757435B2606C08E6F9')">
So, note above: that openDoc event is what kicks off the SERVER-side code that has to run to retrieve a document. There is thus NO link, and you are not going to be able to wire up, or run, your OWN web page that hits that server and runs that onclick routine.
However, the onclick DOES expose the internal database document numbers used to pull/read and retrieve a given document. But the path name, and how the code gets/grabs this file? You have no idea, and server-side code (C# or VB.NET) HAS to run. That code, as noted, grabs the file and then "streams" it to you when you download or click on a link.
So, for simple HTML-like pages? Well, for those built by someone who took a one-day HTML course? Sure, such web sites will have src=some path name pointing to a valid URL. Those simple systems thus allow you to enter a URL to grab/get a document; the documents are fully exposed by the web site, and a simple, valid URL path name to a file exists. Not so with ASP.NET, and as noted, this is not only done for security - it is a better overall developer experience to write code that grabs the files, as opposed to rendering full path link names to files.
There are many additional benefits. For example, the database that drives this likely has a setting (or some settings) containing the path names to the documents. If they run out of storage, or want to move older files to a much slower (and of course much cheaper) storage system? They can move the files and update the path name columns in the database. The web site will continue to work, since it NEVER uses an exposed URL. And as noted, actual direct URLs don't exist, and the web server (IIS), as opposed to the code-behind, will not even have rights to those file locations.
As a result? You will not be able to simply pull the web page and THEN extract URLs to the file names.
What you might be able to do is write code that loads the web page, scans all the event code stubs for the links, and then has your code click on each button using web browser automation. But even that won't necessarily let you control the file names in the download prompts.
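If you did want to experiment with that route, a rough sketch using Selenium's .NET bindings might look like the following. This is purely illustrative - the CSS selector, download preferences and folder are assumptions, and the site may well block or break under automated clients:
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

var options = new ChromeOptions();
// Assumption: let Chrome save downloads without prompting (folder is hypothetical)
options.AddUserProfilePreference("download.default_directory", @"C:\Temp\PlanDocs");
options.AddUserProfilePreference("download.prompt_for_download", false);

using (IWebDriver driver = new ChromeDriver(options))
{
    driver.Navigate().GoToUrl("https://mavat.iplan.gov.il/SV4/1/5000931297/310");

    // Click every icon whose onclick calls openDoc(...); the server-side code then streams each document
    foreach (var icon in driver.FindElements(By.CssSelector("img[onclick*='openDoc']")))
    {
        icon.Click();
    }
}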
So, what you ask is not easy, likely not possible, and a very difficult task. The simple reason is that the site does not use simple HTML and static links to files; it never actually exposes a direct link to a file, and worse yet, the web server does not have, or even allow, direct URLs to those files - they don't exist, and the web site will not even have rights to them (only the .NET code-behind does - not Internet Information Services).
The code-behind grabs the document and then "streams" the file down to the browser for the link you clicked on. The simple HTML coders of the past would create, say, a folder (usually a virtual folder) that points to the files on some server/folder. But with .NET it is easier (and far more secure) to stream the file from code.
Modern development tools don't use old-fashioned ideas like a URL that directly retrieves a file - they are designed differently.
In some cases URLs are allowed or created, and this is done for the sake of sharing links. If you have a cute video or document? Then the designers of the system will often permit parameters in the URL so you can share a link with someone else. This page has no such provision. So, you can share a link to the page, but no actual URL to the documents - nor even a provision to allow such URLs - exists.
So this pretty much means that to retrieve a document, you have to go to that web page, and ONLY when you click on a document will the web site "stream" down that one particular document.

Make ASP.Net (C#) Web App Available Offline

I have been tasked with making my company's Web App available offline. Before I move to the actual development phase, I want to be sure that my current strategy will not turn out to be a bust.
I first thought about using the HTML5 application cache, but after doing some tests I found that it seems not to cache the server-side operations, only the actual HTML that is rendered (please correct me if I'm wrong). This will not work because the rendered HTML depends upon who is currently logged in. From my tests, it always rendered the HTML as if the last person that logged in (online) were logging in.
My current strategy is this:
I cache only the login page and an offline (.html) page corresponding to each .aspx page that will need to be available offline. Every successful login (online) results in creating or updating a Web SQL Database or IndexedDB (depending on the browser) with all the data needed for that person to operate offline, including a table that will be used for login credentials. In this way the only requirement for logging in offline is having logged in with your credentials at least one time.
My concern is that I am overcomplicating it. In order to make this work, I will need to create an html page for each current page (a lot of pages) and I will have to rewrite everything that is currently being done on the server in JavaScript including validation, database calls, populating controls such as dropdown lists and data grids, etc. Also everything that I change in the future will require a subsequent offline change.
Is there an established best practice for what I am trying to do that I am overlooking or am I venturing into new ground?
Please refer to these links, which give you some insight into what is to be achieved. I'm not sure these are best practices, but they will be a good starting point.
http://www.c-sharpcorner.com/UploadFile/aravindbenator/offline-mvc3-application/
http://www.developerfusion.com/article/84438/isolated-storage/

Does FileSystemObject know that a file is incomplete?

Yes, I'm still using Classic ASP.
I'm about to write a script that checks a directory on the server every 5 minutes for photos newly uploaded by my office, and transfers the photos to another location. I'm using ASP with the FileSystemObject for the application, and a Windows Scheduled Task calls it.
What I would like to know is: If the user is sending 150 photos, by FTP, my application is not going to know if the user has finished uploading, or not. So then the application will go through the files one-by-one and transfer them. If my user has a slower connection than the speed of my application, the script may eventually come across the file that is currently being uploaded...
Will my application grab that file thinking it's complete or will it know that it's in the middle of upload and leave it alone? If it DOES grab it and transfers half a photo, how can I stop this from happening?
There is no good way to test for that; much depends on how the uploader is working.
It's highly unlikely that a file currently open for write access, while the uploader creates it, is going to allow your code to move it. An attempt to move it will result in a sharing violation or similar error. So protecting that section of code with On Error Resume Next would do it: have your code skip that file, in the knowledge that it will be picked up again when the next poll comes round.
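The same skip-on-error idea, sketched in C# for illustration (the original question is Classic ASP, where On Error Resume Next plays the role of the try/catch; the folder names are hypothetical):
using System.IO;

foreach (string photo in Directory.GetFiles(@"D:\Uploads"))
{
    try
    {
        File.Move(photo, Path.Combine(@"D:\Processed", Path.GetFileName(photo)));
    }
    catch (IOException)
    {
        // Sharing violation: the file is still being written by the FTP upload.
        // Skip it; the next scheduled poll will pick it up once it is complete.
    }
}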

How to track a completed file download in ASP.NET

I have this ASP.NET web site that allows users to download program installation packages (just normal files). I want to be able to track when a download is completed (i.e. the file has been fully downloaded to the user's computer) and then invoke a Google Analytics script that reports a completed download as a 'Goal' (obviously, one of my goals is to increase file downloads).
The problem is that I need to support direct file URLs, as opposed to the "redirect page" solution. This is because a lot of traffic comes from software download sites that explicitly demand a direct file URL when submitting a product. Perhaps, they do their own file analysis (i.e. virus checking). But with this set of limitations, a typical scenario is:
The user visits my product listing on a software download site
The user clicks the "Download" button on this site
The "Download" page is typically a redirect that finally brings the user to my file via the direct URL I've initially submitted, i.e. http://www.ko-sw.com/somefile.exe
If under these conditions, an exact solution for monitoring is not possible, maybe there exists a workaround? What comes to my mind is temporarily storing the number of performed downloads on the server and then accessing an administrative page that somehow reports this number to Google Analytics and finally sets it back to zero. With this workaround, there is at least no need to try to attach a javascript handler to a non-HTML resource. But even then there are issues:
How to track if a download has completed?
How to track user geolocation and browser capabilities to make them further visible in the reports?
Thanks everybody in advance
According to awstats, an aborted download has HTTP status code 206, so if you analyze the server log for that code you can identify the downloads that were not completed.
@Kerido ~ I'm curious what the business case is here. Are you trying to track installs or downloads? If installs, go with @SamMeiers' solution.
However, if you're trying to track downloads, then the next question is what web server are you using? IIS? Apache? Something else?
In IIS, assuming you're using version 7 (or later), you could (easily?) write an HttpHandler that checks for the last bytes of the file to be sent and, at that point, records a log entry somewhere.
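A rough sketch of that handler idea is below. The file name and logging helper are placeholders, and note that flushing the last chunk is only a reasonable approximation of "the client received the whole file", not a guarantee:
using System.IO;
using System.Web;

public class TrackedDownloadHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        string path = context.Server.MapPath("~/files/setup.exe");   // hypothetical file
        context.Response.ContentType = "application/octet-stream";
        context.Response.AppendHeader("Content-Disposition", "attachment; filename=setup.exe");

        using (FileStream fs = File.OpenRead(path))
        {
            byte[] buffer = new byte[64 * 1024];
            int read;
            while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
            {
                if (!context.Response.IsClientConnected) return;   // client aborted - not a completed download
                context.Response.OutputStream.Write(buffer, 0, read);
                context.Response.Flush();
            }
        }
        LogCompletedDownload(context.Request.UserHostAddress);   // hypothetical logging helper
    }

    private static void LogCompletedDownload(string clientIp)
    {
        // write to a database or log file here
    }
}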
On Apache, just set up logging to tell you how many bytes were transferred (a trivial change in httpd.conf) and then parse the logs daily (awstats, amongst others, is pretty good for this, but you might have to write a sed/awk script) to find out how many full transfers were completed. It just depends on how thorough you're trying to be.
But I go back to, what's the business case for this? What does it matter if there were unfinished downloads?
It's possible to track links as a goal, which may be of use to you. However, this won't track when the download was completed.
http://www.google.com/support/analytics/bin/answer.py?answer=55529
Hope this helps.
Cheers
Tigger
I think the solution from @SamMeiers is very good, but you could optimize it by calling a web service after the installation completes. You might hit a small problem if the user is installing the app in an environment without internet access, so you may be forced to check whether there is a connection or not.
You can create a flag of some kind when your installation starts; then, when it finishes, check whether the start flag exists - if it does, the app has been downloaded and installed as well.
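A minimal sketch of that "call a web service once the install finishes" idea (the endpoint URL is invented for illustration, and the try/catch is what covers the offline-install case):
using System;
using System.Net.Http;
using System.Threading.Tasks;

static async Task ReportInstallCompleteAsync()
{
    try
    {
        using (var client = new HttpClient())
        {
            client.Timeout = TimeSpan.FromSeconds(10);
            // Hypothetical tracking endpoint on the vendor's site
            await client.PostAsync("https://www.example.com/api/install-complete",
                                   new StringContent("product=MyApp&version=1.0"));
        }
    }
    catch (HttpRequestException)
    {
        // No internet during install - optionally set a flag and retry on first run
    }
    catch (TaskCanceledException)
    {
        // Request timed out - treat the same as offline
    }
}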

Background File Copy process in ASP

I have an application in Classic ASP. On the click of a button, it copies a file and its related folder from one folder to another, and displays a link to the user for the destination folder. The user can click on the link and get the file from the destination folder. Now, I am facing a problem with the file and its related folder size. Some of them are greater than 500 MB, so the copy process takes so long that my application gets a Time Out error. Is it possible to create some background process for the copy, and have it fire some event when the process completes?
Cheers
This is a pretty lame solution, but a solution nevertheless: you could fire off an Ajax request to a separate ASP script to do the copying, and just put a really long timeout on that script. When this completes, it could, of course, update the calling page with an alert or notification to the user, but that very much depends on the user having enough patience to keep that browser window open.
The options I tried are:
Executing the copy command from the shell - not effective, because the ASP page waits for the shell command to finish.
Creating a trigger in the SQL database that fires when a new row is added to the table, then copying the files and sending an email to the user using T-SQL. This hurts my overall database performance.
The AJAX solution also waits for the process to end.
The solution I have now implemented is this: the ASP page just creates a request and displays a message telling the user that they will get a confirmation email. I then created a small Windows application which keeps watching for any request generated by the ASP page; as soon as a request comes in, it starts copying the files and, at the end, sends an email to the user as confirmation.
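For what it's worth, a minimal C# sketch of such a watcher application is below (the watched folder, the request-file convention and the mail step are all assumptions made for this illustration):
using System;
using System.IO;

class CopyWatcher
{
    static void Main()
    {
        // Hypothetical folder where the ASP page drops small "*.req" files containing a source path
        var watcher = new FileSystemWatcher(@"D:\CopyRequests", "*.req");
        watcher.Created += (sender, e) =>
        {
            string source = File.ReadAllText(e.FullPath).Trim();
            string dest = Path.Combine(@"D:\Published", Path.GetFileName(source));

            Directory.CreateDirectory(dest);
            foreach (string file in Directory.GetFiles(source))
                File.Copy(file, Path.Combine(dest, Path.GetFileName(file)), true);

            // Send the confirmation email here (e.g. System.Net.Mail.SmtpClient)
            File.Delete(e.FullPath);
        };
        watcher.EnableRaisingEvents = true;

        Console.WriteLine("Watching for copy requests. Press Enter to exit.");
        Console.ReadLine();
    }
}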
This solution is working for my requirements; please do share if you have a better and more robust solution for the scenario.
Cheers.
I thought of another idea. I'm not sure of the exact way to do this on an IIS server, but if I were running on a Linux server, I would set up a cron job to run a web script every 5 minutes or so. The script would check for new files and perform the copying. Since copying could take more than 5 minutes, you would probably need to keep track of files in an XML file or db or something.
This would free you from writing/maintaining a separate Windows desktop app.
