Windows 10 Explorer displays wrong values for total size and free space for WebDAV drive

I am developing a WebDAV connection for a database-based ECM system using the IT Hit WebDAV Engine and .NET 5.
I have mapped the WebDAV location to a network drive in Explorer.
When I list all drives in Explorer, the total size and free space of the WebDAV drive incorrectly show the same values as the C: drive.
As far as I have seen, the responses that the WebDAV server sends back to Explorer (Microsoft-WebDAV-MiniRedir/10.0.19042) do not contain any information about total size or free space at all.
Is it possible to send this data in the WebDAV response (are there any special properties for this)?
Or is it possible to make Explorer show, if not the correct values, then at least no values at all?

I see the same behavior with any WebDAV server, including the one from IT Hit. Mounting the drive on Windows uses the Microsoft Mini-Redirector driver to mount the WebDAV file system. This driver always reports the same size as drive C:. Unfortunately, it does not request the used and available space from the server, even though the server properly reports the free and used space.
I guess one of the reasons for this behavior is that the Mini-Redirector caches file content on the system drive and is therefore probably limited by the system drive's size.
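For reference, the properties in question are the RFC 4331 quota properties, DAV:quota-available-bytes and DAV:quota-used-bytes, which a server can return in its PROPFIND responses. A minimal sketch of what that property block looks like, built here with System.Xml.Linq purely for illustration (the byte values are placeholders):

    using System;
    using System.Xml.Linq;

    class QuotaPropertySketch
    {
        static void Main()
        {
            // RFC 4331 quota properties; the byte counts are placeholder values.
            XNamespace dav = "DAV:";
            var prop = new XElement(dav + "prop",
                new XElement(dav + "quota-available-bytes", 107374182400), // ~100 GB free
                new XElement(dav + "quota-used-bytes", 53687091200));      // ~50 GB used

            Console.WriteLine(prop);
        }
    }

As noted above, though, even when the server returns these properties the Mini-Redirector never asks for them, so Explorer keeps showing the C: drive figures.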

Related

Umbraco, memory. How to find the cause?

For two weeks we have had a problem with one of our websites: during "rush hours" (Analytics shows ~170-200 people in real time) it consumes all available memory (16 GB). The normal state is 2-3 GB allocated. Memory growth isn't constant; sometimes it is very rapid (4 to 16 GB in about two minutes), other times it is gradual. This behaviour looks the same on both of our servers (Server1 and Server2).
Server configuration:
Traffic between the public internet and the actual application servers is handled by HAProxy; currently all traffic is directed to Server1, and if Server1 stops responding, traffic goes to Server2 (active-backup configuration). The website database (MSSQL 2008 R2) is on Server3. On the Media server we store all files from the application's virtual /media folder. That server runs Linux and gives us no problems.
Server1, Server2 and Server3 are VMs placed on one physical machine (Debian Linux with KVM, latest version of libvirt from backports; the machines are rock-stable, especially the DB server). The Media server is a physical storage machine.
Server1:
OS – Windows Server 2012 Standard
CPU – 8x2GHz
RAM – 16GB
IIS8
Server2:
OS – Windows Server 2008 Web
CPU – 4x2GHz
RAM – 4GB
IIS7
Common things for both servers:
The site is based on Umbraco 4.7 and .NET 4.0. The media folder is connected as a "network location" physically placed on the Media server (Linux Samba 3.x). In the database we have about 25,000 nodes.
We observed that the website makes intensive use of the connection to the Media server (up to 200 Mbit/s).
We have changed the URLs so that media requests no longer pass through IIS.
The website was moved between Windows Server 2008 and Windows Server 2012, yet the problem remains.
We thought the problem lay in the code, so we rolled back all the changes from the last month (using our code repository); that didn't solve the problem though.
We have already used tools such as DebugDiag and ANTS Memory Profiler.
http://imageshack.us/a/img823/8151/p4performancemonitor.png
http://imageshack.us/a/img838/2319/p4tasks.png
How else can we check where the problem lies?
Here are some things to check that have helped me in the past when I've had memory issues with Umbraco sites:
1) Is the site using Linq 2 Umbraco in the code anywhere? This has quite a few problems with memory usage, especially under heavy load, so it's possible that this could be the cause of your problems. If you are using it, look at your code for inefficient Linq queries, and consider replacing the code with something using the Node API or XSLT instead.
2) Is the site running any custom .net code that uses the Document API on the front end? In general this can be pretty slow and resource intensive, and should be avoided if possible on the front end (back office is fine).
3) Check any other custom code for potential memory leaks or inefficient use of resources (one common pattern is sketched after this list).
4) Have a look through the old issue logs for Umbraco (can't find a link at the moment, sorry!), that should give you an idea of whether you're experiencing a known issue with that particular version of Umbraco. If it is, you may need to upgrade (which may or may not be a major hassle depending on your Umbraco setup).
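For point 3, one pattern worth grepping for in custom code (purely an illustration of the kind of thing to look for, not something I know is in your site) is an unbounded static cache, which under heavy traffic produces exactly this sort of load-dependent memory growth:

    using System;
    using System.Collections.Concurrent;

    public static class MediaInfoCache
    {
        // Grows without bound: every distinct URL adds an entry that is never evicted,
        // and each entry can pin a large byte[] (e.g. content fetched from the media server).
        private static readonly ConcurrentDictionary<string, byte[]> Cache =
            new ConcurrentDictionary<string, byte[]>();

        public static byte[] Get(string url, Func<string, byte[]> fetch)
        {
            return Cache.GetOrAdd(url, fetch); // entries are never removed, so memory climbs with traffic
        }
    }

The usual fix is to cache through something with an eviction policy (HttpRuntime.Cache or System.Runtime.Caching.MemoryCache with expirations) instead of a bare static dictionary.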
Hope that helps!

Hosting big files for users

We need to be able to supply big files to our users. The files can easily grow to 2 or 3 GB. These files are not movies or similar; they are software needed to control and develop robots in an educational capacity.
We have some disagreement in our project group about how to approach this challenge. First of all, BitTorrent is not a solution for us (despite the benefits it could bring). The files will be available through HTTP (not FTP), served via a file stream so we can control who gets access to them.
As a former pirate in the early days of the internet, I have often struggled with corrupt files and used file hashes and file sets to minimize the amount of re-downloading required. I advocate a small application that downloads and verifies a file set and extracts the big install file once it is completely downloaded and verified.
My colleagues don't think this is necessary and point to the TCP/IP protocol's inherent ability to avoid corrupt downloads. They also mention that Microsoft has moved away from a download manager for their MSDN files.
Are corrupt downloads still a widespread issue or will the amount of time we spend creating a solution to this problem be wasted, compared to the amount of people who will actually be affected by it?
If a download manager is the way to go, what approach would you suggest we take?
-edit-
Just to clarify: is downloading 3 GB of data in one chunk over HTTP a problem, or should we make our own EXE that downloads the big file in smaller chunks (and verifies them)?
You do not need to build your own download manager; a few simpler measures will get you most of the way there.
1) Split the file into smaller chunks, say 100 MB each. Then, even if a download is corrupted, the user only has to re-download that particular chunk.
2) Most web servers understand and serve Range headers. You can recommend download managers or browser add-ons that take advantage of this; on Unix/Linux systems, wget is one such utility.
3) It's true that TCP/IP has mechanisms to prevent corruption, but it basically assumes the network stays up and reachable. Point 2 above is one possible workaround for cases where the connection drops completely in the middle of a download.
4) Finally, it is always good to provide a file hash to your users, not only to verify the download but also to help assure the integrity of the software you are distributing (a sketch combining points 2 and 4 follows below).
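If you do go a step further, the building blocks for resuming and verifying are small. A minimal sketch in C# (the URL, local path and published hash are placeholders; Convert.ToHexString needs .NET 5 or later):

    using System;
    using System.IO;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Security.Cryptography;
    using System.Threading.Tasks;

    class ResumableDownloadSketch
    {
        // Resumes a partial download with an HTTP Range request, then verifies a SHA-256 hash.
        static async Task Main()
        {
            var url = "https://example.com/files/robot-sdk.zip";  // hypothetical download URL
            var localPath = "robot-sdk.zip";
            var expectedSha256 = "PUT-PUBLISHED-HASH-HERE";       // hash published next to the download

            using var client = new HttpClient();
            long existing = File.Exists(localPath) ? new FileInfo(localPath).Length : 0;

            var request = new HttpRequestMessage(HttpMethod.Get, url);
            if (existing > 0)
                request.Headers.Range = new RangeHeaderValue(existing, null); // ask only for the missing bytes

            using var response = await client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead);
            response.EnsureSuccessStatusCode();

            // Append when the server honoured the Range request (206 Partial Content), otherwise start over.
            var mode = response.StatusCode == System.Net.HttpStatusCode.PartialContent
                ? FileMode.Append : FileMode.Create;
            using (var file = new FileStream(localPath, mode))
                await response.Content.CopyToAsync(file);

            // Verify the finished file against the published hash.
            using var sha = SHA256.Create();
            using var stream = File.OpenRead(localPath);
            var actual = Convert.ToHexString(sha.ComputeHash(stream));
            Console.WriteLine(actual.Equals(expectedSha256, StringComparison.OrdinalIgnoreCase)
                ? "Download verified." : "Hash mismatch - re-download needed.");
        }
    }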
HTH

Is it possible to forcibly include an IDE's directory (workspace) in the caching SSD?

I have a hybrid storage drive (500 GB+ HDD and 32 GB SSD). Can I forcibly place the Eclipse IDE and Eclipse's working directory (workspace) on the SSD portion (which is used for caching by default)?
No. A hybrid drive works in a completely seamless fashion. While you might know that it's a hybrid disk, your computer does not. There's nothing in the SATA spec to tell the hard drive what to store in the SSD cache. It's analogous to CPU cache: it's there and it makes your computer faster, but the processor controls it, not the programmer.
Fortunately for you there's no need to force the hard drive to put those things into cache. If you use them a lot they'll get cached automatically. That's the beauty of a hybrid drive.

FoxPro sometimes doesn't find files on the LAN

Sometimes a Visual FoxPro app doesn't find files on a file share even though they are there.
For example, when calling File() in a loop on an existing file on a network share, about 5% of the attempts don't find the file.
This works on most machines, but sometimes it doesn't. In the current scenario I have a Windows Server 2008 machine as the file server (perhaps an SMB2 issue?).
I would patch your 2K8 server to SP1 (and any Windows 7 clients too), this will take care of any SMB2 issues. Those issues were around CDX index file corruption, though.
It's also possible that this is due to the caching that SMB2 uses, which can produce 'File Not Found' errors. The client registry settings involved are:
FileInfoCacheLifetime
FileNotFoundCacheLifetime
DirectoryCacheLifetime
There is a discussion regarding this on Alaska Software's website, along with a useful MSI installer that can be run per workstation to adjust the settings. That company produces a product called Xbase++, but I would guess it is close enough to Visual FoxPro in terms of low-level file I/O and locking.
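A minimal sketch of adjusting those values from .NET instead of the MSI (the key path is the standard location of the SMB2 client cache settings under LanmanWorkstation; a value of 0 disables the cache in question, and you should test the performance impact before rolling this out):

    using Microsoft.Win32;

    class Smb2CacheSettingsSketch
    {
        // Disables the SMB2 client-side caches mentioned above. Requires administrative rights.
        static void Main()
        {
            const string key =
                @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters";

            Registry.SetValue(key, "FileInfoCacheLifetime", 0, RegistryValueKind.DWord);
            Registry.SetValue(key, "FileNotFoundCacheLifetime", 0, RegistryValueKind.DWord);
            Registry.SetValue(key, "DirectoryCacheLifetime", 0, RegistryValueKind.DWord);
        }
    }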
I'm not positive whether it's an issue with Fox or with your network. Going way back in time, I had a client with somewhat similar problems. We took FoxPro out of the equation and just used Windows Explorer, and it would hang for a moment. It turned out that their network cards were set to an energy-saving mode and would basically time out / shut down due to inactivity, so the network drive share would apparently be released. Until the network card reconnected and got established again, they had issues. Once we changed the setting so the network card NEVER went into energy-saving mode, the problem went away for them.
Yes. I have versions of FoxPro deployed on various servers, with various versions of Windows Server, and I have never experienced an issue like the one described.
Maybe you could try a similar test using a different programming environment: .NET, Access, Ruby, etc.
Post your test loop, just out of interest?
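Along those lines, one quick way to take FoxPro out of the equation is a small .NET loop against the same share, mirroring the File() loop from the question (the UNC path below is a placeholder):

    using System;
    using System.IO;
    using System.Threading;

    class ShareProbeSketch
    {
        // Repeatedly checks whether a file that is known to exist on the share is visible.
        static void Main()
        {
            const string path = @"\\fileserver\share\known-existing-file.dbf"; // placeholder UNC path
            int attempts = 1000, misses = 0;

            for (int i = 0; i < attempts; i++)
            {
                if (!File.Exists(path))
                    misses++;
                Thread.Sleep(50); // small pause between probes
            }

            Console.WriteLine($"{misses} of {attempts} checks failed to find the file.");
        }
    }

If this loop also reports misses, the problem is in the network/SMB layer rather than in FoxPro.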

Large file download in background, initiated from the browser

Is there any reasonable method to allow users of a web app to download large files? I'm looking for something other than the browser's built-in download dialog: the requirements are that the user initiates the download from the browser, and then some other application takes over, downloads the file in the background, and doesn't exit when the browser is closed. It might work over HTTP, FTP or even BitTorrent. Platform independence would be nice to have, but I'm mostly concerned with Windows.
This might be a suitable use for BitTorrent. It works using a separate program (in most browsers), and will still run after the browser is closed. Not a perfect match, but meets most of your demands.
Maybe BITS is something for you?
Background Intelligent Transfer Service
Purpose
Background Intelligent Transfer Service (BITS) transfers files (downloads or uploads) between a client and server and provides progress information related to the transfers. You can also download files from a peer.
Where Applicable
Use BITS for applications that need to: asynchronously transfer files in the foreground or background; preserve the responsiveness of other network applications; and automatically resume file transfers after network disconnects and computer restarts.
Developer Audience
BITS is designed for C and C++ developers.
Windows only
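If BITS fits, you don't necessarily have to start with the COM API. As a minimal sketch, the helper application launched from the browser could hand the job to BITS through the bitsadmin command-line tool (the URL and target path are placeholders, and bitsadmin is deprecated in favour of the BITS API and its PowerShell cmdlets, but it shows the idea):

    using System.Diagnostics;

    class BitsHandoffSketch
    {
        // Hands a large download off to BITS so the transfer keeps running
        // after this helper (and the browser that launched it) have exited.
        static void Main()
        {
            var url = "https://example.com/files/big-download.iso"; // hypothetical file URL
            var target = @"C:\Downloads\big-download.iso";

            Process.Start("bitsadmin",
                $"/transfer myDownloadJob /download /priority normal {url} {target}");
        }
    }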
Try Free Download Manager. It integrates with IE and Firefox.
Take a look at this:
http://msdn.microsoft.com/en-us/library/aa753618(VS.85).aspx
It's only for IE though.
Another way is to write a BandObject for IE, which hooks up on all links and starts your application.
http://www.codeproject.com/KB/shell/dotnetbandobjects.aspx
Depending on how large the files are, pretty much all web browsers have a built-in download manager. Just put a link to the file, and the browser will take over when the user clicks. You could simply recommend that people install a download manager before downloading the file, linking to a recommended free client for Windows/Linux/OS X.
Depending on how large the files are, BitTorrent could also be an option. You would offer a .torrent file, which people open in a separate download client, independent of the browser.
There are drawbacks, mainly depending on your intended audience:
Bittorrent is rarely allowed on corporate or school networks
it can be difficult to use (as it's a new concept to lots of people); for example, if someone doesn't have a torrent client installed, they get a tiny file they cannot open, which can be confusing
problems with NAT/port-forwarding/firewalls are quite common
You have to run a torrent tracker and seed the file
...but there are also benefits, mainly reduced bandwidth usage on the server, as people who download also seed the file.
