It is not a good idea to use a SQLite database, for write access, on a CIFS share. Understood.
I have a need to do so on a very infrequent basis. The database is written very infrequently on the Windows server (actually Windows 10, and only once every few weeks) and equally infrequently from the Linux server (Ubuntu 16.04.2, if it matters). The chance of simultaneous writes is near zero (which is not zero, of course).
As I understand it (and I am not sure I do), using the "nobrl" option on the mount allows this to work (and indeed it does work for me), but does so by disabling byte-range locking entirely (right? Unless there are other types?).
Is there a technique, without deploying code on the Windows side, to ensure that this is in fact safe -- options for SQLite, for example, that might not be the default? Locking the entire database during the update on the Ubuntu side is perfectly acceptable; performance is not an issue, and simultaneous access is not required. The main restriction is that I cannot change the process on the Windows side.
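For concreteness, here is the sort of Ubuntu-side update I mean -- a minimal Python sketch, with a hypothetical mount point and table:

```python
import sqlite3

DB_PATH = "/mnt/winshare/app.db"  # hypothetical CIFS mount point

# isolation_level="IMMEDIATE" makes Python issue BEGIN IMMEDIATE, taking the
# write lock at the start of the transaction rather than at the first write,
# which shrinks the window for a race with the Windows-side writer.
# timeout=60 retries on SQLITE_BUSY instead of failing at once (this only
# helps where locking actually works, which nobrl defeats).
conn = sqlite3.connect(DB_PATH, timeout=60, isolation_level="IMMEDIATE")

# Stay on the classic rollback journal; WAL mode relies on shared memory
# and is documented not to work over a network filesystem.
conn.execute("PRAGMA journal_mode=DELETE")

try:
    with conn:  # commits on success, rolls back on any exception
        conn.execute(
            "UPDATE settings SET value = ? WHERE key = ?",  # hypothetical table
            ("enabled", "sync"),
        )
finally:
    conn.close()
```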
Related
We have an application that can use both Postgres and SQLite as its backend, depending on what the customer requires: high load and concurrency, or easy setup.
But I fear that some customers will be too clever for their own good, and to get around the (slightly) complex Postgres installation, they will use SQLite but put the file on a network disk.
Then the inevitable happens and the database gets corrupted, because SQLite is not meant for that, and we have Postgres support for that very reason.
Is there an ideal way to prevent SQLite from working on a network drive? There are some questionable ideas like looking for a \\ at the beginning, or the colon in "C:\" (it's purely a Windows app), or parsing for IPs, but none of these are foolproof and all can be circumvented by making a link to a network disk.
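The closest I have come to something robust is to resolve the path first (so links are followed) and then ask Windows what kind of drive it is. A minimal sketch in Python via ctypes -- in .NET the same information is available through System.IO.DriveInfo.DriveType -- with hypothetical paths:

```python
import ctypes
import os

DRIVE_REMOTE = 4  # GetDriveTypeW's return value for network drives

def is_on_network_drive(path: str) -> bool:
    # realpath() follows symlinks and NTFS junctions, so a local link
    # pointing at a share is still caught.
    real = os.path.realpath(path)

    # UNC paths (\\server\share\...) are network by definition.
    if real.startswith("\\\\"):
        return True

    # For drive-letter paths, ask Windows what kind of drive it is.
    drive, _ = os.path.splitdrive(real)
    if drive:
        return ctypes.windll.kernel32.GetDriveTypeW(drive + "\\") == DRIVE_REMOTE
    return False

# Hypothetical usage: refuse to open the SQLite file on a network volume.
db_path = r"\\fileserver\share\app.db"
if is_on_network_drive(db_path):
    raise RuntimeError("SQLite on a network drive is unsupported; "
                       "please use the Postgres backend instead.")
```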
I have the same problem as the poser of this question:
System.Data.OracleClient requires Oracle client software version 8.1.7
I have made the changes to the security settings on the Oracle folder, and have to wait for the server to reboot overnight.
My question is: why is this reboot necessary? I am getting the same error after making the changes without rebooting, so I don't doubt that it is. Is there an alternative to rebooting the server, like IISRESET? (Although I wouldn't be allowed to run IISRESET during the day either.)
Perhaps not an answer to your specific question, but for the record, it is for this kind of reason that I always favor Oracle Instant Client:
You don't have to install anything on the target machines (including dev boxes!). So no tricky manual setup and goat sacrificing.
You can make sure that your application will run with the specific client you picked (version, x86/x64).
You could even easily have multiple applications work with different client versions on the same computer.
As a downside, it adds significant weight to your application (~19 MB minimum), and you can't participate in distributed transactions.
If you can still switch, this is the way to go IMHO. Check What is the minimum client footprint required to connect C# to an Oracle database? for more information.
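To make the "specific client you picked" point concrete: with Instant Client, the application itself decides which client libraries get loaded. A minimal sketch -- using Python's cx_Oracle rather than System.Data.OracleClient, purely as illustration, with a hypothetical directory and credentials:

```python
import cx_Oracle

# Load the exact Instant Client the application ships with, instead of
# whatever full client happens to be installed machine-wide.
cx_Oracle.init_oracle_client(lib_dir=r"C:\myapp\instantclient_19_9")
print(cx_Oracle.clientversion())  # proves which client libraries were loaded

conn = cx_Oracle.connect("scott", "tiger", "dbhost.example.com/orclpdb1")
print(conn.version)  # server version, reached via the pinned client
conn.close()
```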
Starting with Server 2003 (hosting IIS6) it is enough to restart the service to bring environment changes and security changes into effect.
But this is done with iisreset, which is not allowed either.
That's a pity; I see no other way than to wait.
It's one of those things I see a lot but never really think about. For the purpose of web application development (specifically ASP.NET WebForms/MVC), do you think it's advantageous to do such a thing, and if so, what kind of advantages come out of it?
By virtualization I mean using products like Hyper-V to separate the server context like your SQL and Web Server, etc.
First question is, virtualization of what? Do you mean server virtualization? Do you mean running VMWare on each dev's laptop with multiple OSes? Do you mean moving everything to the cloud?
Virtualization of servers, in web app context, is not really different from that in general IT - most of the servers on the Internet, including StackOverload's, are bought to handle peak loads and spend most of the time idling away the cycles, so virtualizing them makes sense when you have more than a certain amount.
VMWare on the desktop (or other parallels on other operating systems) is superb because your devs can run a full instance of your server environment, including multiple virtual servers connected in a virtual network - this is about as close to the real thing as you can get, minus hardware costs and minus devs messing with each other's servers. For clients, you can use Linux and multiple Windows installs to test various browsers, font sizes, etc. quickly - also a big win.
Moving everything to the cloud makes sense in many cases, but is probably a topic for a separate full-sized question :)
One big advantage I see is that every developer can have his/her own sandbox to work in. If someone messes up his/her sandbox, he/she can take a clean image and all is OK again. So there is room to experiment without losing valuable time getting back to the normal setup; you can simply do a rollback.
I have some doubts about whether you should use virtualisation for production environments, though. It depends on the application, of course.
The only time I would use a virtual machine for ASP.NET development would be if the app required a specific setup, such as relying on installed software, weird settings, or particular shares. Every developer has their own web server and can run their own database, so if it's a "basic" web app I don't see much value in virtuals... it's pretty hard to break anything with a basic web app deployment :)
With a virtual server, you can test your code in a production-like environment. It is also possible to quickly revert back to the original setup. For many applications, it is useful in that time period just after you write the code, but before it goes to production.
I'm a fan of virtualization and use it in testing and production (VMWare and Hyper-V), but over the last year I've found it less important on a dev machine. TFS provides me with all the backup/rollback ability that I need, multiple versions of .NET can now exist on the same machine, and VS2008 can target all those versions.
In a development environment, a virtual environment is useful for putting several different servers on one box: you can have an instance for your web app, one for your services, one for your database, etc. That way it mimics your production environment if you are using separate servers.
One of the benefits of using virtualization in production is that your application is not tied to a specific machine. If you wanted to move your web server instance to another box, it is trivial to do so. You don't need to install or configure things on the new server and hope that everything is set up properly.
One problem I have had, though, in testing virtual instances is that they can run slower for some applications, specifically engineering apps that like running the CPU at 100%. So test before you leap.
Improving Web Development Using Virtualization
https://web.archive.org/web/20090207084158/http://aspnet.4guysfromrolla.com:80/articles/102908-1.aspx
Virtualization is, in essence, creating multiple miniature (virtual) PCs inside of your primary PC. One of the great benefits of this is that it allows you to isolate and test an application or set of applications in an environment that is free of other things to interfere. It used to be that in order to get a new machine with a new development environment on it you had to have another piece of hardware, or you had to rebuild your system to the new environment. With virtualization, you simply install the new environment that you need into one of the virtual machines and you run it as necessary. When you're done you can shut it down.
Virtualization is the ultimate in isolation -- it can allow you to do things on one piece of hardware that are simply not possible without it. For instance, you can install software in a test environment on a member server because it won't run on a domain controller. You simply fire up two virtual machines at the same time -- one being the domain controller and the other being the member server. Both virtual machines can run on the same physical hardware at the same time without either being aware that they are sharing a machine. The result is a quick way to implement testing environments.
Virtualization technology allows for the virtual systems to be frozen in place. In other words, the exact spot in the machine that you are at can be frozen for an indefinite period of time. If you work on one project until it's released and stable and need to come back in a year and start working on it again, you can freeze the system when you stop working on the project and then restart it a year -- or more -- later. When the system is restarted it will be like time had not passed. The system will be restored exactly as it was left.
This particular feature is great for developers who support multiple systems including consultants who have different clients with different projects that they will have to support over time. You don't have to worry about recreating an environment to test a bug fix; you simply thaw out your virtual machine and go.
Virtualization programs have a feature described as undo disks. Undo disks allow you to operate on the system, and if you decide that you don't want to save your work, you simply don't accept the changes in the undo disks. Poof. Like magic, everything that you did is undone and it's like it never happened.
I'm planning on doing more coding from home but in order to do so, I need to be able to edit files on a Samba drive on our dev server. The problem I've run into with several editors is that the network latency causes the editor to lock up for long periods of time (Eclipse, TextMate). Some editors cope with this a lot better than others, but are there any file system or other tweaks I can make to minimize the impact of lag?
A few additional points:
There's a policy against having company data on personal machines, so I'd like to avoid checking out the code locally.
The mount is over a PPTP VPN connection.
I'm mounting from a Linux or OS X client.
Use a source control system — Subversion, Perforce, Git, Mercurial, Bazaar, etc. — so you're never editing code on a shared server. Instead you should be editing a local work area and committing changes to a repository located on the network.
Also, convince your company to adapt their policy such that company code is allowed on personal machines if it's on an encrypted volume. Encrypted disk images that you can use for this are trivial to create using Disk Utility, and can use strong cryptography. You can get even more security by not storing your encryption passphrase in your keychain, and instead typing it every time you mount the encrypted volume; this means that even if your local user account is compromised, as long as you don't have the volume mounted, nobody else will be able to mount it.
I did this all the time when I was consulting and none of my clients — some of whom had similar rules about company code — ever had a problem with it once I explained how things worked. (I think some of them even started using encrypted disk images even within their offices.)
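If you want to script the encrypted-image part, here is a minimal sketch driving macOS's hdiutil (the image name, size, and volume name are hypothetical):

```python
import getpass
import subprocess

passphrase = getpass.getpass("Image passphrase: ")

# Create an AES-256 encrypted image; -stdinpass reads the passphrase from
# stdin so it never appears in the process's argument list.
subprocess.run(
    ["hdiutil", "create", "-size", "2g", "-fs", "HFS+J",
     "-volname", "ClientCode", "-encryption", "AES-256",
     "-stdinpass", "clientcode.dmg"],
    input=passphrase.encode(), check=True,
)

# Mount it -- typing the passphrase each time rather than storing it in
# the keychain, as suggested above.
subprocess.run(
    ["hdiutil", "attach", "-stdinpass", "clientcode.dmg"],
    input=passphrase.encode(), check=True,
)
```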
The Remate plugin simply disables TextMate's dreadful refresh-on-focus feature.
Download, unpack, double-click, and choose "Disable Refresh on Regaining Focus" from the "Window" menu (you can refresh manually by right-clicking the project in the drawer). Voila!
If you are accessing the data from your personal computer, it is in your RAM, so we will assume that you just can't store it on your hard drive, floppy, USB stick, etc.
Your solution is a RAM drive. Copy the files you need to edit there using whatever method you prefer (I would suggest source control) and then you can edit them without lag. When you are done, commit them back to the server. The Linux version of this is sketched below.
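On Linux the RAM drive itself is a one-liner with tmpfs. A minimal sketch of the copy-edit-copy-back loop, with hypothetical mount points and paths (note that tmpfs pages can still be swapped out, which ties into the caveat in the next paragraph):

```python
import subprocess

RAMDISK = "/mnt/ramdisk"          # hypothetical RAM drive mount point
SHARE = "/mnt/devserver/project"  # hypothetical Samba mount

# Create a RAM-backed filesystem; files here never touch the local disk
# (though tmpfs pages can be swapped out if swap is enabled).
subprocess.run(
    ["sudo", "mount", "-t", "tmpfs", "-o", "size=512m", "tmpfs", RAMDISK],
    check=True,
)

# Pull the working set into RAM for lag-free editing...
subprocess.run(["rsync", "-a", SHARE + "/", RAMDISK + "/"], check=True)

# ...edit in your editor of choice, then push the changes back.
subprocess.run(["rsync", "-a", RAMDISK + "/", SHARE + "/"], check=True)
```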
As was pointed out, your editor may be caching changes to your temp directory, or maybe even your swap file (if it is in memory, then it can get swapped out). The solution to that is to get a much larger RAM drive and run a virtual machine in the RAM drive. Not sure what OS you are running, but you can get a pretty slim install of most OSes if all you are doing is editing source code.
If you don't have enough RAM, then get a Gigabyte i-RAM solid state drive and remove the battery, that way it will lose everything when you power down.
Set your VMWare to not allow the OS to swap any of the virtual machine. Keep a baseline VM on your hard drive and copy it to your RAM drive before booting it up. Then you can use the hard drive in the VM like a hard drive, even though it is RAM.
It might be a good idea to run a secure erase on your RAM drive before powering down. Also keep in mind that researchers have found that if you super-cool a RAM chip before removing it from a functioning computer and place it in a new computer quickly enough, the data may still be intact.
I guess it all comes down to how detailed that policy is, and how it is interpreted.
Good luck!
Short answer: there is no trick. CIFS is really geared toward a LAN with reasonably calm traffic, so you have no chance of avoiding intermittent lag when accessing a share through a VPN. At some point the editor needs to access the file with blocking I/O, because it makes no real sense to do otherwise.
You could switch editors and use Emacs + TRAMP, which is designed for working with remote files (you open paths like /ssh:user@host:/path/to/file and TRAMP handles the remote access behind the scenes).