Best practice: where to put the URL that configures my app? (Qt)

We have a Qt app that, on startup, connects to a servlet to fetch the configuration parameters it needs to keep running.
The URL may change frequently because we have to test the application in several environments. Right now (as a temporary solution) the URL is a constant in the source code, but that is a little ugly.
Where is the best place to maintain this URL, so that we do not need to change the source code every time we want to switch the target environment?
In a database table (the application already uses a SQLite DB), in a settings file, or in some other way?
Thank you for your replies.

You have a number of options:
Hard coded (like you have already)
Run-time user input
Command line arguments
QSettings
Read from a bespoke file as text.
I would think option 3 (command line arguments) would be the simplest to implement without being intrusive, but it does depend on what kind of application you have.
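For illustration, here is a minimal sketch combining options 3 and 4, written against the Python Qt bindings (PySide6); the C++ QSettings API is analogous. The organization/application names, the settings key, and the fallback URL are assumptions, not anything from the question.

    import argparse
    from PySide6.QtCore import QSettings

    FALLBACK_URL = "https://dev.example.com/config"  # hypothetical last-resort default

    def resolve_config_url() -> str:
        # Option 3: an explicit command line argument wins if present.
        parser = argparse.ArgumentParser()
        parser.add_argument("--config-url", help="servlet URL that provides runtime configuration")
        args, _ = parser.parse_known_args()
        if args.config_url:
            return args.config_url
        # Option 4: otherwise fall back to a persisted QSettings value, then to the default.
        settings = QSettings("MyCompany", "MyApp")
        return str(settings.value("servlet/url", FALLBACK_URL))

    if __name__ == "__main__":
        print(resolve_config_url())

With something like this the constant only serves as a last resort, and each test environment can be switched by a launcher script or a one-line settings change.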

I would keep the list of URLs in a document, e.g. an XML file, stored in a central, well-known place (e.g. a known web server), and hard-code only the URL of that place in the app.
The list can then be edited externally without recompiling your app.
At startup the app downloads and parses the list and points to the right servlet based on an environment name specified as a command line parameter.
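As a rough sketch of that startup logic (in Python for brevity; the list URL and the XML layout with environment elements are invented here, not part of the answer):

    import sys
    import urllib.request
    import xml.etree.ElementTree as ET

    LIST_URL = "https://config.example.com/environments.xml"  # the single hard-coded URL

    def servlet_url_for(env_name: str) -> str:
        # Download the central list and pick the entry for the requested environment.
        with urllib.request.urlopen(LIST_URL) as resp:
            root = ET.fromstring(resp.read())
        for env in root.iter("environment"):
            if env.get("name") == env_name:
                return env.get("url")
        raise KeyError(f"unknown environment: {env_name!r}")

    if __name__ == "__main__":
        environment = sys.argv[1] if len(sys.argv) > 1 else "dev"
        print(servlet_url_for(environment))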

How to access webpack generated filenames in a plugin?

So, here is my situation. I have a JavaScript application where I'm appending the hashes to the filenames, as is the standard for Webpack output. This way the content can be safely cached by the browser, with the fresh load controlled by the changing file hash.
My problem is I have a situation where I need other applications to access mine, and they won't be able to be updated every time the hash changes. So I need a request like this:
https://my-domain.com/assets/js/app.js
to be redirected to
https://my-domain.com/assets/js/app.ab12cd34.js
My application currently uses nginx to serve up the pages, but nginx is static. I don't know how to configure it to dynamically identify the hashed file name and return it.
The app is being deployed to a Pivotal CloudFoundry environment. PCF supports evaluating dynamic Ruby code in an nginx.conf file, so that seemed like an easy way around this. Unfortunately, my company requires that the nginx.conf go through a special parser to enforce security headers. This parser only knows nginx syntax, and mangles any Ruby code there.
So, that leaves me with Webpack. I started investigating ways for Webpack to modify files during the build process, and I discovered the transform() function in the copy-webpack-plugin. It has the ability to modify the files exactly how I need. What is still a challenge, though, is getting the hash filename.
So, I'm hoping there's some way to gain access to what the hash filename will be in this plugin, so that I can inject it into the nginx.conf.
Alternatively, if someone knows another way to get around my core problem, I'm all ears.
You can use webpack-manifest-plugin to create a manifest file with a mapping from each original filename to its hashed chunk/bundle name.
This manifest file can then be consumed by any piece of software that needs it.
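For example, assuming the plugin's default manifest.json output (a JSON object mapping the original name to the hashed file), a small deploy-time script could splice the hashed name into an nginx config template; the manifest path, template file, and placeholder token below are assumptions:

    import json
    from pathlib import Path

    # Mapping produced by webpack-manifest-plugin, e.g. {"app.js": "app.ab12cd34.js"}
    manifest = json.loads(Path("dist/manifest.json").read_text())
    hashed_name = manifest["app.js"]

    # Replace a placeholder in the template with the current hashed filename.
    template = Path("nginx.conf.template").read_text()
    Path("nginx.conf").write_text(template.replace("__APP_JS__", hashed_name))

The template could then contain a location block that redirects /assets/js/app.js to the hashed path, so downstream applications keep using the stable URL.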

Development shell in ASP.NET

I write a lot of code, and most of it I eventually throw away when I am done with it. Recently I was thinking that if I just kept every small utility script I wrote, named it, tagged it, and filed it in a dev shell, I would never lose the code. On top of that I wouldn't need to redo something I have already done, which is the main motivation, as I keep finding myself rewriting things I've written before.
Is there an ASP.NET shell-style environment anywhere?
If not, what would be the best way to go about this?
I am looking to be able to do the following:
Write big or small bits of code.
Derive from or chain together already written code/libraries/services.
Have everything on my desktop (would that mean IIS on the desktop, or is there a lighter-weight mechanism?), synced with the server at home, so that when I am on the move I can still access it and make it part of my day-to-day workflow.
You could build a single solution with many class library projects inside. Each project would address a specific scenario, something like this:
MyStuff (Solution)
MyStuff.Common
MyStuff.Validation
MyStuff.Web
MyStuff.Encryption
etc.
Then you can put this solution on an online versioning service like Bitbucket or Assembla, so you can access your source code from anywhere, edit it, and commit it back to the server. This way you get the advantages of versioning, and your code is stored on a remote server, so even if your hard disk breaks it's not a problem, because what's on the server is what matters.
You should either look into a source control system (Git perhaps?) or into a file storage / syncing / sharing service like DropBox.
DropBox would allow you to access code snippets from wherever you are and works really easily (just drop a file into a folder).
If you need versioning and branching you're going to have to look into a source control system. Since you have a server at home, that should be no problem.

Concurrent reading and writing of image files (ASP.NET, but applies to most web languages)

I have a .jpg file which represents the current image from a webcam. Users will be downloading this file once a second, and because there could be dozens of users reading it, that can mean dozens of requests a second (which is normal for any web server).
The problem is that this image is also updated once a second by a 3rd-party application which "spiders" my local network's webcam portal image. This is so we can build our webcams into our current administration panel.
The problem I am already seeing is that ASP.NET sometimes gets an error that it cannot access the file because the bot has it open for writing. Likewise, the bot cannot access it while IIS is feeding it to a user.
The bot uses io.streamwriter to save the data to the file, and my script uses Response.WriteFile to send the file to the client. (I need to use an actual ASP.NET page with a JPG content-type to feed the file, to make sure only users with an active session can view the JPG.)
My question is: what are the best practices for this? I know why it's happening, but what is the best resolution? Would storing the image as a BLOB in a database be smarter, since databases are built for concurrent reading and writing? Is there an easier way of doing this with a file that I have not thought of yet?
Thanks in advance,
Anthony Greco
Using a BLOB will work if the readers use SNAPSHOT isolation model (SQL Server 2005 and up). See Download and Upload images from SQL Server via ASP.Net MVC for how to stream an image from a BLOB, and see Understanding Row Versioning-Based Isolation Levels for a lecture on SNAPSHOT.
But using a BLOB may be overkill; you could get away with something much simpler. For instance, if you only have one ASP.NET process, you could have a global volatile variable holding the current file name. The writer writes the JPG into a new file and then updates the global 'current' file name with an Interlocked.CompareExchange operation (it has to be a compare-exchange because a newer writer might actually finish faster, outrunning a previous writer, and you want to preserve the latest update). There are still some issues left to solve (finding the file name at startup, cleaning up old files, etc.), but they are all fairly easy to solve.
If you have a farm of servers, or multiple ASP.NET processes serving the site, then things could get complicated. I would still use a rotating file name and take a trial-and-error approach (try to respond with the newest file, and fall back to the previous, older one if a conflict is detected).
You could get the bot to write the data to a different filename and then do a delete and rename to the filename being served by ASP.Net. This should reduce the file lock time down to the time for a delete and rename to occur. To clarify:
ASP.Net serving image from "webcam.jpg"
bot writes image data to "temp.jpg"
when the last image byte is written, the bot deletes "webcam.jpg" and renames "temp.jpg" to "webcam.jpg"
ASP.NET should check that "webcam.jpg" exists; if not, wait 10 ms (or a suitably small increment) and check again.
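The same pattern sketched in Python for brevity (the original bot is .NET): write the whole image to a temporary file, then swap it into place in a single operation, so readers never see a partially written file. The file names here are assumptions.

    import os
    import shutil
    import tempfile

    def publish_frame(new_frame_path: str, served_path: str = "webcam.jpg") -> None:
        directory = os.path.dirname(served_path) or "."
        fd, tmp_path = tempfile.mkstemp(dir=directory, suffix=".jpg")
        os.close(fd)
        shutil.copyfile(new_frame_path, tmp_path)  # write the complete image first
        os.replace(tmp_path, served_path)          # then swap it into place in one step

os.replace overwrites the destination in a single operation on both Windows and POSIX, which removes most of the window in which a separate delete-then-rename sequence could leave "webcam.jpg" missing.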

How to let humans and programs access the same file without stepping on each others' toes

Suppose I have a file, urls.txt, that contains a list of URLs I'm monitoring. My monitoring script edits that file occasionally, say, to indicate whether each URL is reachable. I'd like to also manually edit that file, to add to or change the list of URLs. How can I allow that such that I don't have to think about it when manually editing?
Here are some possible answers. What would you do?
Engage in hackery like having the program check for the lockfiles that vim or emacs create. Since this is just for me, this would actually work.
If the human edits always take precedence, just always have the human clobber the program's changes (eg, ignore the editor's warning that the file has changed on disk). The program can then just redo its changes on its next loop. Still, changing the file while the user edits it is not so nice.
Never let a human touch a file that a program makes ongoing modifications to. Rethink the design and have one file that only the human edits and another file that only the program edits.
Give the human a custom tool to edit the file that does the appropriate file locking. That could be as crude as locking the file and then launching an editor, or a custom interface (perhaps a simple command line interface) for inserting/changing/deleting entries from the file.
Use a database instead of a flat file and then the locking is all taken care of automatically.
(Note that I concocted the URL monitoring example to make this more concrete and because what I actually have in mind is perhaps too weird and distracting -- this question is strictly about how to let humans and programs both modify the same state file.)
I'd use a database since that's basically what you're going to have to build to achieve what you want. Why re-invent the wheel?
If a full-blown DBMS is too much of a load, separate the files into two and synchronize them periodically. Whether the URL is reachable doesn't sound like something the user would be changing, so should not be editable by them.
During the synchronize process (which would have to lock out both the monitor and the user, although it could be a sub-function of the monitor), remove entries in the monitor file that aren't in the user file. Also, add to the monitor file those that have been added to the user file (and start monitoring them).
But, I'd go the database method with a special front-end for the user, since you can get relatively good light-weight databases nowadays.
Use a sensible version control system!
(Git would work well here).
That said, the nature of the problem implies that a real database would be best - and they will generally have either database-level, table-level, or row-level locking - but then put any scripts you need into version control.
I would go with option 3. In fact, I would have the program read the human-edited input file, and append the results of each query to a log file. In this way, you can also analyse the reachability of sites over time. You can also have the program maintain a file that indicates the current reachability state of each site in the input file, as a snapshot of the current state.
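A minimal sketch of that split, assuming a plain urls.txt maintained by hand and a separate log that only the monitor appends to (file names and log format are made up for illustration):

    import time
    import urllib.request

    def is_reachable(url: str) -> bool:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return 200 <= resp.status < 400
        except Exception:
            return False

    # The human edits urls.txt; the monitor only reads it.
    with open("urls.txt") as f:
        urls = [line.strip() for line in f if line.strip() and not line.startswith("#")]

    # The monitor appends results to its own file, so the two never collide.
    with open("reachability.log", "a") as log:
        for url in urls:
            status = "up" if is_reachable(url) else "down"
            log.write(f"{time.strftime('%Y-%m-%dT%H:%M:%S')}\t{url}\t{status}\n")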
One other option is using two files, one for automated access and one for manual. You'd need a way in the user file to indicate modifications or deletions but you'd have similar problems in some of the other solutions as well.

SCM for ASP.net

As part of my overall development practices review I'm looking at how best to streamline and automate our ASP.net web development practices.
At the moment, our process goes something like this:
Designer builds frontend as static HTML/CSS on a network share. This gets tweaked until signed off. (e.g. http://myserver/acmesite_design)
Once signed off, developer takes over and copies over frontend HTML/CSS to a new directory on the same server (e.g. http://myserver/acmesite_development)
Multiple developers work on local copy until project is complete.
Developer publishes code to an external publicly accessible server for a client to review/signoff.
Edits made locally based on feedback.
Republish to external server.
Signoff
Developer publishes to live public server
What goes wrong? Lots of things!
Version Control — this is obviously a must and is being introduced
Configuration errors — many many times, there are environment specific paths and variables (such as DB names, image upload directories, web server paths etc. etc.) which incorrectly get copied from local to staging to live etc. etc. with very embarrassing results.
I'm pretty confident I've got no. 1 under control. What about configuration management? Does anyone have any advice on how best to manage an application's structure within ASP.NET apps to minimize these kinds of problems?
I found that using SVN, NAnt, and NUnit with CruiseControl.NET solves a lot of the issues you describe. I think it works well for small groups, and it's all free; you just need to learn how to use the tools.
CruiseControl.net helps you put together builds and continuous integration.
Use NAnt or MSBuild to do different environment builds (DEV, TEST, PROD, etc).
http://confluence.public.thoughtworks.org/display/CCNET/Welcome+to+CruiseControl.NET
You got the most important part right. Use version control. Subversion is a good choice.
I usually store configuration along with the site; i.e. when coding a PHP-based site I have a file named config.php-dist. If you want the site to work at all, you have to copy that file and edit in all the required parameters (this avoids storing passwords in version control). The -dist file should have reasonable defaults.
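The same idea, translated into a hedged Python sketch (file names and keys are invented): a committed config.dist.json holds safe defaults, and an uncommitted config.json holds the per-environment values.

    import json
    from pathlib import Path

    def load_config() -> dict:
        # Committed defaults, safe to keep in version control.
        config = json.loads(Path("config.dist.json").read_text())
        # Per-environment overrides (DB names, upload paths, passwords), never committed.
        local = Path("config.json")
        if local.exists():
            config.update(json.loads(local.read_text()))
        return config

    settings = load_config()
    db_name = settings["db_name"]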
Upload directories should be relative if possible; actually, all directories should be relative. I'm not experienced in ASP.NET, but if it's anything like PHP the current directory is always the directory of the file being requested. If you channel all requests through a single file (i.e. index.asp), then the base path can even be found programmatically, for example with the equivalent of dirname(__FILE__) in your configuration file.
I also recommend installing IIS (or whatever web server you are using) on all development workstations (including the designers'). It makes life easier, as no one can step on anyone else's toes. All one has to do is add test hosts to the hosts file (\windows\system32\drivers\etc\hosts, iirc) in addition to adding a site to the local IIS. This plays well with version control (check out, add the site to IIS and the hosts file, edit, edit, edit, commit).
One thing that really helps is keeping your paths relative where you can and centralising them where you can't. When I've been working with ASP.NET I have tended to use web.config to store any configuration and path-related data that can't be found programmatically. It is quite possible to find information like your current application path programmatically through the Request object, so it's worth looking in some detail at what the environment makes available to you.
One way to make sure you don't end up with something that depends on the path name is to have a continuous integration server execute your test suite against your application, creating a random file path for each run. As soon as someone introduces a dependency on the file path, the build will fail.
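A rough sketch of that check (the copy filter and the test command are placeholders for whatever your build actually runs):

    import shutil
    import subprocess
    import tempfile
    from pathlib import Path

    source = Path(".").resolve()
    with tempfile.TemporaryDirectory() as random_root:  # a fresh, random path every run
        workdir = Path(random_root) / "app"
        shutil.copytree(source, workdir, ignore=shutil.ignore_patterns(".git", "bin", "obj"))
        # Run the test suite from the random location; any absolute-path dependency fails here.
        subprocess.run(["python", "-m", "pytest"], cwd=workdir, check=True)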
