So, here is my situation. I have a JavaScript application where I'm appending content hashes to the filenames, as is standard for Webpack output. This way the content can be safely cached by the browser, with fresh loads triggered by the changing file hash.
My problem is that other applications need to access mine, and they can't be updated every time the hash changes. So I need a request like this:
https://my-domain.com/assets/js/app.js
to be redirected to
https://my-domain.com/assets/js/app.ab12cd34.js
My application currently uses nginx to serve the pages, but an nginx configuration is static. I don't know how to configure it to dynamically identify the hashed file name and return it.
The app is being deployed to a Pivotal CloudFoundry environment. PCF supports evaluating dynamic Ruby code in an nginx.conf file, so that seemed like an easy way around this. Unfortunately, my company requires that the nginx.conf go through a special parser to enforce security headers. This parser only knows nginx syntax, and mangles any Ruby code there.
So, that leaves me with Webpack. I started investigating ways for Webpack to modify files during the build process, and I discovered the transform() function in the copy-webpack-plugin. It has the ability to modify the files exactly how I need. What is still a challenge, though, is getting the hashed filename.
So, I'm hoping there's some way to find out, from within this plugin, what the hashed filename will be, so that I can inject it into the nginx.conf.
Alternatively, if someone knows another way to get around my core problem, I'm all ears.
You can use the webpack-manifest-plugin to create a manifest file with a mapping from each source filename to its hashed bundle filename.
This manifest file can then be consumed by any piece of software that needs it.
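For example, a minimal sketch of the webpack side (the named import matches recent versions of the plugin; older ones export it as the default):

    // webpack.config.js
    const { WebpackManifestPlugin } = require('webpack-manifest-plugin');

    module.exports = {
      entry: { app: './src/index.js' },
      output: {
        filename: 'assets/js/[name].[contenthash:8].js',
      },
      plugins: [
        // writes manifest.json, e.g. { "app.js": "assets/js/app.ab12cd34.js" }
        new WebpackManifestPlugin({ fileName: 'manifest.json' }),
      ],
    };

Whatever generates or templates your nginx.conf (or any other consumer) can then read manifest.json at deploy time and emit the redirect from /assets/js/app.js to the current hashed name.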
I am trying to use nginx to serve static content (images/CSS, etc.).
I need to spin up multiple nginx instances to match the incoming load.
So I am looking at a Mongo + GridFS solution to store the static files, since it provides replication and sharding.
I see I can serve content from GridFS using either of these modules:
Direct nginx module -
https://github.com/mdirolf/nginx-gridfs
Using Lua scripting language
https://github.com/bigplum/lua-resty-mongol
The question is: can I create an UploadImage API in nginx itself to store files in GridFS when a user calls a POST method passing the file?
It looks to me like this is possible using the lua-resty module, but I'm not sure. Any ideas?
You can use the lua-resty-upload module to handle user uploads, and then pass the data over to lua-resty-mongol for writing to Mongo.
For large files you may be able to write the chunks directly as they are read, to avoid buffering all of the data in memory; there's a good example on the module's page using a file.
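A minimal sketch of the content handler (the upload-reading loop follows the lua-resty-upload docs; the GridFS calls at the end are assumptions to verify against the lua-resty-mongol docs):

    -- nginx.conf:  location = /UploadImage { content_by_lua_file upload_image.lua; }
    local upload = require "resty.upload"
    local mongol = require "resty.mongol"

    local form, err = upload:new(4096)          -- read the request in 4 KB chunks
    if not form then
        ngx.log(ngx.ERR, "failed to create upload form: ", err)
        return ngx.exit(500)
    end
    form:set_timeout(1000)                      -- 1 s socket timeout

    local chunks = {}
    while true do
        local typ, res, err = form:read()
        if not typ then
            ngx.log(ngx.ERR, "failed to read: ", err)
            return ngx.exit(500)
        elseif typ == "body" then
            chunks[#chunks + 1] = res           -- buffered here; stream instead for large files
        elseif typ == "eof" then
            break
        end
    end

    local conn = mongol:new()
    local ok, err = conn:connect("127.0.0.1", 27017)
    if not ok then
        return ngx.exit(500)
    end
    -- assumed GridFS API; check the lua-resty-mongol docs for the exact calls
    local db = conn:new_db_handle("images")
    local fs = db:get_gridfs("fs")
    fs:insert(table.concat(chunks))
    ngx.say("stored")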
I have used the resty upload module along with the lua-resty-mongol module, and it works well.
Now I've gotten a suggestion from people around me to see if we can use Java instead of Lua for the DB connections, primarily to store and retrieve the static file content.
I see there is a Java module as well that could do the job, or PHP or Python could be used in nginx.
The question is: what would be the difference between using these languages (Lua vs. Java vs. PHP), and what factors should I consider when picking one: performance, solution usage, packaging, etc.?
We have a Qt app that when it starts tries to connect to a servlet to get config parameters that it needs to keep running.
The URL may change frequently because we have to test the application in several environments. Right now (as a temporary solution) the URL is a constant in source code, but it is a little bit ugly.
Where is the best place to maintain this URL, so that we do not need to change the source code every time we want to change the target environment?
In a database table maybe (my application uses a SQLite DB), in a settings file, or in some other way?
Thank you for your replies.
You have a number of options:
Hard coded (like you have already)
Run-time user input
Command line arguments
QSettings
Read from a bespoke file as text.
I would think option 3 would be the most simple to implement without being intrusive, but it does depend on what kind of application you have.
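A minimal sketch of option 3 using QCommandLineParser (available since Qt 5.2); the option name and default URL are made up:

    #include <QCoreApplication>
    #include <QCommandLineParser>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        QCommandLineParser parser;
        // e.g.  myapp --config-url http://test-host/config
        QCommandLineOption urlOption("config-url",
                                     "URL of the configuration servlet.",
                                     "url",
                                     "http://localhost:8080/config"); // fallback default
        parser.addOption(urlOption);
        parser.process(app);

        const QString configUrl = parser.value(urlOption);
        // ... fetch the config parameters from configUrl ...
        return 0;
    }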
I would keep the list of URLs in a document, e.g. an XML file, stored in a central, well-known place (e.g. a known web server), and hardcode the URL of that place in the app.
The list could then be edited externally without recompiling your app.
The app would, at startup, download and parse the list, pointing to the right servlet based on an environment name specified as a command-line parameter.
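A sketch of that startup step, assuming a made-up list format with entries like <environment name="test" url="..."/> under a root element:

    #include <QEventLoop>
    #include <QNetworkAccessManager>
    #include <QNetworkReply>
    #include <QXmlStreamReader>

    // Returns the servlet URL for the given environment name,
    // or an empty string if it is not in the list.
    QString servletUrlFor(const QString &envName)
    {
        QNetworkAccessManager manager;
        QNetworkReply *reply = manager.get(QNetworkRequest(
            QUrl("http://wellknown.example.com/environments.xml"))); // hypothetical

        QEventLoop loop;  // block until the download finishes (acceptable at startup)
        QObject::connect(reply, &QNetworkReply::finished, &loop, &QEventLoop::quit);
        loop.exec();

        QXmlStreamReader xml(reply->readAll());
        reply->deleteLater();
        while (!xml.atEnd()) {
            if (xml.readNext() == QXmlStreamReader::StartElement
                    && xml.name() == QLatin1String("environment")
                    && xml.attributes().value("name") == envName) {
                return xml.attributes().value("url").toString();
            }
        }
        return QString();  // environment not found
    }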
I write a lot of code, and most of it I eventually throw away when I'm done with it. Recently I was thinking that if I just kept every small utility script I wrote, named it, tagged it, and filed it in a dev shell, I would never lose the code. On top of that, I wouldn't need to redo something I've already done, which is the main motivation, as I keep finding myself rewriting things I've done earlier.
Is there an ASP.NET shell-style environment anywhere?
If not, what would be the best way to go about this?
I am looking to be able to do the following:
Write big or small bits of code.
Derive from or chain together already-written code/libraries/services.
Ability to have everything on my desktop (would that mean IIS on the desktop? or is there a lighter-weight mechanism?), synced with the server at home, so if I am on the move I can still access this and make it part of my day-to-day workflow.
You could build a unique solution, with many class library projects inside. Each project would address a specific scenario, something like this:
MyStuff (Solution)
MyStuff.Common
MyStuff.Validation
MyStuff.Web
MyStuff.Encryption
etc.
Then you can put this solution on an online versioning service like Bitbucket or Assembla, so you can access your source code from anywhere, edit it, and commit it back to the server. This way you get the advantages of versioning, and you store your code on a remote server, so even if your hard disk breaks it's not a problem, because what's on the server is what matters.
You should either look into a source control system (Git perhaps?) or into a file storage / syncing / sharing service like DropBox.
DropBox would allow you to access code snippets from wherever you are and works really easily (just drop a file into a folder).
If you need versioning and branching you're going to have to look into a source control system. Since you have a server at home, that should be no problem.
I've collected a (hopefully useful) summary of the ways I've researched to accomplish the subject of this post, as well as the problems I have with them. Please tell me if you've found other ways you like better, especially if they resolve the problems that the methods I mention do not.
1. Leave connection strings in web.config and use XDT/msdeploy transformation to replace them with settings according to my active build configuration (for example, a web.PublicTest.config file). My problem with this is I merge and bury a few server-specific settings into an otherwise globally identical file with many configuration elements. Additionally, I cannot share connection string definitions among multiple peer-level applications.
2. Specify a configSource="DeveloperLocalConnectionStrings.config" value for connection strings in web.config, and XDT transform this value to point to one of the multiple environment-specific files in my code-base. My problem with this is I send passwords for all my environments to all destinations (in addition to SVN, of course) and have unused config sections sitting on servers waiting to be accidentally used.
3. Specify connection strings in the machine.config file rather than web.config. Problem: who the heck expects to find connection strings in the machine.config, and the probability of surprise name collisions as a result is high.
4. Specify a configSource="LocalConnectionStrings.config", do not transform the value, and edit the project XML to exclude deployment of the connection string config. http://msdn.microsoft.com/en-us/library/ee942158.aspx#can_i_exclude_specific_files_or_folders_from_deployment - It's the best solution I've found to address my needs for a proprietary (non-distributed) web application, but I'm paranoid another team member will come one day and copy the production site to test for some reason, and voila! Production database is now being modified during UAT. (Update: I've found I can't use one-click publish in this scenario, only the msdeploy command line with the -skip parameter. Excluding a file as above is the same as setting it to the "None" compile action instead of "Content", and results in the package deleting it from the deployment target.)
5. Wire the deployment package up to prompt for a connection string if it isn't already set (I don't know how to do this yet, but I understand it is possible). This will have similar results to #4 above.
6. Specify a configSource="..\ConnectionStrings.config". Would be great for my needs, since I could share the config among the apps I choose, and there would be nothing machine-specific in my application directory. Unfortunately, parent paths are not allowed in this attribute (like they are for 'appSettings file=""' - note also that you can spiffily use file= inside a configSource= reference).
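To make options 4 and 6 concrete, a sketch of the two mechanisms (file names are made up):

    <!-- web.config -->
    <configuration>
      <!-- configSource: the whole section lives in the referenced file, which
           must sit in the same directory or below (no parent paths allowed) -->
      <connectionStrings configSource="LocalConnectionStrings.config" />

      <!-- appSettings file="...": merged with the inline settings, and unlike
           configSource it does accept a parent path -->
      <appSettings file="..\SharedSettings.config">
        <add key="SiteName" value="AcmeSite" />
      </appSettings>
    </configuration>

    <!-- LocalConnectionStrings.config (kept out of the deployment package) -->
    <connectionStrings>
      <add name="Main"
           connectionString="Data Source=.;Initial Catalog=AppDb;Integrated Security=SSPI;"
           providerName="System.Data.SqlClient" />
    </connectionStrings>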
p.s. some of these solutions are discussed here: ASP.Net configuration file -> Connection strings for multiple developers and deployment servers
When using SQL Server, you can also use Integrated Security / SSPI and add the web server's computer login to the SQL Server.
That way you don't have to expose anything in the web.config, and you can grant roles to that login like you would to any other DB user.
Though you have to understand the implications and the security considerations to be taken, because any malicious code executing as THAT machine will have access to the SQL Server.
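A sketch of what that looks like in web.config (server and database names made up); on the SQL Server side, the login you grant is the machine account, e.g. DOMAIN\WEBSERVER$:

    <connectionStrings>
      <!-- no credentials here; authentication uses the app pool / machine identity -->
      <add name="Main"
           connectionString="Data Source=dbserver;Initial Catalog=AppDb;Integrated Security=SSPI;"
           providerName="System.Data.SqlClient" />
    </connectionStrings>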
Use the hostname as the key for the connection string; that way you can choose the data source automagically. Make sure the choosing routine is not buggy (change the hostname and test!)...
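A minimal sketch of that routine, assuming one connection string per environment named after the machine (the "ConnStr." prefix is made up); failing loudly on an unknown host is the "not buggy" part:

    // C#
    using System;
    using System.Configuration;

    public static class ConnectionStrings
    {
        public static string Current
        {
            get
            {
                // e.g. "ConnStr.WEBPROD01" on the production web server
                string key = "ConnStr." + Environment.MachineName;
                var entry = ConfigurationManager.ConnectionStrings[key];
                if (entry == null)
                    throw new InvalidOperationException(
                        "No connection string configured for host " + Environment.MachineName);
                return entry.ConnectionString;
            }
        }
    }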
Don't put it in the web.config; write an INI file instead, so there is no XML encoding to worry about.
Encrypt the password therein with a private/public key pair (RSA/PGP). Don't ever use cleartext, or a symmetric key, which is just as bad.
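A sketch of the decryption side in C# (the key container name is made up; using the machine key store keeps the private key out of the site's folder):

    using System;
    using System.Security.Cryptography;
    using System.Text;

    static string DecryptPassword(string base64Cipher)
    {
        var csp = new CspParameters
        {
            KeyContainerName = "MyAppConfigKey",          // hypothetical container
            Flags = CspProviderFlags.UseMachineKeyStore,  // key lives outside the web root
        };
        using (var rsa = new RSACryptoServiceProvider(csp))
        {
            byte[] cipher = Convert.FromBase64String(base64Cipher); // value read from the INI file
            byte[] plain = rsa.Decrypt(cipher, true);               // true = OAEP padding
            return Encoding.UTF8.GetString(plain);
        }
    }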
Check my following blog post: Protecting asp.net machine keys and connection strings
If you do use Quandary's answer, use a key that's not in the site's folder, just like asp.net does with protected config sections.
We manually approve changes to the web.config that go into staging/production. We use integrated security instead of username-based where possible, but an option we've used in the latter case is to just have placeholders for the usernames/passwords in SVN.
We've used separate config files in the past, but we ran into other types of issues with web.config modifications, so we have been keeping it locked down in a single file lately.
As part of my overall development practices review I'm looking at how best to streamline and automate our ASP.net web development practices.
At the moment, our process goes something like this:
Designer builds frontend as static HTML/CSS on a network share. This gets tweaked until signed off. (e.g. http://myserver/acmesite_design)
Once signed off, developer takes over and copies over frontend HTML/CSS to a new directory on the same server (e.g. http://myserver/acmesite_development)
Multiple developers work on local copy until project is complete.
Developer publishes code to an external publicly accessible server for a client to review/signoff.
Edits made locally based on feedback.
Republish to external server.
Signoff
Developer publishes to live public server
What goes wrong? Lots of things!
Version Control — this is obviously a must and is being introduced
Configuration errors — many, many times there are environment-specific paths and variables (such as DB names, image upload directories, web server paths, etc.) which incorrectly get copied from local to staging to live, with very embarrassing results.
I'm pretty confident I've got no. 1 under control. What about configuration management? Does anyone have any advice as to how best to manage an application's structure within ASP.NET apps to minimize these kinds of problems?
I found that using SVN, NAnt, and NUnit with CruiseControl.NET solves a lot of the issues you describe. I think it works well for small groups, and it's all free; you just need to learn how to use them.
CruiseControl.net helps you put together builds and continuous integration.
Use NAnt or MSBuild to do different environment builds (DEV, TEST, PROD, etc).
http://confluence.public.thoughtworks.org/display/CCNET/Welcome+to+CruiseControl.NET
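For example, the environment-specific builds can be as simple as one build configuration per environment (the solution name is hypothetical):

    rem one build configuration per target environment
    msbuild AcmeSite.sln /t:Rebuild /p:Configuration=Dev
    msbuild AcmeSite.sln /t:Rebuild /p:Configuration=Test
    msbuild AcmeSite.sln /t:Rebuild /p:Configuration=Release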
You got the most important part right. Use version control. Subversion is a good choice.
I usually store configuration along with the site; i.e. when coding a PHP-based site I have a file named config.php-dist. If you want the site to work at all, you have to copy it to config.php and edit in all the required parameters (this avoids storing passwords in version control). The -dist file should have reasonable defaults.
Upload directories should be relative if possible; actually all directories should be relative. I'm not experienced in ASP.net, but if it's anything like PHP the current directory is always the directory of the file being requested. If you channel all requests through a single file (i.e. index.asp), then this can even be found programmatically. Or you could find it programmatically by using the equivalent of dirname(__FILE__) in your configuration file.
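A sketch of what such a -dist file might look like, combining both points (all values are examples):

    <?php
    // config.php-dist: copy to config.php and fill in the real values;
    // config.php itself is never committed to version control
    $config = array(
        'db_name'    => 'acmesite',
        'db_user'    => 'CHANGE_ME',
        'db_pass'    => 'CHANGE_ME',
        // relative to this file, so it survives moving between environments
        'upload_dir' => dirname(__FILE__) . '/uploads',
    );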
I also recommend installing IIS (or whatever web server you are using) on all development workstations (including the designers'). It makes life easier, as no one can step on each other's toes. What one has to do is simply add test hosts to the hosts file (\windows\system32\drivers\etc\hosts iirc) in addition to adding a site to the local IIS. This plays well with version control (checkout, add site to IIS and hosts file, edit edit edit, commit).
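For example (the hostnames are made up):

    # \windows\system32\drivers\etc\hosts
    127.0.0.1    acmesite.local
    127.0.0.1    othersite.local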
One thing that really helps is making sure you keep your paths relative where you can, and centralise them where you can't. When I've been working with ASP.NET, I have tended to use web.config to store any configuration and path-related data that can't be found programmatically. It is quite possible to find information like your current application path programmatically through the Request object; it's worth looking in some detail at what the environment makes available to you.
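For instance (System.Web, inside a request):

    // virtual root of the application, e.g. "/acmesite"
    string appRoot = HttpContext.Current.Request.ApplicationPath;
    // physical path for a root-relative folder on this particular machine
    string uploadDir = HttpContext.Current.Server.MapPath("~/uploads");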
One way to make sure you don't end up with something that depends on the path name is to have a continuous integration server execute your test suite against your application, creating a random file path for each run. As soon as someone introduces a dependency on the file path, the build will fail.