Why does nginx need Lua when it works fine without it?

Why does nginx need Lua when it works fine without Lua and OpenResty?
Also, I'd like to know which Lua modules are most important for building large-scale web applications.

Okay, those are two questions.
Why does nginx need Lua
Well, it doesn't need it; in fact, many people are using plain nginx just fine. Even though I usually just run OpenResty, I often find myself doing lots of things with just the nginx features.
That being said, Lua is a scripting language, something that nginx on its own just doesn't (yet) have. It allows adding functionality to a webserver without having to write C modules, and in a way that can easily be changed or reloaded at runtime.
Kong is a good example of this: it uses Lua to script advanced behavior that nginx doesn't really support out of the box.
Which Lua modules are important for building large-scale web applications
That really depends on what you want to build. In principle, you can build a lot with OpenResty alone, and if you do it right, it will probably be faster than comparable applications written in most other frameworks.
Normally, you'd want at least some sort of templating engine, though: something that lets you build HTML pages without having to rely on Lua's "primitive" string-processing features. You will also most likely need a library to interface with whatever database you decide to use. From there it really depends on what you want to build.


How to Deploy with a CDN?

I'm surprised I haven't found answers to this by Googling. I currently don't have any CDN and run my deploys through Ansible. I'm thinking of adding a CDN, but since its cache is only invalidated intermittently, my understanding is that a deploy on my servers wouldn't change the static files served by my CDN for potentially hours. Of course, when I do a deploy, I'd like everything done as fast as possible, so if anything does go wrong (or right), I know about it immediately.
All of that said, how do I actually deploy to a CDN, telling it that I'm serving a new set of static files now and that it should refresh them? Is there an Ansible module that does something like this, or an API for some CDN provider? I'd really like to avoid doing this manually on every deploy, as seems to be implied for Cloudflare, for example.
Also, I'm currently using Cloudflare for other stuff, so sticking with them would be cool, but I'm willing to switch to something else if it's better for my use case.
As an aside, this seems like a standard use case for a CDN, but I can't find much documentation or many blog posts on how people regularly deploy to CDNs. Am I missing something?
Yeah, you could do a purge/invalidate, but that's not the best approach. Really, you want a tool that compiles* your CSS/SASS/whatever, combines images into sprites, and compiles your JS. Finally, the tool should understand static hosting, which means it uses a unique URL for each publish. That way you don't have to purge at all, which is expensive for a CDN to do.
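If you do decide to purge on deploy anyway, it's just one authenticated HTTP call to the CDN's API. Here's a rough Python sketch against Cloudflare's purge_cache endpoint; the zone ID, API token, and URLs are placeholders, so check their docs for the exact details:

    # Rough sketch: purge specific URLs from Cloudflare's cache after a deploy.
    # ZONE_ID, API_TOKEN and the URLs below are placeholders.
    import requests

    ZONE_ID = "your-zone-id"
    API_TOKEN = "your-api-token"

    resp = requests.post(
        f"https://api.cloudflare.com/client/v4/zones/{ZONE_ID}/purge_cache",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"files": ["https://example.com/static/app.js",
                        "https://example.com/static/site.css"]},
    )
    resp.raise_for_status()

You could run that from an Ansible task at the end of a deploy, but the unique-URL-per-publish approach avoids needing it at all.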
Thankfully, there are tools that handle this. I'm aware of Rails's asset pipeline (Ruby), the Grails asset-pipeline plugin, and Python's webassets.
Depending on how you build your code and bake your stack, you might use Ansible to upload/deploy the static assets, though most of these tools can deploy locally or to S3 themselves.
* I'm using "compile", though it's really "minify/munge/compress" or "preprocess" or whatever.
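To make the unique-URL idea concrete, here is a minimal sketch in Python, assuming a local static/ source tree and a dist/ output directory (the real asset pipelines above do this, plus minification, for you):

    # Sketch: fingerprint static assets so every publish gets unique URLs.
    # Assumes a "static/" source tree and a "dist/" output directory.
    import hashlib
    import shutil
    from pathlib import Path

    SRC = Path("static")
    DST = Path("dist")

    def fingerprint(path: Path) -> str:
        """Return a short hash of the file's contents."""
        return hashlib.sha1(path.read_bytes()).hexdigest()[:12]

    manifest = {}  # logical name -> hashed name, for rewriting references
    for src in SRC.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(SRC)
        hashed = rel.with_name(f"{rel.stem}.{fingerprint(src)}{rel.suffix}")
        out = DST / hashed
        out.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, out)
        manifest[str(rel)] = str(hashed)

    print(manifest)  # e.g. {"css/site.css": "css/site.0a1b2c3d4e5f.css"}

Your templates then reference assets through the manifest, and the deploy just uploads dist/ (via Ansible, to S3, wherever); the old URLs keep serving from the CDN's cache until they expire on their own.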

Access DataStore API with third-party app

I want to access the CKAN DataStore API with Ruby, and I have some questions. Is it possible to access the DataStore API with Ruby? I want to create, read, and update resources in the DataStore, and I prefer Ruby.
As the CKAN API works over HTTP, you can access it with whatever language you prefer simply by hitting the correct URIs. For example, there are wrapper libraries for PHP (https://github.com/opencolorado/PHP-Wrapper-for-CKAN-API) and .NET (https://github.com/opencolorado/.NET-Wrapper-for-CKAN-API). They look abandoned, though.
Although I also prefer Ruby, unless you have a really good reason to use it, I recommend you stick with Python. There are a few good libraries that help a lot when using the API, such as https://github.com/open-data/ckanapi and https://github.com/dgraziotin/libckan. As far as I know, there's no CKAN library for Ruby, so you'd have to build your own.
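For illustration, here is a rough sketch of talking to the DataStore API over plain HTTP with Python's requests library; the CKAN URL, API key, and resource ID are placeholders, and the same POST pattern works from Ruby's net/http if you do stay with Ruby:

    # Rough sketch of calling CKAN DataStore actions over HTTP.
    # CKAN_URL, API_KEY and the resource id below are placeholders.
    import requests

    CKAN_URL = "https://demo.ckan.org"   # your CKAN instance
    API_KEY = "your-api-key"             # needed for create/update actions

    def action(name, payload):
        """POST a CKAN action and return the 'result' part of the response."""
        resp = requests.post(
            f"{CKAN_URL}/api/3/action/{name}",
            json=payload,
            headers={"Authorization": API_KEY},
        )
        resp.raise_for_status()
        return resp.json()["result"]

    # Create a DataStore table attached to an existing resource
    action("datastore_create", {
        "resource_id": "some-resource-id",
        "fields": [{"id": "name", "type": "text"}, {"id": "count", "type": "int"}],
    })

    # Insert records
    action("datastore_upsert", {
        "resource_id": "some-resource-id",
        "records": [{"name": "example", "count": 1}],
        "method": "insert",
    })

    # Read them back
    print(action("datastore_search", {"resource_id": "some-resource-id", "limit": 5}))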

How do I keep compiled code libraries up-to-date across multiple web sites using version control?

Currently, we have a long list of various websites throughout our company's intranet. Most are inside a firewall and require an Active Directory account to access. One of our problems, as of late, has been the increase in the number of websites and the addition of a common code library that stores our database access classes, common helper functions, serialization methods, etc. The goal is to use that framework across all websites throughout the company.
Currently, we have consistently upgraded the in-house data entry application with these changes; it is up to date. The problem, however, is maintaining all of the other websites. Is there a best practice or a way to find out which version each website is running and upgrade it accordingly? Can I have a centralized place where I keep these DLLs and have the sites reference them? What's the best way to find out what versions are on these websites without having to go through each and every site, check the version, and upgrade after every change?
Keep in mind, we run the newest TFS and are a .NET development team.
At my job we have a similar setup to yours, lots of internal applications that use common libraries, and I have spent the better part of a year sorting this all out.
The first thing to note is that nothing you mentioned really has anything to do with TFS; it is a symptom of the way your applications, and their components, are packaged and deployed.
Here are some ideas to get you started:
Set up automated/continuous builds
This is the first thing you need to do. Use the build facility in TFS if you must, or make the investment in something like TeamCity (which is great). Evaluate everything. Find something you love and that everyone else can live with. The reason you need to find something you love is that you will ultimately be responsible for it.
The reason setting up automated builds is so important is that it's your jumping-off point for solving the rest of your issues.
Set up automated deployment
Every deployable artifact should now be built by your build server. No more manual deployment. No more deployment from workstations. No more Visual Studio Publish feature. It's hard to step away from all this, but it's worth it.
If you have lots of web projects, look into using Web Deploy, which can easily be automated with MSBuild or PowerShell, or go fancy and try something like Octopus Deploy.
Package common components using NuGet
By now your common code should have its own automated builds, but how do you automatically deploy a common component? Package it up as a NuGet package and either put it on a share for consumption or host it in a NuGet server (TeamCity has one built in). A good build server can automatically update your NuGet packages for you (if you always need to be on the latest version), and you can inspect which version each site references by checking its packages.config.
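As an illustration of the "which version is each site on?" question, a small script that walks the site sources and reads each packages.config is enough. This is just a sketch; the share path and package ID are made up:

    # Illustrative sketch: report which version of a shared NuGet package
    # each website references, by reading its packages.config.
    # The root path and package id below are made-up examples.
    import xml.etree.ElementTree as ET
    from pathlib import Path

    SITES_ROOT = Path(r"\\fileserver\webapps")   # wherever the site sources live
    PACKAGE_ID = "Company.Common"                # the shared library's package id

    for config in SITES_ROOT.rglob("packages.config"):
        tree = ET.parse(config)
        for pkg in tree.getroot().iter("package"):
            if pkg.get("id") == PACKAGE_ID:
                print(f"{config.parent.name}: {PACKAGE_ID} {pkg.get('version')}")

From there you know exactly which sites are lagging behind the latest version of the common library.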
I know this is a lot to take in, but in essence these are the fundamentals of moving towards continuous delivery (http://continuousdelivery.com/).
Be aware that getting this right will take a long time, but the process is incremental and you can evolve it over time. However, the longer you wait, the harder it will be. Don't feel like you need to upgrade all your projects at the same time; you don't. Just do the ones causing the most pain.
I hope this helps.
I'd just like to step outside the space of a specific solution for your problem and address the underlying desire you have to consolidate your workload.
Be aware that any patching/upgrading scenario will have costs that you must address - there is no magic pill.
Particularly, what you want to achieve will typically incur either a build/deploy overhead (as jonnii has outlined), or a runtime overhead (in validating the new versions to ensure everything works as expected).
In your case, because you have already built your products, I expect you will go the build/deploy route.
Just remember that even with binary equivalence (everything compiles and unit tests pass), there is still a risk that the application will behave differently after an upgrade, so you will not be able to avoid at least some rudimentary testing across all of your applications (the GAC approach is particularly vulnerable to this risk).
You might find it easier to accept that just because you have built a new version of a binary, doesn't mean that it should be rolled out to all web applications, even ones that are already functioning correctly (if something ain't broke...).
If that is acceptable, then you will reduce your workload by only incurring resource expense on testing applications that actually need to be touched.

How to make two web sites appear as one - What features are important?

I am about to write a tender. The solution might be a PHP-based CMS. Later I might want to integrate an ASP.NET framework and make the two look like one site.
What features would make this relatively easy?
Would OpenID and similar make a difference?
In the PHP world, Joomla is supposed to be more integrative than Drupal. What are the important differences here?
Are there specific frameworks in ASP.NET, Python, or Ruby that are more open to integration than others?
The most important thing is going to be putting as much of the look and feel as possible into a format that can be shared by any of the platforms. That means developing a standard set of CSS files and (X)HTML templates which can be imported (or directly presented) in any of those platform options. Think of it as writing a dynamic library that can be loaded by different programs.
Using OpenID for authentication, if all of your platform options support it, would be nice, but remember that each platform is going to require additional user metadata to be stored for each user (preferences, last login, permissions/roles, etc.), which you'll still have to wrangle between them. OpenID only solves the authentication problem, not the authorization or preferences problems.
Lastly, since there are so many options, I would stick to cross-platform solutions. That will leave you the most options going forward. There's no compelling advantage IMHO to using ASP.NET if there's a chance you may one day integrate with other systems or move to another system.
I think the most important thing is to choose the right server. The server needs to have adequate modules; Apache would be a good choice, as it supports everything you want, including mod_aspnet (which I haven't tested, but many people say it works).
If you think ASP.NET integration is certainly going to come, I would choose Windows as the OS, as it will certainly be easier.
You could also install a reverse proxy that decides which backend serves the content based on the request: if the user requests an .aspx page, the proxy connects to IIS on the Windows side; if they ask for a PHP page, it connects to the other server. The problem with this approach is shared memory and state, which could be solved with careful design, such as a shared database holding all state information and model data.
OpenID doesn't make a difference - there are libs for any framework you choose.

Is it commonplace/appropriate for third-party components to make undocumented use of the filesystem?

I have been using two third-party components for PDF document generation (in .NET, but I think this is a platform-independent topic). I will leave the companies' names out of it for now, but I will say they are not extremely well-known vendors.
I have found that both products make undocumented use of the filesystem (i.e. putting temp files on disk). This has created a problem for me in my ASP.NET web application, as I now have to identify the file locations and set permissions on them as appropriate. Since my web application is set up for impersonation using Windows authentication, this essentially means I have to assign write permissions to a few file locations on my web server.
Not that big a deal once I figured out why the components were failing, but... I see this as a maintenance issue. What happens when we upgrade our servers to an OS that changes one of the temporary file locations? What happens if the vendor decides to change the temporary file location? Our application will "break" without a single line of our code changing. Relatedly, if we have to stand this application up on a "fresh" machine (regardless of environment), we have to know about this issue and set permissions appropriately.
Unfortunately, the components do not provide a way to make this temporary file path configurable, which would at least make it more explicit what is going on under the covers.
This isn't really a question that I need answered, but more of a kick-off for a conversation about whether what these component vendors are doing is appropriate, how this should be documented/communicated to users, etc.
Thoughts? Opinions? Comments?
First, I'd ask whether these PDF generation tools are designed to be run inside ASP.NET apps. Do they claim that this is something they support? If so, then they should provide documentation on how they use the filesystem and what permissions they need.
If not, then you're probably using an inappropriate toolset. I've been there and done that. I worked on a project where a "well known address lookup tool" was used, but the version we used was designed for desktop apps. As such, it wasn't written to cope with hundreds of requests, many of them simultaneous, and it caused all sorts of hard-to-reproduce errors.
Commonplace? Yes. Appropriate? Usually not.
Temp files are one of the appropriate uses, IMHO, as long as they use the proper %TEMP% folder or, even better, the built-in Path.GetTempPath/Path.GetTempFileName functions.
In an ideal world, each third-party component would come with a Code Access Security description, listing in detail what is needed (and for what purpose), but CAS is possibly one of the most-ignored features of .NET...
Writing temporary files would not be considered outside the normal functioning of any piece of software. Unless it is writing temp files to a really bizarre place, this seems more likely to be something they never thought to document rather than something done to go out of their way to cause you trouble. I would simply contact the vendor, explain what you are doing, and ask if they can provide documentation.
Also, Martin makes a good point about whether it is an app designed to run under ASP.NET or a desktop app.
