I'm aware of Meteor's server-side rendering and spiderable packages, but neither of those really address my issue.
I like Meteor so much that I want to use it as my primary development environment even for static brochure-style sites (which don't even connect to any dynamic sources).
I would like access to all of Meteor's features (such as collections and packages) during development, though, and only convert the whole thing to a static "brochure" site when I'm done.
Is there a way to do that?
We have a large, complex Kentico build which uses Kentico's Continuous Integration locally, and Kentico's Staging module to push Kentico object changes through various environments.
We have a large internal dev team and have found that occasionally (probably due to Git merging issues) certain staging tasks aren't logged. When dealing with large deployments this is often not obvious until something breaks on the target server.
What I'd like is to write a custom module which can pull certain data from a target server (e.g. a collection of serialized web parts). I can then use this to compare with the source server to identify where objects are not correctly synchronized. I'd hoped this might be possible using the web services already exposed by Kentico which handle the staging sync tasks.
I've been hunting through a few namespaces in the Kentico API (CMS.Synchronization, CMS.Synchronization.WSE3, etc.), but it's not clear whether what I'm trying to do is even possible. Has anyone tried anything similar? If so, could you point me in the right direction?
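To make it concrete, the comparison step I have in mind would reduce to something like the sketch below, whatever the fetching mechanism turns out to be. Nothing here is Kentico API; the function name and the snapshot shape (object ID mapped to serialized payload) are made up for illustration:

```python
def find_unsynced(source, target):
    """Compare serialized object snapshots from two servers.

    source/target: dict mapping object ID -> serialized payload
    (e.g. the XML for a web part). Returns the IDs that are missing
    on the target and the IDs whose payloads differ.
    """
    missing = [oid for oid in source if oid not in target]
    differing = [oid for oid in source
                 if oid in target and source[oid] != target[oid]]
    return missing, differing
```

The hard part, of course, is getting those snapshots out of the staging web services in the first place.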
Instead of writing your own code/tool for this, I'd suggest taking advantage of what someone else has already done. This is like Red Gate's SQL Compare for Kentico, BUT on steroids. It compares database data, schema, AND file system changes on staging and target servers.
Compare for Kentico
I'm surprised I haven't found answers to this by Googling. I currently don't have any CDN and run my deploys through Ansible. I'm thinking of adding a CDN, but since the cache is only invalidated intermittently, my understanding is that a deploy on my servers wouldn't change the static files served by my CDN for potentially hours. Of course, when I do a deploy, I'd like everything done as fast as possible, so if anything does go wrong (or right), I know about it immediately.
All of that said, how do I actually deploy to a CDN, telling it I'm serving some new set of static files now and that it should refresh these? Is there an Ansible module that does something like this, or an API for some CDN provider? I'd really like to avoid doing this manually on every deploy as this seems to imply for Cloudflare, for example.
Also, I'm currently using CloudFlare for other stuff, so sticking with them would be cool, but I'm willing to switch over to something else if it's better for my use case.
As an aside, this seems like a standard use case with a CDN, but I can't find much documentation or blog posts for how people regularly deploy to CDNs. Am I missing something?
Yeah, you could do a purge/invalidate, but that's not the best approach. Really, you want a tool that compiles* your CSS/Sass/whatever, combines your images into sprites, and compiles your JS. Finally, the tool should understand static hosting, which means it uses a unique URL for each publish. That way you don't have to purge, which is expensive for a CDN to do.
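That said, if you do end up purging, Cloudflare's v4 API lets you trigger it from a deploy script, so at least it isn't manual. A minimal Python sketch; the zone ID and API token are placeholders you'd take from your own account:

```python
import json
import urllib.request

API = "https://api.cloudflare.com/client/v4"

def build_purge_request(zone_id, token, files=None):
    """Build a cache-purge request for Cloudflare's v4 API.

    With files=None, everything in the zone is purged; otherwise
    only the listed URLs are invalidated.
    """
    body = {"files": files} if files else {"purge_everything": True}
    return urllib.request.Request(
        f"{API}/zones/{zone_id}/purge_cache",
        data=json.dumps(body).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Sending it, e.g. from a deploy script or an Ansible local action:
# urllib.request.urlopen(build_purge_request("ZONE_ID", "API_TOKEN"))
```

But read on: fingerprinted URLs make most of this unnecessary.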
Thankfully, there are tools that handle this. I'm aware of Ruby's asset-pipeline, Grails's asset-pipeline, and Python's webassets.
Depending on how you build your code and bake your stack, you might use Ansible to upload/deploy the static assets, though most of these tools can also deploy locally or to S3.
* I'm using "compile", though it's really "minify/munge/compress" or "preprocess" or whatever.
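If you want to see what the "unique URL for each publish" trick amounts to without adopting a whole pipeline, it's just content hashing. A minimal sketch (the function name and the 8-character digest length are arbitrary choices, not any tool's convention):

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return a content-hashed filename, e.g. app.3f2a91bc.css.

    Because the name changes whenever the content changes, the file
    can be served with a far-future Cache-Control header and never
    purged: the next publish simply references a new URL.
    """
    digest = hashlib.md5(path.read_bytes()).hexdigest()[:8]
    return f"{path.stem}.{digest}{path.suffix}"
```

The asset-pipeline tools above do exactly this (plus rewriting the references in your HTML/CSS to point at the hashed names).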
I'm thinking of adopting Meteor for my hobby project and I'm wondering something.
One or two websites comparing various frameworks listed Meteor as having both client and server libraries (a better term could be used here, but I can't think of one!).
So my question is...
Where can I find more info about this client library and its capabilities?
Can I drop the default library (whatever it may be) for something like Knockout? Do I need or want to?
Meteor has a "database everywhere" approach, which means you can use the same interface for accessing your data on both client and server (i.e. MyCollection.find() would do a find whether you're on the client or the server).
As for client-side libraries, Meteor supports adding packages, including other templating engines such as meteor-knockout. Meteor also comes with the Handlebars templating engine, which is what is used in the docs. See here.
In general, if you want to know anything about Meteor, the docs are the best place to go.
There is a lot of action in the CSS/JS bundling+minification space with MVC4 and things like Cassette, but I can't seem to find one that supports uploading to a CDN natively.
We use Rackspace Cloud Files and it requires that we upload (via their API no less) our assets directly - it doesn't do an origin-pull.
Right now, we have an MSBuild script that does this for us, but it is very difficult to maintain and work with.
If you could map a drive, I think RequestReduce MIGHT get you what you want out of the box. It performs bundling and minification at runtime and provides configuration options that let you specify the drop location of generated assets as any UNC path. The intent of this config is web farm scenarios that have a dedicated share for static assets, but I'm wondering if it might work for you.

It also exposes an interface that allows you to essentially take over the process of saving and retrieving assets from any durable store. It comes with a local disk store, and there is a SQL Server store provided as a separate NuGet package. I've had others propose writing stores for Azure blob storage or Amazon S3. It's a bit involved, but not too horrible.

At any rate, it's free, it provides background image spriting and optimization (which few others provide), and there is another NuGet package that adds Less/Sass/CoffeeScript compiling. It's used by Microsoft on a lot of the MSDN/TechNet properties.
I run the project and would be happy to answer any questions via the Github Issues page.
I really like Drupal, somehow. But what disturbs me most is that I can't figure out a clear way of handling deployment. Drupal stores a lot of stuff inside the database (views, CCK, workflow, triggers, etc.) that needs to be updated.
I've seen some modules that could be used for this task (e.g. Features), and I'm not sure whether they are sufficient. They are also only for Drupal 6, and I currently have to work on a Drupal 5 site where upgrading is not yet an option.
Any ideas?
This is a weakness. Drupal doesn't have the developer tools built in that make development and deployment easy the way Rails does (for example). One problem is that Drupal isn't natively aware of its environment. Secondly, there are too many different methods and modules that require special care. It can get very confusing. But things are getting better with drush and drush make.
I'm assuming here that you have a development environment on your local machine and a live or staging server you upload to.
The first thing you have to do is work out how to move your database fixture and your code between your server and your development environment very quickly. You need to make this procedure as painless as possible so you can keep different versions of your site in sync without much effort. That way you will hopefully have less change to manage every time you deploy. Hopefully...
Moving the database around isn't too hard. You could use phpMyAdmin or mysqldump, but the Backup and Migrate module is my favorite tool.
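If you'd rather script the round trip than click through a UI, the mysqldump/mysql pair is enough. A sketch in Python that only builds the commands (database names, hosts, users, and file names are placeholders; supply passwords via ~/.my.cnf rather than on the command line):

```python
import subprocess

def dump_cmd(db, out="site.sql", host="localhost", user="root"):
    """Command to snapshot the Drupal database to a file."""
    return ["mysqldump", "-h", host, "-u", user, db,
            f"--result-file={out}"]

def restore_cmd(db, dump="site.sql", host="localhost", user="root"):
    """Command to load that snapshot into another environment."""
    return ["mysql", "-h", host, "-u", user, db, "-e",
            f"source {dump}"]

# e.g. subprocess.run(dump_cmd("drupal_live"), check=True)
# then  subprocess.run(restore_cmd("drupal_dev"), check=True)
```

Wrapping these two calls in a one-line script is most of what "painless" means here.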
Uploading code from your local repository or site can be done in a few ways. If you use a version control system like git, you can commit on your local machine and check out again on the staging server. There are also dedicated deployment tools like Capistrano you should take a look at (if you know this stuff already, it may benefit others to read). If you're using FTP, you should probably try something different.
If you're working with a site that is still in production, you can afford to make small incremental changes to your local site, then repeat them on the live site and download the new version of the database once your changes are in place. This does mean you handle the database twice, but it can be a safe way of doing things: it keeps your databases closer to each other and minimises risk.
You can also export Views and move them to your server, either in your code or by importing them into your live site. There is a hack to get around deploying CCK changes here: http://www.tinpixel.com/node/53. It works OK but cannot truly manage changes like rollbacks. (Respect to the guy who wrote that.)
You can also use hook_updateN to capture changes and then run update.php to apply them. I worked on a Drupal 5 site with dozens of developers, and this was the only way to keep things moving forward. This may be a good option if your site is live, or if you need all database schema changes captured in version control (so you can roll back).
Also: take a look at drush and drush make. These tools can be of great benefit. I can't remember how much support there is for Drupal 5.
One final method of dealing with this is not to use CCK or Views at all (and rely on hook updates). But this is really only suitable for enterprise sites where you have big developer resources. It may seem like a strange suggestion, but it can negate this whole problem completely.
Sorry I could not give you a clear answer; one does not exist yet. You'll find your own rhythm once you get into it. Just keep backups of your database so you can roll back to them easily enough.