Is it possible to deploy a meteor-app to a Synology NAS and run it from there? If so, how?
I guess I need a node.js server to run on my NAS, but I do not know what to do when it is up and running.
Since the Synology NAS is built on top of Linux, you could get Meteor running on it.
Because some of Synology's NAS units use ARM CPUs, you would need to compile Meteor for ARM (meaning its dependencies too, such as MongoDB and Node, all of which should be possible).
Have a look at http://forum.synology.com/wiki/index.php/What_kind_of_CPU_does_my_NAS_have to determine whether your NAS's CPU is x86 or ARM. If it is ARM you'll have to build the binaries from source. If it's x86 you could probably just run curl https://install.meteor.com | sh
Have a look at https://github.com/meteor/meteor#slow-start-for-developers to build a dev bundle on your unit.
Meteor can also be deployed as a normal Node.js application with meteor build, or you can use demeteorizer (https://www.npmjs.com/package/demeteorizer).
For details please check here: https://guide.meteor.com/deployment.html
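If you go the demeteorizer route, the rough workflow would be something like the following. This is only a sketch: the paths, port and database URL are placeholders, and the exact output layout and flags depend on the demeteorizer version, so check demeteorizer --help.

    # on your workstation: convert the Meteor app into a plain Node.js app
    npm install -g demeteorizer
    cd /path/to/my-meteor-app
    demeteorizer -o ../my-app-converted

    # on the NAS: install dependencies and start it like any Node.js app
    cd /volume1/apps/my-app-converted
    npm install
    MONGO_URL=mongodb://localhost:27017/myapp \
    ROOT_URL=http://my-nas.local PORT=3000 node main.js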
Does anyone have a tutorial on how to use a C++ module from NPM in Meteor? Running the normal meteor npm install and importing the package returns this message:
Error: module.useNode() must succeed for native .node modules
You're probably using an unusual server configuration. Some common problems that break compilation are:
Incorrect permissions for various directories. This may mean you've been running as root for some meteor installation commands, which then blocks the local user from accessing those directories for compilation (a quick check for this and the next item is sketched after this list).
A missing or broken compiler toolchain.
A bad package or one that is untested for your hosting platform.
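A quick way to check the first two items, as a rough sketch (the directory paths assume a default per-user Meteor install and a project-local node_modules):

    # is a compiler toolchain available at all?
    which gcc g++ make python || echo "install your distro's build tools"

    # were Meteor/npm directories accidentally created as root?
    ls -ld ~/.meteor ~/.npm ./node_modules

    # if so, hand them back to your normal user and rebuild the native module
    sudo chown -R "$(whoami)" ~/.meteor ~/.npm ./node_modules
    meteor npm rebuild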
I'd discourage you from trying to run locally on a desktop Linux installation if you're unfamiliar with some of these gotchas. Use a virtual machine for development instead; I'd recommend Vagrant with IntelliJ's WebStorm for native integration.
Meteor is generally deployed on Linux using the Amazon AMI or Ubuntu on AWS, Galaxy or Modulus.
If you think everything's in order and you're having issues with Galaxy or Modulus, reach out to their support.
So I just started working on a project, and my task is to upgrade Sonatype Nexus 1.9.x running on CentOS 6 to 2.11.x. The old version is currently deployed via a WAR file. The goal is to get the new version deployed without breaking builds when devs try to build their projects.
My plan of attack is to download Nexus, make the current Nexus that is deployed via Tomcat run on a different port, make the new Nexus run on the current port, and then proxy the old Nexus.
I'm running into a couple of problems though. The old Nexus uses Java 1.6. If I update Java to 1.8, would this break the currently running Nexus?
Would I be able to run two versions of Nexus on the same VM? If so, how would I do that while minimizing the chance of messing something up?
Thanks everyone. I'm just starting out and this is all very new to me.
Since your Nexus install is very old, you have to consider your options:
You could upgrade the existing instance. 1.9 is VERY old so you have to upgrade in multiple steps. First to 2.0, then 2.7 and then 2.11. This is necessary due to data storage changes for configuration and removed upgrade steps.
You could just configure a new server from scratch with the same configuration in terms of repositories and other things and simply rsync the repositories over to the new storage. You really only have to do this for hosted repositories, since the proxy repositories will hopefully still be online and will just download whatever is requested anew.
If your setup is not too complex I would personally go with option 2. It gives you a chance to revisit things and clean up your setup.
For that setup the steps are roughly:
Install Java 8 in parallel to Java 6
Install Nexus 2.11 from the bundle so it runs with Eclipse Jetty. Do NOT try to run on Tomcat.
Configure it to run on port 9081 or some other port that does not conflict with your original setup, and do all the other configuration, including creating the repositories as desired and setting up security.
Now you should be able to have both servers running.
Create a script that rsyncs the repositories (located in sonatype-work/nexus/storage) and run it with the new server offline (a sample script is sketched further below).
Start the new Nexus in parallel and run a number of tests against it.
Once you have confirmed everything is working, plan a specific time for the cutover and do this:
Disable any deployment to Nexus (CI servers, tell people, switch hosted repositories to read only)
Run the rsync script one last time
Turn the old Nexus server off
Configure the new server to use the port of the old one
Start the new one up
You are done. Everything should be good now so the last step is to delete the old Nexus and Tomcat setup.
There are various variations for this process of course. Here are some tips for the rsync.
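As one illustration, the rsync script could be as small as this (a sketch only; the storage paths and port are examples, so adjust them to where your old and new sonatype-work directories actually live):

    #!/bin/sh
    # copy hosted repository contents from the old Nexus to the new one;
    # run this while the new Nexus is stopped
    OLD=/opt/old-nexus/sonatype-work/nexus/storage
    NEW=/opt/nexus-2.11/sonatype-work/nexus/storage
    rsync -av "$OLD/" "$NEW/"

    # at cutover, point the new instance at the old port by editing
    # application-port in conf/nexus.properties of the new install
    sed -i 's/^application-port=.*/application-port=8081/' \
        /opt/nexus-2.11/conf/nexus.properties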
Also feel free to ping us on the mailing list or chat for further help and check out the comprehensive documentation as well.
Does devstack completely install OpenStack? I read somewhere that devstack is not and has never been intended to be a general OpenStack installer. So what does devstack actually install? Is there any other scripted method available to completely install OpenStack (Grizzly release), or do I need to follow the manual installation steps given on the OpenStack website?
devstack does completely install openstack from git.
for lesser values of completely anyways. devstack is the version of openstack used in jenkins gate testing by developers committing code to the openstack project.
devstack, as the name suggests, is specifically for developing for openstack. as such its existence is ephemeral. in short, after running stack.sh the resulting (probably) functioning openstack is set up... but upon reboot it will not come back up. there are no upstart or systemd or init.d scripts for restarting services. there is no high availability, no backups, no configuration management. And following the latest git releases in the development branch of openstack can be a great way to discover just how unstable openstack is before a feature freeze.
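for example (a sketch; the screen session name and the rejoin script existed in devstack of that era, but check what your checkout actually ships):

    ./stack.sh          # builds and starts the whole cloud in a screen session
    screen -x stack     # attach to the running services
    # after a reboot nothing restarts on its own; re-run stack.sh,
    # or use the rejoin script if your checkout still has it:
    ./rejoin-stack.sh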
there are several vagrant recipes in the world for deploying openstack, and openstack-puppet is a puppet recipe for deploying openstack. chef also maintains an openstack recipe.
Grizzly is a bit old now. Havana is the current stable release.
https://github.com/stackforge/puppet-openstack
http://docs.opscode.com/openstack.html
http://cloudarchitectmusings.com/2013/12/01/deploy-openstack-havana-on-your-laptop-using-vagrant-and-chef/
and ubuntu even maintains MAAS and Juju for deploying openstack super quickly on their OS.
https://help.ubuntu.com/community/UbuntuCloudInfrastructure
http://www.youtube.com/watch?v=mspwQfoYQks
so lots of ways to install openstack.
however most folks pushing a production cloud use some form of configuration management system. that way they can deploy compute nodes automatically and recover systems quickly.
also check out openstack on openstack.
https://wiki.openstack.org/wiki/TripleO
I think the code should be the same, but at least the configuration is not the same; for example, devstack will by default use nova-network, while in a manual installation you can choose neutron. so:
if you are starting to learn openstack, devstack is a good starting point. with it, you can quickly have a development env.
if you are deploying an openstack env, devstack is not a choice; instead you need to install it following the installation guide.
If you would like another scripted option for deployment, you can try Packstack. This will work only on Fedora and RHEL.
https://wiki.openstack.org/wiki/Packstack
https://www.rdoproject.org/install/quickstart/
With it, you can choose which services you would like to install. For example, you may choose to install Neutron for networking instead of using nova-network.
Also, it lets you deploy multiple compute nodes by just providing their IPs!
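For example, based on the RDO quickstart linked above (a sketch; the IPs are placeholders and the option names should be double-checked against packstack --help for your release):

    # after enabling the RDO repository for your release
    sudo yum install -y openstack-packstack

    # all-in-one install on the local machine
    packstack --allinone

    # or spread it across hosts: the first IP becomes the controller,
    # the rest become compute nodes
    packstack --install-hosts=192.168.0.10,192.168.0.11,192.168.0.12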
Yes. devstack is a tool that helps you build an all-in-one OpenStack environment quickly (just grab a cup of coffee and wait until it completes). It is normally used by developers to develop new features and/or test code quickly. As an operator, you need to set things up manually, step by step, for each service.
To build via the devstack repo, pull the newest source code from http://git.openstack.org/openstack-dev/devstack, then create a new local.conf in the devstack folder and run ./stack.sh.
An example local.conf: https://github.com/pshchelo/stackdev/blob/master/conf/local.conf.sample
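Putting those steps together, a minimal run might look like this (a sketch; the passwords and HOST_IP are placeholders, and the sample linked above is a much fuller local.conf):

    git clone http://git.openstack.org/openstack-dev/devstack
    cd devstack

    # minimal local.conf; devstack sources the localrc section as shell
    cat > local.conf <<'EOF'
    [[local|localrc]]
    ADMIN_PASSWORD=secret
    DATABASE_PASSWORD=$ADMIN_PASSWORD
    RABBIT_PASSWORD=$ADMIN_PASSWORD
    SERVICE_PASSWORD=$ADMIN_PASSWORD
    HOST_IP=10.0.0.10
    EOF

    ./stack.sh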
Yes, devstack can install all the components of OpenStack. But when you use the basic configuration it will install only the core components of OpenStack, which are the base of the OpenStack cloud platform needed to run some basic things.
And for an advanced configuration you should edit your local.conf file to say which services and components you want to install or use in your cloud.
https://github.com/openstack/tacker/blob/master/devstack/local.conf.example
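For instance, on releases where nova-network was still the default, switching to Neutron was a matter of toggling services in local.conf (a sketch; service names have changed between releases, so check the devstack docs for yours):

    # extra lines inside the [[local|localrc]] section of local.conf
    disable_service n-net
    enable_service q-svc q-agt q-dhcp q-l3 q-meta neutron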
I currently have a Java application packaged in an RPM that gets built for 32-bit RedHat platforms, and I want to create a 64-bit RPM, which is largely just the same as the 32-bit one, but with a couple different .so files included. All the Java stuff is the same on both platforms, so it's just JNI .so's.
My question is: Is it possible to have rpmbuild on a 32-bit system generate both the 32-bit and 64-bit RPMs (from different .spec files) since it's just repackaging already-built components, or do I need to build the 64-bit RPM on a 64-bit system?
N.B. I'm not actually building anything native on the system. I'm just repackaging stuff that's already built.
... or vice versa, can I build a 32-bit one on a 64-bit system? I really would prefer to build and package this on one system rather than have two separate builds run for the separate RPMs.
As Aaron stated, you can build an RPM for multiple distros on the same (64-bit) machine, but you have to be very careful or you can run into issues. The biggest problem I've run into is building on RHEL 5 and then trying to deploy to RHEL 6: since RHEL 6 has a different version of RPM installed, it can cause conflicts and fail to install. So in this scenario you have a few options:
Build the RPM on two machines; you've stated you don't really want to do this.
If you have the disk space, configure Mock. I've used it a ton before and it's really easy to get going, as long as the package spec was designed to pull in its requires properly.
Personally I'd give Mock a shot, it's quite simple to set up, and will allow you to do what you want with minimal effort as long as the proper repos are available. In the event the build fails the log is pretty comprehensive regarding what the RPM build error was.
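A typical Mock run looks roughly like this (a sketch; the chroot config names depend on which targets you have under /etc/mock, and the package name is a placeholder):

    # build the source RPM once, then let Mock produce the binary RPMs
    # in clean chroots for each target
    rpmbuild -bs mypackage.spec
    mock -r epel-6-x86_64 --rebuild mypackage-1.0-1.src.rpm
    mock -r epel-6-i386   --rebuild mypackage-1.0-1.src.rpm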
I am developing OpenCL code on a Linux cluster through SSH. Are there any tools that would make this process easier, i.e. something like NVIDIA Parallel Nsight for OpenCL?
No, there is no such tool, though you might try developing your code on an ordinary computer and pushing production versions to the cluster.
If the computer where you perform development is also running Linux, you can easily mount a remote folder as local. In a Gnome environment, open Nautilus (the file manager), click File => Connect to Server, choose SSH, fill in the required parameters, and you have the remote folder available as a local one.
You can then use any IDE you want to develop the code, and maybe perform simple runs, tests and debugging if the OpenCL tools (compiler, debugger) you're using remotely are also installed locally. However, to compile and properly run the code on the cluster, you need to use the ssh client on the command line.
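If you prefer the command line over Nautilus, sshfs plus plain ssh covers both halves. This is a sketch: the host name and paths are placeholders, and it assumes sshfs is installed locally.

    # mount the remote project folder locally so any IDE can edit it
    sshfs user@cluster:/home/user/opencl-project ~/cluster-project

    # compile and run on the cluster, where the OpenCL toolchain and devices live
    ssh user@cluster 'cd ~/opencl-project && make && ./run_tests'

    # unmount when finished
    fusermount -u ~/cluster-project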