What are some online solutions for easily accessing my source code from anywhere? [closed]

I'm a college student and at any given time I have 4-5 programs I'm writing in various languages for various classes/projects.
At any given hour of the day I might be in the library, at home, in any of our different computer lab classrooms, etc.
Right now my modus operandi is, at the end of each class period or coding session, to Gmail myself the current state of whatever I'm working on, with an appropriate subject line (e.g., "MIPS Assembly Lab 2, Revision #3").
However, this is becoming cumbersome and I'm looking for other solutions.
Restrictions:
No thumb drive. I'm about as absent-minded as possible while still somehow managing to function; I'll lose it.
Portable or web apps only. I can't install non-portable executables, so if a tool requires an installation wizard or administrator privileges, I can't use it. I can, however, use portable executables stored in the network drive space given to each student, so that might open some possibilities.
I'm looking for some kind of online storage that I can easily download the latest files for my project or update those in the online storage, with as little friction as possible.
I've considered using some free version control repository and trying to find a portable executable or web-tool I can use to integrate with it, but I wonder if it might be overkill. I'm not really looking for keeping a revision history.
I've seen videos of things like Dropbox, and it seems like a step in the right direction.
Any suggestions?

A good solution would be to get an account at a hosting provider that offers shells (e.g., Dreamhost) and do your work remotely. That way you always have a consistent environment that you can just SSH into from anywhere.
It's far easier to find a run-anywhere SSH client than a run-anywhere filesharing or revision control system.
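As a sketch of that workflow (hostname and username are hypothetical), from any lab machine you would just run:

ssh alice@shell.example.com
screen -dR

Here GNU screen reattaches your previous session (or starts a new one), so your editor and shell are exactly where you left them, whichever computer you sit down at.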

www.github.com?
Git binaries should be usable without any installation process (I don't quite follow the 'portable' requirement, as you don't mention anything about your work environment).
Or, alternatively, a thumb-drive Git repository, although you said you don't want to use a thumb drive.

The simplest solution (if you can share the code with the world) is to create a project on Google Code. This gives you a Subversion repository, plus a wiki to sort your ideas and an issue tracker for your TODO list.
Today, I prefer Subversion in your situation, for two reasons:
There is a standalone command-line client for Windows (just a couple of files that need no install). Git would need either Cygwin or MinGW and a Unix environment of some kind. Too much hassle.
It's a bit simpler to use than Git. Git demands a paradigm shift from your brain, and until you get that right, Git will feel "weird".
For professional work on large projects, I prefer Git :)
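For a concrete picture, a minimal session with the standalone Subversion command-line client against a Google Code project might look like this (project and file names are hypothetical):

svn checkout https://yourproject.googlecode.com/svn/trunk/ yourproject
cd yourproject
svn add lab2.s
svn commit -m "MIPS Assembly Lab 2, revision 3"
svn update

You check out once per machine, then commit at the end of each session and update at the start of the next.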

CVS, Subversion, and Git all allow you to create the repository on a network share. All of them discourage this because, in the case of a network outage, the repository may become corrupted.
So if you have frequent network outages, this might not be an option; but frankly, most networks are pretty stable today, and in my 15 years of using version control I have never had a repository on a share get corrupted. Most network file systems will try their best to commit pending writes, so unless the server completely dies, the data will be saved once the hiccup is over.
But if you're still worried, use Git, because it lets you restore the main repository from your local copy with minimal data loss (see this question for details).
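For illustration, if the shared repository ever did get corrupted, recreating it from a healthy local clone is a one-liner (paths are hypothetical):

git clone --bare ~/work/project /mnt/share/repos/project.git

The --bare flag produces a repository without a working tree, which is exactly what you want for the central copy on the share.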

We use CVSDude, who do CVS and SVN. It's a paid service ($6/month for 250 MB) and works really well, although maybe you're after something free?

You can get a free account on DriveHQ to store up to 1 GB of data. Nothing fancy, but if you're looking for somewhere to put software, it might do the trick.

You might like to check out Bespin:
"Bespin is a Mozilla Labs experiment that proposes an open, extensible web-based framework for code editing."

I use Dreamhost's integrated SVN. They have an interface for setting up repositories and user accounts. I work on a Mac, which comes with SVN installed, so the whole thing was completely painless for me: a couple of clicks, point SVN at my server, and I was good to go.


Deploying an R app with a GUI [closed]

I developed an R application and I want to deploy it.
Currently the application consists of a set of functions to be run from the command line, like an R package. In order to deploy it, I am thinking of repackaging R Portable adding the necessary libraries and my code to it. My main problem is choosing a proper GUI toolkit.
Production Environment
My app is single-user (i.e., a desktop application) and the target platform is Windows. It could bootstrap in R and then call the toolkit, or bootstrap in, say, Java and then call the R engine. The GUI should first and foremost feed the app's functions. It should also capture the functions' graphical output.
Possible Alternatives
Here is a list of potential alternatives. I'd like to know whether they meet the requirements of the environment described above.
Java JRI is now released only as a part of rJava, but while the latter is clearly documented, I am unable to find docs and tutorials for the former.
As for Deducer, it is presented as a GUI front-end, but I found out that it is also a GUI toolkit.
Tcl/Tk bindings seem a natural choice for R and are well documented, but some complain about the limitations of this toolkit.
RGtk2 seems interesting and there are also some tutorials around.
gWidgets is one of the rare toolkits to sport a package vignette!
Despite I don’t need a real web application, an interesting option would be interfacing R with JavaScript/HTML. As most of us, I am familiar with this environment and the app could benefit of the many JS libraries.
The problem is that the beautiful Shiny server and rApache are for Linux only and this is probably true probably Concerto too. Instead Rserve runs on Windows and, while there is no official JS client, I found the third party rserve-js and also a node.js client.
Rook, by the same author of rApache, should be platform agnostic (isn't it?).
R Server Pages could work, but I didn't find examples on the functions HttpDaemon and HttpRequest in the vignette or reference manual.
I run some simple examples with gWidgetsWWW. It works, but it seems to produce canned web pages, without the possibility of modifying the HTML code.
EDIT
Let me clarify my question. I am not surveying your personal preferences.
The technologies and products mentioned here tend to be very young and not widespread. It would be very unpleasant to discover, after investing months of coding, that they are not yet production-ready or don't fit. So I would like to know (not your subjective tastes, but) whether they are able to work in the environment described above.
We have created a kind of web app built on rApache and Ruby on Rails, beside some other technologies, at rapporter.net. It turned out to act more as a framework for hosting R-based statistical applications (in the form of "Rapplications") than as our initial goal of a user-friendly online front-end to R. I'd warmly suggest checking out our features, as you might save tons of resources by not dealing with server-side, CMS, and other boring issues, and could concentrate on the statistical tool instead.
Anyway, beside promoting our stuff, let me summarize my experiences:
rApache is definitely ready for production, but note that it suits rather stateless algorithms (by default, Apache starts a bunch of workers, so the same user/client would end up interacting with a different R session on each query). Rserve, for example, would be a better alternative for a stateful application.
AFAIK, Shiny Server is meant to host dedicated statistical tools and applications -- just like our Rapplication service, with or without a DB backend -- with some customizable user input. You would need some technical skills to do so, and providing a high-availability environment might require way too many extra resources. This can be a huge advantage or disadvantage depending on your requirements and expectations.
The biggest issue in such a question should be security (like using RAppArmor or sandboxR), not just the R connector back-end, as users will interact with your servers (if hosted in the cloud). Desktop applications are a bit more developer-friendly, but supporting all major platforms can be a no-no in the era of tablets and smartphones. A cloud app can run on any device with a browser.
You should choose the optimal solution based on your requirements. There are a bunch of tools ready for production, each with its own advantages and special use cases. Just check which related packages/applications are still under active development and support, and answer a few questions like:
Is there any need to connect to databases?
What types of user input is needed (e.g. only parameters, datasets, R commands)?
Desktop or cloud app? Are you sure? If the latter, would you like to deal with the setup, maintenance, and support yourself?
Do you run any computational intensive tasks?
Do you want to implement an application that helps users with repetitive, standardized tasks, or rather provide a general and extensible piece of software?
Do you need a responsive application with interactive setup, or one used for reporting purposes?
What output formats do you need?
What other technologies are you familiar with? It's rather hard to build a Meteor-based app with a NoSQL backend if you've worked with MySQL, PHP, Java, or C# in the past.
I'm about to do something similar. The fastest way (in terms of both deployment time and future application performance) seems to be C# interfacing with R through R.NET. In Visual Studio you will benefit from incredible visualisation options with just a few clicks, and setting up interactive/drill-down charts is quite straightforward too. As you also mentioned, Rserve (with Java) is another valuable option.
EDIT
If the web application is not required to run on a public IP address, the R package Rook is an interesting option. Some examples with Rook: using ggplot2, using googleVis.

How to use CVS on Unix

I'm pretty new to the concept of CVS. However, I want to start using it, and thus need to check in some scripts. I'm using a Unix server and I know that CVS is installed, since running
cvs -v
gives me the correct version number. Now the problem I have is finding documentation on how to use CVS. Is there an online tutorial/FAQ someone can recommend? I've scoured Google for information, and all I come across are posts about installing CVS ...
What I'm really looking for are sample commands taking a beginner from scratch: logging in, etc.
The meta-answer to your question is: don't use CVS unless you're participating in a project that's already using it. Even the CVS maintainers, as far as I understand, don't recommend it for new projects; they recommend svn instead. If you're obliged to use it, then this answer isn't helpful; sorry.
If the decision is up to you, then you have alternatives:
svn is the system which is most similar to CVS (as noted in another answer).
Mercurial is a distributed version control system, but the distributed features aren't hugely important if, as your question vaguely suggests, you're working on your own.
Git has broadly the same model as Mercurial.
There are others (including at least bazaar and darcs), but those are the big three.
All of these are heavily used in both small projects and big ones.
I now tend to recommend Mercurial to people, and it's the one I predominantly use myself. Holy wars are possible about this, but I feel it's the one with the best tradeoff between flexibility, good design, and usability (there's a longer version of this answer...!).
Update: there's a very good Mercurial introduction by Spolsky, which is well worth reading for rationale and pointers.
Use svn instead, lots of documentation for that.
Hmmm... a quick Google search for "cvs tutorial" returns this as the second hit:
http://www.linux.ie/articles/tutorials/cvs.php
I've quickly glanced over it, and Chapter 3 (Basic CVS Usage) starts with "Logging In" and seems to come pretty close to what you need. If you have any concrete questions, feel free to ask.
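To give a taste of what such a tutorial covers, a minimal pserver session looks roughly like this (server, repository path, and module name are hypothetical):

cvs -d :pserver:you@cvs.example.com:/cvsroot login
cvs -d :pserver:you@cvs.example.com:/cvsroot checkout myproject
cd myproject
cvs add newscript.sh
cvs commit -m "Add deployment script"
cvs update

login is only needed once per server; checkout gives you a working copy; add schedules new files; commit sends your changes to the server; update pulls in everyone else's.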

DVCS for a small company of remote employees

Here's the situation: at my small office, because we like to stay mobile and occasionally work from home, instead of having a central file server we keep all the office documents in an SVN repository, and each person keeps a checkout on their own laptop. A checkout weighs in at about 3 GB, and the repo, with revisions in it, at about 6 GB. This is all working great.
The problem is that soon we won't have a small office any more: all five of our workers will be working remotely. I had considered purchasing a dedicated server and running our SVN repository from it, except that two of our workers will be really remote and will be using wireless "broadband" with a 3 GB/month limit, and I'm afraid a few large updates would rip right through their monthly allowance, not to mention take all day to complete.
Reading a few questions on Stack Overflow, it seems there's quite a community of distributed VCS aficionados who think git or mercurial is definitely the best for many situations. Given that all the employees would still be able to meet face-to-face at least once a fortnight (and hence be on a fast LAN), I'm wondering if a DVCS would work for us?
I don't know exactly what's in your repo, but unless you're changing all the files regularly, a DVCS should provide you a very desirable workflow.
You could do an svn-to-git conversion, stick the repo on a DVD, and mail it out to all the satellite offices; then let them fetch from the office as things change, at a fairly low incremental cost (the transfer should generally be no bigger than the delta).
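A sketch of the conversion, assuming git-svn is available and the repository uses the standard trunk/branches/tags layout (URL is hypothetical):

git svn clone --stdlayout http://svn.example.com/office-docs office-docs
cd office-docs
git gc --aggressive

The final repack often shrinks the history considerably, which matters when you're about to burn it to a DVD.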
Check out the Fossil DVCS; it may fit the bill. Fossil can be used like SVN or like a DVCS. If you are concerned about whether it can handle your current repository, try it out. It also has a built-in project wiki and bug-tracking system that are distributed along with the repository. You could try it out and see if it would work for your small team.
The pain for you would be losing your revision history; at this time I don't believe you can import an SVN repository into Fossil.
Join the mailing list and you will get answers to any of your questions. The creator of SQLite is also the creator of this project. Hope this helps.
I can't see why not. With something like git, the repository is local to the machine, so your remote employees can keep a tracked changelog that can then be merged or rebased with the main repository--whatever you decide that to be--when they get the chance.
Also, git has really good compression compared to SVN, so the 3 GB/month quota may be more than enough for your remote employees.
Randal Schwartz actually gave a really good presentation on git at Google's Tech Talks: http://www.youtube.com/watch?v=8dhZ9BXQgc4
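A rough sketch of the day-to-day cycle for a bandwidth-limited worker (remote and branch names are whatever you settle on):

git commit -am "Work done offline; costs no bandwidth"
git fetch origin
git rebase origin/master
git push origin master

Commits are free; only fetch and push touch the network, and both transfer compressed deltas rather than whole files.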
(It seems no one else is answering this.) A DVCS certainly seems like it would work, but I have no experience with one. A centralized system like svn might also work if you are not expecting large changes to travel to and from the server daily; the initial checkout in that case would be the only really expensive operation.
Can you monitor your use now and see how much traffic goes back and forth?
The real problem here is the 3GB/mo bandwidth limitation. It's probably just better to come up with a better solution for connectivity...

What Tools Do You Recommend To Auto-Build Your Application? [closed]

As recently as several years ago, the developers actually made the builds that went to clients. This was obviously a disaster for reasons too numerous to list.
Then when we started to learn the errors of our ways, we looked for a way to auto-build the entire application on a dedicated build machine. The culture at that time was very averse to bringing in outside tools, so we built our own autobuild system by writing a VB app.
This worked fine for a while, until the project's structure started to change, new projects were added, and we needed to build the application in different ways. Then the weaknesses of our hand-rolled autobuilder became apparent and, over time, increasingly onerous. This disease has progressed to the point where QA (who owns our build process) can't even maintain the autobuilder, because it requires more and more programming skill. Every time we add a project or change something in an existing project, it consumes more developer time just to make it work. There have been days when we were unable to produce a build because the system was broken.
I'm now in a position where I can change this process, and I'm looking to scrap the entire system and put something else in its place. My goals are:
Have an autobuild system that can run with zero human interaction at a specific time every day. It should be able to gather all the source code, compile all the apps, create the setups, put the finished products on a network share, and possibly trigger the automated testing system to kick in (we use QTP).
The autobuild system should be flexible enough to easily adapt to changes in the project without requiring a major overhaul.
It should be simple enough so that QA can own the system and not require developer resources to make changes to how builds are made.
What are your experiences? Can you recommend an autobuild system? Should I have different goals?
I'm currently using CruiseControl integrated with Ant to control project builds. This allows flexibility of build schedules and means you can automate the entire build process fairly easily using Ant scripts. Also, during defect fixing periods you can have CruiseControl set up to watch for source control submissions instead of time periods and build when these occur. This allows developers very quick feedback on defect fixes.
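CruiseControl handles the scheduling itself, but stripped to its essence the zero-interaction nightly part is no more complicated than a cron entry like this (paths, targets, and times are hypothetical):

0 2 * * * cd /builds/app && svn update && ant clean dist >> /var/log/nightly-build.log 2>&1

Everything past that - packaging, copying to a network share, triggering tests - is just more commands appended to the chain.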
I use FinalBuilder and FinalBuilder Server for nightly builds. It's a bit buggy at times, but if you think it through, it's quite easy to create extensible projects that can build project type X, build its database from change scripts, and deploy it to a testing server.
It can also handle all kinds of weird and wonderful things, like zipping a nightly build and uploading it to an FTP server, or creating ISO images automatically.
Definitely look into MSBuild if you're on the Microsoft stack.
Joel is always going on and on about how great FinalBuilder is, so that might be worth a look as well.
We just migrated from a hand-rolled set of Perl scripts to a Buildbot setup. I found it because it's what Google uses for Chrome.
You can do nightlies, or it can integrate with source control to do an isolated test build whenever anybody does a checkin, or a variety of other things. It's also parallel; you can have more than one machine in the build farm, either for specialized duties or just to handle more load.
The entire system is written in Python, so it's platform-agnostic, which is important if you need to do builds on more than one platform. It can do anything you can do from the command line; we have it calling MSBuild for user-mode components, a DDK build for kernel-mode pieces, and running products for unit test builds.
Out of the box it supports most OSS source control tools, but if you're using TFS or something else you may need to modify the package that you install on the slave machines.
I think you are on the right track here.
Whoever looks after your automated build process needs to have a fundamental understanding of how your solution fits together. This doesn't necessarily mean knowing how to write code or architect solutions, but they will require a solid understanding of how the solution compiles, packages itself etc.
You might need to share responsibility for builds between people or teams to accomplish this. I'd say that a daily build is a "team responsibility".
I'd look at establishing a baseline build configuration which can be extended for "special use" builds (besides just building a release version), e.g. internationalized releases, fxCop/Quality Tools config, build + run Unit Tests, continuous integration builds, a build config to run on developer workstations, etc.
Beyond that, I'd aim to achieve the following:
Automatic versioning, signing etc
Ability to produce verbose output (logging) to help debug build breaks
On that point - it should handle errors properly, capture as much information and log it properly
Consistency - It should work the same way each time to produce repeatable outcomes
Run in a clean, limited access environment
Well commented/documented so that it can be understood by new staff, etc.
Option to generate release notes, compile metrics, produce reports (if this option is available)
Ability to deploy to multiple environments
Support different ways to obtain source code from source control, e.g. by changeset, label, date, etc
As for tool recommendations, I've used FinalBuilder, Visual Build Pro, MSBuild/Team Build, nAnt, CruiseControl and CIFactory plus and good old fashioned batch files.
Each has its pros and cons. I'm not going to make a recommendation, except to say that the products with decent UI support were a little easier to work with, but at times far less powerful. If you're working with Visual Studio, MSBuild is very powerful, but it has a somewhat steep learning curve.
As for the tools delivered with Microsoft Visual Studio, you might want to use MSBuild. The additional community toolsets for MSBuild will even give you the ability to check out code from Subversion and zip the output.
We're using it successfully in our company. Our product consists of several solutions with 100+ subprojects. Works like a charm.
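For reference, the command the build machine ends up running is a short one (solution name and options are hypothetical):

msbuild BigProduct.sln /t:Rebuild /p:Configuration=Release

/t selects the target and /p passes properties such as the build configuration; everything else lives in the project files themselves.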
Visual Build Pro is nice if your build machines are Windows. I think it would fulfill the requirement you have about QA owning the system. But don't get me wrong: it's pretty powerful.
We use CruiseControl.NET and UppercuT (which uses NAnt) to do this. UppercuT uses conventions for building so it makes it really easy for someone to get started by answering three questions (What is the solution named? What is the path to source control? What is your company's name?) and you are building.
http://code.google.com/p/uppercut/
Some good explanations here: UppercuT
We use the Hudson build server for big Java web-app builds driven by Ant build scripts. Hudson is pretty sweet for our purposes. It has a master/slave setup, so builds can be done concurrently (on a timer or on demand). Slave nodes can be any OS/hardware combo, provided the needed build tools are already on them and they're on the network (and won't crash every 10 minutes).
There is a full web-based interface, including live console output, change logs, and artifacts from the build, available across the network, including previous builds (if successful). Awesomesauce!

What do people think of the fossil DVCS? [closed]

fossil http://www.fossil-scm.org
I found this recently and have started using it for my home projects. I want to hear what other people think of this VCS.
What is missing in my mind, is IDE support. Hopefully it will come, but I use the command line just fine.
My favorite things about Fossil: a single executable with a built-in web server, wiki, and bug tracking. The repository is just one SQLite (http://www.sqlite.org) database file, which is easy to back up. I also like that I can run Fossil from, and keep the repository on, my thumb drive. This means my software development has become completely portable.
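A sketch of that portable workflow (paths are hypothetical):

fossil new /media/usb/project.fossil
mkdir ~/project && cd ~/project
fossil open /media/usb/project.fossil
fossil add main.c
fossil commit -m "First check-in"
fossil ui

The last command starts the built-in web server and opens the wiki, tickets, and timeline for the repository in your browser.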
Tell me what you think....
Mr. Millikin, if you take a few moments to review some of the Fossil documentation, I think your objections are addressed there. Storing a repository in an SQLite database is arguably safer than any other approach. See link text for some of the advantages of using a transactional database to store a repository. As for bloat: the entire thing is a single self-contained executable, which seems to disprove that concern.
Full disclosure: I am the author of fossil.
Note that I wrote Fossil because no other DVCS met my needs. On the other hand, my needs are not your needs, so only you can judge whether or not Fossil is right for you. But I do encourage you to at least have a look at the documentation and try to understand the problem Fossil is trying to solve before you dismiss it.
After having used Fossil for more than a year now on non-trivial development projects, I feel confident enough to weigh in on this topic.
Below is my experience so far. I compare against git and svn at times, simply because I know those SCMs very well, and comparing makes it easier for me to get the idea across.
I'm totally in love with this SCM, so it's mostly points on the plus side.
What I like about Fossil:
We have a bunch of machines (Windows, Mac, and a number of Linux distros), and the single-executable installation is just as beautiful as it sounds. No dependencies; it just works. Git is a messy pile of files, and the dependency hell of Subversion makes it very nasty on some Linux distributions, especially if you must build it yourself.
The default Fossil workflow suits our projects perfectly, and more git'ish workflows are possible when needed.
We've found it extremely robust, even on large projects. I wouldn't expect anything else from the guys who wrote SQLite. No crashes, no corruption, no funny business.
I'm actually very, very happy with performance. Not as fast as git on huge trees, but not much slower either. I make up any lost time by not having to consult the documentation every other command, as is the case with git.
The fact that there's a tried-and-true transactional database behind every operation makes me sleep better at night. Yes, we've been through more than one horrible incident of stale and corrupt Subversion repositories (thankfully, a helpful community helped us fix them). I can't imagine that happening in Fossil. Even Subversion 1.7.x uses SQLite for its metadata storage now. (Try turning off the power in the midst of a git commit - it'll leave a corrupt repo!)
The integrated issue tracker and wiki are optional, obviously, but very handy as it's always there - no installation required. I wish the issue tracker had some more features though, but hey - it's an SCM.
The built-in server and web GUI are simply brilliant and quite configurable through CSS.
We sometimes need to import to and from git and subversion repositories. This is a no-brainer in Fossil (a sketch follows this list).
Single file repository. No '.svn' directories all over the place.
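As a sketch of the interchange (repository names are hypothetical; both directions go through git's fast-import stream format):

git fast-export --all | fossil import --git imported.fossil
mkdir new-git && cd new-git && git init
fossil export --git /path/to/repo.fossil | git fast-import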
What I miss in / dislike about Fossil:
Someone please write TortoiseFossil for our non-technical Windows users :)
The community isn't that large yet, so it's probably hard for a lot of people to introduce it in their company. Hopefully this will change, gaining all the benefits of a large community (documentation, more testing of new releases, etc.)
I wish the local web ui had a search feature (including searching for file content).
Fewer merge options than in git (though the Fossil workflow makes merging less likely to occur in the first place.)
I hope everyone gives Fossil a run - the world is a better place with stuff that just works and which you don't need to be a rocket scientist to use.
Fossil is small and simple, yet powerful and robust; it reminds me of some principles of the C culture. It will be liked by those who develop independently and still collaborate.
Any great project should start with principles and keep them at its core as it gathers more layers (GUI, extra features).
I am impressed with Fossil and am starting to use it... take a look at Fossil.
Cheers
I'm landing on this page a year after the last post; the recursive add mentioned here has since been taken care of.
Fossil mesmerizes me with its simplicity, especially after I struggled to get a bug-tracking system to work with Mercurial. I still need to see how to manage multiple projects, publish repositories for multi-user access, do merging, manage patches, and so on, but I get the feeling that it won't be disappointing going forward.
I'm not interested in using it for source-code version control, but I am interested in a distributed version-controlled personal wiki that I can sync between all the machines I use.
Damian,
1/ Yes, fossil doesn't support recursive add. However, there are some fairly simple workarounds, such as
for /r %i in (*.*) do fossil add "%i"
on Windows (inside a .bat file, write %%i instead of %i), and
find . -type f -print0 | xargs -0 fossil add --
on Unix.
2/ I saw the message about a malformed manifest when you added a file with non-ASCII characters in its filename. The problem has been corrected in the latest build.
Regards,
Petr
I think fossil is really cool. The most important features for me were easy installation and developer-friendly defaults. I currently use it to keep track of local changes to my files. (Our project is hosted on SourceForge and tracked in CVS.) This way I can "commit" locally even if it would otherwise break the project, so smaller changes can be tracked as well.
Fossil is good. It is simple and easy to use. If fossil provided a GUI interface for check-in and check-out, it would be even better (preferably a Java GUI, for cross-platform support).
The main advantages of Fossil are that it is open source and uses an SQLite database, so somebody could compile the fossil source code to make it work on the Google Android platform (mobile and tablet devices).
I am trying your VCS right now.
I like the idea of having everything integrated; after all, that's all I want when I look for a system like this. I am an active user of Mercurial, and I couldn't find an integrated issue tracker (I tried unsuccessfully to set up Trac with Mercurial in the past).
After some tests I realized that:
1) the "add" command is not recursive, or I could not find a way to make it so in the docs;
2) I wrote a .bat (I work on Windows) to add 750 files and ran it (it took a while). When I ran commit, it failed with "manifest malformed".
I think you could address these issues and others by running a survey like Mercurial's at https://www.mercurial-scm.org/wiki/UserSurvey.
You can write to me at dnoseda at gmail.
I am interested in your work. Keep improving it.
Regards
P.S.: As a major improvement, you could add something like gitstat.
Perhaps an uneducated knee-jerk reaction, but the idea of storing a repository in a binary blob like an SQLite database terrifies me. I'm also dubious of the benefits of including wikis and bug trackers directly in the VCS -- either they're under-featured compared to full software like Trac, or the VCS is massively bloated compared to Subversion or Bazaar.
