I am currently working on building a simple online game as practice and to be able to play with some friends. I am interested in the ease of use of the Galaxy system provided by Meteor, but do not want to constantly pay the hourly price, totaling around $30 per month.
They mention (on this site) that you can stop your app, which stops billing, but I have yet to find much more about starting and stopping.
Is there any back-end work that needs to be done each time an app is stopped and then started? What is the time delay for starting/stopping? Is there a maximum number of times an app can be started and stopped per month?
If there is a site that answers all this that I missed in my research, I apologize. I've tried looking everywhere I can.
I have apps hosted on Galaxy, and no, I haven't seen anything cumbersome about taking an app offline and bringing it back online. It's simply unavailable while offline, and then just starts up again when brought back online.
Note that, during this time, I didn't do anything with the database (which is hosted on MongoDB Atlas) until I permanently took an app offline, so the DB portion may be a consideration for you.
So far I've only used Jupyter on my local machine, which is way too slow. I'm completely new to using cloud services for Jupyter, or using cloud services at all for that matter. I know there are a million tutorials out there, but this is my problem: how do I choose the right service from all those options (Amazon? Google? Cheaper options?), and what's the 'right way' to get started?
What I need:
I want a service where I can start up a Jupyter notebook in my browser as simply as possible. (I know next to nothing about setting up servers etc., and have very limited time to learn that if needed)
I currently have an old MacBook from 2014. The server should be at least 10x faster. (Which options do I need to pick?)
I want to do machine learning, so GPUs would be good.
My budget is about $50 per month, less would be great; a free tryout would be great too.
As I am completely new, I also need to know what pitfalls to look out for. (E.g., do I need to stop the machine to stop running up costs?)
If you could help me, or point me to a good tutorial or even a book, I'd be forever grateful.
(Sorry for the basic question. Of course I googled tutorials myself before posting this question, but as indicated above, I'm overwhelmed by the options - that's why I posted this question.)
AWS-based tutorial:
https://aws.amazon.com/de/getting-started/hands-on/get-started-dlami/
GPU, CPU, and pricing information is gathered here:
https://docs.aws.amazon.com/dlami/latest/devguide/pricing.html
You can set up a budget for cost limitation:
https://aws.amazon.com/de/getting-started/hands-on/control-your-costs-free-tier-budgets/
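Once an instance from that tutorial is running, getting to the notebook is basically one SSH port-forward away. Roughly, and assuming an Ubuntu-based DLAMI (the key file, user name and host below are placeholders; the user may be ec2-user instead of ubuntu depending on the base image):

    # on your Mac: forward the notebook port from the instance to localhost
    ssh -i my-key.pem -L 8888:localhost:8888 ubuntu@ec2-xx-xx-xx-xx.compute-1.amazonaws.com

    # on the instance: start Jupyter, then open http://localhost:8888 in your local browser
    jupyter notebook

The port forwarding means the notebook never has to be exposed publicly, which avoids one of the common security pitfalls.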
I consult for folks on a WordPress site that is used for an event held once each year. The site runs fine throughout the year, with cache plugins in place, etc. The site is hosted on HostGator.
However, on the day of the event, usage is off the charts and the site becomes unusable.
Last year I used CloudFlare, which helped. Any other suggestions?
There is only so much software can do to improve performance. This sounds like the server your site is running on is not able to handle all the requests. I suggest you upgrade your server/VPS only for the period when you expect that huge load, instead of looking into software solutions.
Especially since you only expect this to happen once a year, I think it is financially more appealing to increase server capacity for a short period of time, which may cost some money, but will eliminate the problem you're facing.
Trying to resolve this with software alone, you might never match the results of upgrading your server, and it can take a lot of time and research to streamline everything for the best performance.
I have been using Visual Studio Team Services (formerly Visual Studio Online) for version control. As a backup, I keep my project files on OneDrive and have mapped the root workspaces there. Soon after the launch of RC 1 and its support on Azure, we started migrating all our projects to ASP.NET 5, and things have been really great. I really like the GruntJS task runner for client-side development, but the issue with it is that the node modules create a highly nested folder structure, which causes the OneDrive sync to fail.
The client-side development happens in TypeScript, and Grunt is used just for bundling and minifying the compiled JavaScript files, since the Web Optimization framework is not the recommended approach in ASP.NET 5 and I could not find any port of it for .NET Core.
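For context, the Gruntfile is essentially just a concat-and-uglify pipeline along these lines (plugin names and paths here are illustrative, not our exact setup):

    // Gruntfile.js - minimal bundle + minify pipeline
    module.exports = function (grunt) {
      grunt.initConfig({
        concat: {
          dist: {
            src: ['wwwroot/js/compiled/**/*.js'],   // compiled TypeScript output
            dest: 'wwwroot/js/app.bundle.js'
          }
        },
        uglify: {
          dist: {
            files: { 'wwwroot/js/app.bundle.min.js': ['wwwroot/js/app.bundle.js'] }
          }
        }
      });

      grunt.loadNpmTasks('grunt-contrib-concat');
      grunt.loadNpmTasks('grunt-contrib-uglify');
      grunt.registerTask('default', ['concat', 'uglify']);
    };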
While searching for solutions, I stumbled upon a link somewhere which said that OneDrive for Business does not have this limitation. We have an Office 365 subscription for the organization and tried syncing the projects there, but that failed, as even OneDrive for Business has the same limitation.
There is a UserVoice suggestion for this, where a Microsoft representative says they are thinking about it, but until that gets implemented, I wanted to ask for alternatives to GruntJS for ASP.NET 5.
Since most of the applications we build are single-page enterprise apps, where the user opens the app at the start of the workday and closes the tab at the end of the day, rarely refreshing it, we can live without optimizations for now. But just out of curiosity, for consumer-oriented applications, are there any alternatives to GruntJS for ASP.NET 5?
EDIT
Points to be noted
TFS check-in happens only when code needs to be committed, but OneDrive sync happens continuously in the background automatically. In the event of a hardware crash or a device getting lost (or any reason in the world stopping us from accessing the code), TFS won't be able to provide the code under development which was not committed. OneDrive saves us from those scenarios.
Check-in happens mostly when a significant portion of an assigned backlog task is completed. In 50% of cases that's 2-6 working hours, but in the other half it can be around 2-3 business days; the worst-case scenario is 5 days, if holidays happen in between.
OneDrive allows us to stop syncing selected folders from live to local, but there is no way we can stop the sync of local folders to live.
The possibility of hardware failure is assumed to be once a year, resulting in a loss of 5 working days, and losing that progress is not an acceptable risk. (Just the installation of Visual Studio Enterprise with all features takes around 5-8 hours on my development machine with a 4th-gen Intel i5 2.9 GHz and 16 GB of RAM; we have plans to upgrade our development machines with SSDs and 64 GB of RAM, but that's out of scope for this question.)
If we weigh the extra time spent on optimization tasks against the risk of lost files: optimization happens only when we push updates to the production server, which happens 1-3 times a month, and every round of optimization tasks takes no more than 10-15 minutes, so we can live with manually optimizing the files before production, or not optimizing at all.
Files like .gitignore and .csproj control which files sync to TFS, not OneDrive. My problem is not at all with TFS; we are perfectly happy with the way check-ins are done and managed. Once code is committed, all the worries are resolved automatically; my worry is just about the uncommitted code.
Summary
I agree the problem is highly specific in nature, but it might be of help to future readers either looking for similar solutions or just wanting to gain some more knowledge.
Two things:
Team Foundation Server / Visual Studio Online IS your source control solution; why would you need/want to back it up elsewhere?
I would probably exclude your node modules from your sync anyway. Your project configuration files will contain a list of the node modules you need, and if you wanted to move the app to a new machine or folder you'd just run npm install again to pull down the packages.
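For example, a package.json along these lines (names and versions are purely illustrative) is the only thing that needs to live in the synced folder; node_modules itself can always be recreated by running npm install:

    {
      "name": "my-web-app",
      "version": "1.0.0",
      "private": true,
      "devDependencies": {
        "grunt": "^0.4.5",
        "grunt-contrib-concat": "^0.5.1",
        "grunt-contrib-uglify": "^0.9.1"
      }
    }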
EDIT:
TFS check-in happens only when code needs to be committed, but OneDrive sync happens continuously in the background automatically. In the event of a hardware crash or a device getting lost (or any reason in the world stopping us from accessing the code), TFS won't be able to provide the code under development which was not committed. OneDrive saves us from those scenarios.
This indicates that your check-ins are likely too large or not frequent enough.
How often do you check in? How often does your hardware fail to the extent where you lose all files? Is this an acceptable risk?
OneDrive allows us to stop syncing selected folders from live to local, but there is no way we can stop the sync of local folders to live.
And this is why you should be using TFS/VSO as the mechanism to control what files are stored in your source control repository. Systems like .gitignore or .csproj files exist for exactly this reason.
EDIT2:
I'm trying not to be too harsh, and I agree this is a problem which could probably be solved in some specific way by OneDrive, but I'm trying to perform the SO equivalent of the 5 Whys.
You've said:
TFS won't be able to provide the code under development which was not committed. OneDrive saves us from those scenarios
and
Check-in happens mostly when a significant portion of an assigned backlog task is completed. In 50% of cases that's 2-6 working hours, but in the other half it can be around 2-3 business days; the worst-case scenario is 5 days, if holidays happen in between.
OK, so let's talk about that. You want to prevent loss of code if hardware fails; sure, we all do. Looking at your figures, 2-6 working hours seems reasonable, so you're looking at losing 0.25 to 1 day (ballpark). I think the bigger issue is if people aren't checking in for 5 days. You've mentioned 5 days could be the worst case, and only if holidays happen, so that's not 5 working days; it's 2-3 business days worst case.
You've then said:
The possibility of hardware failure is assumed to be once a year, resulting in a loss of 5 working days
So, above you've said 2-6 hours for check-ins (with exceptions), but you've based your decision to come up with a solution for this problem on an assumption of 5 working days. Above you've said 5 days (not working days) only if holidays occur. So really, your figures for risk are exaggerated.
Let's be more realistic and say you have a hardware failure once a year, and you lose 4 hours worth of work.
From OneDrive, how long would it take you to restore this work? Let's guess at, say, a few minutes to connect and download it, and perhaps half an hour to an hour to sort out conflicts (#optimism). So let's say a net saving of 3 hours... once a year... do you think this is reason enough to back up all code to an offsite cloud provider?
You talk about the time taken to reinstall VS, but that's not relevant to backing files up to OneDrive; you'd need to do that regardless.
To be honest, I think your fear of losing uncommitted code is blown out of proportion, and I think your reaction is too. TFS and VSO offer features like shelvesets, which can be used for just this kind of situation.
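A shelveset is basically a one-liner at the end of the day if you're worried about a machine dying overnight; something along these lines (the shelveset name and comment are just placeholders):

    tf shelve EndOfDay-JSmith /comment:"Uncommitted end-of-day work"

The pending changes end up stored server-side in TFS/VSO without becoming part of the main history, which is exactly the "uncommitted code" scenario you're worried about.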
As an enterprise developer too, I do understand the concern, but normally you'd cover this by working on a machine with a RAID array or similar.
I think you should reassess your check-in policy and your assessment of the potential for lost code.
All of my websites are hosted in IIS and configured with one application pool. This application pool has 10 websites running in it.
It was working fine until today, but all of a sudden I am observing that CPU usage is spiking up and down. I am unable to trace the problem.
Is there any way to check which website in the application pool is taking the most load?
Performance counters, task manager and native code analysis tools only tell part of the story. To gain a deeper understanding of what is happening inside your ASP.NET application you need to use WinDBG, SOS and ADPlus.
Tess Ferrandez has a great series of articles on tracking down what is to blame here:
.NET Debugging Demos Lab 4: High CPU hang
.NET Debugging Demos Lab 4: High CPU Hang - Review
This is a real world example:
High CPU in .NET app using a static Generic.Dictionary
You will probably want to separate your sites into individual application pools so you can identify and isolate the site that is causing the high CPU (but it already looks like you have a suspect, so I'd isolate that one). From there you can follow Tess's advice and guidance to track down the cause.
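If you do go down the WinDBG/SOS route, the rough flow for a high-CPU hang looks something like this (a sketch only; the thread number is an example, and the SOS module to load depends on your CLR version):

    .loadby sos clr      $$ load SOS for .NET 4.x (use ".loadby sos mscorwks" on .NET 2.0/3.5)
    !runaway             $$ user-mode CPU time consumed per thread, busiest first
    ~7s                  $$ switch to the busiest thread (7 is just an example thread id)
    !clrstack            $$ dump the managed call stack of that thread

Tess's labs walk through this in much more detail, including how to capture the dump with ADPlus in the first place.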
You should also take a look at the logs to see if you're experiencing an unexpected spike or increase in traffic. Perhaps there's a badly behaved search engine indexer nailing the site. If that's the case, then maybe you need to (if you haven't already done so) create a robots.txt to prevent crawlers from indexing parts of the site that don't need to be indexed. On top of that, if certain crawlers are being overly promiscuous, then just ban them. Perhaps also consider a sitemap for Google to tame and tune its activities.
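A starting point for that robots.txt might look like the following (the bot name and paths are placeholders, and note that not every crawler honours Crawl-delay; Googlebot ignores it, for instance):

    User-agent: *
    Disallow: /admin/
    Crawl-delay: 10

    User-agent: SomeAggressiveBot
    Disallow: /

    Sitemap: https://example.com/sitemap.xml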
If your server has reached its max capacity, you will see CPU go up and down erratically because the GC will start trying to recover resources (cache, etc.), which in turn causes your sites to work even harder. It's an endless cycle.
Have you been monitoring your performance counters? Do you have any idea what normal capacity is for your site? If you cannot answer these questions, I suggest you gather some perf numbers as soon as possible.
My rule of thumb is to always measure first, then make necessary changes.
Most of the time performance bottlenecks aren't where you think they would be.
There is really no performance counter way to tell, because the CPU counters are at the process level. Your best bet would be to do a time correlation with other events in the event log and the .NET/ASP.NET counters for garbage collection, requests, etc.
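As a rough starting point for that correlation, these are the sorts of counters I'd capture alongside the event log (instance names will vary on your machine):

    \Process(w3wp)\% Processor Time
    \.NET CLR Memory(_Global_)\% Time in GC
    \ASP.NET Applications(__Total__)\Requests/Sec
    \ASP.NET\Requests Queued
    \ASP.NET\Request Execution Time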
If you really want to go hardcore, you could use the SysInternals toolset to take snapshots of your app pool over time and then do a post-analysis to figure out what code was executed when the spike happened. Here is a related example from Mark Russinovich's blog - http://blogs.technet.com/b/markrussinovich/archive/2008/04/07/3031251.aspx.
We have 4 different environments:
Staging
Dev
User Acceptance
Live
We use TFS, pull down the latest code and code away.
When they finish a feature, the developers individually upload their changes to Staging. If the site is stable (determined by really loose testing), we upload changes to Dev, then User Acceptance, and then Live.
We are not using builds/tags in our source control at all.
What should I tell management? They don't seem to think there is an issue as far as I can tell.
If it would be good for you, you could become the Continuous Integration champion of your company. You could do some research on a good process for CI with TFS, write up a proposed solution, evangelize it to your fellow developers and direct managers, revise it with their input and pitch it to management. Or you could just sit there and do nothing.
I've been in management for a long time. I always appreciate someone who identifies an issue and proposes a well thought-out solution.
Whose management? And how far removed are they from you?
I.e., if you are just a pleb developer and your managers are the senior developers, then find another job. If you are a senior developer and your managers are the CIO types, i.e. actually running the business... then it is your job to change it.
Tell them that if you were using a key feature of very expensive software they spent a lot of money on, it would be trivial to tell what code got pushed out when. That would mean that in the event of a subtle bug getting introduced that gets past user acceptance testing, it would be a matter of diffing the two versions to figure out what changed.
One of the most important benefits of using TAGS is that you can roll back to a specific point in time. Think of it as an image backup. If something bad gets deployed, you can safely assume you can "roll" back to a previous working version.
Also, developers can quickly grab a TAG (dev, prod or whatever) and deploy it to their development PC... a feature I use all the time to debug production problems.
So you need someone to tell the other developers that they must label their code every time a build is done and increment a version counter. Why can't you do that?
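Labelling doesn't have to be heavyweight, either; something along these lines from the command line is enough (the label name, project path and comment here are just placeholders):

    tf label "Release-2.3.0" $/MyTeamProject/Main /recursive /comment:"Build 2.3.0 promoted to UAT"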
You also need to tell management that you believe the level of testing done is not sufficient. This is not a unique problem for an organisation and they'll probably say they already know. No harm in mentioning it though rather than waiting for a major problem to arrive.
As far as individuals doing builds versus an automated build process goes, whether you really need the latter depends on how many developers there are and how often you do builds.
What is the problem? As you said, you can't tell if management sees the problem. Perhaps they don't! Tell them what you see as the current problem and what you would recommend to fix it. The problem has to be of the nature of "our current process has failed 3 out of 10 times, and implementing this new process would reduce those failures to 1 out of 10 times".
Management needs to see improvements in terms of: reduced costs, increased profits, reduced time, reduced use of resources. "Because it's widely used best practice" isn't going to be enough. Neither is "because it makes my job easier".
Management often isn't aware of a problem because everyone is too afraid to say anything or assumes they can't possibly fail to see the problem. But your world is a different world than theirs.
I see at least two big problems:
1) Developers uploading changes themselves. All changes should come from source control. Do you encounter times where someone made a change that went to production but never got into source control, and was then accidentally removed on the next deploy? How much time (money) was spent trying to figure out what went wrong there?
2) Lack of a clear promotion model. It seems like you guys are moving changes between environments rather than "builds". The key distinction is that if two changes work great in UAT because of how they interact, if only one change is promoted to production it could break there. Promoting consistent code - whether by labeling it or by just zipping up the whole web application and promoting the zip file - should cause fewer problems.
I work on the continuous integration and deployment solution, AnthillPro. How we address this with TFS is to retrieve the new code from TFS based on a date-time stamp (of when someone pressed the "Deliver to Stage" button).
This gives you most (all?) of the traceability you would have from using tags, without actually having to go around tagging things. The system just records the time stamp, and every push of the code through the testing environments is tied to a known snapshot of the code. We also have customers who lay down tags as part of the build process. As the first poster mentioned, CI is a good thing: less work, more traceability.
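For reference, pulling source back to what it looked like at a given moment is just a date-based versionspec on a get; roughly like this (the path and timestamp are placeholders):

    tf get $/MyTeamProject/Main /version:D2010-06-01T14:30 /recursive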
If you already have TFS, then you are almost there.
The place I'm at was using TFS for source control only. We have a similar setup with Dev/Stage/Prod. I took it upon myself to get a build server installed. Once that was done, I added the ability to auto-deploy to dev for one of my projects and told a couple of the other guys about it. Initially the reception was lukewarm.
Later I added TFS Deployer to the mix and have it set to auto-deploy a good dev build to stage.
During this time the main group of developers were constantly fighting the "Did you get latest before deploying to Stage or Production?" questions; my stuff was working without a hitch. Believe me, management and the other devs noticed.
Now (6 months into it), we have a written rule that you aren't even allowed to use the Publish command in Visual Studio. EVERYTHING goes through the CI build and deployments. When moving to prod, our production group pulls the appropriate copy off of the build server. I even trained our QA group on how to do web testing, and we're slowly integrating automated tests into the whole shebang.
The point of this ramble is that it took a while. But more importantly, it only happened because I was willing to just run with it and show results.
I suggest you do the same. Start using it, then show the benefits to get everyone else on board.