Artifactory has a built-in backup solution. The Artifactory documentation warns that the built-in backup is not optimized for drives larger than 1TB, and that you should contact support for the recommended configuration.
How do you optimize Artifactory backups for more than 1TB of data?
Currently I'm doing the standard daily and weekly backups. I'm finding that the weekly backups are taking 6+ hours.
JFrog recommends not using the built-in backup and instead taking database snapshots at the same time that you back up the entire $ARTIFACTORY_HOME location on disk with a classic backup tool (rsync, cp, DRBD).
A blog post covering how to do this can be found here:
https://www.jfrog.com/confluence/display/RTF/Managing+Backups
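As a rough illustration of that approach, here is a minimal sketch assuming a PostgreSQL-backed Artifactory with $ARTIFACTORY_HOME at /var/opt/jfrog/artifactory (the paths, database engine, and destination are placeholders to adapt to your installation):

```bash
#!/usr/bin/env bash
# Hypothetical nightly backup: snapshot the metadata database, then copy the home directory.
set -euo pipefail

ARTIFACTORY_HOME=/var/opt/jfrog/artifactory      # adjust to your install
BACKUP_DIR=/backups/artifactory/$(date +%F)
mkdir -p "$BACKUP_DIR"

# 1. Dump the metadata database (PostgreSQL assumed; use mysqldump for MySQL).
pg_dump -U artifactory -F c artifactory > "$BACKUP_DIR/artifactory-db.dump"

# 2. Copy the filestore and configuration incrementally. rsync only transfers
#    what changed, so runs after the first full copy are much faster than a
#    full export through the built-in backup.
rsync -a --delete "$ARTIFACTORY_HOME/data/filestore/" "$BACKUP_DIR/filestore/"
rsync -a "$ARTIFACTORY_HOME/etc/" "$BACKUP_DIR/etc/"
```

Because only changed files are copied on each run, this tends to bring multi-hour weekly backups down to minutes once the initial copy exists.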
I have been using JFrog Artifactory for a while in my company. Recently I learned about JFrog Bintray. What is the difference between Artifactory and Bintray? Is Bintray a replacement for Artifactory?
Thanks for the question, it's a good one!
The main difference between Artifactory and Bintray is the intended usage. Artifactory is a development-time tool, while Bintray is a release-time, distribution tool. It might look like a subtle difference, but it has a great impact on the feature set of the products:
For development, you need features like:
support for snapshots
CI server metadata integration (a.k.a. build-info)
promotion between repositories
on-prem install
development site replication
integration with enterprise security systems like SAML
etc.
For distribution, you need stuff like:
a global distribution network (CDN)
extreme throughput and redundancy for downloads
permission control for external users (entitlements)
product and EULA support
etc.
As you can see, those are quite different lists.
Of course, there are common requirements:
full REST API automation
CLI
plugins for popular CI servers and build tools
indexing as many binary package standards as possible
"Set Me Up" snippets for easy configuration
smart checksum-based binary storage
and of course there must be a simple way to roll out artifacts from the development-time tool to the distribution tool (a repository in Artifactory that is synced with Bintray)
and we have all that covered of course :)
I am with JFrog, the company behind Bintray and Artifactory; see my profile for details and links.
I've encountered several issues with Amazon EC2 and the Bitnami WordPress AMI (RedHat) on a small instance, and honestly I don't know who to ask :) I'm not a sysadmin/Linux expert, but I've learned basic SSH commands and other things required to get off to a basic start.
So here's what is happening:
The WordPress website is loading extremely slowly - the PageSpeed & YSlow score is 27 out of 100.
I think this is caused by memory_limit in php.ini. When I installed the Bitnami WordPress AMI, imported WP users, set the theme and did other basic things, I wasn't able to even access the WordPress website - just a blank page showed up. After trying a few solutions, I increased the php.ini memory_limit from 32M to 128M (max), and I increased the WP memory limit to 64M.
Website loaded properly and users were able to access it - but it's extremely slow.
When I try decreasing the php.ini memory limit to 64M, the website shows a blank page again.
The only thing I can think of currently is increasing the EC2 instance from .small to .large or similar. Please let me know your thoughts on this issue... and many thanks!
We had a similar problem with a PHP/MySQL application which we moved to an EC2 instance connecting to an RDS database instance. Pages were taking 10x longer to load than on our previous server, even though all the specs were the same (number of CPUs, RAM, clock speed) and the versions of PHP/Apache were identical.
We finally found the cause of the problem: the default query cache size for an RDS database is 0, which causes the database to run extremely slowly. We changed query_cache_size to 1000000000 (1G), as the RDS instance had 4G of RAM, and immediately the application performance was as good as on our previous (non-AWS) server.
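If you want to check whether you are hitting the same default, a quick sketch (the hostname, user, and parameter group name are hypothetical; on RDS the value is changed through a DB parameter group rather than with SET GLOBAL):

```bash
# Check the current setting from any MySQL client.
mysql -h mydb.example.rds.amazonaws.com -u admin -p \
      -e "SHOW VARIABLES LIKE 'query_cache_size';"

# On RDS, change it in the instance's DB parameter group (group name is made up).
aws rds modify-db-parameter-group \
    --db-parameter-group-name my-mysql-params \
    --parameters "ParameterName=query_cache_size,ParameterValue=1000000000,ApplyMethod=immediate"
```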
Secondly, we found that an EC2 server with MySQL installed locally on the server did not perform well on the Amazon Linux build. We tried the same thing on an EC2 instance running Ubuntu, and with a local MySQL database the performance was great.
For scalability reasons we obviously went with an RDS instance, but we found it interesting that moving the MySQL database onto the EC2 instance radically improved performance on an Ubuntu EC2 server but made no difference with the Amazon build of Linux.
Since you have not received an answer yet, allow me to summarize my comments into something that is hopefully useful:
Profile your application to understand where the time is being spent.
Some areas you can affect are:
PHP needs RAM, but so does your database (I know nothing about Bitnami, but WordPress uses a SQL database for storage).
Allocate enough RAM to PHP. It seems like that's somewhere between 64MB and 128MB.
If you are using MySQL, edit my.cnf. If you're using the default MySQL configuration file, the memory allocation parameters are dialed way too low; a sketch of the kind of settings involved follows this answer. If you post your my.cnf file, I can give suggestions (or if you're using a different database, state which one it is).
Consider striping multiple EBS volumes for your data partition.
Use an EBS backed instance if you are not already.
You can make a more informed decision about where to tune if you have profiling results in hand.
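As a rough illustration of the my.cnf changes mentioned above (the numbers are placeholders for a small instance, not recommendations, and on a Bitnami stack the file may live under /opt/bitnami/mysql/ rather than /etc/):

```bash
# Append example memory settings to MySQL's configuration, then restart it.
sudo tee -a /etc/my.cnf > /dev/null <<'EOF'
[mysqld]
key_buffer_size         = 64M    # MyISAM index cache
innodb_buffer_pool_size = 256M   # main InnoDB cache
query_cache_size        = 32M
table_open_cache        = 256
EOF
sudo service mysqld restart
```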
I would suggest using a cache tool. The first one you can try is APC (Alternative PHP Cache); it is easy to install on Red Hat: yum install php-pecl-apc. You can get much better results with a WordPress-specific cache plugin like W3 Total Cache or Super Cache. I use the latter, and it is easy to install in a WordPress application:
Install Super Cache from the WordPress admin panel
Change the .htaccess permissions: sudo chmod 666 /opt/bitnami/apps/wordpress/htdocs/.htaccess
Enable the plugin and follow the configuration steps. You can see how this plugin modifies the .htaccess file.
Configure the cache options according to your preferences and test it. You can do performance tests using a service like blitz.io.
Change the .htaccess permissions to 600 when everything is ok.
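Putting the shell steps above together (the WordPress path is the Bitnami one from this answer; the ctlscript.sh location is the usual Bitnami control script, and the APC package name is the RHEL/CentOS one):

```bash
# Install the APC PHP cache (RHEL/CentOS package name).
sudo yum install -y php-pecl-apc

# Let the Super Cache plugin write its rewrite rules, then lock the file down again.
sudo chmod 666 /opt/bitnami/apps/wordpress/htdocs/.htaccess
# ... enable and configure the plugin from the WordPress admin panel ...
sudo chmod 600 /opt/bitnami/apps/wordpress/htdocs/.htaccess

# Restart Apache so the new PHP extension is loaded.
sudo /opt/bitnami/ctlscript.sh restart apache
```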
I hope it helps.
We saw something similar. For us, the opportunity cost of our time fiddling with optimization settings was much higher than just going with a dedicated WordPress hosting provider.
The leaders in this space (dedicated WordPress hosting) appear to be WP Engine and a few others like Synthesis:
http://trends.builtwith.com/hosting/wordpress-hosting
I had my personal site on DreamHost, but they got worse and worse over the years, so I moved to Bluehost, which has been OK.
Overall, I think EC2 is great but it requires a lot of fiddling. Depending on the cost of your time and area of expertise, you might choose to switch to a more specialized provider.
I have no affiliation with any of these companies other than my personal experience as an individual shared hosting customer at both DreamHost and Bluehost.
I'm getting close to finishing a public-facing ASP.NET app and I'm starting to weigh deployment options. I'm an ASP.NET/SQL Server veteran but a noob when it comes to Azure. I'm wondering how others have felt about the learning curve involved in migrating a locally developed ASP.NET/SQL Server app into the Azure cloud.
More specifically:
How steep is the learning curve towards understanding administration and programming concepts, and do you think it's worth the investment?
What is Microsoft's support like if I have catastrophic problems with my cloud infrastructure and my live site is down? My expectation is a large price tag for a not-so-urgent SLA.
Will my non-Azure ASP.Net app require significant modification and/or coupling to run in the Azure environment?
Thanks
I answered a similar question a while back, here. Azure has evolved since then:
Azure's AppFabric Cache is currently in CTP (Community Technology Preview) and will go live some time later this year (sorry, I can't quote a date). With a single configuration change, you'll be able to enable the ASP.NET session state provider without changing any code, and have your session state available to all of your web role instances.
With Azure v1.3, which rolled out in November, you have the ability to run tasks at startup with elevated privileges (e.g. running an MSI to install some prerequisite control suite).
For monitoring, you can take advantage of Microsoft System Center, which now supports Azure directly. Alternatively, you can look into 3rd-party options such as AzureWatch.
With Azure's extra-small instance, you can run a site for approx. $44 monthly. You mentioned catastrophic failures and SLA. With Azure, you need a minimum of two instances for the SLA to take effect (this is because your virtual machines are located in physically different areas of the data center, in separate fault domains). So you're looking at approx. $90/month to run a site with 99.95% uptime. Only you can determine whether this is worth it to you.

Yes, you can host with a simple hosting provider for significantly less (such as GoDaddy). However, if your site fails there, you have to wait for it to be detected and then reinstalled on a separate box. Also, you share each server with potentially dozens of other tenants, which will impact your site's performance. With Azure, at most 8 tenants will occupy a box, depending on how many cores you configure your virtual machines to use. And it's incredibly simple to scale up or down to handle traffic increases and decreases.
My personal experience is that there isn't much documentation and you have to search through blogs/forums to find answers to more advanced questions. If you have a nicely designed app, then there shouldn't be much of a problem with porting - you can Google for Azure versions of the ASP.NET providers, e.g. membership.
The biggest disadvantage may be cost: you have to do the maths, but for me it turned out that VPS hosting is much cheaper than Azure.
I would say that unless you get considerable savings on infrastructure, don't move to Azure just for the sake of it. A hosted server with SQL and IIS will give you fewer problems and a bit more freedom.
I see an excellent answer by David Makogon already. The following might be helpful for you as well. The last episode of the Connected Show podcast was about migrating World Maps to Azure. If you are considering moving to Azure, it is certainly worth listening to, as they explain the challenges they faced during the migration.
You could take a look at Moving Applications to the Cloud on the Microsoft Windows Azure Platform on MSDN.
Cheers.
We own a small company and develop ASP.NET websites. Here is our work procedure:
We have a server at the company with SQL Server 2008 and IIS 7.5 installed on it. All our projects, including the database and website pages, are on the server. We connect to the server and edit the files over FTP, so any change to a web page can be seen at once. The programmers (fewer than 10) connect to the server using Visual Studio 2010.
Now we want to introduce a source control system into our work. The problem is that adopting an SCM requires changing our way of working.
Does anyone have any advice on setting up the working environment?
Thanks in advance.
You first need to decide what type of SCM you are going to use - centralized or distributed.
One centralized SCM is TFS - this is from Microsoft and integrates very well with Visual Studio. I believe there is an Express (basic) version that is free, but the other editions are quite expensive.
An easy and free centralized SCM to start with is Subversion - you can install the SVN server on your server and set up a client for each developer.
A distributed SCM does not require a central server - a popular one is Git.
Do read up on all of these before deciding. You will also have to figure out a good workflow for your team. Start with a small project so you can gain understanding and minimize the cost of mistakes.
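If you go the Subversion route, a minimal sketch of the setup (the repository path, server name, and project layout are only examples):

```bash
# On the server: create a repository and serve it with svnserve
# (or use Apache + mod_dav_svn / VisualSVN Server for HTTP access and authentication).
svnadmin create /srv/svn/projects
svnserve -d -r /srv/svn

# On each developer's machine: work on a local checkout instead of editing over FTP.
svn checkout svn://yourserver/projects/website
cd website
svn commit -m "Describe the change"
svn update
```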
So many ways to do this :)
One way is to use something like http://beanstalkapp.com/ to host your source code under SVN. Each developer then has a local copy of the code to work on, and a good history of changes is kept as developers commit their code (at least daily); these changes can be emailed around to the team if you want them to be. One member of the team is then tasked with uploading the latest SVN code to the testing server once it's tested and approved locally (probably at the end of each day).
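The daily "upload the latest code to the testing server" step can then be a single command run on that server once the code is approved (the paths and URL are illustrative; the same svn syntax works from a Windows command prompt):

```bash
# If the web root is itself a working copy, just update it:
svn update /inetpub/wwwroot/yoursite

# Or export a clean copy without .svn folders:
svn export --force svn://yourserver/projects/website /inetpub/wwwroot/yoursite
```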
If you use SVN, I'd recommend your developers install the VisualSVN toolbar (http://www.visualsvn.com/visualsvn/) in Visual Studio.
As an alternative to hosting your SVN repository with someone like Beanstalk, you could use the free VisualSVN Server (http://www.visualsvn.com/server/), which cuts out the need to upload the latest code to your testing server, as it'd be stored right there and updated on each SVN commit. But this adds overhead in terms of backups etc.
Let us know what road you go down in the end.
Here's the situation: at my small office, because we like to keep mobile and occasionally work from home, instead of having a central file server we have all the office documents in an SVN repository, and each person keeps a checkout on their own laptop. A checkout weighs in at about 3GB, and the repo, with revisions in it, at about 6GB. This is all working great.
The problem is that soon we won't have a small office any more - all our 5 workers will be working remotely. I had considered purchasing a dedicated server and running our SVN repository from that, except two of our workers will be really remote and will be using wireless "broadband" with a 3GB/month limit, and I'm afraid that a few large updates will really rip through their monthly allowance, not to mention taking all day to complete.
Reading a few questions on Stack Overflow, it seems there's quite a community of distributed VCS aficionados who think git or mercurial is definitely the best for many situations. Given that all the employees would still be able to meet face-to-face at least once a fortnight (and hence be on a fast LAN), I'm wondering if a DVCS would work for us?
I don't know exactly what's in your repo, but unless you're changing all the files regularly, a DVCS should provide you with a very desirable workflow.
You could do an SVN-to-git conversion, stick the repo on a DVD and mail it out to all the satellite offices, and then let them fetch from the office as things change, at a fairly low incremental cost (it should generally be no bigger than the delta).
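A rough sketch of that conversion and the "stick it on a DVD" step, with the server names and paths as placeholders:

```bash
# One-time conversion of the SVN history into a git repository.
git svn clone svn://yourserver/office-docs office-docs

# Pack the whole repository, history included, into a single file to burn or mail.
cd office-docs
git bundle create ../office-docs.bundle --all

# At the remote end: clone from the bundle, then point at the office server
# so future fetches only pull the new objects.
git clone office-docs.bundle office-docs
cd office-docs
git remote set-url origin ssh://office-server/srv/git/office-docs.git
git fetch origin
```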
Check out the Fossil DVCS; it may fit the bill. Fossil can be used like SVN or as a DVCS. If you are concerned about it handling your current repository, try it out. It also has a built-in project wiki and bug tracking system that are distributed with the repository. You could try it out and see if it would work for your small team.
The pain point for you would be losing your revision history; at this time I don't believe you can import an SVN repository into Fossil.
Join the mailing list and you will get answers to any of your questions. The creator of SQLite is also the creator of this project. Hope this helps.
I can't see why not. With something like git, the repository is local to the machine, and so your remote employees can actually have a tracked changelog that can then be merged or rebased with the main repository--whatever you decide that to be--when they get the chance.
Also, git has really good compression compared to SVN, so the 3GB/mo quota may be more than enough for your remote employees.
Randal Schwartz actually gave a really good presentation on git at Google's Tech Talks: http://www.youtube.com/watch?v=8dhZ9BXQgc4
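To make the bandwidth point concrete, the day-to-day traffic for a remote worker is just the compressed deltas, so a typical session looks something like this (assuming the origin remote is already configured):

```bash
git fetch origin          # usually kilobytes to a few megabytes, not gigabytes
git merge origin/master   # or: git rebase origin/master
git push origin master    # sends only your new commits

# Heavy operations (the initial clone, adding large binaries, repacking)
# can wait for the fortnightly meet-up on the fast LAN.
git gc                    # optional: repack locally to keep the repo compact
```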
(It seems no one is answering this.) A DVCS of course seems like it would work, but I have no experience with it. A centralized system like SVN might also work if you are not expecting large changes to go up and down from the server daily. The initial checkout in that case would be the only really expensive operation.
Can you monitor your use now and see how much traffic goes back and forth?
The real problem here is the 3GB/mo bandwidth limitation. It's probably just better to come up with a better solution for connectivity...