I have two projects on the F0 tier. As of this morning, neither of them will let me upload additional images:
150 training images uploaded; 0 remain
and
1162 training images uploaded; 0 remain
The documentation says the limit should be 5,000:
https://learn.microsoft.com/en-us/azure/cognitive-services/custom-vision-service/limits-and-quotas
There is a known issue with the F0 limit. We recently made a fairly large backend change and just deployed it, and that deployment caused this regression: the project limit setting for the F0 tier is set incorrectly.
We will deploy the fix as soon as possible.
We are running a regional news website (https://www.galwaydaily.com/) on an AWS EC2 instance (t3.medium).
The problem is that page load times have gone up and up over the past few months, and a few days ago the site stopped working altogether for a few hours. In the past we would just have scaled up the instance, but I'm not sure that is best practice.
Here is a screenshot of our CPU utilization for the past 2 weeks at 1-hour intervals:
I'd love some advice on how best to host and serve this site!
At a quick glance, your best option for the least amount of effort is to add a CDN. Your top 7 longest-loading assets are a couple of JS/CSS files and then some images, none of which seem large enough to be taking as long as they do. Use a tool like GTmetrix.com to see whether you are using the resources you have effectively before resizing your instance and/or database.
Other options include AWS features like ElastiCache for caching (I tend to use Redis rather than Memcached), Auto Scaling groups, and RDS.
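To make the caching idea concrete, here is a minimal Python sketch of the read-through caching pattern against a Redis (or ElastiCache) endpoint; the endpoint, key scheme, TTL, and render function are all placeholders, and the same idea applies regardless of the stack the site actually runs on.

```python
import redis  # assumes a reachable Redis/ElastiCache endpoint

r = redis.Redis(host="localhost", port=6379)  # placeholder endpoint

def render_page_from_database(path):
    # Stand-in for the expensive dynamic render (DB queries, templating, ...).
    return f"<html><body>Rendered {path}</body></html>".encode()

def get_rendered_page(path):
    """Serve a cached copy of an expensive page render when possible."""
    cache_key = f"page:{path}"        # hypothetical key scheme
    cached = r.get(cache_key)
    if cached is not None:
        return cached                 # cache hit: no database work needed
    html = render_page_from_database(path)
    r.setex(cache_key, 300, html)     # cache the result for 5 minutes
    return html
```

A CDN takes care of the static assets; this kind of cache takes care of repeated dynamic page or query work.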
I've set up NXRM 3.14 with a Ceph (S3-compatible) blobstore back-end. I've been testing it both on physical hardware and inside a Docker container.
It "works", but it is much, much slower than uploading directly to the bucket (a 2-second upload directly to the bucket can take 2 minutes through NXRM).
I haven't found any bug reports or complaints about this, so I'm guessing it's specific to Ceph and that performance may be fine with S3 proper. Uploads to the local filesystem are also very fast.
I've found nothing in the log files to indicate performance problems.
Sorry this question is extremely vague, but does anyone have recommendations for debugging NXRM performance, or is anyone else using a similar setup? Thanks.
I eventually tracked this down in the NXRM open-source code: the current MultipartUploader is single-threaded (https://github.com/sonatype/nexus-public/blob/master/plugins/nexus-blobstore-s3/src/main/java/org/sonatype/nexus/blobstore/s3/internal/MultipartUploader.java) and uploads chunks sequentially.
For files larger than 5 MB, this introduces a considerable slowdown in upload times.
I've submitted an improvement suggestion on their issue tracker: https://issues.sonatype.org/browse/NEXUS-19566
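NXRM itself is Java, so the class linked above is where the fix belongs; purely to illustrate the concept requested in that ticket, here is a minimal Python/boto3 sketch of a concurrent multipart upload (the bucket, key, and file names are placeholders, and for a Ceph RGW gateway you would also pass an endpoint_url):

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")  # add endpoint_url=... for a Ceph RGW gateway

# Parts above the threshold are uploaded by a thread pool instead of
# sequentially, which is what makes large uploads fast.
config = TransferConfig(
    multipart_threshold=5 * 1024 * 1024,  # switch to multipart above 5 MB
    multipart_chunksize=5 * 1024 * 1024,  # size of each part
    max_concurrency=8,                    # upload up to 8 parts in parallel
    use_threads=True,
)

s3.upload_file(
    "large-artifact.jar", "example-bucket", "repo/large-artifact.jar",
    Config=config,
)
```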
I am trying to test the feasibility of moving my website from GoDaddy to AWS.
I used a WordPress migration plugin, which seems to have moved the complete site, and at least superficially it appears to have been migrated properly.
However, when I try to access the site, it is extremely slow. Using developer tools, I can tell that some of the CSS and JPG image requests are effectively blocking the page load.
However, I cannot tell why this is the case. The site loads in under 3 seconds on GoDaddy, but it takes over a minute to load fully on AWS, and at least a few requests time out. The waterfall view in Chrome developer tools shows a lot of waiting on multiple requests, and I cannot figure out why these requests wait so long and then time out.
Any guidance is appreciated.
I have pointed the current instance to www. blind beliefs .com
I cannot seem to figure out whether it is an issue with the Bitnami WordPress AMI or whether I am doing something wrong. Maybe I should go the traditional route of spinning up an EC2 instance, running a web server on it, connecting it to a database, and then installing WordPress on the server myself. I just felt the available AMI took care of all that setup without me having to do it manually.
However, it is difficult to debug why certain assets are blocked, load extremely slowly, or time out without loading.
Thank you.
Some more details:
The domain is still at GoDaddy and I have not moved it to AWS yet; I am not sure whether that has an impact.
I still feel it has to do with the AMI though - I cannot prove it.
It sounds like you have a free memory problem. You did not go into detail about the instance size, whether MySQL is installed on the same instance, etc.
This article will show you how to determine memory usage on your instance. When free memory is low or you start using swap space, your machine will become very slow. Your goal should be 0 bytes of swap in use and at least 25% free memory during normal operation.
Other factors to check are CPU utilization percentage and free disk space on your file systems.
Linux Memory Check Commands
If you have a free memory problem, increase the instance size. If you have a CPU usage problem, change the instance size or switch to another instance type. If you have a free disk space problem, create a new instance with a larger EBS volume, or move your website and data to a new, correctly sized EBS volume.
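As a rough illustration of the checks above, here is a minimal Python sketch using psutil (the thresholds are the rules of thumb mentioned earlier, not hard limits):

```python
import psutil  # pip install psutil

mem = psutil.virtual_memory()
swap = psutil.swap_memory()
disk = psutil.disk_usage("/")
cpu = psutil.cpu_percent(interval=1)

print(f"free memory: {mem.available / mem.total:.0%} of {mem.total // 2**20} MiB")
print(f"swap used:   {swap.used // 2**20} MiB")
print(f"disk free:   {disk.free // 2**30} GiB on /")
print(f"cpu usage:   {cpu:.0f}%")

# Rules of thumb from above: 0 bytes of swap in use, at least 25% free memory.
if swap.used > 0 or mem.available / mem.total < 0.25:
    print("warning: memory pressure - consider a larger instance or moving MySQL off this host")
```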
I'm talking about the chart found in the console, in Hosting > Usage > Storage.
I know that the storage value is supposed to be the size used by Hosting, and that it covers all versions, which are not deleted by default.
My problem is that I have two non-continuous series on the chart: some days have 1 value, some days have 2 values, and some days have 0 values!
Here is a screenshot
Edit: Not sure if it's linked to the problem, but it could be useful information for debugging: the project is now on the Blaze plan, but Hosting storage was already over 1 GB when it was still on the Spark plan.
This is a bug in the Firebase Hosting console that is currently being investigated by the team. There should only be one line. We hope to address it in the near future.
I have been using Visual Studio Team Services (formerly Visual Studio Online) for version control. As a backup, I keep my project files on OneDrive and have mapped the root workspaces there. Soon after the launch of RC1 and its support on Azure, we started migrating all our projects to ASP.NET 5, and things have been really great; I really love the GruntJS task runner for client-side development. The issue is that the node modules create a deeply nested folder structure, which causes the OneDrive sync to fail.
The client-side development happens in TypeScript, and Grunt is used only for bundling and minifying the compiled JavaScript files, since Web Optimization is not the recommended approach in ASP.NET 5 and I could not find any port of it for .NET Core.
While searching for solutions, I stumbled upon a link somewhere which said that OneDrive for Business does not have this limitation. We have an Office 365 subscription for the organization and tried syncing the projects there, but that failed as well: OneDrive for Business has the same limitation.
There is a UserVoice suggestion for this, where the Microsoft representative says they are thinking about it, but until it is implemented I wanted to ask for alternatives to GruntJS for ASP.NET 5.
Most of the applications we build are single-page enterprise apps where the user opens the app at the start of the workday and closes the tab at the end of the day, rarely refreshing it, so we can live without optimizations for now. But just out of curiosity, for consumer-oriented applications, are there any alternatives to GruntJS for ASP.NET 5?
EDIT
Points to be noted
TFS check-in happens only when code needs to be committed, but OneDrive sync happens continuously and automatically in the background. In the event of a hardware crash or the device getting lost (or anything else stopping us from accessing the code), TFS cannot provide the code under development that was not committed; OneDrive saves us in those scenarios.
Check-in happens mostly when a significant portion of an assigned backlog task is completed. In half of the cases that is 2-6 working hours, but in the other half it can be around 2-3 business days; the worst case is 5 days if holidays fall in between.
OneDrive allows us to stop syncing selected cloud folders to the local machine, but there is no way to stop syncing local folders to the cloud.
Hardware failure is assumed to happen about once a year, resulting in a loss of 5 working days, and losing that progress is not an acceptable risk. (Just installing Visual Studio Enterprise with all features takes around 5-8 hours on my development machine with a 4th-gen Intel i5 at 2.9 GHz and 16 GB of RAM; we plan to upgrade our development machines with SSDs and 64 GB of RAM, but that is out of scope for this question.)
Weighing the extra time spent on optimization tasks against the risk of lost files: optimization happens only when we push updates to the production server, which is 1-3 times a month, and each round of optimization takes no more than 10-15 minutes, so we can live with manually optimizing the files before production, or not optimizing at all.
Files like .gitignore and .csproj control which files sync to TFS, not OneDrive. My problem is not with TFS at all; we are perfectly happy with the way check-ins are done and managed. Once code is committed, all the worries are resolved automatically; my worry is only about the uncommitted code.
Summary
I agree the problem is highly specific in nature, but it might help future readers who are either looking for similar solutions or just looking to gain some more knowledge.
Two things:
Team Foundation Server / Visual Studio Online IS your source control solution; why would you need or want to back it up elsewhere?
I would probably exclude your node modules from your sync anyway. Your project configuration files will contain a list of the node modules you need, and if you wanted to move the app to a new machine or folder you'd just run npm install again to pull down the packages.
EDIT:
TFS check-in happens only when code needs to be committed, but OneDrive sync happens continuously and automatically in the background. In the event of a hardware crash or the device getting lost (or anything else stopping us from accessing the code), TFS cannot provide the code under development that was not committed; OneDrive saves us in those scenarios.
This indicates that your check-ins are likely too large or not frequent enough.
How often do you check in? How often does your hardware fail to the extent where you lose all files? Is this an acceptable risk?
OneDrive allows us to stop syncing selected cloud folders to the local machine, but there is no way to stop syncing local folders to the cloud.
And this is why you should be using TFS/VSO as the mechanism to control what files are stored in your source control repository. Systems like .gitignore or .csproj files exist for exactly this reason.
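For example, a one-line ignore file at the workspace root (shown below; .tfignore for TFVC local workspaces, .gitignore for Git) keeps the restorable packages out of the repository entirely:

```
# .tfignore / .gitignore - npm can restore these, so don't version them
node_modules
```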
EDIT2:
I'm trying not to be too harsh, and I agree this is a problem which could probably be solved in some specific way by OneDrive, but I'm trying to perform an SO equivalent of the 5 Whys.
You've said:
TFS cannot provide the code under development that was not committed; OneDrive saves us in those scenarios.
and
Check-in happens mostly when a significant portion of an assigned backlog task is completed. In half of the cases that is 2-6 working hours, but in the other half it can be around 2-3 business days; the worst case is 5 days if holidays fall in between.
OK, so let's talk about that. You want to prevent loss of code if hardware fails; sure, we all do. Looking at your figures, 2-6 working hours seems reasonable, so you're looking at losing 0.25 to 1 day (ballpark). I think the bigger issue is if people aren't checking in for 5 days. You've mentioned that 5 days would be the worst case and only if holidays happen, so that's not 5 working days; it's 2-3 business days worst case.
You've then said:
Hardware failure is assumed to happen about once a year, resulting in a loss of 5 working days
So, above you've said 2-6 hours between check-ins (with exceptions), but you've based your decision to find a solution for this problem on an assumption of losing 5 working days. Above you've said 5 days (not working days) only if holidays occur. So your figures for the risk really are exaggerated.
Let's be more realistic and say you have a hardware failure once a year, and you lose 4 hours' worth of work.
From OneDrive, how long would it take you to restore this work? Let's guess a few minutes to connect and download it, and perhaps half an hour to an hour to sort out conflicts (#optimism). So let's say a net saving of about 3 hours, once a year. Do you think that is reason enough to back up all code to an offsite cloud provider?
You talk about the time taken to reinstall Visual Studio, but that's not relevant to backing files up to OneDrive; you'd need to do that regardless.
To be honest, I think your fear of losing uncommitted code is blown out of proportion, and I think your reaction is too. TFS and VSO offer features like shelvesets, which can be used for exactly this kind of situation.
As an enterprise developer too, I do understand the concern, but normally you'd cover this by working on a machine with a RAID array or similar.
I think you should reassess your check-in policy and your assessment of the potential for lost code.