Azure site won't scale down from Basic as it is incorrectly reporting size - asp.net

I have two ASP.NET Core apps, each deployed via GitHub integration directly to its own Azure website. One site has a custom domain and the other doesn't.
When initially configuring the integration, both sites failed with space-related warnings, so I scaled the sites to Basic (1 Small). I don't know why I needed to do this, as both apps are considerably smaller than the 1 GB limit I believe a Shared web app has (the two sites on my local HDD are 117 MB and 120 MB respectively).
As a result, I have two sites sharing the same service plan at £41 a month, rather than one site on Free and the other on Shared at £7 a month (as it needs a custom domain).
If I try to scale down the service plan I get the following error (redacted as expected):
{
  "authorization": null,
  "caller": null,
  "channels": null,
  "claims": {},
  "correlationId": null,
  "description": "Failed to update App Service plan defaultserviceplan: {\"Code\":\"Conflict\",\"Message\":\"Storage usage quota exceeded. Cannot update or delete a server farm.\",\"Target\":null,\"Details\":[{\"Message\":\"Storage usage quota exceeded. Cannot update or delete a server farm.\"},{\"Code\":\"Conflict\"},{\"ErrorEntity\":{\"ExtendedCode\":\"11006\",\"MessageTemplate\":\"Storage usage quota exceeded. Cannot update or delete a server farm.\",\"Parameters\":[],\"InnerErrors\":[],\"Code\":\"Conflict\",\"Message\":\"Storage usage quota exceeded. Cannot update or delete a server farm.\"}}],\"Innererror\":null}",
  "eventDataId": null,
  "eventName": null,
  "eventSource": null,
  "category": null,
  "eventTimestamp": "Wed Jun 21 2017 11:01:25 GMT+0100 (GMT Summer Time)",
  "id": "Failed to update App Service plan_Wed Jun 21 2017 11:01:25 GMT+0100 (GMT Summer Time)",
  "level": "1",
  "operationId": null,
  "operationName": {
    "value": "Failed to update App Service plan",
    "localizedValue": "Failed to update App Service plan"
  },
  "resourceGroupName": null,
  "resourceProviderName": null,
  "resourceType": null,
  "resourceId": null,
  "status": {
    "value": "Error",
    "localizedValue": "Error"
  },
  "subStatus": null,
  "submissionTimestamp": null,
  "subscriptionId": null,
  "properties": {
    "correlationIds": "REDACTED"
  },
  "relatedEvents": []
}
How can I diagnose what is taking up the space, or report this issue?

The first thing to look at is your file-system usage. You can see what App Service thinks you are using by going to the App Service plan and clicking on "File System" in the left-hand menu.
This will give you an aggregated view of how much space is being used by all the apps in the App Service plan.
If this value is > 1 GiB then you won't be able to scale down to Shared (I suspect this is what's causing your issue).
The next step is to look at the storage used by each of the apps in your App Service plan.
In the Web App UX you should be able to go to "Quota" and see what each app in the App Service plan is using.
If you find an app using more space than you think it should be, here are a few things to look at:
Logs: if you are logging to the app's filesystem, this can use up space quickly depending on the verbosity level.
MySQL in-app: if you have enabled this feature, the database is stored as a file on disk and will use up space as well.
Site extensions installed on your app.
You should be able to use Kudu and its debug console to get a good idea of what is using space; see the sketch below.
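For example, here is a rough sketch in Python (the app name and deployment password are placeholders taken from your publish profile, and it assumes the default Windows D:\home layout) that asks Kudu's command API how many bytes an app is using:

import requests

# Placeholders: "<app>" is the site name; Kudu's basic-auth user is "$<app>"
# and the password comes from the app's publish profile.
KUDU = "https://<app>.scm.azurewebsites.net/api/command"
AUTH = ("$<app>", "<deployment-password>")

# Total the size of every file under D:\home, the storage that counts
# against the App Service plan's quota.
command = ('powershell -Command "(Get-ChildItem D:\\home -Recurse -File '
           '| Measure-Object Length -Sum).Sum"')

resp = requests.post(KUDU, auth=AUTH, json={"command": command, "dir": "D:\\home"})
resp.raise_for_status()
print(resp.json()["Output"])  # bytes used; compare against the 1 GB Shared cap

Run the same thing against each app in the plan to find the one blowing the quota.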

Related

Unable to get display names (sAMAccountName) of groups from Graph API call

I have a working Azure app that gives me the group names when I call
https://graph.microsoft.com/v1.0/me/transitiveMemberOf/microsoft.graph.group
However, I have tried to recreate the app several times, and checked all settings in App Registrations and Enterprise Applications to match the original app - but can never get the group names in the new apps (created in the last 24 hours, if that is relevant).
API Permissions:
Group.Read.All
GroupMember.Read.All
User.Read
The app is created using these steps:
App registrations, add, Single tenant
Quickstart, Mobile and desktop applications, Desktop, Make this change for me
Token configuration, Add groups claim, Security groups, set all to sAMAccountName
API Permissions, add Group.Read.All and GroupMember.Read.All
Permission granted using “Grant admin consent for Default Directory”
There must be another setting somewhere that I am missing, so I thought I would post here to uncover it; it might help someone else with the same problem too.
FYI, here is a fragment of the group result that I get:
"#odata.id": "https://graph.microsoft.com/v2/5ed71832-327b-4b98-b68a-6c54ff1717c0/directoryObjects/2f95e1d3-c7cf-4796-92a2-df844feb52d0/Microsoft.DirectoryServices.Group",
"id": "12345678-c7cf-4796-92a2-df844feb5eee",
"deletedDateTime": null,
"classification": null,
"createdDateTime": null,
"creationOptions": [],
"description": null,
"displayName": null, <<<<<<<<<< why is this null???
When an application queries a relationship that returns a directoryObject type collection, if it does not have permission to read a certain derived type, members of that type are returned but with limited information. This could potentially be a reason for you seeing a 'null' value.
Also, for the transitive memberOf endpoint, I suggest you use directory-level permissions.
Refer to the documentation here: https://learn.microsoft.com/en-us/graph/api/user-list-transitivememberof?view=graph-rest-1.0&tabs=http
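As a quick check, here is a minimal sketch in Python (the token is a placeholder you would acquire via MSAL with the permissions consented as above) that calls the same endpoint and shows which groups come back with nulled properties:

import requests

token = "<access-token>"  # placeholder; acquire via MSAL for the signed-in user
url = "https://graph.microsoft.com/v1.0/me/transitiveMemberOf/microsoft.graph.group"

resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
resp.raise_for_status()

for group in resp.json()["value"]:
    # With insufficient directory permissions, Graph still returns the member,
    # but with most properties (including displayName) set to null.
    print(group["id"], group.get("displayName"))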
Hope this helps. Thanks!

Unable to create knowledge base for Azure Cognitive Service (Error: "No Endpoint keys found.")

I am creating a new knowledge base, connecting it to an already existing Azure Cognitive Service, but I am getting the error "No Endpoint keys found." when I click "Create KB".
See the capture of the error:
[screenshot: "No Endpoint keys found."]
My QnAMaker cognitive service does have the endpoint.
It seems that sometimes the endpoint keys can only be found if the resource group holding all the resources for the QnA Maker service (like the App Service, Application Insights, the Search Service and the App Service plan) is hosted in the same region as the QnA Maker service itself.
Since the QnA Maker service can only be hosted in West US (as far as I know and was able to find: https://westus.dev.cognitive.microsoft.com/docs/services?page=2), the current workaround for this case is to create a new QnA Maker service with the resource group hosted in the West US region. Then the creation of a knowledge base should work as always.
PS: it seems this issue was already reported, but the problem still occurs for me from time to time (https://github.com/OfficeDev/microsoft-teams-faqplusplus-app/issues/71)
My resources and resource group were all in West US but I still got the same "No Endpoint keys found." error.
Eventually I figured out that the issue was related to the pricing tiers. Make sure they are the same for all your created resources.
If you are using the deploy.ps1 script in the Virtual Assistant VS template, open the file at .\Deployment\Resources\template.json
That is a template for the resource creation. You can look through it to see exactly which resources will be created and what parameters are sent to Azure for each of the resources.
I am using a My Visual Studio subscription, so it is registered as a free tier in Azure. What worked for me was updating all the "Standard" tiers to Free in the parameters JSON array. I didn't change anything further down, for fear that it might interfere with the creation process too much.
An example is the appServicePlanSku parameter. It was set to
"appServicePlanSku": {
"type": "object",
"defaultValue": {
"tier": "Standard",
"name": "S1"
}
}
I updated it to
"appServicePlanSku": {
"type": "object",
"defaultValue": {
"tier": "Free",
"name": "F0"
}
}
I made several of these updates in the parameters array. After those changes, deleting the resource group for the 100th time, and running the deployment script again, it worked.

The requested app service plan cannot be created in the current resource group because it is hosting Linux apps

I'm provisioning an App Service, an App Service plan and a storage account to an existing resource group using an ARM template, on a nightly basis. Everything has worked for several months, but suddenly I started to see errors like this:
{
  "Code": "BadRequest",
  "Message": "The requested app service plan cannot be created in the current resource group because it is hosting Linux apps. Please choose a different resource group or create a new one.",
  "Target": null,
  "Details": [
    {
      "Message": "The requested app service plan cannot be created in the current resource group because it is hosting Linux apps. Please choose a different resource group or create a new one."
    },
    {
      "Code": "BadRequest"
    },
    {
      "ErrorEntity": {
        "ExtendedCode": "59314",
        "MessageTemplate": "The requested app service plan cannot be created in the current resource group because it is hosting Linux apps. Please choose a different resource group or create a new one.",
        "Parameters": [],
        "Code": "BadRequest",
        "Message": "The requested app service plan cannot be created in the current resource group because it is hosting Linux apps. Please choose a different resource group or create a new one."
      }
    }
  ],
  "Innererror": null
}
Error code: 1201
There are no changes to the ARM template.
I don't have permission to create new resource groups with this subscription; I only have owner rights on this existing resource group.
Historically, you can't mix Windows and Linux apps in the same resource group. However, all resource groups created on or after January 21, 2021 do support this scenario. For resource groups created before January 21, 2021, the ability to add mixed platform deployments will be rolled out across Azure regions (including National cloud regions) soon.
See: https://learn.microsoft.com/en-us/azure/app-service/containers/app-service-linux-intro#limitations
See also the feature request to support Linux and Windows App Service Plan within the same Resource Group:
https://feedback.azure.com/forums/169385-web-apps/suggestions/37287583-allow-a-linux-and-windows-app-service-plan-to-exis
The issue can be resolved by creating a new Linux App Service plan in the resource group and then deleting it. After that, Windows App Service plan provisioning works again.
SOLUTION THAT WORKED FOR ME:
It seems App Service plans (ASPs) for different OSes (Linux/Windows) cannot be used in the same resource group in the same region.
So what I did was:
Created a new resource group (optional, if you have one already)
Deleted all the ASPs in the group (if you are using an already created resource group)
Search for "App Service plans" and press Enter
Click Add
Specify the resource group > Select OS (Linux) > Select Region (East US) > Select SKU > Review + Create
Then again:
Search for "App Service plans" and press Enter
Click Add
Specify the resource group > Select OS (Windows) > Select Region (Central US) > Select SKU > Review + Create
Doing the above steps resolved my issue. Hope it helps others; a scripted version of the same idea is sketched below.
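For completeness, here is a rough equivalent using the azure-mgmt-web Python SDK (a sketch rather than a tested script; the subscription, resource group, plan names, SKUs and regions are placeholders):

from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient
from azure.mgmt.web.models import AppServicePlan, SkuDescription

client = WebSiteManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Linux plan in East US ("reserved=True" is how ARM marks a plan as Linux).
client.app_service_plans.begin_create_or_update(
    "my-resource-group", "linux-plan",
    AppServicePlan(location="eastus", reserved=True,
                   sku=SkuDescription(name="B1", tier="Basic"))).result()

# Windows plan in Central US, same resource group.
client.app_service_plans.begin_create_or_update(
    "my-resource-group", "windows-plan",
    AppServicePlan(location="centralus", reserved=False,
                   sku=SkuDescription(name="B1", tier="Basic"))).result()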
In my case, I deleted all the existing app services, solutions and placeholders in that resource group, and then it worked.

Cosmos DB SQL API vs MongoDB API: which one to use for my scenario?

I have a document called "chat"
"Chat": [
{
"User": {},
"Message": "i have a question",
"Time": "06:55 PM"
},
{
"User": {},
"Message": "will you be able to ",
"Time": "06:25 PM"
},
{
"User": {},
"Message": "ok i will do that",
"Time": "07:01 PM"
}
Every time a new chat message arrives, I should be able to simply append to this array.
The MongoDB API aggregation pipeline (preview) allows me to use things like $push and $addToSet for that.
If I use the SQL API, I will have to pull the entire document every time, modify it, and create a new document.
Other considerations:
This array can grow rapidly.
This "chat" document might also be nested inside other documents.
My question:
Does this mean that the MongoDB API is better suited for this, and that the SQL API will take a performance hit in this scenario?
It's hard to say which database is the best choice.
Yes, as found in the docs, the Cosmos Mongo API supports $push and $addToSet, which is more efficient; a sketch follows below. However, the Cosmos Mongo API only supports a subset of MongoDB's features and translates requests into the Cosmos SQL equivalent, so it may have some different behaviours and results. But the onus is on the Cosmos Mongo API to improve its emulation of MongoDB.
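For illustration, a $push append via the Mongo API with pymongo might look like this (a sketch; the connection string, database, collection and _id are placeholders):

from pymongo import MongoClient

client = MongoClient("<cosmos-mongo-connection-string>")
chats = client["chatdb"]["chats"]

new_message = {"User": {}, "Message": "ok i will do that", "Time": "07:01 PM"}

# $push appends server-side; only the new element crosses the wire.
chats.update_one({"_id": "chat-123"}, {"$push": {"Chat": new_message}})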
When it comes to the Cosmos SQL API, partial updates are not supported so far, although the feature is on the way; you could submit feedback here. Currently you need to update the entire document, as sketched below. Surely, you could use a stored procedure to do this job server-side to take the pressure off your client.
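By contrast, a sketch of the same append through the SQL API with the azure-cosmos package (the account, key, names and partition key are placeholders) has to round-trip the whole document:

from azure.cosmos import CosmosClient

client = CosmosClient("https://<account>.documents.azure.com:443/", "<key>")
container = client.get_database_client("chatdb").get_container_client("chats")

# Read the full document, append client-side, then write the full document back.
doc = container.read_item(item="chat-123", partition_key="chatdata")
doc["Chat"].append({"User": {}, "Message": "ok i will do that", "Time": "07:01 PM"})
container.replace_item(item=doc, body=doc)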
The last thing I want to mention, which is the most important, is the limitation pointed out by @David: a document is limited to 2 MB in the SQL API and 4 MB in the Mongo API (see: What is the size limit of a cosmosdb item?). Since your chat data keeps growing, you need to consider splitting it across documents, giving them a partition key such as "type": "chatdata" to classify them.

BAD_GATEWAY when connecting Google Cloud Endpoints to Cloud SQL

I am trying to connect from GCP Endpoints to a Cloud SQL (PostgreSQL) database in a different project. My Endpoints backend is an App Engine app in the flexible environment, using Python.
The Endpoints API works fine for non-DB requests, and for DB requests when run locally, but the deployed API produces this result when DB access is required:
{
  "code": 13,
  "message": "BAD_GATEWAY",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "application"
    }
  ]
}
I've followed this link (https://cloud.google.com/endpoints/docs/openapi/get-started-app-engine) to create the Endpoints project, and this one (https://cloud.google.com/appengine/docs/flexible/python/using-cloud-sql-postgres) to link to Cloud SQL from a different project.
The one difference is that I don't use the SQLALCHEMY_DATABASE_URI env variable to connect; instead I take the connection string from a config file and use it with psycopg2 directly, roughly as sketched below. This code works on Compute Engine servers in the same project.
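Roughly, the connection code looks like this (credentials are placeholders; the socket path matches the instance in my app.yaml below):

import psycopg2

# App Engine flex exposes the instance as a Unix socket under /cloudsql
# when beta_settings.cloud_sql_instances is set (see app.yaml below).
conn = psycopg2.connect(
    host="/cloudsql/cloudsql-project-id:us-east1:instance-id",
    dbname="postgres",
    user="postgres",
    password="<password>",
)
with conn.cursor() as cur:
    cur.execute("SELECT 1")  # sanity check that the socket is reachable
    print(cur.fetchone())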
I also double-checked that the service account of the Endpoints project was given Cloud SQL Editor access in the project with the PostgreSQL db. And the db connection string works fine if the App Engine app is in the same project as the Cloud SQL db (i.e. not coming from the Endpoints project).
I'm not sure what else to try. How can I get more details on the BAD_GATEWAY? That's all that's in the Endpoints logfile, and there's nothing in the Cloud SQL logfile.
Many thanks --
Dan
Here's my app.yaml:
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app

runtime_config:
  python_version: 3

env_variables:
  SQLALCHEMY_DATABASE_URI: >-
    postgresql+psycopg2://postgres:password@/postgres?host=/cloudsql/cloudsql-project-id:us-east1:instance-id

beta_settings:
  cloud_sql_instances: cloudsql-project-id:us-east1:instance-id

endpoints_api_service:
  name: api-project-id.appspot.com
  rollout_strategy: managed
And requirements.txt:
Flask==0.12.2
Flask-SQLAlchemy==2.3.2
flask-cors==3.0.3
gunicorn==19.7.1
six==1.11.0
pyyaml==3.12
requests==2.18.4
google-auth==1.4.1
google-auth-oauthlib==0.2.0
psycopg2==2.7.4
(This should be a comment, but the formatting would really worsen the reading, so I will update here.)
I am trying to reproduce your error, and I have some questions:
How are you handling the environment variables in the tutorials? Have you hard-coded them or are you using environment variables? They are reset with Cloud Shell (if you are using Cloud Shell).
This is not clear to me: do you see any kind of log file in Cloud SQL (without errors), or do you not see any logs at all?
The Cloud SQL, app.yaml and requirements.txt configurations are related. Could you provide more information on this? If you update the post, be careful not to include usernames, passwords or other sensitive information.
Are both projects in the same region/zone? Sometimes this is a requirement, but I don't see anything pointing to this in the documentation.
My intuition points to a credentials issue, but it would be useful if you added more information to the post to better understand where the issue comes from.
