Azure function runtime unreachable using app settings - .net-core

I have an Azure Function app (timer-triggered functions) that is giving the "Azure Functions runtime is unreachable" error. We are using appsettings.json in place of local.settings.json, and the variables are configured in a DevOps pipeline. I can see that the variables get updated in the Azure Function app files; however, the function is not executing, yet it shows some memory consumption every 30 minutes. Please share your suggestions to fix this. Thanks!

As far as I know,
the "Azure Functions runtime is unreachable" error appears if the function app is blocked by a firewall or if the storage account connection string is configured incorrectly.
I found a few solved issues on SO for similar errors in the context of Azure Functions deployed through Azure DevOps pipelines, where user @JsAndDotNet solved the above error here by correcting the Platform value in the Configuration menu of the deployed Azure Function App in the portal.
Another user, @DelliganeshSevanesan, solved this error in the context of Azure Functions deployed using IDEs (SO question 70934637), where multiple possible causes of this error are listed along with resolutions.
Also, I have observed that we need to add the Azure Function App configuration settings in the pipeline as key-value pairs, which is in effect a transformation of the Azure Function App's local.settings.json for the Azure CI/CD pipelines. For this, I have found practical workarounds given by @VijayanathViswanathan and @Sajid in SO issues 1 & 2.
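For example, a timer-triggered .NET function app typically needs at least these settings present and valid after deployment (a sketch; the storage account name and key are placeholders):
{
  "AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=<storage-account>;AccountKey=<account-key>;EndpointSuffix=core.windows.net",
  "FUNCTIONS_WORKER_RUNTIME": "dotnet",
  "FUNCTIONS_EXTENSION_VERSION": "~4"
}
If AzureWebJobsStorage points to a storage account the app cannot reach (firewall rules, rotated key), the portal shows exactly the "runtime is unreachable" error described above.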

Related

Why does turning on Application Insights on an App Service crash the app?

I have turned on Application Insights on my App Service. Every time I try to run the app or log in using the app, it gives an error:
An error has occurred. Please call support or your account manager if this error persists
When I looked at the application errors under Logging, I got the following:
System.ApplicationException: The trace listener AzureBlobTraceListener is disabled.
---> System.InvalidOperationException: The SAS URL for the cloud storage account is not
specified. Use the environment variable
'DIAGNOSTICS_AZUREBLOBCONTAINERSASURL' to define it.
I'm assuming I need to add the following in the Configuration of the App Service:
{
  "name": "DIAGNOSTICS_AZUREBLOBCONTAINERSASURL",
  "value": <URL>,
  "slotSetting": true
},
But what is the <URL> and where can I find it? Or is there a different error causing the app to crash once Application Insights is enabled? Has anyone experienced this?
I can see you have configured DIAGNOSTICS_AZUREBLOBCONTAINERSASURL without providing the value.
Get the Blob service SAS URL value from the Storage Account.
In Azure Portal => Create a Storage Account.
Initially, the option to generate a SAS was disabled for me.
Navigate to your Storage Account => Shared access signature => select the Container and Object checkboxes.
The option to Generate SAS and connection string will then be enabled.
Copy the Blob service SAS URL and provide the value in either local Configuration settings or in Azure App Service => Configuration => Application Settings.
Save the settings and access the URL.
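If you prefer to script this, the same setting can be applied with the Azure CLI (a sketch; the app and resource group names are placeholders):
az webapp config appsettings set \
  --resource-group <resource-group> \
  --name <app-name> \
  --slot-settings DIAGNOSTICS_AZUREBLOBCONTAINERSASURL="<blob-service-sas-url>"
Using --slot-settings (rather than --settings) marks it as a deployment-slot setting, matching "slotSetting": true in the JSON above.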
My earlier question was about what happens once I click Generate SAS and connection string and copy the value to the clipboard.
Even if you generate the SAS again, the value will be the same up to here - https://yourSA.blob.core.windows.net/?sv=2021-06-08&ss=*****=co&sp=******&se=2022-12-05T14:.
You can also have the SAS token added to the app settings automatically. Follow the steps below.
In Azure App Service => App Service logs => set Application logging (Blob) to On and continue the steps to add the Storage Account. If you don't have one, create a new Storage Account.
"Unable to find mscorlib assembly reference:.
Make sure you are using the latest package references.
Update the framework version 4.7.2 to 4.8 in VS. Rebuild and Re-deploy the App.
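For an old-style (non-SDK) project, that corresponds to bumping the target framework in the .csproj (a sketch):
<PropertyGroup>
  <TargetFrameworkVersion>v4.8</TargetFrameworkVersion>
</PropertyGroup>
SDK-style projects would use <TargetFramework>net48</TargetFramework> instead.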

Is it supported to create an integrated notebookVM when the workspace is configured to be in a VNET?

I'm trying to follow the doc at "secure your experiments", but after configuring the default workspace storage for VNet access, attempts to create an integrated notebook VM fail with what looks like a storage access error.

Create Failed:
Failed to clone samples. Error details: Microsoft.WindowsAzure.Storage This request is not authorized to perform this operation.
thanks,
jim
We are working on adding virtual network support to NotebookVM.
Thanks

BAD_GATEWAY when connecting Google Cloud Endpoints to Cloud SQL

I am trying to connect from GCP Endpoints to a Cloud SQL (PostgreSQL) database in a different project. My Endpoints backend is an App Engine app in the flexible environment using Python.
The endpoints API works fine for non-db requests and for db requests when run locally. But the deployed API produces this result when requiring DB access:
{
  "code": 13,
  "message": "BAD_GATEWAY",
  "details": [
    {
      "@type": "type.googleapis.com/google.rpc.DebugInfo",
      "stackEntries": [],
      "detail": "application"
    }
  ]
}
I've followed this link (https://cloud.google.com/endpoints/docs/openapi/get-started-app-engine) to create the endpoints project, and this (https://cloud.google.com/appengine/docs/flexible/python/using-cloud-sql-postgres) to link to Cloud SQL from a different project.
The one difference is that I don't use the SQLALCHEMY_DATABASE_URI env variable to connect, but instead take the connection string from a config file and use it with psycopg2 SQL strings. This code works on Compute Engine servers in the same project.
I also double-checked that the service account of the Endpoints project was given Cloud SQL Editor access in the project with the PostgreSQL db. And the db connection string works fine if the App Engine app is in the same project as the Cloud SQL db (i.e., not coming from the Endpoints project).
Not sure what else to try. How can I get more details on the BAD_GATEWAY? That's all that's in the endpoints logfile and there's nothing in the Cloud SQL logfile.
Many thanks --
Dan
Here's my app.yaml:
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT main:app

runtime_config:
  python_version: 3

env_variables:
  SQLALCHEMY_DATABASE_URI: >-
    postgresql+psycopg2://postgres:password@/postgres?host=/cloudsql/cloudsql-project-id:us-east1:instance-id

beta_settings:
  cloud_sql_instances: cloudsql-project-id:us-east1:instance-id

endpoints_api_service:
  name: api-project-id.appspot.com
  rollout_strategy: managed
And requirements.txt:
Flask==0.12.2
Flask-SQLAlchemy==2.3.2
flask-cors==3.0.3
gunicorn==19.7.1
six==1.11.0
pyyaml==3.12
requests==2.18.4
google-auth==1.4.1
google-auth-oauthlib==0.2.0
psycopg2==2.7.4
(This should be a comment, but the formatting would really worsen the reading, so I will update here.)
I am trying to reproduce your error, and I have come up with some questions:
How are you handling the environment variables from the tutorials? Have you hard-coded the values, or are you using environment variables? Note that they are reset with the Cloud Shell (if you are using Cloud Shell).
This is not clear to me: do you see any kind of log in Cloud SQL (just without errors), or do you not see logs at all?
The Cloud SQL, app.yaml, and requirements.txt configurations are related. Could you provide more information on this? If you update the post, be careful and do not post usernames, passwords, or other sensitive information.
Are both projects in the same region/zone? Sometimes this is a requirement, but I don't see anything pointing to this in the documentation.
My intuition points to a credentials issue, but it would be useful if you added more information to the post, to better understand where the issue comes from.
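For reference, here is a minimal sketch of connecting with psycopg2 over the Cloud SQL unix socket that beta_settings.cloud_sql_instances exposes in the flexible environment (all names and credentials are placeholders taken from the app.yaml above):
import psycopg2

# In the flexible environment, listing the instance under
# beta_settings.cloud_sql_instances mounts its socket under /cloudsql/.
conn = psycopg2.connect(
    dbname="postgres",
    user="postgres",
    password="<password>",
    host="/cloudsql/cloudsql-project-id:us-east1:instance-id",
)
with conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())
conn.close()
If a probe like this fails when deployed but works locally, that points to permissions or the cloud_sql_instances entry rather than the application code.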

Firebase + Datastore = need_index

I'm working through the appengine+go tutorial, which connects in with Firebase: https://cloud.google.com/appengine/docs/standard/go/building-app/. The code is available at https://github.com/GoogleCloudPlatform/golang-samples/tree/master/appengine/gophers/gophers-6, which aside from my Firebase keys is identical.
I have it working locally just fine under dev_appserver.py, and it queries the Vision API and adds labels. However, after I deploy to appengine I get an index error on datastore. If I go to the Firebase console, I see the collection (Post) and the field (Posted) which is a timestamp.
If I change this line: https://github.com/GoogleCloudPlatform/golang-samples/blob/master/appengine/gophers/gophers-6/main.go#L193 to remove the Order("-Posted"), then everything works (it's important to note that any Order call causes the error), except that the test records I've posted come back in random order.
The error message when running in appengine is: "Getting posts: API error 4 (datastore_v3: NEED_INDEX): no matching index found."
I've attempted to create a composite index and to test locally with --require_indexes=true, but neither has helped me debug the issue.
Edit: I've moved this over to use Firebase's Datastore libraries directly, instead of the GCP updates. I never solved this particular issue, but was able to move forward with my app actually working :)
By default the local development server automatically creates the composite indexes needed for the actual queries invoked in your app. From Creating indexes using the development server:
The development web server (dev_appserver.py) automatically adds items to this file when the application tries to execute a query that needs an index that does not have an appropriate entry in the configuration file.
In the development server, if you exercise every query that your app will make, the development server will generate a complete list of entries in the index.yaml file.
When the development web server adds a generated index definition to index.yaml, it does so below the following line, inserting it if necessary:
# AUTOGENERATED
The development web server considers all index definitions below this line to be automatic, and it might update existing definitions below this line as the application makes queries.
But you also need to deploy the generated index configurations to the Datastore and let the Datastore update its indexing information (i.e., let the indexes reach the Serving state) for the respective queries to stop hitting the NEED_INDEX error. From Updating indexes:
You upload your index.yaml configuration file to Cloud Datastore with the gcloud command. If the index.yaml file defines any indexes that don't exist in Cloud Datastore, those new indexes are built.
It can take a while for Cloud Datastore to create all the indexes and, therefore, those indexes won't be immediately available to App Engine. If your app is already configured to receive traffic, then exceptions can occur for queries that require an index that is still in the process of being built.
To avoid exceptions, you must allow time for all the indexes to build. For more information and examples about creating indexes, see Deploying a Go App.
To upload your index configuration to Cloud Datastore, run the following command from the directory where your index.yaml is located:
gcloud datastore create-indexes index.yaml
For information, see the gcloud datastore reference.
You can use the GCP Console to check the status of your indexes.
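For the query in the question, the index.yaml entry would look roughly like this (a sketch; the entry dev_appserver.py autogenerates is authoritative and may differ, e.g., if the query also involves filters or an ancestor):
indexes:
- kind: Post
  properties:
  - name: Posted
    direction: desc
Deploy it with gcloud datastore create-indexes index.yaml and wait for the index to reach the Serving state in the console before re-testing.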

How to properly configure Spring Datasource for an Elastic Beanstalk app?

I'm running into an issue integrating Spring Security with my Elastic Beanstalk app backed by a MySQL database. If I deploy my app, I'm able to log in correctly for some time, but eventually I start to receive login errors without an exception being thrown, so I'm unable to get any useful information about the issue. I've downloaded the logs as well and can't see anything of value. I can see where the logs show accessing the public page, attempting to access the private section, returning the login page, and then the loginError page; however, there is nothing about any issue.
Even though I'm unable to log in through a browser, I am able to log in if I run the app from an IDE, and I can view the db in MySQL Workbench. This suggests to me that the problem is due to some persistent state on the server.
I've had a similar problem before with another Beanstalk app using Spring Security and was able to resolve it by setting application properties as follows:
spring.datasource.test-on-borrow=true
spring.datasource.validation-query=SELECT 1
I'm using a more recent version of Spring than that app, and the properties are now scoped to specific datasources, so I tried adding the following properties:
spring.datasource.tomcat.test-on-borrow=true
spring.datasource.tomcat.validation-query=SELECT 1
When that didn't work, I added another property based on an answer to a similar question here; now the properties are:
spring.datasource.tomcat.test-on-borrow=true
spring.datasource.tomcat.test-while-idle=true
spring.datasource.tomcat.validation-query=SELECT 1
That seemed to work (possibly due to less login activity) but eventually resulted in the same behavior.
I've looked into the various properties available, but before I spend a lot of time randomly setting and/or overriding default settings, I wanted to see if there's a reliable way to deal with this.
How can I configure my datasource to avoid login errors after long periods of time?
This isn't a problem with specific configuration values but with where those configurations reside. The default location for application.properties (/resources; IntelliJ) is fine when deploying as a jar with an embedded Tomcat server, but not as a war with a provided server. The file isn't found/used, so no changes to the file affect the configuration given by AWS.
There are a number of ways to handle this; I chose to add an RDS configuration bean in my SpringBootServletInitializer:
@Bean
public RdsInstanceConfigurer instanceConfigurer() {
    return () -> {
        TomcatJdbcDataSourceFactory dataSourceFactory =
                new TomcatJdbcDataSourceFactory();
        // Abandoned connections
        dataSourceFactory.setRemoveAbandonedTimeout(60);
        dataSourceFactory.setRemoveAbandoned(true);
        dataSourceFactory.setLogAbandoned(true);
        // Tests
        dataSourceFactory.setTestOnBorrow(true);
        dataSourceFactory.setTestOnReturn(false);
        dataSourceFactory.setTestWhileIdle(false);
        // Validations
        dataSourceFactory.setValidationInterval(30000);
        dataSourceFactory.setTimeBetweenEvictionRunsMillis(30000);
        dataSourceFactory.setValidationQuery("SELECT 1");
        return dataSourceFactory;
    };
}
Below are the settings that worked for me.
From "Connection to Db dies after >4<24 in spring-boot jpa hibernate":
dataSourceFactory.setMaxActive(10);
dataSourceFactory.setInitialSize(10);
dataSourceFactory.setMaxIdle(10);
dataSourceFactory.setMinIdle(1);
dataSourceFactory.setTestWhileIdle(true);
dataSourceFactory.setTestOnBorrow(true);
dataSourceFactory.setValidationQuery("SELECT 1 FROM DUAL");
dataSourceFactory.setValidationInterval(10000);
dataSourceFactory.setTimeBetweenEvictionRunsMillis(20000);
dataSourceFactory.setMinEvictableIdleTimeMillis(60000);
