Unable to start Azure Storage Emulator

I've run into problems trying to run the Azure Storage Emulator on a newly installed computer.
At first it was returning
Cannot create database 'AzureStorageEmulatorDb56' : The database 'AzureStorageEmulatorDb56' does not exist. Supply a valid database name. To see available databases, use sys.databases..
However, when I ran sqllocaldb i I could see that there was a DB named 'AzureStorageEmulatorDb56'.
I eventually ran the command
AzureStorageEmulator init -server localhost -forcecreate
which returned
Granting database access to user AzureAD\[username elided].
Database access for user AzureAD\[username elided] was granted.
Initialization successful. The storage emulator is now ready for use.
The storage emulator was successfully initialized and is ready to use.
which looks promising.
However, when I right-click the emulator's icon in the system tray and select "Start Storage Emulator", nothing happens. And if I then look in the log files I can see an error log (Error20-Jul-18-11-07.log) which contains...
7/20/2018 11:06:36 AM [Error] [ActivityId=00000000-0000-0000-0000-000000000000] Input string was not in a correct format.
There's also an Info20-Jul-18-11-07.log file which contains
7/20/2018 11:06:36 AM [Info] [ActivityId=00000000-0000-0000-0000-000000000000] Starting Service: Blob
7/20/2018 11:06:36 AM [Info] [ActivityId=00000000-0000-0000-0000-000000000000] Stopping Service: Blob
Can anyone explain what's going wrong and how I can get the local storage emulator up and running?

Try disabling logging; there seems to be a bug in the 5.5 release:
https://github.com/Azure/azure-storage-net/issues/728
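If disabling logging doesn't help, running the emulator in-process from a command prompt usually surfaces the underlying exception instead of failing silently from the tray icon. A minimal sketch (the install path is the default one; the per-user config location is an assumption for the 5.x releases):

cd "C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator"
:: run in the current console so the real error prints to stdout
AzureStorageEmulator.exe start -inprocess
:: per-user settings (ports, log paths) live in a version-suffixed config file
notepad %LocalAppData%\AzureStorageEmulator\AzureStorageEmulator.5.5.config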

Why does turning on Application Insights on a App Service crash the app?

I have turned on Application Insights on my App Service. Every time I try to run the app or log in using the app, it gives an error:
An error has occurred. Please call support or your account manager if this error persists
When I looked in the application errors under Logging, I got the following:
System.ApplicationException: The trace listener AzureBlobTraceListener is disabled.
---> System.InvalidOperationException: The SAS URL for the cloud storage account is not
specified. Use the environment variable
'DIAGNOSTICS_AZUREBLOBCONTAINERSASURL' to define it.
I'm assuming I need to add the following in the Configuration of the App Service:
{
  "name": "DIAGNOSTICS_AZUREBLOBCONTAINERSASURL",
  "value": "<URL>",
  "slotSetting": true
},
But what is the <URL> value and where can I find it? Or is there a different error causing the app to crash once Application Insights is enabled? Has anyone experienced this?
I can see you have configured DIAGNOSTICS_AZUREBLOBCONTAINERSASURL without providing the value.
Get the Blob service SAS URL value from the Storage Account.
In the Azure Portal, create a Storage Account if you don't already have one.
Initially, the option to generate a SAS was disabled for me.
Navigate to your Storage Account => Shared access signature => select the Container and Object checkboxes.
The option to Generate SAS and connection string will then be enabled.
Copy the Blob service SAS URL and provide the value either in your local configuration settings or in Azure App Service => Configuration => Application settings.
Save the settings and access the URL.
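If you prefer the CLI over the portal, the same setting can be pushed with the Azure CLI; a sketch with placeholder names (use --slot-settings instead of --settings if you want slotSetting: true):

az webapp config appsettings set \
  --resource-group <resource-group> \
  --name <app-name> \
  --settings DIAGNOSTICS_AZUREBLOBCONTAINERSASURL="<blob-service-sas-url>"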
My earlier question: once I click Generate SAS and connection string and copy the value to the clipboard, will it change if I generate it again?
Even if you generate the SAS again, the value will be the same up to this point: https://yourSA.blob.core.windows.net/?sv=2021-06-08&ss=*****=co&sp=******&se=2022-12-05T14:.
You can also have the SAS token added to the App settings automatically. Follow the steps below.
In Azure App Service => App Service logs => set Application logging (Blob) to On and continue the steps to add the Storage Account. If you don't have one, create a new Storage Account.
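The CLI equivalent of switching on blob application logging would be something like this (a sketch; names are placeholders):

az webapp log config \
  --resource-group <resource-group> \
  --name <app-name> \
  --application-logging azureblob \
  --level information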
"Unable to find mscorlib assembly reference:.
Make sure you are using the latest package references.
Update the framework version 4.7.2 to 4.8 in VS. Rebuild and Re-deploy the App.

terraform GCP VPC connector creation issue

Overview
I tried creating a VPC network with a subnet and adding a Serverless VPC Access connector with Terraform in GCP. I was following the official guide (https://cloud.google.com/vpc/docs/configure-serverless-vpc-access#terraform) and initially everything was working well. Then I accidentally committed my JSON key to GitHub; someone stole it and used it for crypto mining. The project was disabled, but reinstated shortly after that.
After that, my Terraform VPC connector creations started to fail. I tried a lot of different things but nothing seems to work (running destroy, changing service accounts, changing names, deleting all of the Terraform subfolders, deleting EVERY resource and restarting the process).
The errors I am getting are:
│ Error: Error waiting to create Connector: Error waiting for Creating Connector: Error code 13, message: An internal error occurred: Failed to create a VPC Access connector. Please delete the connector manually.
│
or
│ Error: Error creating Connector: googleapi: Error 409: Requested entity already exists
Today I tried to create the VPC connector from the command line (gcloud) and from the UI tool. The errors persisted:
Unknown error. Original error message: Operation failed: Insufficient CPU quota in region.
Max throughput of the connector per day over last seven days.
or
An internal error occurred: Failed to create a VPC Access connector. Please delete the connector manually.
Errors while deleting:
│ Error: Error waiting for Deleting Network: The network resource 'projects/static-emblem-327016/global/networks/sun-serverless-network' is already being used by 'projects/static-emblem-327016/global/routes/default-route-5cbc9de02e21bb35'
│
I was looking at this issue: https://issuetracker.google.com/issues/164378672. It describes problems with us-central1, but I tried a couple of different regions and still hit the same issue.
Questions:
I am running out of ideas. I was wondering if this is an infrastructure issue; maybe I should dump the project and create a new one? Where can I check whether there are infra issues? How can I resolve this?
I recently got the error Error: Error creating Connector: googleapi: Error 409: Requested entity already exists, so I can explain the root cause and its fix.
I was trying to create a GCP resource (a Pub/Sub topic) using Terraform (plan and then apply).
But long before running terraform apply, I had created the resource manually with the same name. I expected terraform plan or terraform apply to skip it, since the resource name was the same, but instead of refreshing state it tried to create the resource again. The reason is that Terraform does not know about your resource's history. Either import the existing resource into state using the terraform import command, or delete the manually created resource and then run terraform apply again; a sketch follows.
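Here is a hypothetical sketch of importing a manually created Pub/Sub topic into state (the resource address, project, and topic name are placeholders):

# the resource must already be declared in configuration as
# google_pubsub_topic.my_topic; import attaches the existing topic to it
terraform import google_pubsub_topic.my_topic projects/<project-id>/topics/my-topic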
The message “An internal error occurred: Failed to create a VPC Access connector. Please delete the connector manually” can indicate that you don't have enough resources in your project to create the connector. Please make sure you have enough Resource Quota available in your GCP project.
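A quick way to inspect the per-region quotas (usage vs. limit) is to describe the region, e.g.:

# lists CPUS, IN_USE_ADDRESSES and other quotas for the region
gcloud compute regions describe us-central1 --project <project-id>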
The message “googleapi: Error 409: Requested entity already exists” indicates that the resource a client tried to create already exists.
If you want to know what the root cause is, you can check the logs of the VPC Connector creation in the System Event Audit Logs.
System Event audit logs contain log entries for Google Cloud actions that modify the configuration of resources. System Event audit logs are generated by Google systems; they aren't driven by direct user action. System Event audit logs are always written; you can't configure, exclude, or disable them. The instructions to access them are here.
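As a hedged sketch, recent System Event entries can also be pulled from the command line (the project ID is a placeholder):

gcloud logging read \
  'logName="projects/<project-id>/logs/cloudaudit.googleapis.com%2Fsystem_event"' \
  --project <project-id> --limit 10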
On the other hand, generating and distributing service account keys poses severe security risks to your organization. They are long-lived credentials that are not automatically rotated, and if leaked, accidentally or maliciously, they can allow attackers to gain access to your sensitive GCP resources. If you accidentally compromised your JSON key, please read the recommendations in this link.
If you want to know more about the risk and alternatives to download Service Account, Key please follow this link. Please note that this is not GCP official documentation, so I cannot vouch for its accuracy.
I was able to resolve my issue. It turns out that I had deleted my default compute engine service account in panic. I was able to recover it and everything worked out from there. For more info go here: https://cloud.google.com/iam/docs/creating-managing-service-accounts#undeleting_a_service_account
You have to identify the default Compute Engine service account and undelete it:
gcloud beta iam service-accounts undelete ACCOUNT_ID
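Note that ACCOUNT_ID must be the account's numeric unique ID, not its email address. A hypothetical usage sketch (the ID is made up), followed by a check that the default account (PROJECT_NUMBER-compute@developer.gserviceaccount.com) is back:

gcloud beta iam service-accounts undelete 123456789012345678901
gcloud iam service-accounts list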

Is it supported to create an integrated notebookVM when the workspace is configured to be in a VNET?

Trying to follow the "secure your experiments" doc, but after configuring the default workspace storage for VNet access, attempts to create an integrated notebook VM fail with what looks like a storage access error.

Create Failed:
Failed to clone samples. Error details: Microsoft.WindowsAzure.Storage This request is not authorized to perform this operation.
thanks,
jim
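For context, "configuring default workspace storage for VNet access" amounts to locking the storage account down to a subnet, along these lines (a hypothetical az CLI sketch; names are placeholders), which is what then blocks the sample-cloning request:

az storage account update --resource-group <rg> --name <storage-account> --default-action Deny
az storage account network-rule add --resource-group <rg> --account-name <storage-account> --subnet <subnet-resource-id>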
We are working on adding virtual network support to NotebookVM.
Thanks

neo4j failing to reload - possible causes for failed store_lock permissions

Until yesterday, a site using neo4j + Flask + nginx was working fine.
This morning I found it returning error 500.
From the logs, it seems that the culprit is neo4j:
tail -100 /var/log/neo4j/neo4j.log
Caused by: org.neo4j.kernel.StoreLockException: Unable to obtain lock on store lock file: /usr/share/neo4j/data/databases/target_association.db/store_lock. Please ensure no other process is using this database, and that the directory is writable (required even for read-only access)
at org.neo4j.kernel.internal.StoreLocker.storeLockException(StoreLocker.java:94)
at org.neo4j.kernel.internal.StoreLocker.checkLock(StoreLocker.java:86)
at org.neo4j.kernel.internal.StoreLockerLifecycleAdapter.start(StoreLockerLifecycleAdapter.java:40)
at org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:433)
... 11 more
Caused by: java.io.FileNotFoundException: /usr/share/neo4j/data/databases/graph.db/store_lock (Permission denied)
This is puzzling to me: it was working fine yesterday, all queries are GETs, and the db folder is owned by testuser:staff.
I have not changed permissions: what could have happened?
testuser also runs the Flask app that interacts with the db, so I don't get why there are user permission problems now.
The app is also owned by the www-data group.
I looked at:
https://groups.google.com/forum/#!topic/neo4j/k2c6PgV6WFE
Could you please show a good procedure (Ubuntu commands) to debug and correct the user that runs neo4j and the permissions on the db folder? Should I change to root? (A debugging sketch follows the listing below.)
my app permissions are set as :
/var/www/app/
testuser:www-data
py2neo and connectors are in
/var/www/app/virtualenvironment/ ..
and
/usr/share/neo4j/data/databases/graph.db
testuser:staff
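For reference, a minimal debugging sketch along the lines the question asks for (assuming Ubuntu, and that the paths from the stack trace are accurate; running neo4j as root is not recommended):

# 1. find out which user the neo4j process actually runs as
ps aux | grep [n]eo4j

# 2. check ownership and mode of the store directory and its lock file
ls -l /usr/share/neo4j/data/databases/graph.db/

# 3. see whether another process is still holding the lock
sudo lsof /usr/share/neo4j/data/databases/graph.db/store_lock

# 4. if ownership drifted, hand the data directory back to the user
#    that runs neo4j (testuser:staff per the listing above)
sudo chown -R testuser:staff /usr/share/neo4j/data/databases/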

boxfuse dev db not provisioned correctly

I'm just starting with boxfuse and can't seem to find a way to get my dev database to be provisioned.
In my boxfuse.yml I have (for the database section):
database:
  # the name of your JDBC driver
  driverClass: com.mysql.jdbc.Driver
  # the username
  user: root
  # the password
  password: <password>
  # the JDBC URL
  url: jdbc:mysql://10.0.0.84:3306/dmsdb
  # any properties specific to your JDBC driver:
  properties:
    charSet: UTF-8
    hibernate.dialect: org.hibernate.dialect.MySQLInnoDBDialect
  # the maximum amount of time to wait on an empty pool before throwing an exception
  maxWaitForConnection: 1s
  # the SQL query to run when validating a connection's liveness
  validationQuery: "/* MyApplication Health Check */ SELECT 1"
  # the minimum number of connections to keep open
  minSize: 8
  # the maximum number of connections to keep open
  maxSize: 32
  # whether or not idle connections should be validated
  checkConnectionWhileIdle: false
If I try running it (boxfuse run), my application doesn't work at all.
boxfuse info produces the following:
Boxfuse client v.1.18.7.938
Copyright 2016 Boxfuse GmbH. All rights reserved.
Account: mlr11 (mlr11)
Info about mlr11/dms-service in the dev environment:
App Type : Single Instance with Zero Downtime updates
App URL : http://127.0.0.1:8082
DB Type : MySQL database
DB URL : jdbc:mysql://localhost:3306/boxfuse-dev-db
DB Host : localhost
DB Port : 3306
DB Database : boxfuse-dev-db
DB User : boxfuse-dev-db
DB Password : boxfuse-dev-db
DB Status : available
This is very different from what I was expecting: the URL, database, user, and password don't match my boxfuse.yml file.
What am I missing? I know it must be something simple. I did all kinds of searches and read the docs a few times, but I can't seem to find what's wrong. Any pointers would be appreciated.
From the config file you posted I am assuming this is a dropwizard app.
Since your Boxfuse app was configured to use a MySQL database, Boxfuse automatically provisions a database in each environment when you first deploy your application there. In your case you can see the connection info for that database in the dev environment in the output you posted in your question.
Boxfuse exposes these values (db url, user, password, ...) as environment variables (https://cloudcaptain.sh/docs/databases#envvars) and automatically configures your framework (Dropwizard I assume) to use those instead of the ones included in your config file. It will do so by passing -Ddw.database.url=$BOXFUSE_DATABASE_URL -Ddw.database.user=$BOXFUSE_DATABASE_USER -Ddw.database.password=$BOXFUSE_DATABASE_PASSWORD as arguments to the JVM.
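If you want your local boxfuse.yml to pick up those same variables rather than hard-coding credentials, one option (a sketch, assuming a standard Dropwizard setup with an EnvironmentVariableSubstitutor registered in your Application's initialize method) is:

database:
  driverClass: com.mysql.jdbc.Driver
  # fall back to the local values when the Boxfuse variables are absent
  url: ${BOXFUSE_DATABASE_URL:-jdbc:mysql://10.0.0.84:3306/dmsdb}
  user: ${BOXFUSE_DATABASE_USER:-root}
  password: ${BOXFUSE_DATABASE_PASSWORD:-<password>}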
Also double-check in the VirtualBox GUI that your VirtualBox installation is fully functional and able to start VMs, and that both the Boxfuse Dev VM and the instance of your application start properly.
