How to configure the GAE Datastore emulator on Windows? - google-cloud-datastore

Following the docs I have started the GAE Datastore Emulator from a CMD shell.
I used:
gcloud beta emulators datastore start --data-dir=j:\projects\project\datastore
The output is:
WARNING: Reusing existing data in [j:\projects\project\datastore].
Executing: cmd /c C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\platform\cloud-datastore-emulator\cloud_datastore_emulator.cmd start --host=localhost --port=8081 --store_on_disk=True --consistency=0.9 --allow_remote_shutdown j:\projects\project\datastore
[datastore] Jun 08, 2019 5:24:52 PM com.google.cloud.datastore.emulator.CloudDatastore$FakeDatastoreAction$9 apply
[datastore] INFO: Provided --allow_remote_shutdown to start command which is no longer necessary.
[datastore] Jun 08, 2019 5:24:52 PM com.google.cloud.datastore.emulator.impl.LocalDatastoreFileStub <init>
[datastore] INFO: Local Datastore initialized:
[datastore] Type: High Replication
[datastore] Storage: j:\projects\project\datastore\WEB-INF\appengine-generated\local_db.bin
[datastore] Jun 08, 2019 5:24:53 PM com.google.cloud.datastore.emulator.impl.LocalDatastoreFileStub load
[datastore] INFO: The backing store, j:\projects\project\datastore\WEB-INF\appengine-generated\local_db.bin, does not exist. It will be created.
[datastore] API endpoint: http://localhost:8081
[datastore] If you are using a library that supports the DATASTORE_EMULATOR_HOST environment variable, run:
[datastore]
[datastore] export DATASTORE_EMULATOR_HOST=localhost:8081
[datastore]
[datastore] Dev App Server is now running.
[datastore]
[datastore] The previous line was printed for backwards compatibility only.
[datastore] If your tests rely on it to confirm emulator startup,
[datastore] please migrate to the emulator health check endpoint (/). Thank you!
Browsing to localhost:8081 returns Ok
In a second CMD shell I try:
gcloud beta emulators datastore env-init --data-dir=j:\projects\project\datastore > set_vars.cmd && set_vars.cmd
(Google docs don't mention --data-dir=j:\projects\project\datastore but without it the tool looks in the wrong place)
But it fails with:
ERROR: (gcloud.beta.emulators.datastore.env-init) Unable to find env.yaml in the data_dir
[C:\Users\c\AppData\Roaming\gcloud\emulators\datastore]. Please ensure you have started the appropriate emulator.
If I set up the environment variables manually, I get the same results from my CMD shells.

I have tried to reproduce your use case scenario and it worked for me. This is what I followed:
From the Running the Datastore mode Emulator documentation:
Make sure that a Java JRE (version 8 or greater) is installed.
Install the Google Cloud SDK (if you already have it, update it).
Run CMD as administrator.
Execute $ gcloud components install cloud-datastore-emulator
Execute $ gcloud beta emulators datastore start --data-dir=C:\PATH\TO\datastore
If you get You do not currently have this command group installed. Using it requires the installation of components: [beta],
then execute $ gcloud components install beta
Execute again $ gcloud beta emulators datastore start --data-dir=C:\PATH\TO\datastore
Open a new CMD, again as administrator.
Execute $ gcloud beta emulators datastore env-init --data-dir=C:\PATH\TO\datastore > set_vars.cmd && set_vars.cmd
The command executed without any issues. This is the response:
C:\Windows\system32>set DATASTORE_DATASET=[PROJECT_ID]
C:\Windows\system32>set DATASTORE_EMULATOR_HOST=localhost:8081
C:\Windows\system32>set DATASTORE_EMULATOR_HOST_PATH=localhost:8081/datastore
C:\Windows\system32>set DATASTORE_HOST=http://localhost:8081
C:\Windows\system32>set DATASTORE_PROJECT_ID=[PROJECT_ID]
C:\Windows\system32>
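If you want to verify that a client library actually picks these variables up, here is a minimal sketch (my addition, not part of the original steps; it assumes the Python google-cloud-datastore package, installed with pip install google-cloud-datastore, and the placeholder project ID my-project):
import os
from google.cloud import datastore

# The client honors DATASTORE_EMULATOR_HOST; set it here in case the
# set_vars.cmd variables were not inherited by this process.
os.environ.setdefault("DATASTORE_EMULATOR_HOST", "localhost:8081")

client = datastore.Client(project="my-project")  # placeholder project ID

# Write and read back a test entity to confirm the emulator responds.
key = client.key("Task", "sample")
entity = datastore.Entity(key=key)
entity.update({"done": False})
client.put(entity)
print(client.get(key))  # prints the stored entity if the emulator is reachable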
I would suggest trying the process again from a different directory. Make sure that:
You have admin permissions to access the directory.
For testing purposes, try a directory on your desktop.
The Google Cloud SDK is up to date (see the command below).
The directory's path doesn't have a space in it.
Both CMDs are opened with administrative privileges.
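For the update step, the standard command is:
gcloud components update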

UPDATED 2019-09-07 following comments.
If you are stuck with Google App Engine (GAE) Standard Python 2.7 and have to run a local version of GDS, here is my way of doing it (though not a direct answer to the question, it might still be relevant).
I run dev_appserver with Datastore parameters. On Windows the .bat file might look like this:
set PY="c:\python27\python.exe"
set GAE="C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\dev_appserver.py"
set GCSEMU="C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\platform\cloud-datastore-emulator\cloud_datastore_emulator.cmd"
%PY% %GAE% --skip_sdk_update_check=True --datastore_emulator_cmd=%GCSEMU% --port 8080 --host oe.loc .
There are parameters for the store location too; see the sketch below.
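For example, a hedged sketch of the same .bat with an explicit store location (--datastore_path is the dev_appserver flag for the local database file; the path here is a placeholder):
%PY% %GAE% --skip_sdk_update_check=True --datastore_emulator_cmd=%GCSEMU% --datastore_path=j:\projects\project\datastore\local_db.bin --port 8080 --host oe.loc .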

Related

Cloud Run Run/Debug has stopped working - Exited with code 127

The Cloud Run debug in Cloud Shell was working. It is no longer working for me, with the following output.
I have 1) rebooted the VM and 2) tried another project that was also working previously.
I suspect something has changed in my environment.
Starting to debug the app using configuration 'Cloud Run: Run/Debug Locally' from .vscode/launch.json...
To view more detailed logs, go to Output channel : "Cloud Run: Run/Debug Locally - Detailed"
Dependency check started
Dependency check succeeded
Unpausing minikube
flag needs an argument: --status-check
See 'skaffold debug --help' for usage.
Exited with code 127. Cleaning up... Finished clean up.
This bug was introduced in the v1.17.1 release of Cloud Code for VS Code, due to a change in v1.37.0 of Skaffold related to the --status-check parameter. This bug was addressed in v1.17.2 of Cloud Code for VS Code, and upgrading to this version (or later) should fix this issue.
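If you run VS Code locally, you can confirm the installed extension version from a terminal (a hedged example; googlecloudtools.cloudcode is the extension ID as published on the marketplace):
code --list-extensions --show-versions | grep cloudcode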

Broken airflow upgrade_check command. Outputs "Please install apache-airflow-upgrade-check distribution from PyPI to perform upgrade checks"

Currently running Airflow 1.10.15. I wanted to perform some tests before upgrading to 2+, so I ran pip install apache-airflow-upgrade-check in the scheduler pod, which installed successfully. I then ran the command airflow upgrade_check, but it did not return the results that I expected. It gives me this output in the terminal:
[2021-06-15 21:02:38,637] {{settings.py:233}} DEBUG - Setting up DB connection pool (PID 15732)
[2021-06-15 21:02:38,637] {{settings.py:300}} DEBUG - settings.prepare_engine_args(): Using pool settings. pool_size=5, max_overflow=10, pool_recycle=1800, pid=15732
[2021-06-15 21:02:38,735] {{sentry.py:179}} DEBUG - Could not configure Sentry: No module named 'blinker', using DummySentry instead.
[2021-06-15 21:02:38,754] {{__init__.py:45}} DEBUG - Cannot import due to doesn't look like a module path
[2021-06-15 21:02:38,916] {{cli_action_loggers.py:42}} DEBUG - Adding <function default_action_log at 0x7f9a637c3a70> to pre execution callback
Please install apache-airflow-upgrade-check distribution from PyPI to perform upgrade checks
[2021-06-15 21:02:39,266] {{settings.py:310}} DEBUG - Disposing DB connection pool (PID 15732)
What am I missing?
Updated 6/16/2021: I verified that the package was installed; I did see the package in the list:
...
apache-airflow 1.10.15
apache-airflow-upgrade-check 1.3.0
apispec 1.3.3
argcomplete 1.12.2
...
The problem I had was that the container was running as a non-root user, which was defined in the Dockerfile. If I install the package in the running pod, it installs into a user-local directory, and when executing the airflow upgrade_check command, it cannot find the package. To work around this issue, I needed to add the packages in the Dockerfile so they are included when the Docker image is built; a sketch follows.
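A minimal sketch of that Dockerfile change (the base image tag and the airflow user name are assumptions based on the versions listed above):
FROM apache/airflow:1.10.15
USER root
# Install system-wide, so the package is importable for the non-root user.
RUN pip install --no-cache-dir apache-airflow-upgrade-check==1.3.0
USER airflow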

Firebase Functions Error while deploying to emulator

I'm trying to start a new Firebase Functions project using the latest versions of packages.
I've followed this tutorial https://youtu.be/DYfP-UIKxH0:
firebase login
firebase init
Created the functions project the same way the tutorial says
Uncommented the index.ts content
After that I get this error:
Starting @google-cloud/functions-emulator
[2018-04-04T19:05:12.124Z] Parsing function triggers
[2018-04-04T19:05:12.404Z] Error while deploying to emulator: TypeError: Cannot read property 'call' of undefined
TypeError: Cannot read property 'call' of undefined
at Promise (/usr/local/lib/node_modules/firebase-tools/node_modules/@google-cloud/functions-emulator/src/client/rest-client.js:34:42)
at getService.then (/usr/local/lib/node_modules/firebase-tools/node_modules/@google-cloud/functions-emulator/src/client/rest-client.js:33:16)
at process._tickDomainCallback (internal/process/next_tick.js:135:7)
⚠ functions: Failed to emulate helloWorld
I have the latest functions emulator, 1.0.0-beta4.
All other libs are on their latest versions...
I'm at a point where I have no idea how to debug this further or how to resolve it.
This error occurs if an existing process is listening on port 5000. Double-check whether there's a process running, e.g. on macOS Sierra:
sudo lsof -n -iTCP:5000 | grep LISTEN
Stop this process or run firebase on another port, e.g.
firebase -p 7777 serve --only functions
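On Windows, a rough equivalent (standard netstat/taskkill usage, not part of the original answer) is:
netstat -ano | findstr :5000
taskkill /PID <PID> /F
where <PID> is the process ID shown in the last column of the netstat output.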
No need to reinstall packages. The error message is not helpful and we spent some time finding the root cause.
It turned out to be another app that was listening on port 5001!
firebase serve stops with a weird message.
Just check, as in the other answer, that the ports aren't busy.

Error: Server Error. connect ETIMEDOUT 104.197.85.31:443

I am using the Firebase CLI and I try the following:
$ firebase init
The CLI continues with the message:
You're about to initialize a Firebase project in this directory:
c:\Dev\Test
Are you ready to proceed?
I type y
and choose Hosting: Configure and deploy Firebase Hosting sites.
After a few seconds I get the error:
Error: Server Error. connect ETIMEDOUT 104.197.85.31:443
What can the problem be?
My environment specification:
CLI Version: 3.9.1
Platform: win32
Node Version: v7.10.0
Time: Thu Jun 08 2017 18:03:14 GMT+0430 (Iran Daylight Time)
I am also using PSIPHON as a proxy, but I have tried without PSIPHON as well; in both cases I get a connection timeout.
I have pinged 104.197.85.31 and the connection is OK.
Any suggestion to solve the issue? Is it because I am in Iran?
Just close your terminal and re-run these commands:
firebase login
and now try again:
firebase deploy --only hosting
It works for me!
I don't know why this is the first result on Google, but the second result had the answer:
https://stackoverflow.com/a/63832234/3393964
npm i -g firebase-tools
Updating firebase-tools worked for me. Then try firebase deploy again.
If it still doesn't work, try clearing the Firebase cache, which is the .firebase folder; see the command below.
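On Windows, clearing that cache from the project directory can be done with, for example:
rmdir /s /q .firebase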
It's on Firebase's end. You can check the status on the Firebase status dashboard.
In my case, my work PC started blocking Firebase deploys (even on my mobile network).
Deploying from my home PC worked just fine.
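Since the question mentions going through a proxy: newer versions of the Firebase CLI honor the standard proxy environment variables, so explicitly pointing the CLI at the local proxy may help (a hedged example; the address and port are placeholders for whatever PSIPHON exposes locally):
set HTTPS_PROXY=http://127.0.0.1:8080
firebase init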

Publish to Azure fails with 500 Internal Server Error

I have a cloud service on Windows Azure. I created an ASP.NET Web API project and published it to the cloud service; publishing from Visual Studio worked fine before I updated Visual Studio to Update 4 and the Azure SDK from 2.2 to 2.6. After updating, when I publish I get the following error messages. I have tried several times and it always fails. Can anyone help me?
I am not even able to publish a newly created project to a new Azure service!
11:00:31 PM - Warning: There are package validation warnings.
11:00:31 PM - Checking for Remote Desktop certificate...
11:00:39 PM - Preparing deployment for TempAzure - 2/12/2014 10:58:23
PM with Subscription ID 'e94e9aeb-7003-4eae-be92-7b7ac0a1ba2c' using
Service Management URL 'https://management.core.windows.net/'...
11:00:39 PM - Connecting...
11:00:39 PM - Verifying storage account 'jasontest'...
11:00:41 PM - Uploading Package...
11:06:48 PM - Warning: The remote server returned an error: (500)
Internal Server Error.
11:11:50 PM - Warning: The remote server returned an error: (500)
Internal Server Error.
11:26:16 PM - Warning: The remote server returned an error: (500)
Internal Server Error.
12:00:27 AM - Warning: The remote server returned an error: (500)
Internal Server Error.
12:05:05 AM - Warning: The remote server returned an error: (500)
Internal Server Error.
12:27:54 AM - Unable to write data to the transport connection: An
existing connection was forcibly closed by the remote host.
After updating from Azure SDK 2.5 to SDK 2.6, I had the same problem when trying to publish to my Azure service from VS2013: any deployment attempt using Visual Studio fails after some minutes with a 500 Internal Server Error.
As I found, the reason is the very slow upload of the Azure package to the cloud - sometimes only between 30 kB/s and 50 kB/s. The deployment fails because of a timeout, which also explains why the Azure instance logs show no sign of any deployment...
Workaround: Deploy from Azure storage
1: Package the Azure solution, either via Visual Studio or via the command line:
MSBuild /t:Publish /p:TargetProfile=Cloud /p:Configuration=Release
2: Create an Azure storage container to upload your package to.
Continue using the Azure PowerShell cmdlets:
3: Log in:
Add-AzureAccount
4: Upload the package to your Azure storage container:
$Ctx = New-AzureStorageContext -StorageAccountName "yourstoragename" -StorageAccountKey "yourkey"
Set-AzureStorageBlobContent -File "...\app.publish\yourservice.cspkg" -Container "yourazurestoragecontainer" -Blob "yourservice.cspkg" -Context $Ctx -Force
Determine the PackageURL of the uploaded package, as sketched below.
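A hedged way to get it with the same classic cmdlets (my addition, not part of the original answer):
$Blob = Get-AzureStorageBlob -Container "yourazurestoragecontainer" -Blob "yourservice.cspkg" -Context $Ctx
$PackageURL = $Blob.ICloudBlob.Uri.AbsoluteUri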
5: Deploy the cloud service referring to the package just uploaded to Azure cloud storage.
Set-AzureDeployment -Upgrade -Slot "Staging" -Package "PackageURL" -Configuration "PathToYourCloudConfiguration.cscfg" -label "SomeDeploymentInfo" -ServiceName "yourservicename" -Force
(Of course the entire process is scriptable. Kemp Brown wrote a great article, with a script you could adapt to explicitly upload the package: Continuous Delivery for Cloud Services in Azure.)
Actually, the problem was my network connection.
To identify the problem I created a VM on Azure with the same Windows 8.1 OS and the same VS, and tried to deploy from there. The deployment worked fine. Later I disconnected all other devices from my internet connection and tried to publish from my machine. It worked!
So the conclusion is: a slow internet connection, or maybe we now have a shorter timeout for publishing from VS.
